WorldWideScience

Sample records for model utilized measures

  1. Utility of Monte Carlo Modelling for Holdup Measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),

    2005-01-01

    Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The {sup 235}U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., one determined by multiple independent variables) on the portable NDA results for very large and bulk converters, and it contributes greatly to the total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well
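    The full MCNP methodology is beyond the scope of this record. As a hedged illustration of the core idea only, the sketch below (all geometry, distances and distributions are invented, not the paper's) shows how the spatial distribution of a deposit alone can bias a gamma-ray result that was calibrated assuming a uniform deposit.

```python
# Minimal sketch (not MCNP): how deposit distribution alone can bias a
# gamma-ray holdup result calibrated against a uniform deposit. The
# point-detector 1/r^2 response and all dimensions are illustrative
# assumptions; attenuation and scattering are ignored.
import math
import random

DETECTOR = (0.0, 1.0)   # detector at mid-height, 1 m off the converter axis
LENGTH = 2.0            # assumed converter length, metres

def response(z_samples):
    """Mean point-detector response (1/r^2) per unit deposit mass."""
    total = 0.0
    for z in z_samples:
        r2 = (z - DETECTOR[0]) ** 2 + DETECTOR[1] ** 2
        total += 1.0 / r2
    return total / len(z_samples)

random.seed(1)
n = 100_000
uniform = [random.uniform(-LENGTH / 2, LENGTH / 2) for _ in range(n)]
# "End-loaded" deposit: mass concentrated near one end of the converter.
end_loaded = [-LENGTH / 2 + abs(random.gauss(0.0, 0.2)) for _ in range(n)]

cal = response(uniform)       # calibration assumes a uniform deposit
obs = response(end_loaded)    # the actual deposit is end-loaded
print(f"apparent/true mass ratio: {obs / cal:.2f}")
```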

  2. [Home health resource utilization measures using a case-mix adjustor model].

    Science.gov (United States)

    You, Sun-Ju; Chang, Hyun-Sook

    2005-08-01

    The purpose of this study was to measure home health resource utilization using a Case-Mix Adjustor Model developed in the U.S. The subjects of this study were 484 patients who had received more than 4 home health care visits during a 60-day episode at 31 home health care institutions. Data on the 484 patients were merged onto 60-day payment segments. Based on the results, the researcher classified home health resource groups (HHRG). The subjects were classified into 34 HHRGs in Korea. Home health resource utilization increased with clinical severity, from the Minimum (C0) group upward; the highest group's utilization was 5.82 times that of the lowest (97,000 won, in group C2F3S1). Resource utilization in home health care has become an issue of concern due to rising costs for home health care. The results suggest the need for more analytical attention on the utilization of and expenditures for home care using a Case-Mix Adjustor Model.

  3. Measurement of utility.

    Science.gov (United States)

    Thavorncharoensap, Montarat

    2014-05-01

    The Quality-Adjusted Life Year (QALY) is the most widely recommended health outcome measure for use in economic evaluations. The QALY gives a value to the effect of a given health intervention in terms of both quantity and quality. QALYs are calculated by multiplying the duration of time spent in a given health state, in years, by the quality-of-life weight for that state, known as its utility. Utility can range from 0 (the worst health state, equivalent to death) to 1 (the best health state, full health). This paper provides an overview of the various methods that can be used to measure utility and outlines the recommended protocol for measuring utility, as described in the Guidelines for Health Technology Assessment in Thailand (second edition). The recommendations are as follows: wherever possible, primary data should be collected using the EQ-5D-3L in patients, scored with Thai value sets generated from the general public. Where the EQ-5D-3L is considered inappropriate, other methods such as the standard gamble (SG), time trade-off (TTO), visual analogue scale (VAS), Health Utilities Index (HUI), SF-6D, or Quality of Well-Being (QWB) scale can be used. However, justification and full details on the chosen instrument should always be provided.
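    The QALY arithmetic described above is easy to make concrete. In the sketch below, the health states, durations and utilities are invented for illustration; only the multiply-and-sum rule comes from the abstract.

```python
# Minimal sketch of the QALY calculation: each period's duration (years)
# is weighted by its utility (0 = death, 1 = full health) and summed.

def qalys(periods):
    """periods: iterable of (duration_years, utility) pairs."""
    return sum(duration * utility for duration, utility in periods)

# Invented example: 2 years post-surgery at utility 0.7, then 10 years
# at 0.9, versus 12 years untreated at 0.6.
with_treatment = qalys([(2, 0.7), (10, 0.9)])   # 10.4 QALYs
without_treatment = qalys([(12, 0.6)])          # 7.2 QALYs
print(f"QALYs gained: {with_treatment - without_treatment:.1f}")
```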

  4. Modeling strategy to identify patients with primary immunodeficiency utilizing risk management and outcome measurement.

    Science.gov (United States)

    Modell, Vicki; Quinn, Jessica; Ginsberg, Grant; Gladue, Ron; Orange, Jordan; Modell, Fred

    2017-06-01

    This study seeks to generate analytic insights into risk management and the probability of an identifiable primary immunodeficiency defect. The Jeffrey Modell Centers Network database, the Jeffrey Modell Foundation's 10 Warning Signs, the 4 Stages of Testing Algorithm, physician-reported clinical outcomes, programs of physician education and public awareness, the SPIRIT® Analyzer, and newborn screening, taken together, generate P values of less than 0.05. This indicates that the data results do not occur by chance and that there is a better than 95% probability that the data are valid. The objectives are to improve patients' quality of life while generating a significant reduction of costs. The advances of the world's experts, aligned with these JMF programs, can generate analytic insights into risk management and the probability of an identifiable primary immunodeficiency defect. This strategy reduces the uncertainties related to primary immunodeficiency risks, as we can screen, test, identify, and treat undiagnosed patients. We can also address regional differences and prevalence, age, gender, treatment modalities, and sites of care, as well as economic benefits. These tools support high net benefits, substantial financial savings, and significant reduction of costs. All stakeholders, including patients, clinicians, pharmaceutical companies, third-party payers, and government healthcare agencies, must address the earliest possible precise diagnosis, appropriate intervention and treatment, as well as stringent control of healthcare costs through risk assessment and outcome measurement. An affected patient is entitled to nothing less, and stakeholders are responsible for utilizing the tools currently available. Implementation offers a significant challenge to the entire primary immunodeficiency community.

  5. Utilizing Operational and Improved Remote Sensing Measurements to Assess Air Quality Monitoring Model Forecasts

    Science.gov (United States)

    Gan, Chuen-Meei

    Air quality forecasts from the Weather Research and Forecasting (WRF) and Community Multiscale Air Quality (CMAQ) models are often used to support air quality applications such as regulatory issues and scientific inquiries into atmospheric processes. In urban environments, these models become more complex due to the inherent complexity of the land surface coupling and the enhanced pollutant emissions. This makes the model very difficult to diagnose if the surface parameter forecasts, such as PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm), are not accurate. For this reason, getting accurate boundary layer dynamic forecasts is as essential as quantifying realistic pollutant emissions. In this thesis, we explore the usefulness of vertical sounding measurements for assessing meteorological and air quality forecast models. In particular, we focus on assessing the WRF model (12 km × 12 km) coupled with the CMAQ model for the urban New York City (NYC) area using multiple vertical profiling and column-integrated remote sensing measurements. This assessment is helpful in probing the root causes of WRF-CMAQ overestimates of surface PM2.5 occurring both predawn and post-sunset in the NYC area during the summer. In particular, we find that significant underestimates in the WRF PBL height forecast are a key factor in explaining this anomaly. On the other hand, model predictions of the PBL height during daytime, when convective heating dominates, were found to be highly correlated with lidar-derived PBL height with minimal bias. Additional topics covered in this thesis include a mathematical method using a direct Mie scattering approach to convert aerosol microphysical properties from CMAQ into optical parameters, making direct comparisons with lidar and multispectral radiometers feasible. Finally, we explore some tentative ideas on combining visible (VIS) and mid-infrared (MIR) sensors to better separate aerosols into fine and coarse modes.

  6. Decision model incorporating utility theory and measurement of social values applied to nuclear waste management

    International Nuclear Information System (INIS)

    Litchfield, J.W.; Hansen, J.V.; Beck, L.C.

    1975-07-01

    A generalized computer-based decision analysis model was developed and tested. Several alternative concepts for ultimate disposal have already been developed; however, significant research is still required before any of these can be implemented. To make a choice based on technical estimates of the costs, short-term safety, long-term safety, and accident detection and recovery requires estimating the relative importance of each of these factors or attributes. These relative importance estimates primarily involve social values and therefore vary from one individual to the next. The approach used was to sample various public groups to determine the relative importance of each of the factors to the public. These estimates of importance weights were combined in a decision analysis model with estimates, furnished by technical experts, of the degree to which each alternative concept achieves each of the criteria. This model then integrates the two separate and unique sources of information and provides the decision maker with information as to the preferences and concerns of the public as well as the technical areas within each concept which need further research. The model can rank the alternatives using sampled public opinion and techno-economic data. This model provides a decision maker with a structured approach to subdividing complex alternatives into a set of more easily considered attributes, measuring the technical performance of each alternative relative to each attribute, estimating relevant social values, and assimilating quantitative information in a rational manner to estimate total value for each alternative. Because of the explicit nature of this decision analysis, the decision maker can select a specific alternative supported by clear documentation and justification for his assumptions and estimates. (U.S.)

  7. A psychometric evaluation of the Swedish version of the Research Utilization Questionnaire using a Rasch measurement model.

    Science.gov (United States)

    Lundberg, Veronica; Boström, Anne-Marie; Malinowsky, Camilla

    2017-07-30

    Evidence-based practice and research utilisation have become commonly used concepts in health care. The Research Utilization Questionnaire (RUQ) is recognised as a widely used instrument measuring the perception of research utilisation among nursing staff in clinical practice. Few studies have, however, analysed the psychometric properties of the RUQ. The aim of this study was to examine the psychometric properties of the Swedish version of the three subscales of the RUQ using a Rasch measurement model. This study has a cross-sectional design using a sample of 163 staff (response rate 81%) working in one nursing home in Sweden. Data were collected using the Swedish version of the RUQ in 2012. The three subscales Attitudes towards research, Availability of and support for research use, and Use of research findings in clinical practice were investigated. Data were analysed using a Rasch measurement model. The results indicate the presence of multidimensionality in all subscales. Moreover, internal scale validity and person response validity also show some less satisfactory results, especially for the subscale Use of research findings. Overall, there seems to be a problem with the negatively worded statements. The findings suggest that clarification and refining of items, including additional psychometric evaluation of the RUQ, are needed before using the instrument in clinical practice and research studies among staff in nursing homes. © 2017 Nordic College of Caring Science.

  8. Measurement of Online Student Engagement: Utilization of Continuous Online Student Behavior Indicators as Items in a Partial Credit Rasch Model

    Science.gov (United States)

    Anderson, Elizabeth

    2017-01-01

    Student engagement has been shown to be essential to the development of research-based best practices for K-12 education. It has been defined and measured in numerous ways. The purpose of this research study was to develop a measure of online student engagement for grades 3 through 8 using a partial credit Rasch model and validate the measure…

  9. The utility target market model

    International Nuclear Information System (INIS)

    Leng, G.J.; Martin, J.

    1994-01-01

    A new model (the Utility Target Market Model) is used to evaluate the economic benefits of photovoltaic (PV) power systems located at the electrical utility customer site. These distributed PV demand-side generation systems can be evaluated in a manner similar to other demand-side management technologies. The energy and capacity values of an actual PV system located in the service area of the New England Electrical System (NEES) are the two utility benefits evaluated. The annual stream of energy and capacity benefits calculated for the utility is converted to the installed cost per watt that the utility should be willing to invest to receive this benefit stream. Different discount rates are used to show the sensitivity of the allowable installed cost of the PV systems to a utility's average cost of capital. Capturing both the energy and capacity benefits of these relatively environmentally friendly distributed generators, NEES should be willing to invest in this technology when the installed cost per watt declines to ca. $2.40, using NEES' rated cost of capital (8.78%). If a social discount rate of 3% is used, installation should be considered when the installed cost approaches $4.70/W. Since recent installations in the Sacramento Municipal Utility District have cost between $7 and $8/W, cost-effective utility applications of PV are close. 22 refs., 1 fig., 2 tabs
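    The model's core computation, as described, is a discounted-benefit calculation: the present value of the annual benefit stream per watt is the installed cost the utility should be willing to pay. The sketch below uses an invented benefit level and horizon, not NEES data; only the discount rates echo the abstract.

```python
# Minimal sketch of the Utility Target Market Model logic: discount an
# annual energy + capacity benefit stream ($/W/year) to a present value,
# i.e., the allowable installed cost per watt. Numbers are illustrative.

def allowable_cost_per_watt(annual_benefit, discount_rate, years):
    """Present value of a level annual benefit stream, $/W."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

benefit = 0.25  # assumed combined energy + capacity benefit, $/W/year
print(f"at 8.78%: ${allowable_cost_per_watt(benefit, 0.0878, 30):.2f}/W")
print(f"at 3.00%: ${allowable_cost_per_watt(benefit, 0.03, 30):.2f}/W")
```

    As in the abstract, a lower (social) discount rate raises the allowable installed cost substantially, since the same benefit stream is worth more today.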

  10. Additive conjoint measurement for multiattribute utility

    NARCIS (Netherlands)

    Maas, A.; Wakker, P.P.

    1994-01-01

    This paper shows that multiattribute utility can be simplified by methods from additive conjoint measurement. Given additive conjoint measurability under certainty, axiomatizations can be simplified, and implementation and reliability of elicitation can be improved. This also contributes to the

  11. New Energy Utility Business Models

    International Nuclear Information System (INIS)

    Potocnik, V.

    2016-01-01

    Recently, many big changes have happened in the power sector: energy efficiency and renewable energy sources are progressing quickly, distributed or decentralised generation of electricity is expanding, climate change requires reduction of greenhouse gas emissions, and price volatility and uncertainty of fossil fuel supply are common. Those changes have made obsolete the vertically integrated business models which have dominated energy utility organisations for a hundred years, and new business models are being introduced. Those models take into account current changes in the power sector and enable a wider application of energy efficiency and renewable energy sources, especially for consumers, with the decentralisation of electricity generation and compliance with the requirements of climate and environmental preservation. New business models also address the question of financial compensation for utilities facing reduced centralised energy generation, while contributing to local development and employment. (author).

  12. Clinical utility of measures of breathlessness.

    Science.gov (United States)

    Cullen, Deborah L; Rodak, Bernadette

    2002-09-01

    The clinical utility of measures of dyspnea has been debated in the health care community. Although breathlessness can be evaluated with various instruments, the most effective dyspnea measurement tool for patients with chronic lung disease or for measuring treatment effectiveness remains uncertain. Understanding the evidence for the validity and reliability of these instruments may provide a basis for appropriate clinical application. The objective was to evaluate instruments designed to measure breathlessness, either single-symptom or multidimensional, on psychometric foundations such as validity, reliability, and discriminative and evaluative properties, and to classify each dyspnea measurement instrument by recommended clinical application in terms of exercise, benchmarking patients, activities of daily living, patient outcomes, clinical trials, and responsiveness to treatment. Eleven dyspnea measurement instruments were selected. Each instrument was assessed as discriminative or evaluative and then analyzed as to its psychometric properties and purpose of design. Descriptive data from all studies were described according to their primary patient application (i.e., chronic obstructive pulmonary disease, asthma, or other patient populations). The Borg Scale and the Visual Analogue Scale are applicable to exertion and thus can be applied to any cardiopulmonary patient to determine dyspnea. All other measures were determined appropriate for chronic obstructive pulmonary disease, whereas the Shortness of Breath Questionnaire can be applied to cystic fibrosis and lung transplant patients. The most appropriate utility for all instruments was measuring the effects on activities of daily living and benchmarking patient progress. Instruments that quantify function and health-related quality of life have great utility for documenting outcomes but may be limited as to documenting treatment responsiveness in terms of clinically important changes. The dyspnea

  13. A sequential model for the structure of health care utilization.

    NARCIS (Netherlands)

    Herrmann, W.J.; Haarmann, A.; Baerheim, A.

    2017-01-01

    Traditional measurement models of health care utilization are not able to represent the complex structure of health care utilization. In this qualitative study, we, therefore, developed a new model to represent the health care utilization structure. In Norway and Germany, we conducted episodic

  14. Risk measures on networks and expected utility

    International Nuclear Information System (INIS)

    Cerqueti, Roy; Lupi, Claudio

    2016-01-01

    In reliability theory projects are usually evaluated in terms of their riskiness, and often decision under risk is intended as the one-shot-type binary choice of accepting or not accepting the risk. In this paper we elaborate on the concept of risk acceptance, and propose a theoretical framework based on network theory. In doing this, we deal with system reliability, where the interconnections among the random quantities involved in the decision process are explicitly taken into account. Furthermore, we explore the conditions to be satisfied for risk-acceptance criteria to be consistent with the axiomatization of standard expected utility theory within the network framework. In accordance with existing literature, we show that a risk evaluation criterion can be meaningful even if it is not consistent with the standard axiomatization of expected utility, once this is suitably reinterpreted in the light of networks. Finally, we provide some illustrative examples. Highlights:
    • We discuss risk acceptance and theoretically develop this theme on the basis of network theory.
    • We propose an original framework for describing the algebraic structure of the set of the networks, when they are viewed as risks.
    • We introduce the risk measures on networks, which induce total orders on the set of networks.
    • We state conditions on the risk measures on networks to let the induced risk-acceptance criterion be consistent with a new formulation of the expected utility theory.

  15. Job stress and mental health of permanent and fixed-term workers measured by effort-reward imbalance model, depressive complaints, and clinic utilization.

    Science.gov (United States)

    Inoue, Mariko; Tsurugano, Shinobu; Yano, Eiji

    2011-01-01

    The number of workers with precarious employment has increased globally; however, few studies have used validated measures to investigate the relationship of job status to stress and mental health. Thus, we conducted a study to compare the job stress experienced by permanent and fixed-term workers using an effort-reward imbalance (ERI) model questionnaire and by evaluating depressive complaints and clinic utilization. Subjects were permanent or fixed-term male workers at a Japanese research institute (n=756). Baseline data on job stress and depressive complaints were collected in 2007. We followed the same population over a 1-year period to assess their utilization of the company clinic for mental health concerns. The ERI ratio was higher among permanent workers than among fixed-term workers. More permanent workers presented with more than two depressive complaints, the standard used for the diagnosis of depression. ERI scores indicated that the effort component of permanent work was associated with distress, whereas distress in fixed-term work was related to job promotion and job insecurity. Moreover, over the 1-year follow-up period, fixed-term workers visited the on-site clinic for mental health concerns 4.04 times more often than permanent workers, even after adjusting for age, lifestyle, ERI, and depressive complaints. These contrasting findings reflect the differential workloads and working conditions encountered by permanent and fixed-term workers. The occupational setting, in which employment statuses were intermingled, may have contributed to the high number of mental health issues experienced by workers of different employment status.

  16. Neutron flux measurement utilizing Campbell technique

    International Nuclear Information System (INIS)

    Kropik, M.

    2000-01-01

    Application of the Campbell technique to neutron flux measurement is described in this contribution. The technique utilizes the AC component (noise) of a neutron chamber signal rather than the usually used DC component. The Campbell theorem, originally derived to describe the noise behaviour of vacuum tubes (valves), states that the mean square of the AC component of the chamber signal is proportional to the neutron flux (reactor power). The quadratic dependence of the reactor power on the root-mean-square value usually makes it possible to cover the whole current-mode power range of the neutron flux measurement with only one channel. A further advantage of the Campbell technique is that large pulses from the response to neutrons are favoured over small pulses from the response to gamma rays in the ratio of their mean-square charge transfer; thus, the Campbell technique provides excellent gamma-ray discrimination in the current operational range of a neutron chamber. A neutron flux measurement channel using state-of-the-art components was designed and put into operation. Its linearity, accuracy, dynamic range, time response and gamma discrimination were tested on the VR-1 nuclear reactor in Prague, and its behaviour under high neutron flux (accident conditions) was tested on the TRIGA nuclear reactor in Vienna. (author)
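    The proportionality the abstract relies on is easy to demonstrate numerically. The sketch below is a hedged toy model (one-sample pulses, invented rates and sampling frequency, no real pulse shape), not the instrument design from the record.

```python
# Minimal sketch of the Campbell (variance) technique: for a Poisson
# pulse train, the variance (mean square of the AC component) of the
# chamber current grows linearly with the event rate, so power varies
# quadratically with the RMS value. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def chamber_signal(rate_hz, duration_s=1.0, fs=1e6, pulse_charge=1.0):
    """Shot-noise current: Poisson pulse train with one-sample pulses."""
    n_samples = int(duration_s * fs)
    counts = rng.poisson(rate_hz / fs, n_samples)
    return counts * pulse_charge * fs  # current samples

for rate in (1e3, 1e4, 1e5):
    i = chamber_signal(rate)
    ac_mean_square = np.var(i)   # variance = mean square of AC component
    print(f"rate {rate:8.0f} /s -> AC mean square {ac_mean_square:.3e}")
# The printed mean-square values scale linearly with the event rate,
# i.e., the RMS scales with its square root, as the abstract states.
```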

  17. Beyond Bentham – Measuring Procedural Utility

    OpenAIRE

    Bruno S. Frey; Alois Stutzer

    2001-01-01

    We propose that outcome utility and process utility can be distinguished and empirically measured. People gain procedural utility from participating in the political decision-making process itself, irrespective of the outcome. Nationals enjoy both outcome and process utility, while foreigners are excluded from political decision-making and therefore cannot enjoy the corresponding procedural utility. Utility is measured by individuals’ reported subjective well-being or happiness. We find tha...

  18. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in the course of developing a high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The PHYSIC calculation procedure consists of three steps: preparation of the relevant files, creation and submission of the JCL, and graphic output of the results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  19. Deriving the expected utility of a predictive model when the utilities are uncertain.

    Science.gov (United States)

    Cooper, Gregory F; Visweswaran, Shyam

    2005-01-01

    Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
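    A hedged sketch of the idea named in this abstract follows: average a model's decision-theoretic value over a distribution on the unknown outcome utilities. The confusion counts and utility ranges below are invented for illustration and are not the paper's coronary artery disease models.

```python
# Minimal sketch: expected utility of a diagnostic model when the
# utilities themselves are uncertain. Draw outcome utilities from
# assumed distributions and average the per-case value of the model's
# confusion counts. All numbers are illustrative.
import random

random.seed(0)
# Confusion counts for a hypothetical diagnostic model on a test set.
TP, FP, FN, TN = 80, 15, 20, 885

def expected_utility(u_tp, u_fp, u_fn, u_tn):
    n = TP + FP + FN + TN
    return (TP * u_tp + FP * u_fp + FN * u_fn + TN * u_tn) / n

# Uncertainty about utilities, e.g., U(false negative) in [0.0, 0.3].
draws = [expected_utility(random.uniform(0.9, 1.0),   # true positive
                          random.uniform(0.6, 0.9),   # false positive
                          random.uniform(0.0, 0.3),   # false negative
                          1.0)                        # true negative
         for _ in range(10_000)]
print(f"expected utility under utility uncertainty: "
      f"{sum(draws) / len(draws):.3f}")
```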

  20. Resolving inconsistencies in utility measurement under risk: Tests of generalizations of expected utility

    OpenAIRE

    Han Bleichrodt; José María Abellán-Perpiñan; JoséLuis Pinto; Ildefonso Méndez-Martínez

    2005-01-01

    This paper explores inconsistencies that occur in utility measurement under risk when expected utility theory is assumed and the contribution that prospect theory and some other generalizations of expected utility can make to the resolution of these inconsistencies. We used five methods to measure utilities under risk and found clear violations of expected utility. Of the theories studied, prospect theory was the most consistent with our data. The main improvement of prospect theory over expe...

  1. Mathematical models for estimating radio channels utilization

    African Journals Online (AJOL)

    2017-08-08

    Mathematical models for radio channels utilization assessment by real-time flows transfer in … data transmission networks … having dynamic topology … Journal of Applied Mathematics and Statistics, 56(2): 85–90.

  2. Risk measurement with equivalent utility principles

    NARCIS (Netherlands)

    Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.

    2006-01-01

    Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and the properties that risk measures should satisfy have recently received considerable attention in the financial mathematics literature.

  3. An absolute scale for measuring the utility of money

    Science.gov (United States)

    Thomas, P. J.

    2010-07-01

    Measurement of the utility of money is essential in the insurance industry, for prioritising public spending schemes and for the evaluation of decisions on protection systems in high-hazard industries. Up to this time, however, there has been no universally agreed measure for the utility of money, with many utility functions being in common use. In this paper, we shall derive a single family of utility functions, which have risk-aversion as the only free parameter. The fact that they return a utility of zero at their low, reference datum, either the utility of no money or of one unit of money, irrespective of the value of risk-aversion used, qualifies them to be regarded as absolute scales for the utility of money. Evidence of validation for the concept will be offered based on inferential measurements of risk-aversion, using diverse measurement data.
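    Thomas's specific family of utility functions is not reproduced in this record. As a hedged illustration of the properties described, the standard isoelastic (power/logarithmic) family below shares them: risk-aversion is the only free parameter, and the utility of one unit of money is zero for every value of risk-aversion, anchoring the scale at the reference datum.

```python
# Isoelastic utility of wealth w > 0 with risk-aversion eps: u(1) = 0
# for every eps, so the scale is anchored at one unit of money. This is
# a stand-in family consistent with the abstract, not the paper's own.
import math

def utility(w, eps):
    if eps == 1.0:
        return math.log(w)  # limiting case as eps -> 1
    return (w ** (1.0 - eps) - 1.0) / (1.0 - eps)

for eps in (0.0, 0.5, 1.0, 2.0):
    print(f"eps={eps}: u(1)={utility(1.0, eps):.1f}, "
          f"u(10)={utility(10.0, eps):.3f}")
```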

  4. The predictive validity of prospect theory versus expected utility in health utility measurement.

    Science.gov (United States)

    Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis

    2009-12-01

    Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements together with prospect theory and expected utility predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health.

  5. Radiotracers in the study of marine food chains. The use of compartmental analysis and analog modelling in measuring utilization rates of particulate organic matter by benthic invertebrates

    International Nuclear Information System (INIS)

    Gremare, A.; Amouroux, J.M.; Charles, F.

    1991-01-01

    The present study addresses the problem of recycling when using radiotracers to quantify ingestion and assimilation rates of particulate organic matter (POM) by benthic invertebrates. The rapid production of dissolved organic matter and its subsequent utilization by benthic invertebrates constitutes a major bias in this kind of study. However, recycling processes may also concern POM through the production and reingestion of faeces. The present paper shows that compartmental analysis of the diffusion kinetics of the radiotracer between the different compartments of the system studied, together with analog modelling of the exchanges of radioactivity between compartments, may be used to determine ingestion and assimilation rates. This method is illustrated by the study of a system composed of the bacterium Lactobacillus sp. and the filter-feeding bivalve Venerupis decussata. The advantages and drawbacks of this approach relative to other existing methods are briefly discussed. (Author)
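    To make the compartmental idea concrete, a hedged toy model follows: the radiolabel moves between water, gut, tissue and faeces compartments, with faeces reingestion representing the recycling bias discussed. The rate constants are invented, and the paper's actual step, fitting such constants to measured kinetics, is omitted.

```python
# Minimal sketch of a linear compartmental tracer model of the kind
# described. Rate constants (1/h) are assumed, not the paper's values.
from scipy.integrate import solve_ivp

k_uptake, k_assim, k_faeces, k_rework = 0.05, 0.3, 0.1, 0.02

def tracer(t, y):
    water, gut, tissue, faeces = y
    ingestion = k_uptake * water
    assimilation = k_assim * gut
    egestion = k_faeces * gut
    reingestion = k_rework * faeces   # recycling via faeces reingestion
    return [-ingestion,
            ingestion + reingestion - assimilation - egestion,
            assimilation,
            egestion - reingestion]

# All activity starts in the water compartment; integrate over 96 h.
sol = solve_ivp(tracer, (0, 96), [1.0, 0.0, 0.0, 0.0])
print("activity in tissue after 96 h:", round(float(sol.y[2, -1]), 3))
```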

  6. Utility measurement in healthcare: the things I never got to.

    Science.gov (United States)

    Torrance, George W

    2006-01-01

    The present article provides a brief historical background on the development of utility measurement and cost-utility analysis in healthcare. It then outlines a number of research ideas in this field that the author never got to. The first idea is extremely fundamental. Why is health economics the only application of economics that does not use the discipline of economics? And, more importantly, what discipline should it use? Research ideas are discussed to investigate precisely the underlying theory and axiom systems of both Paretian welfare economics and the decision-theoretical utility approach. Can the two approaches be integrated or modified in some appropriate way so that they better reflect the needs of the health field? The investigation is described both for the individual and societal levels. Constructing a 'Robinson Crusoe' society of only a few individuals with different health needs, preferences and willingness to pay is suggested as a method for gaining insight into the problem. The second idea concerns the interval property of utilities and, therefore, QALYs. It specifically concerns the important requirement that changes of equal magnitude anywhere on the utility scale, or alternatively on the QALY scale, should be equally desirable. Unfortunately, one of the original restrictions on utility theory states that such comparisons are not permitted by the theory. It is shown, in an important new finding, that while this restriction applies in a world of certainty, it does not in a world of uncertainty, such as healthcare. Further research is suggested to investigate this property under both certainty and uncertainty. Other research ideas that are described include: the development of a precise axiomatic basis for the time trade-off method; the investigation of chaining as a method of preference measurement with the standard gamble or time trade-off; the development and training of a representative panel of the general public to improve the completeness

  7. A New Filtering Algorithm Utilizing Radial Velocity Measurement

    Institute of Scientific and Technical Information of China (English)

    LIU Yan-feng; DU Zi-cheng; PAN Quan

    2005-01-01

    Pulse Doppler radar measurements consist of range, azimuth, elevation and radial velocity. Most radar tracking algorithms in engineering utilize only the position measurements. The extended Kalman filter with radial velocity measurement is presented first; then a new filtering algorithm utilizing the radial velocity measurement is proposed to improve tracking results, and a theoretical analysis is given. Simulation results for the new algorithm, the converted measurement Kalman filter, and the extended Kalman filter are compared. The effectiveness of the new algorithm is verified by the simulation results.
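    The abstract's new algorithm is not specified here; as a hedged sketch of the common starting point, the code below shows the nonlinear measurement model and its Jacobian that let a Kalman-type filter use radial velocity for a 2-D state (the full filter loop, gains, and covariances are omitted, and the 2-D geometry is an assumption).

```python
# Measurement model for a state [x, y, vx, vy]: the radar observes
# range, azimuth, and radial velocity (range rate). The Jacobian is
# what an extended Kalman filter linearizes about.
import numpy as np

def h(state):
    """Nonlinear measurement: range, azimuth, radial velocity."""
    x, y, vx, vy = state
    r = np.hypot(x, y)
    return np.array([r, np.arctan2(y, x), (x * vx + y * vy) / r])

def H_jacobian(state):
    x, y, vx, vy = state
    r = np.hypot(x, y)
    rdot = (x * vx + y * vy) / r
    return np.array([
        [x / r,                   y / r,                   0.0,   0.0],
        [-y / r**2,               x / r**2,                0.0,   0.0],
        [(vx - rdot * x / r) / r, (vy - rdot * y / r) / r, x / r, y / r],
    ])

state = np.array([1000.0, 2000.0, -30.0, 10.0])  # invented target state
print(h(state))
print(H_jacobian(state))
```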

  8. Blood lipid measurements. Variations and practical utility.

    Science.gov (United States)

    Cooper, G R; Myers, G L; Smith, S J; Schlant, R C

    1992-03-25

    To describe the magnitude and impact of the major biological and analytical sources of variation in serum lipid and lipoprotein levels on risk of coronary heart disease; to present a way to qualitatively estimate the total intraindividual variation; and to demonstrate how to determine the number of specimens required to estimate, with 95% confidence, the "true" underlying total cholesterol value in the serum of a patient. Representative references on each source of variation were selected from more than 300 reviewed publications, most published within the past 5 years, to document current findings and concepts. Most articles reviewed were in English. Studies on biological sources of variation were selected using the following criteria: representative of published findings, clear statement of either significant or insignificant results, and acquisition of clinical and laboratory data under standardized conditions. Representative results for special populations such as women and children are reported when results differ from those of adult men. References were selected based on acceptable experimental design and use of standardized laboratory lipid measurements. The lipid levels considered representative for a selected source of variation arose from quantitative measurements by a suitably standardized laboratory. Statistical analysis of data was examined to assure reliability. The proposed method of estimating the biological coefficient of variation must be considered to give qualitative results, because only two or three serial specimens are collected in most cases for the estimation. Concern has arisen about the magnitude, impact, and interpretation of preanalytical as well as analytical sources of variation on reported results of lipid measurements of an individual. Preanalytical sources of variation from behavioral, clinical, and sampling sources constitute about 60% of the total variation in a reported lipid measurement of an individual. A technique is presented

  9. Measuring the potential utility of seasonal climate predictions

    Science.gov (United States)

    Tippett, Michael K.; Kleeman, Richard; Tang, Youmin

    2004-11-01

    Variation of sea surface temperature (SST) on seasonal-to-interannual time-scales leads to changes in seasonal weather statistics and seasonal climate anomalies. Relative entropy, an information theory measure of utility, is used to quantify the impact of SST variations on seasonal precipitation compared to natural variability. An ensemble of general circulation model (GCM) simulations is used to estimate this quantity in three regions where tropical SST has a large impact on precipitation: South Florida, the Nordeste of Brazil and Kenya. We find the yearly variation of relative entropy is strongly correlated with shifts in ensemble mean precipitation and weakly correlated with ensemble variance. Relative entropy is also found to be related to measures of the ability of the GCM to reproduce observations.
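    Relative entropy has a closed form for Gaussian distributions, which makes the abstract's finding easy to illustrate: a shift in the ensemble mean enters quadratically, while a change in ensemble variance enters only logarithmically. The one-dimensional numbers below are invented, not GCM output.

```python
# Minimal sketch: KL divergence of the SST-forced forecast distribution
# from climatology, for 1-D Gaussians. Units are standardized anomalies.
import math

def relative_entropy(mu_f, var_f, mu_c, var_c):
    """KL( forecast N(mu_f, var_f) || climatology N(mu_c, var_c) )."""
    return 0.5 * (math.log(var_c / var_f)
                  + (var_f + (mu_f - mu_c) ** 2) / var_c
                  - 1.0)

# Climatology: N(0, 1). Two invented forecast ensembles:
print(relative_entropy(0.8, 1.0, 0.0, 1.0))  # mean shift only -> 0.32
print(relative_entropy(0.0, 0.6, 0.0, 1.0))  # variance change -> 0.06
```

    Consistent with the abstract, the mean shift dominates the information content for comparable perturbation sizes.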

  10. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the scale of data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  11. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the scale of data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
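    The STL-plus-ARIMA combination named in these two records can be sketched compactly: decompose out the seasonal cycle, fit ARIMA to the remainder, then add the seasonal component back onto the forecast. The synthetic series and model order below are assumptions standing in for the (non-public) SNMP data and the authors' tuning.

```python
# Minimal sketch of STL + ARIMA forecasting on a synthetic stand-in for
# hourly SNMP path-utilization data (percent). Orders are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA

idx = pd.date_range("2014-01-01", periods=24 * 28, freq="h")
rng = np.random.default_rng(0)
y = pd.Series(50 + 20 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
              + rng.normal(0, 3, len(idx)), index=idx)

stl = STL(y, period=24).fit()                   # remove the daily cycle
resid_model = ARIMA(y - stl.seasonal, order=(1, 0, 1)).fit()
# Forecast the deseasonalized series, then restore the daily cycle.
forecast = resid_model.forecast(24) + stl.seasonal[-24:].to_numpy()
print(forecast.head())
```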

  12. Rank dependent expected utility models of tax evasion.

    OpenAIRE

    Erling Eide

    2001-01-01

    In this paper the rank-dependent expected utility theory is substituted for the expected utility theory in models of tax evasion. It is demonstrated that the comparative statics results of the expected utility, portfolio choice model of tax evasion carry over to the more general rank-dependent expected utility model.

  13. Development of a Neutron Spectroscopic System Utilizing Compressed Sensing Measurements

    Directory of Open Access Journals (Sweden)

    Vargas Danilo

    2016-01-01

    A new approach to neutron detection capable of gathering spectroscopic information has been demonstrated. The approach relies on an asymmetrical arrangement of materials, geometry, and an ability to change the orientation of the detector with respect to the neutron field. Measurements are used to unfold the energy characteristics of the neutron field using the theoretical framework of compressed sensing. Recent theoretical results show that the number of multiplexed samples can be lower than the full number of traditional samples while providing some super-resolution. Furthermore, the solution approach does not require a priori information or the inclusion of physics models. Utilizing the MCNP code, a number of candidate detector geometries and materials were modeled. Simulations were carried out for a number of neutron energies and distributions with preselected orientations for the detector. The resulting matrix A consists of n rows associated with orientation and m columns associated with energy and distribution, where n < m. The library of known responses is used for new measurements Y (n × 1), and the solver determines the system Y = Ax, where x is a sparse vector. Energy spectrum measurements are therefore a combination of the energy distribution information of the identified elements of A. This approach allows determination of neutron spectroscopic information using a single detector system with analog multiplexing. The analog multiplexing allows the use of a compressed sensing solution similar to approaches used in other areas of imaging. A single detector assembly provides improved flexibility and is expected to reduce the uncertainty associated with current neutron spectroscopy measurements.
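    The recovery step described, solving Y = Ax for a sparse x with fewer orientations than energy bins, is commonly posed as L1 minimization (basis pursuit). The sketch below uses a random matrix standing in for the MCNP-computed response library A; the dimensions and sparsity are invented.

```python
# Minimal sketch of sparse recovery for Y = Ax with n < m, via basis
# pursuit cast as a linear program: min sum(u) s.t. -u <= x <= u, Ax = Y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 12, 40                      # orientations < energy bins (assumed)
A = rng.normal(size=(n, m))        # stand-in for the response library
x_true = np.zeros(m)
x_true[[5, 22]] = [1.0, 0.5]       # sparse "spectrum", two active bins
Y = A @ x_true

c = np.concatenate([np.zeros(m), np.ones(m)])          # minimize sum(u)
A_eq = np.hstack([A, np.zeros((n, m))])                # Ax = Y
A_ub = np.block([[np.eye(m), -np.eye(m)],              # x - u <= 0
                 [-np.eye(m), -np.eye(m)]])            # -x - u <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * m), A_eq=A_eq, b_eq=Y,
              bounds=[(None, None)] * m + [(0, None)] * m)
x_hat = res.x[:m]
print(np.flatnonzero(np.abs(x_hat) > 1e-6))  # recovers indices 5 and 22
```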

  14. Utilization of Multispectral Images for Meat Color Measurements

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Lindbjerg; Carstensen, Jens Michael

    2013-01-01

    This short paper describes how the use of multispectral imaging for color measurement can be utilized in an efficient and descriptive way for meat scientists. The basis of the study is meat color measurements performed with a multispectral imaging system as well as with a standard colorimeter. The multispectral system yields a more complete assessment of color and color variance than what is obtained by the standard colorimeter.

  15. Insider Models with Finite Utility in Markets with Jumps

    International Nuclear Information System (INIS)

    Kohatsu-Higa, Arturo; Yamazato, Makoto

    2011-01-01

    In this article we consider, under a Lévy process model for the stock price, the utility optimization problem for an insider agent whose additional information is the final price of the stock blurred with an additional independent noise which vanishes as the final time approaches. Our main interest is establishing conditions under which the utility of the insider is finite. Mathematically, the problem entails the study of a “progressive” enlargement of filtration with respect to random measures. We study the jump structure of the process which leads to the conclusion that in most cases the utility of the insider is finite and his optimal portfolio is bounded. This can be explained financially by the high risks involved in models with jumps.

  16. Regression models of discharge and mean velocity associated with near-median streamflow conditions in Texas: utility of the U.S. Geological Survey discharge measurement database

    Science.gov (United States)

    Asquith, William H.

    2014-01-01

    A database containing more than 16,300 discharge values and ancillary hydraulic attributes was assembled from summaries of discharge measurement records for 391 USGS streamflow-gauging stations (streamgauges) in Texas. Each discharge is between the 40th- and 60th-percentile daily mean streamflow as determined by period-of-record, streamgauge-specific, flow-duration curves. Each discharge therefore is assumed to represent a discharge measurement made for near-median streamflow conditions, and such conditions are conceptualized as representative of midrange to baseflow conditions in much of the state. The hydraulic attributes of each discharge measurement included concomitant cross-section flow area, water-surface top width, and reported mean velocity. Two regression equations are presented: (1) an expression for discharge and (2) an expression for mean velocity, both as functions of selected hydraulic attributes and watershed characteristics. Specifically, the discharge equation uses cross-sectional area, water-surface top width, contributing drainage area of the watershed, and mean annual precipitation of the location; the equation has an adjusted R-squared of approximately 0.95 and residual standard error of approximately 0.23 base-10 logarithm (cubic meters per second). The mean velocity equation uses discharge, water-surface top width, contributing drainage area, and mean annual precipitation; the equation has an adjusted R-squared of approximately 0.50 and residual standard error of approximately 0.087 third root (meters per second). Residual plots from both equations indicate that reliable estimates of discharge and mean velocity at ungauged stream sites are possible. Further, the relation between contributing drainage area and main-channel slope (a measure of whole-watershed slope) is depicted to aid analyst judgment of equation applicability for ungauged sites. Example applications and computations are provided and discussed within a real-world, discharge-measurement
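    The functional form described, log10 of discharge as a linear function of log-transformed hydraulic and watershed predictors, is easy to sketch. The synthetic data and coefficients below are invented stand-ins for the USGS measurement database and the report's fitted equations; only the predictor set and the residual-error scale mirror the abstract.

```python
# Minimal sketch of the log-space regression described: fit
# log10(Q) ~ log10(area) + log10(width) + log10(CDA) + precipitation.
import numpy as np

rng = np.random.default_rng(0)
n = 500
area = rng.lognormal(2.0, 0.8, n)     # cross-section flow area, m^2
width = rng.lognormal(2.5, 0.6, n)    # water-surface top width, m
cda = rng.lognormal(5.0, 1.2, n)      # contributing drainage area, km^2
precip = rng.normal(800, 150, n)      # mean annual precipitation, mm
logQ = (0.8 * np.log10(area) + 0.3 * np.log10(width)
        + 0.2 * np.log10(cda) + 0.001 * precip
        + rng.normal(0, 0.23, n))     # synthetic "truth" + scatter

X = np.column_stack([np.ones(n), np.log10(area), np.log10(width),
                     np.log10(cda), precip])
beta, *_ = np.linalg.lstsq(X, logQ, rcond=None)
resid = logQ - X @ beta
print("coefficients:", np.round(beta, 3))
print("residual std error:", round(float(resid.std(ddof=X.shape[1])), 3))
```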

  17. Animal Models Utilized in HTLV-1 Research

    Directory of Open Access Journals (Sweden)

    Amanda R. Panfil

    2013-01-01

    Since the isolation and discovery of human T-cell leukemia virus type 1 (HTLV-1) over 30 years ago, researchers have utilized animal models to study HTLV-1 transmission, viral persistence, virus-elicited immune responses, and HTLV-1-associated disease development (ATL, HAM/TSP). Non-human primates, rabbits, rats, and mice have all been used to help understand HTLV-1 biology and disease progression. Non-human primates offer a model system that is phylogenetically similar to humans for examining viral persistence. Viral transmission, persistence, and immune responses have been widely studied using New Zealand White rabbits. The advent of molecular clones of HTLV-1 has offered the opportunity to assess the importance of various viral genes in rabbits, non-human primates, and mice. Additionally, over-expression of viral genes using transgenic mice has helped uncover the importance of Tax and Hbz in the induction of lymphoma and other lymphocyte-mediated diseases. HTLV-1 inoculation of certain strains of rats results in histopathological features and clinical symptoms similar to those of humans with HAM/TSP. Transplantation of certain types of ATL cell lines into immunocompromised mice results in lymphoma. Recently, "humanized" mice have been used to model ATL development for the first time. Not all HTLV-1 animal models develop disease, and those that do vary in consistency depending on the type of monkey, strain of rat, or even type of ATL cell line used. However, the progress made using animal models cannot be understated, as it has led to insights into the mechanisms regulating viral replication, viral persistence, and disease development, and, most importantly, to model systems for testing disease treatments.

  18. RDT&E Laboratory Capacity Utilization and Productivity Measurement Methods for Financial Decision-Making within DON

    National Research Council Canada - National Science Library

    Haupt, Jeffrey

    1998-01-01

    Industry capacity utilization and productivity measurement techniques and models were evaluated for their potential application to the Naval Air Warfare Center Aircraft Division (NAWCAD) RDT&E organization...

  19. Awareness and utilization of abattoir safety measures in Katsina ...

    African Journals Online (AJOL)

    The study assessed utilization of abattoir safety measures in the Katsina South and Central senatorial districts, Nigeria. Information was obtained from a total of 80 abattoir workers in each district, while frequency counts, percentages and an independent-sample t-test were used to analyze the data. The majority, in the respective ...

  20. An Examination of Organizational Performance Measurement System Utilization

    OpenAIRE

    DeBusk, Gerald Kenneth

    2003-01-01

    This dissertation provides results of three studies, which examine the utilization of organizational performance measurement systems. Evidence gathered in the first study provides insight into the number of perspectives or components found in the evaluation of an organization's performance and the relative weight placed on those components. The evidence suggests that the number of performance measurement components and their relative composition is situational. Components depend heavily on th...

  1. Modeling regulated water utility investment incentives

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2014-12-01

    This work attempts to model the infrastructure investment choices of privatized water utilities subject to rate-of-return and price-cap regulation. The goal is to understand how regulation influences water companies' investment decisions, such as their desire to engage in transfers with neighbouring companies. We formulate a profit-maximization capacity expansion model that finds the schedule of new supply, demand management and transfer schemes that maintains the annual supply-demand balance and maximizes a company's profit under the 2010-15 price control process in England. Regulatory incentives for cost savings are also represented in the model. These include the CIS scheme for capital expenditure (capex) and incentive allowance schemes for operating expenditure (opex). The profit-maximizing investment program (what to build, when, and at what size) is compared with the least-cost program (the social optimum). We apply this formulation to several water companies in South East England to model performance and sensitivity to water network particulars. Results show that if companies are able to outperform the regulatory assumption on the cost of capital, a capital bias can be generated, because capital expenditure, unlike opex, can be remunerated through a company's regulatory capital value (RCV). The occurrence of the 'capital bias', and its extent, depends on the extent to which a company can finance its investments at a rate below the allowed cost of capital. The bias can be reduced by regulatory penalties for underperformance on capital expenditure (the CIS scheme); sensitivity analysis can be applied by varying the CIS penalty to see how, and to what extent, this impacts the capital bias effect. We show how regulatory changes could potentially be devised to partially remove the 'capital bias' effect. Solutions potentially include allowing for incentives on total expenditure rather than separately for capex and opex and allowing
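    The least-cost benchmark the abstract compares against can be sketched with a toy scheme-selection problem. Everything below, scheme names, yields, and costs, is invented for illustration; the profit-maximizing variant with RCV remuneration is deliberately omitted.

```python
# Minimal sketch of the capacity-expansion choice: pick the set of
# supply/transfer schemes that closes the supply-demand deficit at
# least annualized cost (the "social optimum"). All numbers invented.
from itertools import combinations

DEFICIT = 30.0  # Ml/d of new capacity needed for supply-demand balance

# scheme: (yield in Ml/d, annualized capex + opex in million GBP/yr)
schemes = {
    "reservoir":   (25.0, 9.0),
    "reuse_plant": (15.0, 5.0),
    "leakage_fix": (10.0, 2.5),
    "transfer_in": (12.0, 3.0),  # import from a neighbouring company
}

best = None
for r in range(1, len(schemes) + 1):
    for combo in combinations(schemes, r):
        capacity = sum(schemes[s][0] for s in combo)
        cost = sum(schemes[s][1] for s in combo)
        if capacity >= DEFICIT and (best is None or cost < best[1]):
            best = (combo, cost)

print("least-cost programme:", best[0], "costing", best[1], "M GBP/yr")
```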

  2. The Health Utilities Index (HUI®): concepts, measurement properties and applications

    Directory of Open Access Journals (Sweden)

    Horsman John

    2003-10-01

    This is a review of the Health Utilities Index (HUI®) multi-attribute health-status classification systems and single- and multi-attribute utility scoring systems. HUI refers to both the HUI Mark 2 (HUI2) and HUI Mark 3 (HUI3) instruments. The classification systems provide compact but comprehensive frameworks within which to describe health status. The multi-attribute utility functions provide all the information required to calculate single-summary scores of health-related quality of life (HRQL) for each health state defined by the classification systems. The use of HUI in clinical studies for a wide variety of conditions in a large number of countries is illustrated. HUI provides comprehensive, reliable, responsive and valid measures of health status and HRQL for subjects in clinical studies. Utility scores of overall HRQL for patients are also used in cost-utility and cost-effectiveness analyses. Population norm data are available from numerous large general population surveys. The widespread use of HUI facilitates the interpretation of results and permits comparisons of disease and treatment outcomes, and comparisons of long-term sequelae, at the local, national and international levels.

  3. A Utility Model for Teaching Load Decisions in Academic Departments.

    Science.gov (United States)

    Massey, William F.; Zemsky, Robert

    1997-01-01

    Presents a utility model for academic department decision making and describes the structural specifications for analyzing it. The model confirms the class-size utility asymmetry predicted by the authors' academic ratchet theory, but shows that the marginal utility associated with college teaching loads is always negative. Curricular structure and…

  4. Modelling of biomass utilization for energy purpose

    Energy Technology Data Exchange (ETDEWEB)

    Grzybek, Anna [ed.

    2010-07-01

    the overall farms structure, farms land distribution over several separate subfields per farm, villages' overpopulation, and very high employment in agriculture (about 27% of all employees in the national economy work in agriculture). Farmers have a low education level. In towns, 34% of the population has secondary education; in rural areas, only 15-16%. Less than 2% of inhabitants of rural areas have higher education. The structure of land use is as follows: arable land 11.5%, meadows and pastures 25.4%, forests 30.1%. Poland requires implementation of technical and technological progress for intensification of agricultural production. The competition for agricultural land arises from maintaining the current consumption level while allocating part of agricultural production to energy purposes. Agricultural land is going to be a key factor for biofuel production. In this publication, research results for the Project PL0073 'Modelling of energetical biomass utilization for energy purposes' are presented. The Project was financed from the Norwegian Financial Mechanism and the European Economic Area Financial Mechanism. The publication aims to bring the reader closer to, and explain, the problems connected with the cultivation of energy crops, and to dispel myths concerning them. Replacing fossil fuels with biomass for heat and electricity production could contribute significantly to reducing carbon dioxide emissions. Moreover, biomass crops and biomass utilization for energy purposes play an important role in diversifying agricultural production and transforming rural areas. Widening agricultural production enables the creation of new jobs. Sustainable development is going to be the fundamental rule for the evolution of Polish agriculture in the long term. Biomass utilization for energy fits well within this evolution, especially at the local level. There are two facts. The first one is that the increase of interest in energy crops in Poland has been

  5. Mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

    Definition of the radio channel utilization indicator is given. Mathematical models for assessing radio channel utilization during real-time flow transfer in a wireless self-organized network are presented. Results of estimation experiments on average radio channel utilization with and without buffering of ...

  7. Measuring and modelling concurrency

    Science.gov (United States)

    Sawers, Larry

    2013-01-01

    This article explores three critical topics discussed in the recent debate over concurrency (overlapping sexual partnerships): measurement of the prevalence of concurrency, mathematical modelling of concurrency and HIV epidemic dynamics, and measuring the correlation between HIV and concurrency. The focus of the article is the concurrency hypothesis – the proposition that presumed high prevalence of concurrency explains sub-Saharan Africa's exceptionally high HIV prevalence. Recent surveys using improved questionnaire design show reported concurrency ranging from 0.8% to 7.6% in the region. Even after adjusting for plausible levels of reporting errors, appropriately parameterized sexual network models of HIV epidemics do not generate sustainable epidemic trajectories (avoid epidemic extinction) at levels of concurrency found in recent surveys in sub-Saharan Africa. Efforts to support the concurrency hypothesis with a statistical correlation between HIV incidence and concurrency prevalence are not yet successful. Two decades of efforts to find evidence in support of the concurrency hypothesis have failed to build a convincing case. PMID:23406964

  8. Development of the multi-attribute Adolescent Health Utility Measure (AHUM)

    Directory of Open Access Journals (Sweden)

    Beusterien Kathleen M

    2012-08-01

    Full Text Available Abstract Objective Obtain utilities (preferences) for a generalizable set of health states experienced by older children and adolescents who receive therapy for chronic health conditions. Methods A health state classification system, the Adolescent Health Utility Measure (AHUM), was developed based on generic health status measures and input from children with Hunter syndrome and their caregivers. The AHUM contains six dimensions with 4–7 severity levels: self-care, pain, mobility, strenuous activities, self-image, and health perceptions. Using the time trade-off (TTO) approach, a UK population sample provided utilities for 62 of 16,800 AHUM states. A mixed effects model was used to estimate utilities for the AHUM states. The AHUM was applied to trial NCT00069641 of idursulfase for Hunter syndrome and its extension (NCT00630747). Results Observations (i.e., utilities) totaled 3,744 (12 states × 312 participants), with between 43 and 60 observations for each health state except the best and worst states, which each had 312 observations. The mean utilities for the best and worst AHUM states were 0.99 and 0.41, respectively. The random effects model was statistically significant. Discussion The AHUM health state classification system may be used in future research to enable calculation of quality-adjusted life expectancy for applicable health conditions.

  9. Performance Measurement of Mining Equipments by Utilizing OEE

    Directory of Open Access Journals (Sweden)

    Sermin Elevli

    2010-10-01

    Full Text Available Over the past century, open pit mines have steadily increased their production rate by using larger equipment which requires intensive capital investment. Low commodity prices have forced companies to decrease their unit cost by improving productivity. One way to improve productivity is to utilize equipment as effectively as possible. Therefore, accurate estimation of equipment effectiveness is very important so that it can be increased. Overall Equipment Effectiveness (OEE) is a well-known measurement method, which combines availability, performance and quality, for the evaluation of equipment effectiveness in the manufacturing industry. However, there isn't any study in the literature about how to use this metric for mining equipment such as shovels, trucks, drilling machines, etc. This paper discusses the application of OEE to measure the effectiveness of mining equipment. It identifies causes of time losses for shovel and truck operations and introduces a procedure to record time losses. A procedure to estimate the OEE of shovels and trucks has also been presented via a numerical example.
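
    Since the record describes OEE as the product of availability, performance and quality, a minimal sketch of the computation is easy to give; the shift length, rates and tonnages below are invented for illustration and do not come from the paper.

      def oee(loading_time_h, downtime_h, ideal_rate_tph, actual_tonnage_t, reject_tonnage_t):
          """Overall Equipment Effectiveness = availability x performance x quality."""
          operating_time_h = loading_time_h - downtime_h
          availability = operating_time_h / loading_time_h                      # share of scheduled time running
          performance = actual_tonnage_t / (ideal_rate_tph * operating_time_h)  # actual vs. ideal output
          quality = (actual_tonnage_t - reject_tonnage_t) / actual_tonnage_t    # share of output kept
          return availability * performance * quality

      # Hypothetical shovel shift: 8 h scheduled, 1.5 h of time losses,
      # ideal digging rate 1200 t/h, 6500 t moved, 300 t re-handled.
      print(f"OEE = {oee(8.0, 1.5, 1200.0, 6500.0, 300.0):.2%}")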

  10. Dew point measurement technique utilizing fiber cut reflection

    Science.gov (United States)

    Kostritskii, S. M.; Dikevich, A. A.; Korkishko, Yu. N.; Fedorov, V. A.

    2009-05-01

    A fiber-optic dew point hygrometer based on the change in reflection coefficient at a fiber cut has been developed and examined. We proposed and verified a model of the condensation detector's functioning principle. Experimental frost point measurements have been performed on air samples with different frost points.
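
    The underlying physics is Fresnel reflection at the cleaved fiber end: condensation replaces air with water at the cut, lowering the index contrast and hence the reflected power. A minimal sketch, assuming normal incidence and a typical silica core index (neither value is given in the record):

      def fresnel_reflectance(n1, n2):
          """Power reflection coefficient at a flat interface, normal incidence."""
          return ((n1 - n2) / (n1 + n2)) ** 2

      n_core = 1.468  # typical silica fiber core index (assumed)
      print(f"dry cut (air, n=1.000):   R = {fresnel_reflectance(n_core, 1.000):.2%}")
      print(f"wet cut (water, n=1.333): R = {fresnel_reflectance(n_core, 1.333):.2%}")

    The roughly order-of-magnitude drop in reflectance (about 3.6% to about 0.2%) is what makes a simple fiber cut usable as a condensation detector.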

  11. Quality measurement affecting surgical practice: Utility versus utopia.

    Science.gov (United States)

    Henry, Leonard R; von Holzen, Urs W; Minarich, Michael J; Hardy, Ashley N; Beachy, Wilbur A; Franger, M Susan; Schwarz, Roderich E

    2018-03-01

    The Triple Aim (improving healthcare quality, cost and patient experience) has resulted in massive healthcare "quality" measurement. For many surgeons the origins, intent and strengths of this measurement barrage seem nebulous, though their shortcomings are noticeable. This article reviews the major organizations and programs (namely the Centers for Medicare and Medicaid Services) driving the somewhat burdensome healthcare quality climate. The success of this top-down approach is mixed, and far from convincing. We contend that the current programs disproportionately reflect the definitions of quality from (and the interests of) the national payer perspective, rather than a more balanced representation of all stakeholders' interests, most importantly patients' beneficence. The result is an environment more like performance management than one of valid quality assessment. Suggestions for a more meaningful construction of surgical quality measurement are offered, as well as a strategy to describe surgical quality from all of the stakeholders' perspectives. Our hope is to entice surgeons to engage in institution-level quality improvement initiatives that promise utility and are less utopian than what is currently present. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. On measurements and models

    International Nuclear Information System (INIS)

    McDonald, J. C.

    2012-01-01

    The 2011 Nobel Prize for physics was awarded to three astronomers for their measurements that were interpreted as providing evidence of an acceleration in the expansion of the universe rather than a slowing, as had been expected. The subsequent theoretical explanation, or model, of the observed phenomenon led to the postulation of 'dark energy'. The implications of this new form of energy are startling to say the least. The following quote came from an article about the Nobel Prize award in the New York Times: if the universe continues accelerating, astronomers say, rather than coasting gently into the night, distant galaxies will eventually be moving apart so quickly that they cannot communicate with one another and all the energy will be sucked out of the universe. (author)

  13. A note on additive risk measures in rank-dependent utility

    NARCIS (Netherlands)

    Goovaerts, M.J.; Kaas, R.; Laeven, R.J.A.

    2010-01-01

    This note proves that risk measures obtained by applying the equivalent utility principle in rank-dependent utility are additive if and only if the utility function is linear or exponential and the probability weighting (distortion) function is the identity.
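
    For context, the equivalent utility principle the note builds on can be written as follows; this is the standard textbook form in my own notation, not an excerpt from the paper:

      % Risk measure \rho(X) defined implicitly by indifference at wealth w:
      u(w) \;=\; \mathbb{E}_g\!\left[ u\bigl(w + \rho(X) - X\bigr) \right]
      % In rank-dependent utility the expectation \mathbb{E}_g is taken under the
      % distorted (probability-weighted) distribution g \circ P. The note's result:
      % \rho(X+Y) = \rho(X) + \rho(Y) for independent X, Y iff u is linear or
      % exponential and g is the identity.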

  14. Animal models of contraception: utility and limitations

    Directory of Open Access Journals (Sweden)

    Liechty ER

    2015-04-01

    Full Text Available Emma R Liechty,1 Ingrid L Bergin,1 Jason D Bell2 1Unit for Laboratory Animal Medicine, 2Program on Women's Health Care Effectiveness Research, Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI, USA Abstract: Appropriate animal modeling is vital for the successful development of novel contraceptive devices. Advances in reproductive biology have identified novel pathways for contraceptive intervention. Here we review species-specific anatomic and physiologic considerations impacting preclinical contraceptive testing, including efficacy testing, mechanistic studies, device design, and modeling off-target effects. Emphasis is placed on the use of nonhuman primate models in contraceptive device development. Keywords: nonhuman primate, preclinical, in vivo, contraceptive devices

  15. Animal models of asthma: utility and limitations

    Directory of Open Access Journals (Sweden)

    Aun MV

    2017-11-01

    Full Text Available Marcelo Vivolo Aun,1,2 Rafael Bonamichi-Santos,1,2 Fernanda Magalhães Arantes-Costa,2 Jorge Kalil,1 Pedro Giavina-Bianchi1 1Clinical Immunology and Allergy Division, Department of Internal Medicine, University of São Paulo School of Medicine, São Paulo, Brazil, 2Laboratory of Experimental Therapeutics (LIM20), Department of Internal Medicine, University of Sao Paulo, Sao Paulo, Brazil Abstract: Clinical studies in asthma are not able to clear up all aspects of disease pathophysiology. Animal models have been developed to better understand these mechanisms and to evaluate both safety and efficacy of therapies before starting clinical trials. Several species of animals have been used in experimental models of asthma, such as Drosophila, rats, guinea pigs, cats, dogs, pigs, primates and equines. However, the species most commonly studied in the last two decades is the mouse, particularly the BALB/c strain. Animal models of asthma try to mimic the pathophysiology of human disease. They classically include two phases: sensitization and challenge. Sensitization is traditionally performed by intraperitoneal and subcutaneous routes, but intranasal instillation of allergens has been increasingly used because human asthma is induced by inhalation of allergens. Challenges with allergens are performed through aerosol, intranasal or intratracheal instillation. However, few studies have compared different routes of sensitization and challenge. The causative allergen is another important issue in developing a good animal model. Despite being more traditional and leading to intense inflammation, ovalbumin has been replaced by aeroallergens, such as house dust mites, to use the allergens that cause human disease. Finally, researchers should define outcomes to be evaluated, such as serum-specific antibodies, airway hyperresponsiveness, inflammation and remodeling. The present review analyzes the animal models of asthma, assessing differences between species, allergens and routes

  16. Utilizing Rapid Prototyping for Architectural Modeling

    Science.gov (United States)

    Kirton, E. F.; Lavoie, S. D.

    2006-01-01

    This paper will discuss our approach to, success with and future direction in rapid prototyping for architectural modeling. The premise that this emerging technology has broad and exciting applications in the building design and construction industry will be supported by visual and physical evidence. This evidence will be presented in the form of…

  17. Evaluation of Usability Utilizing Markov Models

    Science.gov (United States)

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
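
    To make the approach concrete, the sketch below treats user navigation through a remote learning system as an absorbing Markov chain and computes the expected number of steps to task completion; the states and transition probabilities are invented for illustration, as the record does not provide them.

      import numpy as np

      # States: 0 = home, 1 = course page, 2 = task completed (absorbing).
      # Rows are the current state, columns the next state; each row sums to 1.
      P = np.array([
          [0.2, 0.7, 0.1],   # from home
          [0.3, 0.3, 0.4],   # from course page
          [0.0, 0.0, 1.0],   # completed: absorbing
      ])

      Q = P[:2, :2]                      # transitions among transient states
      N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
      steps = N.sum(axis=1)              # expected steps to absorption
      print(f"expected steps to complete the task from home: {steps[0]:.1f}")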

  18. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of trusted measurement supporting behavior measurement, based on the trusted connection architecture (TCA) with three entities and three levels, is proposed, and a framework to illustrate the model is given. The model synthesizes three trusted measurement dimensions (trusted identity, trusted status and trusted behavior), satisfies the essential requirements of trusted measurement, and unifies the TCA with three entities and three levels.

  19. Aerial Measuring System Sensor Modeling

    International Nuclear Information System (INIS)

    Detwiler, R.S.

    2002-01-01

    This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits for various ground contaminations and the necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response of the 3-element NaI detector pod and the High-Purity Germanium (HPGe) detector in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 microCi/m2. The helicopter calculations modeled the transport of americium-241 (241Am) as this is
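
    The altitude dependence these calculations capture is dominated by inverse-square spreading plus exponential air attenuation. A minimal sketch of the point-source case (the attenuation coefficient is a rough sea-level value for 662 keV photons, assumed for illustration rather than taken from the report):

      import math

      def relative_flux(altitude_m, mu_air_per_m=0.0093):
          """Uncollided photon flux from a point source directly below the
          detector, relative to 1 m: inverse-square spreading times air
          attenuation exp(-mu * d)."""
          return (1.0 / altitude_m ** 2) * math.exp(-mu_air_per_m * altitude_m)

      for alt in (30, 75, 150):   # representative survey altitudes in metres
          print(f"{alt:4d} m: {relative_flux(alt):.2e}")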

  20. Utility of Social Modeling for Proliferation Assessment - Preliminary Findings

    International Nuclear Information System (INIS)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-01-01

    Often the methodologies for assessing proliferation risk are focused on the inherent vulnerability of nuclear energy systems and associated safeguards. For example, an accepted approach involves ways to measure the intrinsic and extrinsic barriers to potential proliferation. This paper describes a preliminary investigation into non-traditional use of social and cultural information to improve proliferation assessment and advance the approach to assessing nuclear material diversion. Proliferation resistance assessments, safeguards assessments and related studies typically create technical information about the vulnerability of a nuclear energy system to diversion of nuclear material. The purpose of this research project is to find ways to integrate social information with technical information by explicitly considering the role of culture, groups and/or individuals in factors that impact the possibility of proliferation. When final, this work is expected to describe and demonstrate the utility of social science modeling in proliferation and proliferation risk assessments.

  1. Modified Smith-predictor multirate control utilizing secondary process measurements

    Directory of Open Access Journals (Sweden)

    Rolf Ergon

    2007-01-01

    Full Text Available The Smith predictor is a well-known control structure for industrial time delay systems, where the basic idea is to estimate the non-delayed process output by use of a process model, and to use this estimate in an inner feedback control loop combined with an outer feedback loop based on the delayed estimation error. The model used may be either mechanistic or identified from input-output data. The paper discusses improvements of the Smith predictor for systems where secondary process measurements without time delay are also available as a basis for the primary output estimation. The estimator may then be identified also in the common case where primary outputs are sampled at a lower rate than the secondary outputs. A simulation example demonstrates the feasibility and advantages of the suggested control structure.
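
    For readers unfamiliar with the structure, the sketch below simulates a basic discrete-time Smith predictor on a first-order plant with dead time; the plant, delay and PI gains are invented for illustration and are not taken from the paper (which additionally exploits secondary measurements).

      import numpy as np

      # First-order plant with dead time: y[k+1] = a*y[k] + b*u[k-d].
      a, b, d = 0.9, 0.1, 10      # plant pole, gain, delay in samples (assumed)
      Kp, Ki = 1.0, 0.3           # PI gains tuned on the delay-free model
      n, r = 150, 1.0             # horizon and setpoint

      y = np.zeros(n)             # true (delayed) plant output
      ym = np.zeros(n)            # delay-free model output
      u = np.zeros(n)
      ui = 0.0                    # integrator state

      for k in range(n - 1):
          # Smith predictor: delay-free model output plus delayed model error.
          ym_delayed = ym[k - d] if k >= d else 0.0
          e = r - (ym[k] + (y[k] - ym_delayed))
          ui += Ki * e
          u[k] = Kp * e + ui
          ym[k + 1] = a * ym[k] + b * u[k]
          y[k + 1] = a * y[k] + (b * u[k - d] if k >= d else 0.0)

      print(f"output after {n} samples: {y[-1]:.3f} (setpoint {r})")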

  2. Aerial measuring system sensor modeling

    International Nuclear Information System (INIS)

    Detwiler, Rebecca

    2002-01-01

    The AMS fixed-wing and rotary-wing systems are critical National Nuclear Security Administration (NNSA) Emergency Response assets. This project is principally focused on the characterization of the sensors utilized with these systems via radiation transport calculations. The Monte Carlo N-Particle code (MCNP), which has been developed at Los Alamos National Laboratory, was used to model the detector response of the AMS fixed-wing and helicopter systems. To validate the calculations, benchmark measurements were made for simple source-detector configurations. The fixed-wing system is an important tool in response to incidents involving the release of mixed fission products (a commercial power reactor release), the threat or actual explosion of a Radiological Dispersal Device, and the loss or theft of a large industrial source (a radiography source). Calculations modeled the spectral response of the sensors carried, a 3-element NaI detector pod and an HPGe detector, in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photo-peak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 microCi/m2

  3. Fiscal 1995 coal production/utilization technology promotion subsidy/clean coal technology promotion business/regional model survey. Study report on `Environmental load reduction measures: feasibility study of a coal utilization eco/energy supply system` (interim report); 1995 nendo sekitan seisan riyo gijutsu shinkohi hojokin clean coal technology suishin jigyo chiiki model chosa. `Kankyo fuka teigen taisaku: sekitan riyo eko energy kyokyu system no kanosei chosa` chosa hokokusho (chukan hokoku)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    Coal utilization is expected to grow substantially according to the long-term energy supply/demand plan. To further expand future coal utilization, however, it is indispensable to reduce the environmental loads of its total use together with other energies. In this survey, a regional model survey was conducted as a follow-up to the environmental load reduction measures using highly cleaned coal taken in fiscal 1993 and 1994. Concretely, a model system was assumed which combined facilities for mixed combustion of coal with other energy sources (hulls, bagasse, waste, etc.) and facilities for effective use of burned ash, and the potential reduction in environmental loads of the model system was studied. The technology of mixed combustion of coal with other energy sources is still in a developmental stage, with no precedents in the country. Therefore, this mixed combustion technology is an important field which is very useful for future energy supply/demand and environmental issues. 34 refs., 27 figs., 48 tabs.

  4. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  5. Simultaneous measurement of glucose transport and utilization in the human brain

    OpenAIRE

    Shestov, Alexander A.; Emir, Uzay E.; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R.; Öz, Gülin

    2011-01-01

    Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, KMt and Vmaxt, in humans have so far been obtained by measuring steady-state brain glucose levels by proton (1H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose tra...

  6. A mangrove creek restoration plan utilizing hydraulic modeling.

    Science.gov (United States)

    Marois, Darryl E; Mitsch, William J

    2017-11-01

    Despite the valuable ecosystem services provided by mangrove ecosystems, they remain threatened around the globe. Urban development has been a primary cause of mangrove destruction and deterioration in south Florida, USA, for the last several decades. As a result, the restoration of mangrove forests has become an important topic of research. Using field sampling and remote sensing, we assessed the past and present hydrologic conditions of a mangrove creek and its connected mangrove forest and brackish marsh systems located on the coast of Naples Bay in southwest Florida. We concluded that the hydrology of these connected systems had been significantly altered from its natural state due to urban development. We propose here a mangrove creek restoration plan that would extend the existing creek channel 1.1 km inland through the adjacent mangrove forest and up to an adjacent brackish marsh. We then tested the hydrologic implications using a hydraulic model of the mangrove creek calibrated with tidal data from Naples Bay and water levels measured within the creek. The calibrated model was then used to simulate the resulting hydrology of our proposed restoration plan. Simulation results showed that the proposed creek extension would restore a twice-daily flooding regime to a majority of the adjacent mangrove forest and that there would still be minimal tidal influence on the brackish marsh area, keeping its salinity at an acceptable level. This study demonstrates the utility of combining field data and hydraulic modeling to aid in the design of mangrove restoration plans.

  7. Environmental Measurements and Modeling

    Science.gov (United States)

    Environmental measurement is any data collection activity involving the assessment of chemical, physical, or biological factors in the environment which affect human health. Learn more about these programs and tools that aid in environmental decisions

  8. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
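
    A simple instance of the technique: fit a discrete ARX transfer-function model of the operator by least squares from input-output records. The "true" operator dynamics and noise level below are synthetic stand-ins for real tracking data.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      u = rng.standard_normal(n)            # tracking-task input (synthetic)
      y = np.zeros(n)
      for k in range(2, n):                 # synthetic operator dynamics
          y[k] = 1.2 * y[k-1] - 0.4 * y[k-2] + 0.5 * u[k-1] \
                 + 0.05 * rng.standard_normal()

      # Least-squares ARX(2,1) fit: regress y[k] on y[k-1], y[k-2], u[k-1].
      Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
      theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
      print("estimated [a1, a2, b1]:", np.round(theta, 3))  # close to [1.2, -0.4, 0.5]

    Re-fitting the same structure on segments recorded under different stress levels would show how the coefficients, and hence the operator's effective transfer function, vary with stress.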

  9. Subjective Expected Utility: A Model of Decision-Making.

    Science.gov (United States)

    Fischoff, Baruch; And Others

    1981-01-01

    Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)
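
    The SEU rule itself is compact enough to state here; in the standard formulation (my notation, not quoted from the article), an act a with outcomes x_i and subjective probabilities p_i is valued as:

      % Subjective expected utility of act a; the decision maker
      % chooses the act that maximizes this sum.
      \mathrm{SEU}(a) \;=\; \sum_{i} p_i \, u(x_i)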

  10. Kinetic models of cell growth, substrate utilization and bio ...

    African Journals Online (AJOL)

    Bio-decolorization kinetic studies of distillery effluent in a batch culture were conducted using Aspergillus fumigatus. A simple model was proposed using the logistic equation for growth and Leudeking-Piret kinetics for bio-decolorization and for substrate utilization. The proposed models appeared to provide a suitable ...
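
    The named equations are standard, so a sketch is straightforward: logistic growth for biomass X, Leudeking-Piret kinetics for the decolorization product P, and an analogous balance for substrate S. All parameter values below are invented for illustration, not fitted to the study's data.

      import numpy as np
      from scipy.integrate import solve_ivp

      mu, Xmax = 0.25, 5.0       # 1/h, g/L: logistic growth parameters (assumed)
      alpha, beta = 0.8, 0.02    # growth- and non-growth-associated decolorization
      gamma, delta = 1.5, 0.05   # substrate-utilization analogues

      def rhs(t, s):
          X, P, S = s
          dX = mu * X * (1 - X / Xmax)     # logistic growth
          dP = alpha * dX + beta * X       # Leudeking-Piret product formation
          dS = -(gamma * dX + delta * X)   # substrate consumption
          return [dX, dP, dS]

      sol = solve_ivp(rhs, (0, 48), [0.1, 0.0, 20.0])
      X, P, S = sol.y[:, -1]
      print(f"after 48 h: biomass {X:.2f} g/L, product {P:.2f}, substrate {S:.2f} g/L")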

  11. Precision gravity measurement utilizing Accelerex vibrating beam accelerometer technology

    Science.gov (United States)

    Norling, Brian L.

    Tests run using Sundstrand vibrating beam accelerometers to sense microgravity are described. Lunar-solar tidal effects were used as a highly predictable signal which varies by approximately 200 billionths of the full-scale gravitation level. Test runs of 48-h duration were used to evaluate stability, resolution, and noise. Test results on the Accelerex accelerometer show accuracies suitable for precision applications such as gravity mapping and gravity density logging. The test results indicate that Accelerex technology, even with an instrument design and signal processing approach not optimized for microgravity measurement, can achieve 48-nano-g (1 sigma) or better accuracy over a 48-h period. This value includes contributions from instrument noise and random walk, combined bias and scale factor drift, and thermal modeling errors as well as external contributions from sampling noise, test equipment inaccuracies, electrical noise, and cultural noise induced acceleration.

  12. Utility of Social Modeling for Proliferation Assessment - Preliminary Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-06-01

    This Preliminary Assessment draft report will present the results of a literature search and preliminary assessment of the body of research, analysis methods, models and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report will provide: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program. Specifically this draft report is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessment, proliferation resistance assessments, safeguards assessments and other related studies. These assessments often use and create technical information about the State’s posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate if and how social science modeling has utility in proliferation assessment.

  13. Simultaneous measurement of glucose transport and utilization in the human brain

    Science.gov (United States)

    Shestov, Alexander A.; Emir, Uzay E.; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R.

    2011-01-01

    Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, KMt and Vmaxt, in humans have so far been obtained by measuring steady-state brain glucose levels by proton (1H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose transport necessitated assuming a constant cerebral metabolic rate of glucose (CMRglc) obtained from other tracer studies, such as 13C NMR. Here we present new methodology to simultaneously obtain kinetic parameters for glucose transport and utilization in the human brain by fitting both dynamic and steady-state 1H NMR data with a reversible, non-steady-state Michaelis-Menten model. Dynamic data were obtained by measuring brain and plasma glucose time courses during glucose infusions to raise and maintain plasma concentration at ∼17 mmol/l for ∼2 h in five healthy volunteers. Steady-state brain vs. plasma glucose concentrations were taken from literature and the steady-state portions of data from the five volunteers. In addition to providing simultaneous measurements of glucose transport and utilization and obviating assumptions for constant CMRglc, this methodology does not necessitate infusions of expensive or radioactive tracers. Using this new methodology, we found that the maximum transport capacity for glucose through the blood-brain barrier was nearly twofold higher than maximum cerebral glucose utilization. The glucose transport and utilization parameters were consistent with previously published values for human brain. PMID:21791622
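
    The reversible Michaelis-Menten balance used in this literature has the following general shape (notation mine; the paper's exact parameterization may differ): net transport across the blood-brain barrier minus utilization.

      \frac{dG_{\mathrm{brain}}}{dt}
        \;=\; \frac{T_{\max}\,\bigl(G_{\mathrm{plasma}} - G_{\mathrm{brain}}\bigr)}
                   {K_M^{t} + G_{\mathrm{plasma}} + G_{\mathrm{brain}}}
        \;-\; \mathrm{CMR}_{\mathrm{glc}}
      % Fitting both the dynamic rise and the steady-state plateau of G_brain(t)
      % lets T_max, K_M^t and CMR_glc be estimated together, as described above.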

  14. Estimation of utility values from visual analog scale measures of health in patients undergoing cardiac surgery

    Directory of Open Access Journals (Sweden)

    Oddershede L

    2014-01-01

    Full Text Available Lars Oddershede,1,2 Jan Jesper Andreasen,1 Lars Ehlers2 1Department of Cardiothoracic Surgery, Center for Cardiovascular Research, Aalborg University Hospital, Aalborg, Denmark; 2Danish Center for Healthcare Improvements, Faculty of Social Sciences and Faculty of Health Sciences, Aalborg University, Aalborg East, Denmark Introduction: In health economic evaluations, mapping can be used to estimate utility values from other health outcomes in order to calculate quality adjusted life-years. Currently, no methods exist to map visual analog scale (VAS) scores to utility values. This study aimed to develop and propose a statistical algorithm for mapping five dimensions of health, measured on VASs, to utility scores in patients suffering from cardiovascular disease. Methods: Patients undergoing coronary artery bypass grafting at Aalborg University Hospital in Denmark were asked to score their health using the five VAS items (mobility, self-care, ability to perform usual activities, pain, and presence of anxiety or depression) and the EuroQol 5 Dimensions questionnaire. Regression analysis was used to estimate four mapping models from patients' age, sex, and the self-reported VAS scores. Prediction errors were compared between mapping models and on subsets of the observed utility scores. Agreement between predicted and observed values was assessed using Bland–Altman plots. Results: Random effects generalized least squares (GLS) regression yielded the best results when quadratic terms of VAS scores were included. Mapping models fitted using the Tobit model and censored least absolute deviation regression did not appear superior to GLS regression. The mapping models were able to explain approximately 63%–65% of the variation in the observed utility scores. The mean absolute error of predictions increased as the observed utility values decreased. Conclusion: We concluded that it was possible to predict utility scores from VAS scores of the five

  15. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  16. Measures to remove impediments to better utilization. Renewable energy sources

    International Nuclear Information System (INIS)

    Diekmann, J.; Eichelbroenner, M.; Langniss, O.

    1997-01-01

    The utilization of renewable energy sources meets with a number of obstacles created in particular by economic framework conditions, regulatory provisions, lengthy administrative procedures, insufficient information, and in part also by the reluctance of banks and utilities. This is why an action programme was put underway by the Forum fuer Zukunftsenergien, together with the Berlin-based DIW (German economic research institute) and the Stuttgart-based DLR (German aerospace research institute), financed from public funds of the Federal Ministry of Economics. Under this programme, almost 900 operators of systems for electricity generation from wind power, hydropower, biomass, ambient heat, solar thermal energy and photovoltaic conversion were interviewed. Based on the information obtained, the article reveals the existing impediments and proposes actions for overcoming the obstacles. (orig.) [de]

  17. The changing utility workforce and the emergence of building information modeling in utilities

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, A. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Utilities are faced with the extensive replacement of a workforce that is now reaching retirement age. New personnel will have varying skill levels and different expectations in relation to design tools. This paper discussed methods of facilitating knowledge transfer from the retiring workforce to new staff using rules-based design software. It was argued that while nothing can replace the experiential knowledge of long-term engineers, software with built-in validations can accelerate training and building information modelling (BIM) processes. Younger personnel will expect a user interface paradigm that is based on their past gaming and work experiences. Visualization, simulation, and modelling approaches were reviewed. 3 refs.

  18. The Sustainable Energy Utility (SEU) Model for Energy Service Delivery

    Science.gov (United States)

    Houck, Jason; Rickerson, Wilson

    2009-01-01

    Climate change, energy price spikes, and concerns about energy security have reignited interest in state and local efforts to promote end-use energy efficiency, customer-sited renewable energy, and energy conservation. Government agencies and utilities have historically designed and administered such demand-side measures, but innovative…

  19. Smartphone photography utilized to measure wrist range of motion.

    Science.gov (United States)

    Wagner, Eric R; Conti Mica, Megan; Shin, Alexander Y

    2018-02-01

    The purpose was to determine if smartphone photography is a reliable tool for measuring wrist motion. Smartphones were used to take digital photos of both wrists in 32 normal participants (64 wrists) at the extremes of wrist motion. The smartphone measurements were compared with clinical goniometry measurements. There was a very high correlation between the clinical goniometry and smartphone measurements, as the concordance coefficients were high for radial deviation, ulnar deviation, wrist extension and wrist flexion. The Pearson coefficients also demonstrated the high precision of the smartphone measurements. The Bland-Altman plots demonstrated that 29-31 of 32 smartphone measurements were within the 95% confidence interval of the clinical measurements for all positions of the wrists. There was high reliability between photographs taken by the volunteer and by the researcher, as well as high inter-observer reliability. Smartphone digital photography is a reliable and accurate tool for measuring wrist range of motion. Level of evidence: II.

  20. Unified Model for Generation Complex Networks with Utility Preferential Attachment

    International Nuclear Information System (INIS)

    Wu Jianjun; Gao Ziyou; Sun Huijun

    2006-01-01

    In this paper, based on utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure named the super scale network is found, which exhibits a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.

  1. Maximizing the model for Discounted Stream of Utility from ...

    African Journals Online (AJOL)

    Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided up to the level where methods of solving ordinary differential equations would be applied, but they left off there as a result of the mathematical complexity ...

  2. Multiattribute health utility scoring for the computerized adaptive measure CAT-5D-QOL was developed and validated.

    Science.gov (United States)

    Kopec, Jacek A; Sayre, Eric C; Rogers, Pamela; Davis, Aileen M; Badley, Elizabeth M; Anis, Aslam H; Abrahamowicz, Michal; Russell, Lara; Rahman, Md Mushfiqur; Esdaile, John M

    2015-10-01

    The CAT-5D-QOL is a previously reported item response theory (IRT)-based computerized adaptive tool to measure five domains (attributes) of health-related quality of life. The objective of this study was to develop and validate a multiattribute health utility (MAHU) scoring method for this instrument. The MAHU scoring system was developed in two stages. In phase I, we obtained standard gamble (SG) utilities for 75 hypothetical health states in which only one domain varied (15 states per domain). In phase II, we obtained SG utilities for 256 multiattribute states. We fit a multiplicative regression model to predict SG utilities from the five IRT domain scores. The prediction model was constrained using data from phase I. We validated MAHU scores by comparing them with the Health Utilities Index Mark 3 (HUI3) and directly measured utilities and by assessing between-group discrimination. MAHU scores have a theoretical range from -0.842 to 1. In the validation study, the scores were, on average, higher than HUI3 utilities and lower than directly measured SG utilities. MAHU scores correlated strongly with the HUI3 (Spearman ρ = 0.78) and discriminated well between groups expected to differ in health status. Results reported here provide initial evidence supporting the validity of the MAHU scoring system for the CAT-5D-QOL. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Coupling model of energy consumption with changes in environmental utility

    International Nuclear Information System (INIS)

    He Hongming; Jim, C.Y.

    2012-01-01

    This study explores the relationships between metropolis energy consumption and environmental utility changes by a proposed Environmental Utility of Energy Consumption (EUEC) model. Based on the dynamic equilibrium of input–output economics theory, it considers three simulation scenarios: fixed technology, technological innovation, and the green-building effect. It is applied to analyse Hong Kong in 1980–2007. Continual increase in energy consumption with rapid economic growth degraded environmental utility. First, energy consumption at fixed technology was determined by economic outcome. In 1990, it reached a critical balanced state when energy consumption was 22×10^9 kWh. Before 1990 (x1 < 22×10^9 kWh), a rise in energy consumption improved both economic development and environmental utility. After 1990 (x1 > 22×10^9 kWh), expansion of energy consumption facilitated socio-economic development but suppressed environmental benefits. Second, technological innovation strongly influenced energy demand and improved environmental benefits. The balanced state remained in 1999, when energy consumption reached 32.33×10^9 kWh. Technological innovation dampened energy consumption by 12.99% relative to the fixed-technology condition. Finally, green buildings reduced energy consumption by an average of 17.5% in 1990–2007. They contributed significantly to energy saving, and buffered temperature fluctuations between the external and internal environment. The case investigations verified the efficiency of the EUEC model, which can effectively evaluate the interplay of energy consumption and environmental quality. - Highlights: ► We explore relationships between metropolis energy consumption and environmental utility. ► An Environmental Utility of Energy Consumption (EUEC) model is proposed. ► Technological innovation mitigates energy consumption impacts on environmental quality. ► Technological innovation decreases demand of energy consumption more than the fixed technology scenario

  4. Utilization of Customer Satisfaction Measurement in Czech Tourism

    OpenAIRE

    Tomas Sadilek

    2015-01-01

    The paper deals with the method of satisfaction measurement as one of the marketing techniques used for detecting visitors' satisfaction in tourist regions in the Czech Republic. In the treatise, we try to analyse visitors' satisfaction with twenty-four partial factors affecting total satisfaction. In the theoretical part of the paper, methodological approaches to satisfaction measurement are described and various methods for satisfaction measurement are presented with focus on the...

  5. Measures for carbon dioxide problem and utilization of energy

    International Nuclear Information System (INIS)

    Kojima, Toshinori

    1992-01-01

    Global environment problems include water, expansion of deserts, weather, tropical forests, wild animals, ocean pollution, nuclear waste contamination, acid rain, the ozone layer and so on, and population, food, energy and resources are the problems surrounding them. It is clear that their origins are attributable to development and consumption largely dependent on the intentions of developed countries and to the population problem of developing countries. In this report, the discharge of carbon dioxide, which causes the greenhouse effect, and its relation to energy are discussed. The increase of carbon dioxide concentration, its release from fossil fuel, the destruction of forests, the balance of carbon on the earth, the development of new energy such as solar energy, the transport of new energy, secondary energy systems and the role of carbon dioxide, the transfer to low-carbon fuel and the carbon reduction treatment of fuel, the utilization of unused energy and energy prices, the efficiency of energy utilization, the heightening of efficiency of energy conversion, energy conservation and the breakaway from the culture of wasteful energy use, and the recovery, preservation and use of discharged carbon dioxide are described. (K.I.)

  6. Model franchise agreements with public utilities. Musterkonzessionsvertraege mit Energieversorgungsunternehmen

    Energy Technology Data Exchange (ETDEWEB)

    Menking, C. (Niedersaechsischer Staedte- und Gemeindebund, Hannover (Germany, F.R.))

    1989-01-01

    In 1987, the Committee of Town and Community Administrations of Lower Saxony established the task force 'Franchise Agreements'. This is a forum where town and community officials interested in energy issues cooperate. The idea was to improve conditions and participation possibilities for local administrations in contracts with their present utilities, and to draw up, and coordinate with the utilities, a franchise agreement creating possibilities for the communities, inter alia, in the areas of power supply concepts, advice on energy conservation, and energy generation. A model franchise agreement for the electricity sector is presented in its full wording. (orig./HSCH).

  7. Sustainable geothermal utilization - Case histories; definitions; research issues and modelling

    International Nuclear Information System (INIS)

    Axelsson, Gudni

    2010-01-01

    Sustainable development by definition meets the needs of the present without compromising the ability of future generations to meet their own needs. The Earth's enormous geothermal resources have the potential to contribute significantly to sustainable energy use worldwide as well as to help mitigate climate change. Experience from the use of numerous geothermal systems worldwide lasting several decades demonstrates that by maintaining production below a certain limit the systems reach a balance between net energy discharge and recharge that may be maintained for a long time (100-300 years). Modelling studies indicate that the effect of heavy utilization is often reversible on a time-scale comparable to the period of utilization. Thus, geothermal resources can be used in a sustainable manner either through (1) constant production below the sustainable limit, (2) step-wise increase in production, (3) intermittent excessive production with breaks, and (4) reduced production after a shorter period of heavy production. The long production histories that are available for low-temperature as well as high-temperature geothermal systems distributed throughout the world, provide the most valuable data available for studying sustainable management of geothermal resources, and reservoir modelling is the most powerful tool available for this purpose. The paper presents sustainability modelling studies for the Hamar and Nesjavellir geothermal systems in Iceland, the Beijing Urban system in China and the Olkaria system in Kenya as examples. Several relevant research issues have also been identified, such as the relevance of system boundary conditions during long-term utilization, how far reaching interference from utilization is, how effectively geothermal systems recover after heavy utilization and the reliability of long-term (more than 100 years) model predictions. (author)

  8. Nondestructive measurement of esophageal biaxial mechanical properties utilizing sonometry

    Science.gov (United States)

    Aho, Johnathon M.; Qiang, Bo; Wigle, Dennis A.; Tschumperlin, Daniel J.; Urban, Matthew W.

    2016-07-01

    Malignant esophageal pathology typically requires resection of the esophagus and reconstruction to restore foregut continuity. Reconstruction options are limited and morbid. The esophagus represents a useful target for tissue engineering strategies based on its relative simplicity in comparison to other organs. The ideal tissue-engineered conduit would have sufficient, and ideally matched, mechanical tolerances to native esophageal tissue. Current methods for mechanical testing of esophageal tissues both in vivo and ex vivo are typically destructive, alter tissue conformation, ignore anisotropy, or cannot be performed in fluid media. The aim of this study was to investigate biomechanical properties of swine esophageal tissues through nondestructive testing utilizing sonometry ex vivo. This method allows for biomechanical determination of tissue properties, particularly longitudinal and circumferential moduli and strain energy functions. The relative contributions of the mucosal-submucosal and muscular layers are compared to composite esophagi. Swine thoracic esophageal tissues (n = 15) were tested by pressure loading using a continuous pressure pump system to generate stress. Preconditioning of tissue was performed by pressure loading with the pump system and pre-straining the tissue to in vivo length before data were recorded. Sonometry using piezocrystals was utilized to determine longitudinal and circumferential strain on five composite esophagi. Similarly, five mucosa-submucosal and five muscular layers from thoracic esophagi were tested independently. This work on esophageal tissues is consistent with reported uniaxial and biaxial mechanical testing results using strain energy theory; it also provides high-resolution displacements, preserves native architectural structure and allows assessment of biomechanical properties in fluid media. This method may be of use to characterize mechanical properties of tissue-engineered esophageal

  9. Nonlinear Growth Models as Measurement Models: A Second-Order Growth Curve Model for Measuring Potential.

    Science.gov (United States)

    McNeish, Daniel; Dumas, Denis

    2017-01-01

    Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components: ability, capacity, and availability. Ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single-timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. Then, we provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set, using a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate our proposed model as applied to measuring potential within an educational context. The advantages of this approach compared to currently utilized methods are discussed, as are future directions and limitations.
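
    In the reparameterized form the record describes, ability rises toward capacity along a Michaelis-Menten curve, and availability is the remaining gap; schematically (notation mine, not the authors'):

      \mathrm{ability}_i(t) \;=\; \frac{\mathrm{capacity}_i \, t}{K_i + t},
      \qquad
      \mathrm{availability}_i(t) \;=\; \mathrm{capacity}_i - \mathrm{ability}_i(t)
      % As t grows, ability_i(t) approaches capacity_i asymptotically,
      % matching the definitions of the three components above.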

  10. Utilization of minicomputer in the radiocarbon analysis measurements

    International Nuclear Information System (INIS)

    Szarka, J.; Krnac, S.

    1984-01-01

    Possibilities of minicomputer applications for radiocarbon analysis with multielement proportional counters are considered. Both off-line and on-line operation of the measuring system are possible. A TPA-70 minicomputer and CAMAC electronics are used in on-line operation. Block diagrams of data acquisition and data processing, as well as the block diagram of the data evaluation program, are given; the system not only increases the precision of the measurements but also reduces the measuring time by one third compared with conventional methods.

  11. Improving surgeon utilization in an orthopedic department using simulation modeling

    Directory of Open Access Journals (Sweden)

    Simwita YW

    2016-10-01

    Full Text Available Yusta W Simwita, Berit I Helgheim Department of Logistics, Molde University College, Molde, Norway Purpose: Worldwide more than two billion people lack appropriate access to surgical services due to the mismatch between existing human resources and patient demand. Improving utilization of existing workforce capacity can reduce the existing gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department, with a focus on improving utilization of surgeons while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that influence poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be highly improved, that is, surgeon utilization improved and patient waiting time reduced. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding current waiting times at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however the findings can be applied to improve other orthopedic care processes with similar operational characteristics. Keywords: waiting time, patient, health care process
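
    To illustrate the kind of discrete event simulation used here, the sketch below (using the simpy library) models patients passing through an ancillary-service step and then a surgeon consultation; all arrival rates, service times and capacities are invented for the example and are not taken from the study.

      import random
      import simpy

      random.seed(42)
      times_in_system = []

      def patient(env, ancillary, surgeon):
          arrival = env.now
          with ancillary.request() as req:    # ancillary step (x-ray/lab) first
              yield req
              yield env.timeout(random.expovariate(1 / 15))  # ~15 min service
          with surgeon.request() as req:      # then the surgeon consultation
              yield req
              yield env.timeout(random.expovariate(1 / 20))  # ~20 min consult
          times_in_system.append(env.now - arrival)

      def arrivals(env, ancillary, surgeon):
          while True:
              yield env.timeout(random.expovariate(1 / 25))  # ~1 patient / 25 min
              env.process(patient(env, ancillary, surgeon))

      env = simpy.Environment()
      ancillary = simpy.Resource(env, capacity=1)
      surgeon = simpy.Resource(env, capacity=1)
      env.process(arrivals(env, ancillary, surgeon))
      env.run(until=8 * 60)                   # one 8-hour clinic day

      print(f"{len(times_in_system)} patients finished; mean time in system "
            f"{sum(times_in_system) / len(times_in_system):.1f} min")

    Reordering the two steps, or changing capacities, and re-running the simulation is exactly the kind of scenario comparison the study describes.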

  12. Burnup verification measurements at a US nuclear utility using the FORK measurement system

    International Nuclear Information System (INIS)

    Ewing, R.I.; Bosler, G.E.; Walden, G.

    1993-01-01

    The FORK measurement system, designed at Los Alamos National Laboratory (LANL) for the International Atomic Energy Agency (IAEA) safeguards program, has been used to examine spent reactor fuel assemblies at Duke Power Company's Oconee Nuclear Station. The FORK system measures the passive neutron and gamma-ray emission from spent fuel assemblies while in the storage pool. These measurements can be correlated with burnup and cooling time, and can be used to verify the reactor site records. Verification measurements may be used to help ensure nuclear criticality safety when burnup credit is applied to spent fuel transport and storage systems. By taking into account the reduced reactivity of spent fuel due to its burnup in the reactor, burnup credit results in more efficient and economic transport and storage. The objectives of these tests are to demonstrate the applicability of the FORK system to verify reactor records and to develop optimal procedures compatible with utility operations. The test program is a cooperative effort supported by Sandia National Laboratories, the Electric Power Research Institute (EPRI), Los Alamos National Laboratory, and the Duke Power Company

  13. A catastrophe model for the prospect-utility theory question.

    Science.gov (United States)

    Oliva, Terence A; McDade, Sean R

    2008-07-01

    Anomalies have played a big part in the analysis of decision making under risk. Both expected utility and prospect theories were born out of anomalies exhibited by actual decision making behavior. Since the same individual can use both expected utility and prospect approaches at different times, it seems there should be a means of uniting the two. This paper turns to nonlinear dynamical systems (NDS), specifically a catastrophe model, to suggest an 'out of the box' line of solution toward integration. We use a cusp model to create a value surface whose control dimensions are involvement and gains versus losses. Including 'involvement' as a variable captures the importance of the individual's psychological state and provides a rationale for how decision makers' shifts between expected utility and prospect behavior might occur. Additionally, it provides a possible explanation for the apparently even more irrational decisions that individuals make when highly emotionally involved. We estimate the catastrophe model using a sample of 997 gamblers who attended a casino and compare it to the linear model using regression. Hence, we have actual data from individuals making real bets under real conditions.
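
    For reference, the canonical cusp catastrophe takes the following form; mapping the asymmetry control to gains versus losses and the bifurcation control to involvement follows the description above, though the authors' exact parameterization is not given in the record:

      % Cusp potential over the valuation x, with controls alpha and beta:
      V(x) \;=\; \tfrac{1}{4}x^{4} \;-\; \tfrac{1}{2}\beta x^{2} \;-\; \alpha x,
      \qquad \text{equilibria: } x^{3} - \beta x - \alpha = 0
      % alpha: asymmetry (gains vs. losses); beta: bifurcation (involvement).
      % For beta > 0 two stable sheets coexist, allowing sudden jumps between
      % expected-utility-like and prospect-like valuation.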

  14. Recent advances in modeling nutrient utilization in ruminants.

    Science.gov (United States)

    Kebreab, E; Dijkstra, J; Bannink, A; France, J

    2009-04-01

    Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, postabsorptive metabolism, and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient utilization and excretion with respect to environmental issues. Accurate prediction of fermentation stoichiometry has an impact on estimating the type of energy-yielding substrate available to the animal, and the ratio of lipogenic to glucogenic VFA is an important determinant of methanogenesis. Recent advances in modeling VFA stoichiometry offer ways for dietary manipulation to shift the fermentation in favor of glucogenic VFA. Increasing energy to the animal by supplementing with starch can lead to health problems such as subacute rumen acidosis caused by rumen pH depression. Mathematical models have been developed to describe changes in rumen pH and rumen fermentation. Models that relate rumen temperature to rumen pH have also been developed and have the potential to aid in the diagnosis of subacute rumen acidosis. The effect of pH has been studied mechanistically, and in such models, fractional passage rate has a large impact on substrate degradation and microbial efficiency in the rumen and should be an important theme in future studies. The efficiency with which energy is utilized by ruminants has been updated in recent studies. Mechanistic models of N utilization indicate that reducing dietary protein concentration, matching protein degradability to the microbial requirement, and increasing the energy status of the animal will reduce the output of N as waste. Recent mechanistic P models calculate the P requirement by taking into account P recycled through saliva and endogenous losses. Mechanistic P models suggest reducing current P amounts for lactating dairy cattle to at least 0.35% P in the diet, with a potential reduction of up to 1.3 kt/yr. A model that

  15. Magnetic susceptibility measuring probe utilizing a compensation coil

    International Nuclear Information System (INIS)

    Bonnet, Jean; Fournet, Julien.

    1978-01-01

    This invention concerns a magnetic susceptibility measuring probe. It is used, inter alia, in logging, namely continuous logging of the magnetic susceptibility of the ground throughout the length of a borehole. The purpose of this invention is to increase the sensitivity of this type of probe by creating a side focusing effect. To this end, it provides for the use of a compensation winding, coaxial with the measurement winding and arranged symmetrically to the latter with respect to the centre of the induction windings.

  16. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

    Full Text Available Yang and Qiu proposed and then recently improved an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived a representation of risky choices consisting of an expected utility term plus a constant multiple of the Shannon entropy, further demonstrating the reasonableness of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficients are more efficient than those composed of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both expected utility and Shannon entropy when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
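    A minimal sketch of how such an EU-E style score might be computed for a candidate stock is given below; the convex-combination form and the normalization are assumptions for illustration and may differ from Yang and Qiu's exact definition.

```python
# EU-E style risk score for a discrete risky action: a tradeoff between
# normalized Shannon entropy (uncertainty) and expected utility (value).
import math

def eu_e_score(outcomes, probs, utility, trade_off):
    """Lower score = more attractive; trade_off in [0, 1] weights entropy
    against expected utility (form assumed for illustration)."""
    assert abs(sum(probs) - 1.0) < 1e-9
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    max_entropy = math.log(len(probs))       # uniform-distribution bound
    expected_utility = sum(p * utility(x) for p, x in zip(probs, outcomes))
    return trade_off * (entropy / max_entropy) - (1 - trade_off) * expected_utility

# Hypothetical two-stock comparison under a log utility
u = math.log
stock_a = eu_e_score([90, 100, 120], [0.2, 0.5, 0.3], u, trade_off=0.5)
stock_b = eu_e_score([70, 100, 140], [0.3, 0.4, 0.3], u, trade_off=0.5)
print("prefer A" if stock_a < stock_b else "prefer B")
```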

  17. FY 2000 Feasibility study on the environmentally-friendly coal utilization systems as part of the international project for coal utilization measures. Feasibility study on supporting introduction of the environmentally-friendly coal utilization systems in Vietnam (Model project for introduction of advanced coal preparation systems); 2000 nendo kokusai sekitan riyo taisaku jigyo chosa hokokusho. Kankyo chowagata sekitan riyo system kanosei chosa jigyo Vietnam ni okeru kankyo chowagata sekitan riyo system donyu shien jigyo (kodo sentan system donyu model jigyo kanosei chosa jigyo)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-06-01

    The feasibility study was conducted on a model project in Vietnam, aimed at solving the environmental pollution problems resulting from the use of coal by demonstrating and disseminating Japan's environmental technologies in the Southeast Asian countries. The feasibility study was conducted for the Cua Ong Coal Preparation Enterprise, which has the largest coal preparation capacity in Vietnam as well as port facilities. It treats raw coal from 10 coal mines for classification and preparation, and ships coal of various types that meet the standards for domestic use and export. The survey results point out that unrecovered coal remains in waste water discharged from the coal preparation plants and pollutes the sea area, and that the quantity of refuse increases because of the unrecovered coal it contains. The environmental technologies to be introduced include modification to a variable wave pattern jigging separator, a refuse height measuring instrument and automatic controller, a circulating heavy-medium gravimeter, a highly functional settling pond, and flocculant facilities. (NEDO)

  18. The utility of resilience as a conceptual framework for understanding and measuring LGBTQ health.

    Science.gov (United States)

    Colpitts, Emily; Gahagan, Jacqueline

    2016-04-06

    Historically, lesbian, gay, bisexual, transgender and queer (LGBTQ) health research has focused heavily on the risks for poor health outcomes, obscuring the ways in which LGBTQ populations maintain and improve their health across the life course. In this paper we argue that informing culturally competent health policy and systems requires shifting the LGBTQ health research evidence base away from deficit-focused approaches toward strengths-based approaches to understanding and measuring LGBTQ health. We recently conducted a scoping review with the aim of exploring strengths-based approaches to LGBTQ health research. Our team found that the concept of resilience emerged as a key conceptual framework. This paper discusses a subset of our scoping review findings on the utility of resilience as a conceptual framework in understanding and measuring LGBTQ health. The findings of our scoping review suggest that the ways in which resilience is defined and measured in relation to LGBTQ populations remains contested. Given that LGBTQ populations have unique lived experiences of adversity and discrimination, and may also have unique factors that contribute to their resilience, the utility of heteronormative and cis-normative models of resilience is questionable. Our findings suggest that there is a need to consider further exploration and development of LGBTQ-specific models and measures of resilience that take into account structural, social, and individual determinants of health and incorporate an intersectional lens. While we fully acknowledge that the resilience of LGBTQ populations is central to advancing LGBTQ health, there remains much work to be done before the concept of resilience can be truly useful in measuring LGBTQ health.

  19. Canyon air flow measurement utilizing ASME standard pitot tube arrays

    International Nuclear Information System (INIS)

    Moncrief, B.R.

    1990-01-01

    The Savannah River Site produces nuclear materials for national defense. In addition to nuclear reactors, the site has separation facilities for reprocessing irradiated nuclear fuel. The chemical separation of highly radioactive materials takes place by remote control in large buildings called canyons. Personnel in these buildings are shielded from radiation by thick concrete walls. Contaminated air is exhausted from the canyons and contaminants are removed by sand filters prior to release to the atmosphere through a stack. When these facilities were built on a crash basis in the early 1950's, inadequate means were provided for pressure and air flow measurement. This presentation describes the challenge we faced in retrofitting a highly radioactive, heavily shielded facility with instrumentation to provide this capability

  20. Utility of proverb interpretation measures with cardiac transplant candidates.

    Science.gov (United States)

    Dugbartey, A T

    1998-12-01

    To assess metaphorical understanding and proverb interpretation in cardiac transplant candidates, the neuropsychological assessment records of 22 adults with end-stage cardiac disease under consideration for transplantation were analyzed. Neuropsychological tests consisted of the Controlled Oral Word Association Test, Halstead Category Test, Rey-Osterrieth Complex Figure Test (Copy), Trail Making Test, and summed scores for the proverb items of the WAIS-R Comprehension subtest. Analysis showed that the group tended to interpret proverbs literally. Proverb scores were significantly associated with scores on the Similarities and Picture Arrangement subtests of the WAIS-R. There was a moderate negative association between the number of reported heart attacks and proverb scores. The need for brief yet robust assessments, including measures of inferential thinking and conceptualization, in transplant candidates is highlighted.

  1. An approach for evaluating utility-financed energy conservation programs. The economic welfare model

    Energy Technology Data Exchange (ETDEWEB)

    Costello, K W; Galen, P S

    1985-09-01

    The main objective of this paper is to illustrate how the economic welfare model may be used to measure the economic efficiency effects of utility-financed energy conservation programs. The economic welfare model is the theoretical structure used in this paper to develop a cost/benefit test. This test defines the net benefit of a conservation program as the change in the sum of consumer and producer surplus. The authors advocate using the proposed cost/benefit model as a screening tool to eliminate from more detailed review those programs whose expected net benefits are less than zero. The paper presents estimates of the net benefit derived from different specified cost/benefit models for four illustrative pilot programs. These models are representative of those that have been applied or are under review by utilities and public utility commissions. The numerical results show that net benefit is greatly affected by the assumptions made about the nature of welfare gains to program participants. The main conclusion that emerges from the numerical results is that the selection of a cost/benefit model is a crucial element in evaluating utility-financed energy conservation programs. The paper also briefly addresses some of the major unresolved issues in utility-financed energy conservation programs. 2 figs., 3 tabs., 10 refs. (A.V.)
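    In its simplest form, the screening test described here can be written as follows; this is a generic statement of the welfare criterion, and the treatment of program costs within the surplus terms is an assumption of this sketch.

```latex
\mathrm{NB} \;=\; \Delta CS \;+\; \Delta PS,
\qquad
\text{screen out programs with } \mathrm{NB} < 0,
```

    where ΔCS and ΔPS are the changes in consumer and producer surplus induced by the conservation program.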

  2. Utility of Angle Correction for Hemodynamic Measurements with Doppler Echocardiography.

    Science.gov (United States)

    Sigurdsson, Martin I; Eoh, Eun J; Chow, Vinca W; Waldron, Nathan H; Cleve, Jayne; Nicoara, Alina; Swaminathan, Madhav

    2018-04-06

    The routine application of angle correction (AnC) in hemodynamic measurements with transesophageal echocardiography currently is not recommended but potentially could be beneficial. The authors hypothesized that AnC can be applied reliably and may change grading of aortic stenosis (AS). Retrospective analysis. Single institution, university hospital. During phase I, use of AnC was assessed in 60 consecutive patients with intraoperative transesophageal echocardiography. During phase II, 129 images from a retrospective cohort of 117 cases were used to quantify AS by mean pressure gradient. A panel of observers used custom-written software in Java to measure intra-individual and inter-individual correlation in AnC application, correlation with preoperative transthoracic echocardiography gradients, and regrading of AS after AnC. For phase I, the median AnC was 21 (16-35) degrees, and 17% of patients required no AnC. For phase II, the median AnC was 7 (0-15) degrees, and 37% of assessed images required no AnC. The mean inter-individual and intra-individual correlation for AnC was 0.50 (95% confidence interval [CI] 0.49-0.52) and 0.87 (95% CI 0.82-0.92), respectively. AnC did not improve agreement with the transthoracic echocardiography mean pressure gradient. The mean inter-rater and intra-rater agreement for grading AS severity was 0.82 (95% CI 0.81-0.83) and 0.95 (95% CI 0.91-0.95), respectively. A total of 241 (7%) AS gradings were reclassified after AnC was applied, mostly when the uncorrected mean gradient was within 5 mmHg of the severity classification cutoff. AnC can be performed with a modest inter-rater and intra-rater correlation and a high degree of inter-rater and intra-rater agreement for AS severity grading. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Modeling of ultrasonic processes utilizing a generic software framework

    Science.gov (United States)

    Bruns, P.; Twiefel, J.; Wallaschek, J.

    2017-06-01

    Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be considered, so it is rather difficult to build a single detailed overall model. Developing partial models is a common approach to overcome this difficulty. In this paper a generic but simple software framework is presented which allows arbitrary partial models to be coupled via slave modules with well-defined interfaces and a master module for coordination. Two examples are given to present the developed framework. The first is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position for an oscillator utilized in ultrasonic-assisted machining of stone is determined. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece's material characteristics are presented. For both applications input and output variables are defined to meet the requirements of the framework's interface.
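    A minimal sketch of such a master/slave coupling pattern is shown below; all module names, the interface, and the toy dynamics are hypothetical, since the abstract does not describe the framework's actual API.

```python
# Master module coordinates slave modules; each slave wraps one partial
# model behind a well-defined step() interface.
from abc import ABC, abstractmethod

class SlaveModule(ABC):
    @abstractmethod
    def step(self, t, dt, inputs):
        """Advance the partial model one step; return outputs as a dict."""

class Master:
    """Routes each slave's outputs to other slaves' inputs every step."""
    def __init__(self, slaves, couplings):
        self.slaves = slaves        # name -> SlaveModule
        self.couplings = couplings  # (src, out_key) -> (dst, in_key)
        self.state = {name: {} for name in slaves}

    def run(self, t_end, dt):
        t = 0.0
        while t < t_end:
            outputs = {name: s.step(t, dt, self.state[name])
                       for name, s in self.slaves.items()}
            for (src, k_out), (dst, k_in) in self.couplings.items():
                self.state[dst][k_in] = outputs[src][k_out]
            t += dt
        return outputs

class Oscillator(SlaveModule):
    def __init__(self): self.v = 0.0
    def step(self, t, dt, inputs):
        load = inputs.get("force", 0.0)
        self.v += dt * (1.0 - 0.1 * self.v - load)  # placeholder dynamics
        return {"velocity": self.v}

class Load(SlaveModule):
    def step(self, t, dt, inputs):
        return {"force": 0.5 * inputs.get("velocity", 0.0)}  # placeholder law

master = Master(
    slaves={"osc": Oscillator(), "load": Load()},
    couplings={("osc", "velocity"): ("load", "velocity"),
               ("load", "force"): ("osc", "force")},
)
print(master.run(t_end=1.0, dt=0.01))
```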

  4. Direct estimates of unemployment rate and capacity utilization in macroeconometric models

    Energy Technology Data Exchange (ETDEWEB)

    Klein, L R [Univ. of Pennsylvania, Philadelphia; Su, V

    1979-10-01

    The problem of measuring resource-capacity utilization as a factor in overall economic efficiency is examined, and a tentative solution is offered. A macro-econometric model is applied to the aggregate production function by linking unemployment rate and capacity utilization rate. Partial- and full-model simulations use Wharton indices as a filter and produce direct estimates of unemployment rates. The simulation paths of durable-goods industries, which are more capital-intensive, are found to be more sensitive to business cycles than the nondurable-goods industries. 11 references.

  5. Risk Decision Making Model for Reservoir Floodwater resources Utilization

    Science.gov (United States)

    Huang, X.

    2017-12-01

    Floodwater resources utilization (FRU) can alleviate the shortage of water resources, but it carries risks. In order to utilize floodwater resources safely and efficiently, it is necessary to study the risk of reservoir FRU. In this paper, the risk rate of exceeding the design flood water level and the risk rate of exceeding the safety discharge are estimated. Based on the principle of minimum risk and maximum benefit of FRU, a multi-objective risk decision making model for FRU is constructed. Probability theory and mathematical statistics are used to calculate the risk rate; the Cobb-Douglas (C-D) production function method and the emergy analysis method are used to calculate the risk benefit; the risk loss is related to the flood inundation area and the loss per unit area; and the multi-objective decision making problem of the model is solved by the constraint method. Taking the Shilianghe reservoir in Jiangsu Province as an example, the optimal equilibrium solution for FRU of the Shilianghe reservoir is found using the risk decision making model, and the validity and applicability of the model are verified.

  6. Hedonic travel cost and random utility models of recreation

    Energy Technology Data Exchange (ETDEWEB)

    Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies

    1998-07-09

    Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.
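    Both approaches rest on the standard random utility foundation, which in generic textbook form (not necessarily the exact specification estimated in this record) is:

```latex
U_{ij} \;=\; V_{ij} + \varepsilon_{ij},
\qquad
V_{ij} \;=\; \beta^{\top} x_{ij},
\qquad
P_{ij} \;=\; \frac{\exp(V_{ij})}{\sum_{k \in C_i} \exp(V_{ik})},
```

    where U_{ij} is visitor i's utility for site j, x_{ij} collects site-quality attributes and travel cost, and the logit choice probability P_{ij} follows when the errors are i.i.d. type-I extreme value.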

  7. Clinical Utility of the DSM-5 Alternative Model of Personality Disorders

    DEFF Research Database (Denmark)

    Bach, Bo; Markon, Kristian; Simonsen, Erik

    2015-01-01

    In Section III, Emerging Measures and Models, DSM-5 presents an Alternative Model of Personality Disorders, which is an empirically based model of personality pathology measured with the Level of Personality Functioning Scale (LPFS) and the Personality Inventory for DSM-5 (PID-5). These novel instruments assess level of personality impairment and pathological traits. Objective. A number of studies have supported the psychometric qualities of the LPFS and the PID-5, but the utility of these instruments in clinical assessment and treatment has not been extensively evaluated. The goal of this study was to evaluate the clinical utility of this alternative model of personality disorders. Method. We administered the LPFS and the PID-5 to psychiatric outpatients diagnosed with personality disorders and other nonpsychotic disorders. The personality profiles of six characteristic patients were inspected...

  8. Models of Credit Risk Measurement

    OpenAIRE

    Hagiu Alina

    2011-01-01

    Credit risk is defined as the risk of financial loss caused by the failure of a counterparty. According to statistics, for financial institutions credit risk is much more important than market risk; poor diversification of credit risk is the main cause of bank failures. Only recently did the banking industry begin to measure credit risk in a portfolio context, alongside the development of risk management that started with value-at-risk (VaR) models. Once measured, credit risk can be diversif...

  9. Internet advertising effectiveness measurement model

    OpenAIRE

    Marcinkevičiūtė, Milda

    2007-01-01

    The research object of the master thesis is internet advertising effectiveness measurement. The goal of the work is, after theoretical studies of internet advertising effectiveness measurement (theoretical articles, practical research, etc.), to formulate the conceptual IAEM model and examine it empirically. The main tasks of the work are: to analyze internet advertising, its features, purposes, formats, functions, advantages, and disadvantages; to present the effectiveness of i...

  10. A utility-theoretic model for QALYs and willingness to pay.

    Science.gov (United States)

    Klose, Thomas

    2003-01-01

    Despite the widespread use of quality-adjusted life years (QALYs) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY model, an additively separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY weights depend solely on aspects of health-related quality of life, wealth-standardized QALY weights might vary with the wealth level in the presented extension of the original QALY model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY gain as well as on the disease burden when health influences the marginal utility of wealth. Further research seems indicated to explore this structural aspect of preferences over health and wealth and to quantify its impact. Copyright 2002 John Wiley & Sons, Ltd.
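    In generic form, the additively separable representation referred to above can be sketched as follows; the notation is illustrative, and the paper's exact axioms impose further conditions on how wealth enters the weights.

```latex
U(h_1,\dots,h_T,\,w_1,\dots,w_T) \;=\; \sum_{t=1}^{T} u_t(h_t, w_t),
\qquad
\mathrm{QALYs} \;=\; \sum_{t=1}^{T} q(h_t),
```

    where q(h) in [0, 1] is the utility weight of health state h; consistent QALY measurement requires that q not vary with the wealth level w_t.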

  11. Extension of the behavioral model of healthcare utilization with ethnically diverse, low-income women.

    Science.gov (United States)

    Keenan, Lisa A; Marshall, Linda L; Eve, Susan

    2002-01-01

    Psychosocial vulnerabilities were added to a model of healthcare utilization. This extension was tested among low-income women, with ethnicity addressed as a moderator. Structured interviews were conducted at 2 points in time, approximately 1 year apart. The constructs of psychosocial vulnerability, demographic predisposing factors, barriers, and illness were measured by multiple indicators to allow the use of structural equation modeling to analyze results. The models were tested separately for each ethnic group. Community office. African-American (N = 266), Euro-American (N = 200), and Mexican-American (N = 210) women were recruited from the Dallas metropolitan area to participate in Project Health Outcomes of Women, a multi-year, multi-wave study. Face-to-face interviews were conducted with this sample. Participants had been in heterosexual relationships for at least 1 year, were between 20 and 49 years of age, and had incomes less than 200% of the national poverty level. Healthcare utilization was defined as physician visits and general healthcare visits. Illness mediated the effect of psychosocial vulnerability on healthcare utilization for African Americans and Euro-Americans. The model for Mexican Americans was the most complex: the effect of psychosocial vulnerability on illness was partially mediated by barriers, which also directly affected utilization. Psychosocial vulnerabilities were significant predictors of healthcare use for all low-income women in this study. The final models for the 2 minority groups, African Americans and Mexican Americans, were quite different. Hence, women of color should not be considered a homogeneous group in comparison to Euro-Americans.

  12. Utility of Small Animal Models of Developmental Programming.

    Science.gov (United States)

    Reynolds, Clare M; Vickers, Mark H

    2018-01-01

    Any effective strategy to tackle the global obesity and rising noncommunicable disease epidemic requires an in-depth understanding of the mechanisms that underlie these conditions that manifest as a consequence of complex gene-environment interactions. In this context, it is now well established that alterations in the early life environment, including suboptimal nutrition, can result in an increased risk for a range of metabolic, cardiovascular, and behavioral disorders in later life, a process preferentially termed developmental programming. To date, most of the mechanistic knowledge around the processes underpinning development programming has been derived from preclinical research performed mostly, but not exclusively, in laboratory mouse and rat strains. This review will cover the utility of small animal models in developmental programming, the limitations of such models, and potential future directions that are required to fully maximize information derived from preclinical models in order to effectively translate to clinical use.

  13. Animal models of myasthenia gravis: utility and limitations

    Science.gov (United States)

    Mantegazza, Renato; Cordiglieri, Chiara; Consonni, Alessandra; Baggi, Fulvio

    2016-01-01

    Myasthenia gravis (MG) is a chronic autoimmune disease caused by the immune attack of the neuromuscular junction. Antibodies directed against the acetylcholine receptor (AChR) induce receptor degradation, complement cascade activation, and postsynaptic membrane destruction, resulting in functional reduction in AChR availability. Besides anti-AChR antibodies, other autoantibodies are known to play pathogenic roles in MG. The experimental autoimmune MG (EAMG) models have been of great help over the years in understanding the pathophysiological role of specific autoantibodies and T helper lymphocytes and in suggesting new therapies for prevention and modulation of the ongoing disease. EAMG can be induced in mice and rats of susceptible strains that show clinical symptoms mimicking the human disease. EAMG models are helpful for studying both the muscle and the immune compartments to evaluate new treatment perspectives. In this review, we concentrate on recent findings on EAMG models, focusing on their utility and limitations. PMID:27019601

  14. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    Science.gov (United States)

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optical components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge are superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters, wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography, on sensor outputs was investigated to obtain a novel analytical method for predicting linearity and sensitivity. According to the model, all parameters except the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used in the experiment. Increased measurement sensitivity can be obtained by utilizing a shorter wavelength, a smaller sensor distance, and higher edge quality. The model was experimentally validated and the results showed good agreement with the theoretical estimates. This sensor is expected to be easily implemented in nanopositioning stage applications at low cost, and the mathematical model introduced here can be used as a tool for design and performance estimation of knife edge-based sensors.
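    The underlying physics is classical straight-edge Fresnel diffraction; the following sketch computes the standard intensity profile behind a knife edge, with all geometry values assumed rather than taken from the paper.

```python
# Straight-edge Fresnel diffraction: relative intensity I/I0 versus
# lateral position x behind the knife edge.
import numpy as np
from scipy.special import fresnel

wavelength = 633e-9  # HeNe laser wavelength in m (assumed)
d1, d2 = 0.05, 0.05  # laser-to-edge and edge-to-detector distances in m (assumed)

def knife_edge_intensity(x):
    """Relative intensity I/I0 at lateral detector position x (m).

    v is the dimensionless Fresnel parameter; at v = 0 (geometric shadow
    boundary) the intensity is exactly 1/4 of the unobstructed value.
    """
    v = x * np.sqrt(2.0 * (d1 + d2) / (wavelength * d1 * d2))
    S, C = fresnel(v)  # note: scipy returns (S, C) in that order
    return 0.5 * ((0.5 + C) ** 2 + (0.5 + S) ** 2)

x = np.linspace(-50e-6, 150e-6, 5)
for xi, I in zip(x, knife_edge_intensity(x)):
    print(f"x = {xi * 1e6:7.1f} um   I/I0 = {I:.3f}")
```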

  15. Rigorously testing multialternative decision field theory against random utility models.

    Science.gov (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
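    A minimal simulation sketch of MDFT's preference-accumulation dynamics is given below; the matrices and parameters are illustrative placeholders written from the published description of the model, not the values estimated in this study.

```python
# MDFT-style preference accumulation: P(t+1) = S P(t) + C M[:,k] + noise,
# with stochastic attention switching between attributes.
import numpy as np

rng = np.random.default_rng(0)

M = np.array([[3.0, 1.0],   # option A attribute values (assumed)
              [1.0, 3.0],   # option B
              [2.0, 2.0]])  # option C
w = np.array([0.5, 0.5])    # attention probabilities per attribute (assumed)
n = M.shape[0]

# Contrast matrix compares each option against the mean of the others
C = np.eye(n) - (np.ones((n, n)) - np.eye(n)) / (n - 1)
# Feedback matrix: self-memory plus weak lateral inhibition (placeholders)
S = 0.95 * np.eye(n) - 0.02 * (np.ones((n, n)) - np.eye(n))

def simulate_choice(steps=100):
    P = np.zeros(n)                              # preference state
    for _ in range(steps):
        k = rng.choice(len(w), p=w)              # attention switches stochastically
        V = C @ M[:, k] + rng.normal(0, 0.1, n)  # momentary valence + noise
        P = S @ P + V                            # preference accumulation
    return int(np.argmax(P))                     # fixed-horizon stopping rule

choices = [simulate_choice() for _ in range(2000)]
freq = np.bincount(choices, minlength=n) / len(choices)
print(dict(zip("ABC", freq.round(3))))
```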

  16. Measuring the costs of photovoltaics in an electric utility planning framework

    International Nuclear Information System (INIS)

    Awerbuch, Shimon

    1993-01-01

    Utility planning models evaluate alternative generating options using the revenue requirements method, an engineering-oriented, discounted cash-flow (DCF) methodology that has been widely used for over three decades. Discounted cash-flow techniques were conceived in the context of active, expense-intensive technologies, such as conventional, fuel-intensive power generation. Photovoltaic (PV) technology, by contrast, is passive and capital intensive, attributes that are similar to those of other new process technologies, such as computer-integrated manufacturing. Discounted cash-flow techniques have a dismal record for correctly valuing new technologies with these attributes, in part because their benefits cannot be easily measured using traditional accounting concepts. This paper examines how these issues affect cost measurement for both conventional and PV-based electricity, and presents kWh-cost estimates for three technologies (coal, gas, and PV) using risk-adjusted approaches, which suggest that PV costs are generally equivalent to those of gas combined-cycle and about twice the cost of base-load coal (environmental externalities are ignored). Finally, the paper evaluates independent power purchases for a typical US utility and finds that in such a setting the cost of PV-based power is comparable to the firm's published avoided costs. (author)

  17. Division Quilts: A Measurement Model

    Science.gov (United States)

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  18. Exponential GARCH Modeling with Realized Measures of Volatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Huang, Zhuo

    We introduce the Realized Exponential GARCH model that can utilize multiple realized volatility measures for the modeling of a return series. The model specifies the dynamic properties of both returns and realized measures, and is characterized by a flexible modeling of the dependence between returns and volatility. We apply the model to DJIA stocks and an exchange traded fund that tracks the S&P 500 index and find that specifications with multiple realized measures dominate those that rely on a single realized measure. The empirical analysis suggests some convenient simplifications.
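    Written from memory of Hansen and Huang's formulation, and therefore to be checked against the paper, the model's three-equation structure is roughly:

```latex
r_t \;=\; \mu + \sqrt{h_t}\, z_t, \qquad z_t \sim \text{i.i.d.}(0,1),\\
\log h_t \;=\; \omega + \beta \log h_{t-1} + \tau(z_{t-1}) + \gamma\, u_{t-1},\\
\log x_{k,t} \;=\; \xi_k + \varphi_k \log h_t + \delta_k(z_t) + u_{k,t},
\qquad k = 1,\dots,K,
```

    with one measurement equation per realized measure x_{k,t}; the leverage functions τ(·) and δ_k(·) capture the dependence between returns and volatility.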

  19. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Directory of Open Access Journals (Sweden)

    Steven T Piantadosi

    2015-04-01

    Full Text Available Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage in choice, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions are linearly separable) into a psychologically plausible heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice.

  20. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Science.gov (United States)

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613

  1. Animal models of GM2 gangliosidosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Lawson CA

    2016-07-01

    Full Text Available Cheryl A Lawson,1,2 Douglas R Martin2,3 1Department of Pathobiology, 2Scott-Ritchey Research Center, 3Department of Anatomy, Physiology and Pharmacology, Auburn University College of Veterinary Medicine, Auburn, AL, USA Abstract: GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. Keywords: GM2 gangliosidosis, Tay–Sachs disease, Sandhoff disease, lysosomal storage disorder, sphingolipidosis, brain disease

  2. Animal models of GM2 gangliosidosis: utility and limitations

    Science.gov (United States)

    Lawson, Cheryl A; Martin, Douglas R

    2016-01-01

    GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. PMID:27499644

  3. Utilizing Chamber Data for Developing and Validating Climate Change Models

    Science.gov (United States)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g., growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers were found to overestimate responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g., sidelighting, edge effects, increased temperature and VPD), and this limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in such a manner as to account for differences in aerodynamic conductance, temperature, and VPD between the chamber and the field.

  4. UTILIZATION OF MULTIPLE MEASUREMENTS FOR GLOBAL THREE-DIMENSIONAL MAGNETOHYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Wang, A. H.; Wu, S. T.; Tandberg-Hanssen, E.; Hill, Frank

    2011-01-01

    Magnetic field measurements, line of sight (LOS) and/or vector magnetograms, have been used in a variety of solar physics studies. Currently, the global transverse velocity measurements near the photosphere from the Global Oscillation Network Group (GONG) are available. We have utilized these multiple observational data, for the first time, to present a data-driven global three-dimensional and resistive magnetohydrodynamic (MHD) simulation, and to investigate the energy transport across the photosphere to the corona. The measurements of the LOS magnetic field and transverse velocity reflect the effects of convective zone dynamics and provide information from the sub-photosphere to the corona. In order to self-consistently include the observables on the lower boundary as the inputs to drive the model, a set of time-dependent boundary conditions is derived by using the method of characteristics. We selected GONG's global transverse velocity measurements of synoptic chart CR2009 near the photosphere and SOLIS full-resolution LOS magnetic field maps of synoptic chart CR2009 on the photosphere to simulate the equilibrium state and compute the energy transport across the photosphere. To show the advantage of using both observed magnetic field and transverse velocity data, we have studied two cases: (1) with the inputs of the LOS magnetic field and transverse velocity measurements, and (2) with the input of the LOS magnetic field and without the input of transverse velocity measurements. For these two cases, the simulation results presented here are a three-dimensional coronal magnetic field configuration, density distributions on the photosphere and at 1.5 solar radii, and the solar wind in the corona. The deduced physical characteristics are the total current helicity and the synthetic emission. By comparing all the physical parameters of case 1 and case 2 and their synthetic emission images with the EIT image, we find that using both the measured magnetic field and the

  5. Economic analysis of open space box model utilization in spacecraft

    Science.gov (United States)

    Mohammad, Atif F.; Straub, Jeremy

    2015-05-01

    The amount of stored space-related data grows daily. The utilization of Big Data and related tools to perform ETL (Extract, Transform and Load) applications will soon be pervasive in the space sciences. We have entered a crucial time in which using Big Data can be the difference (for terrestrial applications) between organizations underperforming and outperforming their peers. The same is true for NASA and other space agencies, as well as for individual missions and the highly competitive process of mission data analysis and publication. In most industries, established competitors and new entrants alike will use data-driven approaches to capture the value of Big Data archives. The Open Space Box Model is poised to take the proverbial "giant leap", as it provides autonomic data processing and communications for spacecraft. Economic value generated from such use of data processing can be found in terrestrial organizations in every sector, such as healthcare and retail. Retailers, for example, perform research on Big Data by utilizing sensor-driven embedded data in products within their stores and warehouses to determine how these products are actually used in the real world.

  6. A structured review of health utility measures and elicitation in advanced/metastatic breast cancer

    Directory of Open Access Journals (Sweden)

    Hao Y

    2016-06-01

    Full Text Available Yanni Hao,1 Verena Wolfram,2 Jennifer Cook2 1Novartis Pharmaceuticals, East Hanover, NJ, USA; 2Adelphi Values, Bollington, UK Background: Health utilities are increasingly incorporated in health economic evaluations. Different elicitation methods, direct and indirect, have been established in the past. This study examined the evidence on health utility elicitation previously reported in advanced/metastatic breast cancer and aimed to link these results to requirements of reimbursement bodies. Methods: Searches were conducted using a detailed search strategy across several electronic databases (MEDLINE, EMBASE, Cochrane Library, and EconLit), online sources (the Cost-effectiveness Analysis Registry and the Health Economics Research Center), and web sites of health technology assessment (HTA) bodies. Publications were selected based on the search strategy and the overall study objectives. Results: A total of 768 publications were identified in the searches, and 26 publications, comprising 18 journal articles and eight submissions to HTA bodies, were included in the evidence review. Most journal articles derived utilities from the European Quality of Life Five-Dimensions questionnaire (EQ-5D). Other utility measures, such as the direct methods standard gamble (SG), time trade-off (TTO), and visual analog scale (VAS), were less frequently used. Several studies described mapping algorithms to generate utilities from disease-specific health-related quality of life (HRQOL) instruments such as the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire – Core 30 (EORTC QLQ-C30), the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire – Breast Cancer 23 (EORTC QLQ-BR23), the Functional Assessment of Cancer Therapy – General questionnaire (FACT-G), and the Utility-Based Questionnaire-Cancer (UBQ-C); most used EQ-5D as the reference. Sociodemographic factors that affect health utilities, such as age, sex

  7. Modeling a Packed Bed Reactor Utilizing the Sabatier Process

    Science.gov (United States)

    Shah, Malay G.; Meier, Anne J.; Hintze, Paul E.

    2017-01-01

    A numerical model is being developed using Python which characterizes the conversion and temperature profiles of a packed bed reactor (PBR) that utilizes the Sabatier process; the reaction produces methane and water from carbon dioxide and hydrogen. While the specific kinetics of the Sabatier reaction on the Ru/Al2O3 catalyst pellets are unknown, an empirical reaction rate equation [1] is used for the overall reaction. As this reaction is highly exothermic, proper thermal control is of the utmost importance to ensure maximum conversion and to avoid reactor runaway. It is therefore necessary to determine what wall temperature profile will ensure safe and efficient operation of the reactor. This wall temperature will be maintained by active thermal controls on the outer surface of the reactor. Two cylindrical PBRs are currently being tested experimentally and will be used for validation of the Python model. They are similar in design except that one of them is larger and incorporates a preheat loop by feeding the reactant gas through a pipe along the center of the catalyst bed. The added complexity of a preheat pipe in the model, to mimic the larger reactor, is yet to be implemented and validated; preliminary validation is done using the smaller PBR with no reactant preheating. When mapping experimental values of the wall temperature from the smaller PBR into the Python model, a good approximation of the total conversion and temperature profile has been achieved. A separate CFD model incorporates more complex three-dimensional effects by including the solid catalyst pellets within the domain. The goal is to improve the Python model to the point where the results for other reactor geometries can be reasonably predicted relatively quickly when compared to the much more computationally expensive CFD approach. Once a reactor size is narrowed down using the Python approach, CFD will be used to generate a more thorough prediction of the reactor's performance.
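    A much simplified 1-D plug-flow sketch of such a model is shown below; the first-order rate law and every parameter value are placeholders, not the empirical kinetics or geometry used in the study.

```python
# Plug-flow packed bed reactor for the Sabatier reaction
# (CO2 + 4 H2 -> CH4 + 2 H2O) with a constant wall temperature.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314                # J/(mol K)
dH = -165.0e3            # J/mol CO2, approximate reaction enthalpy
k0, Ea = 5.0e5, 70.0e3   # placeholder first-order kinetics (1/s, J/mol)
Ua = 2.0e4               # placeholder volumetric wall heat transfer, W/(m^3 K)
T_wall = 600.0           # K, actively controlled wall temperature (assumed)
T_in = 550.0             # K, inlet temperature (assumed)
A = 1.0e-3               # m^2, bed cross-section (assumed)
L = 0.3                  # m, bed length (assumed)
F_co2 = 0.01             # mol/s, inlet CO2 molar flow (assumed)
C_in = 40.0              # mol/m^3, inlet CO2 concentration (assumed)
F_cp = 5.0               # W/K, total heat capacity flow of the gas (assumed)

def rhs(z, y):
    """Mole and energy balances along the bed axis z."""
    X, T = y
    k = k0 * np.exp(-Ea / (R * T))        # Arrhenius rate constant
    C_co2 = C_in * (1.0 - X) * T_in / T   # first order in CO2; mole change ignored
    r = k * C_co2                         # mol CO2 / (m^3 s)
    dXdz = r * A / F_co2
    dTdz = (A / F_cp) * (-dH * r - Ua * (T - T_wall))
    return [dXdz, dTdz]

sol = solve_ivp(rhs, [0.0, L], [0.0, T_in], max_step=L / 200)
X_out, T_out = sol.y[:, -1]
print(f"outlet conversion: {X_out:.2f}, outlet temperature: {T_out:.0f} K")
```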

  8. Solar Measurement and Modeling | Grid Modernization | NREL

    Science.gov (United States)

    NREL supports grid integration studies, industry, government, and academia by disseminating solar resource measurements, models, and best practices. NREL stations have continuously gathered basic solar radiation information, and they now gather high-resolution data.

  9. Utilizing Photogrammetry and Strain Gage Measurement to Characterize Pressurization of an Inflatable Module

    Science.gov (United States)

    Valle, Gerard D.; Selig, Molly; Litteken, Doug; Oliveras, Ovidio

    2012-01-01

    This paper documents the integration of a large hatch penetration into an inflatable module. It also documents the comparison of analytical load predictions with measured results obtained from strain measurement. Strain was measured photogrammetrically and with strain gages mounted to selected clevises that interface with the structural webbings. Bench testing showed good correlation between strain measurements obtained from an extensometer and photogrammetric measurement, especially after the fabric had transitioned through the low-load/high-strain region of the curve. Test results for the full-scale torus showed mixed results in the lower load, and thus lower strain, regions. Overall strain, and thus load, measured by strain gages and photogrammetry tracked fairly well with analytical predictions. Methods and areas of improvement are discussed.

  10. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
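    The building block of such analyses is a Boolean line-of-sight test on the raster surface model; a minimal sketch with a hypothetical DEM is given below.

```python
# Boolean line-of-sight on a raster DEM: the target is visible iff its
# vertical angle clears the maximum angle of every intervening cell.
import numpy as np

def line_of_sight(dem, cell_size, observer, target, obs_height=1.7):
    """True if `target` (row, col) is visible from `observer` (row, col)."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0, c0] + obs_height
    n = int(max(abs(r1 - r0), abs(c1 - c0)))
    if n == 0:
        return True
    max_tan = -np.inf
    for i in range(1, n + 1):  # sample the sight line cell by cell
        t = i / n
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        z = dem[int(round(r)), int(round(c))]
        dist = t * cell_size * np.hypot(r1 - r0, c1 - c0)
        tan_angle = (z - z0) / dist
        if i == n:
            return tan_angle >= max_tan  # target must clear all barriers
        max_tan = max(max_tan, tan_angle)

dem = np.array([[10, 10, 10, 10],
                [10, 12, 10, 10],
                [10, 10, 10, 14],
                [10, 10, 10, 10]], dtype=float)
print(line_of_sight(dem, cell_size=1.0, observer=(0, 0), target=(2, 3)))
```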

  11. Awareness of Occupational Injuries and Utilization of Safety Measures among Welders in Coastal South India

    Directory of Open Access Journals (Sweden)

    S Ganesh Kumar

    2013-10-01

    Full Text Available Background: Awareness of occupational hazards and the relevant safety precautions among welders is an important health issue, especially in developing countries. Objective: To assess the awareness of occupational hazards and utilization of safety measures among welders in coastal South India. Methods: A cross-sectional study was conducted among 209 welders in Puducherry, South India. Baseline characteristics, awareness of health hazards, and safety measures and their availability to and utilization by the participants were assessed using a pre-tested structured questionnaire. Results: The majority of the studied welders were aged between 20 and 40 years (n=160, 76.6%) and had 1-10 years of education (n=181, 86.6%). They were more aware of hazards (n=174, 83.3%) than safety measures (n=134, 64.1%). The majority of the studied welders utilized at least one protective measure in the preceding week (n=200, 95.7%). Many of them had more than 5 years of experience (n=175, 83.7%); however, only 20% of them had institutional training (n=40, 19.1%). Age group, education level, and utilization of safety measures were significantly associated with awareness of hazards in univariate analysis (p<0.05). Conclusion: Awareness of occupational hazards and utilization of safety measures are low among welders in coastal South India, which highlights the importance of strengthening safety regulatory services for this group of workers.

  12. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

    An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate ($W$) is a linear function of the unobserved true covariate ($X$) plus other covariates ($Z$) in the regression model. In this paper, we consider models for $W$ that include interactions between $X$ and $Z$. We derive the conditional distribution of

  13. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    Science.gov (United States)

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems at the utility level. It has been developed based on data from stakeholders and ministries, collected by a survey that determined the resulting effects and the measures adopted. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, describing the state development of sewer systems; ii) WWTP, covering process parameters of wastewater treatment plants (WWTPs); and iii) Cost Accounting, calculating expenses in the cost categories and the resulting charges. The validity and accuracy of this model were verified using historical data from an exemplary wastewater utility. Calculated process and economic parameters show high accuracy compared with measured parameters and actual expenses. Thus, the model is proposed to support strategic, process-oriented decision making at the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
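    As a simplified illustration of utility-driven allocation: the paper's full formulation is a 0-1 multichoice multidimensional knapsack, so the single-dimension greedy heuristic below is an assumption for brevity, not the authors' solver.

```python
# Greedy allocation of task-option utilities under one capacity dimension.
from dataclasses import dataclass

@dataclass
class Option:
    task: str
    resource: str
    utility: float  # task-option utility under the chosen policy
    demand: float   # e.g., CPU-hours requested

def allocate(options, capacity):
    """Greedy by utility density; each task is allocated at most once."""
    chosen, used, done = [], 0.0, set()
    for opt in sorted(options, key=lambda o: o.utility / o.demand, reverse=True):
        if opt.task in done or used + opt.demand > capacity:
            continue
        chosen.append(opt)
        used += opt.demand
        done.add(opt.task)
    return chosen

opts = [Option("t1", "siteA", 9.0, 3.0), Option("t1", "siteB", 7.0, 2.0),
        Option("t2", "siteA", 4.0, 1.0), Option("t3", "siteB", 6.0, 4.0)]
for o in allocate(opts, capacity=5.0):
    print(f"{o.task} -> {o.resource} (u={o.utility})")
```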

  15. Evaluation model of wind energy resources and utilization efficiency of wind farm

    Science.gov (United States)

    Ma, Jie

    2018-04-01

    Due to the large amount of curtailed (abandoned) wind energy in wind farms, establishing a wind farm evaluation model is particularly important for the future development of wind farms. In this essay, considering the wind farm's wind energy situation, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The WERM contains average wind speed, average wind power density and turbulence intensity, which together assess the wind energy resource. Based on our model, combined with actual measurement data from a wind farm, we calculate the indicators, and the results are in line with the actual situation. The future development of the wind farm can be planned on the basis of this result. Thus, the proposed approach to establishing a wind farm assessment model has application value.
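
    The three WERM indicators can be sketched directly from a measured wind-speed series, as below; the sample data, the sea-level air density of 1.225 kg/m3 and the exact indicator definitions are assumptions for illustration.

```python
import statistics

def werm_indicators(speeds, rho=1.225):
    """Sketch of the three WERM indicators from a wind speed series (m/s).
    rho is air density in kg/m^3 (sea-level value assumed)."""
    mean_speed = statistics.fmean(speeds)
    # Average wind power density (W/m^2): mean of 0.5 * rho * v^3.
    power_density = statistics.fmean(0.5 * rho * v**3 for v in speeds)
    # Turbulence intensity: standard deviation of speed over mean speed.
    turbulence = statistics.stdev(speeds) / mean_speed
    return mean_speed, power_density, turbulence

# Illustrative 10-minute average wind speeds at hub height.
speeds = [6.2, 7.1, 5.8, 8.4, 7.6, 6.9]
v, pd, ti = werm_indicators(speeds)
print(f"mean speed {v:.2f} m/s, power density {pd:.1f} W/m^2, TI {ti:.2f}")
```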

  16. FY 2000 report on the results of the project for measures for rationalization of the international energy utilization - the model project for the heightening of efficiency of the international energy consumption. 1/2. Model project for facilities for effective utilization of by-producing exhaust gases from chemical plant, etc.; 2000 nendo kokusai energy shohi koritsuka tou moderu jigyo seika hokokusho. Kagaku kojo fukusei haigasu tou yuko riyo setsubi moderu jigyo (1/2)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    To contribute to reducing energy consumption in China and to a stable energy supply in Japan by raising the efficiency of energy utilization in China's petrochemical industry, a highly energy-consuming industry, a model project for facilities for the effective utilization of by-product exhaust gases from chemical plants was carried out, and the FY 2000 results were reported. Concretely, a combustion incinerator and combustion exhaust gas recovery facilities for waste water and gas were to be installed at an acrylonitrile plant of a petrochemical complex in China, to recover the combustion exhaust gas as process gas for effective use in the plant. The plant at the installation site has been in operation since 1995, with a production capacity of 50,000-60,000 tons. In this fiscal year, the detailed design, the supply of electric instrumentation equipment and the manufacture of boiler facilities were carried out according to the basic design made in the previous fiscal year. Further, the equipment manufactured in the previous and current fiscal years was transported and inspected. The report also reviewed design drawings of the facilities for which China takes partial responsibility. (NEDO)

  17. FY 2000 report on the results of the project for measures for rationalization of the international energy utilization - the model project for the heightening of efficiency of the international energy consumption. 2/2. Model project for facilities for effective utilization of by-producing exhaust gases from chemical plant, etc.; 2000 nendo kokusai energy shohi koritsuka tou moderu jigyo seika hokokusho. Kagaku kojo fukusei haigasu tou yuko riyo setsubi moderu jigyo (2/2)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    To contribute to reducing energy consumption in China and to a stable energy supply in Japan by raising the efficiency of energy utilization in China's petrochemical industry, a highly energy-consuming industry, a model project for facilities for the effective utilization of by-product exhaust gases from chemical plants was carried out, and the FY 2000 results were reported. Concretely, a combustion incinerator and combustion exhaust gas recovery facilities for waste water and gas were to be installed at an acrylonitrile plant of a petrochemical complex in China, to recover the combustion exhaust gas as process gas for effective use in the plant. In this fiscal year, the detailed design, the supply of electric instrumentation equipment and the manufacture of boiler facilities were carried out according to the basic design made in the previous fiscal year. Further, the equipment manufactured in the previous and current fiscal years was transported and inspected. The report also reviewed design drawings of the facilities for which China takes partial responsibility. The separate volume (2/2) includes drawings of valves, fire detectors, orifices, thermocouples, motor control equipment, etc. (NEDO)

  18. Measuring Health Utilities in Children and Adolescents: A Systematic Review of the Literature.

    Directory of Open Access Journals (Sweden)

    Dominic Thorrington

    The objective of this review was to evaluate the use of all direct and indirect methods used to estimate health utilities in both children and adolescents. Utilities measured pre- and post-intervention are combined with the time over which health states are experienced to calculate quality-adjusted life years (QALYs). Cost-utility analyses (CUAs) estimate the cost-effectiveness of health technologies based on their costs and benefits, using QALYs as the measure of benefit. The accurate measurement of QALYs is dependent on using appropriate methods to elicit health utilities. We sought studies that measured health utilities directly from patients or their proxies. We did not exclude studies that also included adults in the analysis, but excluded those focused only on adults. We evaluated 90 studies from a total of 1,780 selected from the databases. 47 (52%) studies were CUAs incorporated into randomised clinical trials; 23 (26%) were health-state utility assessments; 8 (9%) validated methods and 12 (13%) compared existing or new methods. 22 unique direct or indirect calculation methods were used a total of 137 times. Direct calculation through standard gamble, time trade-off and visual analogue scale was used 32 times. The EuroQol EQ-5D was the most frequently used single method, selected for 41 studies. 15 of the methods used were generic and the remaining 7 were disease-specific. 48 of the 90 studies (53%) used some form of proxy, with 26 (29%) using proxies exclusively to estimate health utilities. Several child- and adolescent-specific methods are still being developed and validated, leaving many studies using methods that have not been designed or validated for use in children or adolescents. Several studies failed to justify using proxy respondents rather than administering the methods directly to the patients. Only two studies examined missing responses to the administered methods with respect to the patients' ages.

  19. Utility of the Canadian Occupational Performance Measure as an admission and outcome measure in interdisciplinary community-based geriatric rehabilitation

    DEFF Research Database (Denmark)

    Larsen, Anette Enemark; Carlsson, Gunilla

    2012-01-01

    In a community-based geriatric rehabilitation project, the Canadian Occupational Performance Measure (COPM) was used to develop a coordinated, interdisciplinary, and client-centred approach focusing on occupational performance. The purpose of this study was to evaluate the utility of the COPM as … physician, home care, occupational therapy, physiotherapy…

  20. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    Science.gov (United States)

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model.
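
    A minimal sketch of this two-parameter model class, hyperbolic subjective value combined with a logistic choice rule whose temperature captures choice variability, is given below; the functional forms, parameter names and trial data are illustrative, not the authors' exact specification.

```python
import math

def subjective_value(amount, delay, k):
    # Hyperbolic discounting: V = A / (1 + k * D).
    return amount / (1.0 + k * delay)

def p_choose_delayed(ss, ll, k, s):
    """Probability of choosing the larger-later option (ll) over the
    smaller-sooner option (ss); each option is (amount, delay).
    k = discount rate, s = choice variability (logistic temperature)."""
    v_ss = subjective_value(*ss, k)
    v_ll = subjective_value(*ll, k)
    return 1.0 / (1.0 + math.exp(-(v_ll - v_ss) / s))

def neg_log_likelihood(choices, pairs, k, s):
    # choices: 1 if the delayed option was chosen on that trial, else 0.
    nll = 0.0
    for c, (ss, ll) in zip(choices, pairs):
        p = p_choose_delayed(ss, ll, k, s)
        nll -= math.log(p if c else 1.0 - p)
    return nll

# One trial: $50 now vs $100 in 30 days, k = 0.05/day, s = 5.
print(p_choose_delayed((50, 0), (100, 30), k=0.05, s=5.0))
```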

  1. Assessing the empirical validity of alternative multi-attribute utility measures in the maternity context

    Directory of Open Access Journals (Sweden)

    Morrell Jane

    2009-05-01

    Background: Multi-attribute utility measures are preference-based health-related quality of life measures that have been developed to inform economic evaluations of health care interventions. The objective of this study was to compare the empirical validity of two multi-attribute utility measures (EQ-5D and SF-6D) based on hypothetical preferences in a large maternity population in England. Methods: Women who participated in a randomised controlled trial of additional postnatal support provided by trained community support workers represented the study population for this investigation. The women were asked to complete the EQ-5D descriptive system (which defines health-related quality of life in terms of five dimensions: mobility, self care, usual activities, pain/discomfort and anxiety/depression) and the SF-36 (which defines health-related quality of life, using 36 items, across eight dimensions: physical functioning, role limitations (physical), social functioning, bodily pain, general health, mental health, vitality and role limitations (emotional)) at six months postpartum. Their responses were converted into utility scores using the York A1 tariff set and the SF-6D utility algorithm, respectively. One-way analysis of variance was used to test the hypothetically-constructed preference rule that each set of utility scores differs significantly by self-reported health status (categorised as excellent, very good, good, fair or poor). The degree to which EQ-5D and SF-6D utility scores reflected alternative dichotomous configurations of self-reported health status and the Edinburgh Postnatal Depression Scale score was tested using the relative efficiency statistic and receiver operating characteristic (ROC) curves. Results: The mean utility score for the EQ-5D was 0.861 (95% CI: 0.844, 0.877), whilst the mean utility score for the SF-6D was 0.809 (95% CI: 0.796, 0.822), representing a mean difference in utility score of 0.052 (95% CI: 0.040, 0

  2. Current measurement system utilizing cryogenic techniques for the absolute measurement of the magnetic flux quantum

    International Nuclear Information System (INIS)

    Endo, T.; Murayama, Y.; Sakamoto, Y.; Sakuraba, T.; Shiota, F.

    1989-01-01

    A series of systems composed of cryogenic devices, such as a Josephson potentiometer and a cryogenic current comparator, has been proposed and developed to precisely measure a current of any value up to 1 A. These systems will be used to measure the injected electrical energy with an uncertainty of the order of 0.01 ppm or less in the absolute measurement of the magnetic flux quantum by superconducting magnetic levitation. Some preliminary experiments are described.

  3. Key data elements for use in cost-utility modeling of biological treatments for rheumatoid arthritis.

    Science.gov (United States)

    Ganz, Michael L; Hansen, Brian Bekker; Valencia, Xavier; Strandberg-Larsen, Martin

    2015-05-01

    Economic evaluation is becoming more common and important as new biologic therapies for rheumatoid arthritis (RA) are developed. While much has been published about how to design cost-utility models for RA to conduct these evaluations, less has been written about the sources of data populating those models. The goal is to review the literature and to provide recommendations for future data collection efforts. This study reviewed RA cost-utility models published between January 2006 and February 2014 focusing on five key sources of data (health-related quality-of-life and utility, clinical outcomes, disease progression, course of treatment, and healthcare resource use and costs). It provided recommendations for collecting the appropriate data during clinical and other studies to support modeling of biologic treatments for RA. Twenty-four publications met the selection criteria. Almost all used two steps to convert clinical outcomes data to utilities rather than more direct methods; most did not use clinical outcomes measures that captured absolute levels of disease activity and physical functioning; one-third of them, in contrast with clinical reality, assumed zero disease progression for biologic-treated patients; little more than half evaluated courses of treatment reflecting guideline-based or actual clinical care; and healthcare resource use and cost data were often incomplete. Based on these findings, it is recommended that future studies collect clinical outcomes and health-related quality-of-life data using appropriate instruments that can convert directly to utilities; collect data on actual disease progression; be designed to capture real-world courses of treatment; and collect detailed data on a wide range of healthcare resources and costs.

  4. Utilization of old vibro-acoustic measuring equipment to grasp basic concepts of vibration measurements

    DEFF Research Database (Denmark)

    Darula, Radoslav

    2013-01-01

    The aim of the paper is to show that even old vibro-acoustic (analog) equipment can be used as very suitable teaching equipment for grasping the basic principles of measurement in an era when measurement equipment is more or less treated as a 'black box', i.e. the user cannot see directly how…

  5. Measuring Collective Efficacy: A Multilevel Measurement Model for Nested Data

    Science.gov (United States)

    Matsueda, Ross L.; Drakulich, Kevin M.

    2016-01-01

    This article specifies a multilevel measurement model for survey response when data are nested. The model includes a test-retest model of reliability, a confirmatory factor model of inter-item reliability with item-specific bias effects, an individual-level model of the biasing effects due to respondent characteristics, and a neighborhood-level…

  6. Modeling Substrate Utilization, Metabolite Production, and Uranium Immobilization in Shewanella oneidensis Biofilms

    Directory of Open Access Journals (Sweden)

    Ryan S. Renslow

    2017-06-01

    In this study, we developed a two-dimensional mathematical model to predict substrate utilization and metabolite production rates in Shewanella oneidensis MR-1 biofilms in the presence and absence of uranium (U). In our model, lactate and fumarate are used as the electron donor and the electron acceptor, respectively. The model includes the production of extracellular polymeric substances (EPS). The EPS bound to the cell surface and the EPS distributed in the biofilm were considered bound EPS (bEPS) and loosely associated EPS (laEPS), respectively. COMSOL® Multiphysics finite element analysis software was used to solve the model numerically (model file provided in the Supplementary Material). The input variables of the model were the lactate, fumarate, cell, and EPS concentrations, the half-saturation constant for fumarate, and the diffusion coefficients of the substrates and metabolites. To estimate unknown parameters and calibrate the model, we used a custom-designed biofilm reactor placed inside a nuclear magnetic resonance (NMR) microimaging and spectroscopy system and measured substrate utilization and metabolite production rates. From these data we estimated the yield coefficients, the maximum substrate utilization rate, the half-saturation constant for lactate, the stoichiometric ratio of fumarate and acetate to lactate, and the stoichiometric ratio of succinate to fumarate. These parameters are critical to predicting the activity of biofilms and are not available in the literature. Lastly, the model was used to predict uranium immobilization in S. oneidensis MR-1 biofilms by considering reduction and adsorption processes in the cells and in the EPS. We found that the majority of immobilization was due to cells, and that EPS was less efficient at immobilizing U. Furthermore, most of the immobilization occurred within the top 10 μm of the biofilm. To the best of our knowledge, this research is one of the first biofilm immobilization mathematical models based on experimental
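
    The reaction terms at the heart of such a model are Monod-type kinetics. The sketch below integrates a well-mixed (zero-dimensional) reduction of lactate and fumarate utilization by growing biomass; all parameter values are illustrative placeholders, not the NMR-calibrated estimates reported by the authors.

```python
# Zero-dimensional sketch of dual-Monod substrate utilization:
# lactate (donor) and fumarate (acceptor) consumed by biomass X.
# Parameter values are illustrative, not the calibrated estimates.

def step(lac, fum, x, dt, qmax=0.5, k_lac=1.0, k_fum=0.5,
         y=0.1, ratio_fum=2.0):
    """One explicit Euler step. qmax: max specific utilization rate
    (mmol lactate / g cells / h); k_lac, k_fum: half-saturation
    constants (mM); y: biomass yield (g cells / mmol lactate);
    ratio_fum: mmol fumarate consumed per mmol lactate."""
    q = qmax * lac / (k_lac + lac) * fum / (k_fum + fum)
    d_lac = -q * x
    d_fum = ratio_fum * d_lac
    d_x = -y * d_lac
    return (lac + dt * d_lac, max(fum + dt * d_fum, 0.0), x + dt * d_x)

lac, fum, x = 10.0, 20.0, 0.05  # mM, mM, g/L
for hour in range(48):
    lac, fum, x = step(lac, fum, x, dt=1.0)
print(f"after 48 h: lactate {lac:.2f} mM, fumarate {fum:.2f} mM, "
      f"biomass {x:.3f} g/L")
```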

  7. A structured review of health utility measures and elicitation in advanced/metastatic breast cancer.

    Science.gov (United States)

    Hao, Yanni; Wolfram, Verena; Cook, Jennifer

    2016-01-01

    Health utilities are increasingly incorporated in health economic evaluations. Different elicitation methods, direct and indirect, have been established in the past. This study examined the evidence on health utility elicitation previously reported in advanced/metastatic breast cancer and aimed to link these results to requirements of reimbursement bodies. Searches were conducted using a detailed search strategy across several electronic databases (MEDLINE, EMBASE, Cochrane Library, and EconLit databases), online sources (Cost-effectiveness Analysis Registry and the Health Economics Research Center), and web sites of health technology assessment (HTA) bodies. Publications were selected based on the search strategy and the overall study objectives. A total of 768 publications were identified in the searches, and 26 publications, comprising 18 journal articles and eight submissions to HTA bodies, were included in the evidence review. Most journal articles derived utilities from the European Quality of Life Five-Dimensions questionnaire (EQ-5D). Other utility measures, such as the direct methods standard gamble (SG), time trade-off (TTO), and visual analog scale (VAS), were less frequently used. Several studies described mapping algorithms to generate utilities from disease-specific health-related quality of life (HRQOL) instruments such as European Organization for Research and Treatment of Cancer Quality of Life Questionnaire - Core 30 (EORTC QLQ-C30), European Organization for Research and Treatment of Cancer Quality of Life Questionnaire - Breast Cancer 23 (EORTC QLQ-BR23), Functional Assessment of Cancer Therapy - General questionnaire (FACT-G), and Utility-Based Questionnaire-Cancer (UBQ-C); most used EQ-5D as the reference. Sociodemographic factors that affect health utilities, such as age, sex, income, and education, as well as disease progression, choice of utility elicitation method, and country settings, were identified within the journal articles. Most

  8. Utilizing the non-bridge oxygen model to predict the glass viscosity

    International Nuclear Information System (INIS)

    Choi, Kwansik; Sheng, Jiawei; Maeng, Sung Jun; Song, Myung Jae

    1998-01-01

    Viscosity is the most important process property of waste glass. Viscosity measurement is difficult and costly. The non-bridging oxygen (NBO) model, which relates glass composition to viscosity, had been developed for high-level waste at the Savannah River Site (SRS). This research utilized the NBO model to predict the viscosity of KEPRI's 55 glasses. A linear relationship was found between the measured and the predicted viscosity, so the NBO model could be used to predict glass viscosity in glass formulation development. However, the precision of the predicted viscosity is unsatisfactory because the composition ranges of the SRS and KEPRI glasses are very different. Modifications of the NBO calculation, including modified treatment of the alkaline earth elements and TiO2, could not markedly improve the precision of the predicted values.
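
    The idea behind composition-based viscosity prediction can be sketched as follows: compute a non-bridging-oxygen ratio from oxide mole fractions and regress measured log-viscosity against it. The NBO/T definition used here is one common simplification, and the coefficients and sample compositions are illustrative rather than the SRS model values.

```python
# Sketch: predict log10(viscosity) from a non-bridging-oxygen ratio.
# The NBO/T definition below is one common simplification; the
# coefficients and sample compositions are illustrative only.

def nbo_per_t(x):
    """x: oxide mole fractions. Network modifiers donate two NBOs per
    mole; Al2O3 consumes modifiers to form tetrahedra."""
    modifiers = x.get("Na2O", 0) + x.get("K2O", 0) \
        + x.get("CaO", 0) + x.get("MgO", 0)
    tetrahedral = x.get("SiO2", 0) + 2 * x.get("Al2O3", 0)
    return max(2 * (modifiers - x.get("Al2O3", 0)), 0) / tetrahedral

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b * x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) \
        / sum((xi - mx) ** 2 for xi in xs)
    return my - b * mx, b

glasses = [
    ({"SiO2": 0.60, "Na2O": 0.15, "CaO": 0.10, "Al2O3": 0.05}, 2.10),
    ({"SiO2": 0.55, "Na2O": 0.20, "CaO": 0.10, "Al2O3": 0.05}, 1.75),
    ({"SiO2": 0.65, "Na2O": 0.10, "CaO": 0.10, "Al2O3": 0.05}, 2.60),
]  # (composition, measured log10 viscosity) -- illustrative data
a, b = fit_line([nbo_per_t(c) for c, _ in glasses],
                [v for _, v in glasses])
print(f"log10(eta) = {a:.2f} + {b:.2f} * NBO/T")
```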

  9. Implementation of energy efficiency measures by municipal utilities; Umsetzung von Energieeffizienzmassnahmen durch Stadtwerke

    Energy Technology Data Exchange (ETDEWEB)

    Horst, Juri; Droeschel, Barbara [Institut fuer ZukunftsEnergieSysteme (IZES), Saarbruecken (Germany)

    2012-04-15

    Local players have a very special role to fill in the implementation of the German federal government's ambitious energy efficiency goals. In the past the contributions made by municipal utilities in the way of special offers or measures to develop efficiency potentials were only modest. Moreover there were specific impediments that discouraged a significant competition-driven efficiency services market from developing. However, there are other instruments available that could encourage municipal utilities to implement efficiency goals. A recent research project has shown how standardised efficiency programmes can be used to tap into existing efficiency potentials at a sufficient level of intensity and with macroeconomic benefit.

  10. Indirect Measurement of Energy Density of Soft PZT Ceramic Utilizing Mechanical Stress

    Science.gov (United States)

    Unruan, Muangjai; Unruan, Sujitra; Inkong, Yutthapong; Yimnirun, Rattikorn

    2017-11-01

    This paper reports on an indirect measurement of the energy density of soft PZT ceramic utilizing mechanical stress. The method works analogously to the Olsen cycle and allows for a large amount of electro-mechanical energy conversion. A maximum energy density of 350 kJ/m3 per cycle was found under applied mechanical stress of 0-312 MPa and an applied electric field of 1-20 kV/cm. The obtained result is substantially higher than those reported in previous studies of PZT materials utilizing the direct piezoelectric effect.

  11. A dynamic Brownian bridge movement model to estimate utilization distributions for heterogeneous animal movement.

    Science.gov (United States)

    Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran

    2012-07-01

    1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points, and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistically homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying how animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and by examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks.
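
    The building block of any BBMM variant is the Brownian bridge density between consecutive fixes, accumulated over a grid to give the utilization distribution. The sketch below does this for a short track with a fixed diffusion coefficient; the dynamic extension would additionally re-estimate that coefficient between the change points described above. All parameter values are illustrative.

```python
import math

def bridge_density(grid, p0, p1, dt, sigma2, err2, steps=20):
    """Add the Brownian bridge between fixes p0 -> p1 to a dict grid.
    sigma2: diffusion coefficient (m^2/s); err2: location error variance."""
    for j in range(1, steps):
        a = j / steps  # fraction of travel time elapsed
        mx = p0[0] + a * (p1[0] - p0[0])
        my = p0[1] + a * (p1[1] - p0[1])
        # Bridge variance: diffusion term plus weighted location errors.
        var = dt * a * (1 - a) * sigma2 + (1 - a) ** 2 * err2 + a ** 2 * err2
        for (gx, gy) in list(grid):
            d2 = (gx - mx) ** 2 + (gy - my) ** 2
            grid[(gx, gy)] += math.exp(-d2 / (2 * var)) / (2 * math.pi * var)
    return grid

# Three fixes, 100 s apart; grid cells every 10 m; illustrative values.
fixes = [(0.0, 0.0), (50.0, 20.0), (80.0, 90.0)]
grid = {(x, y): 0.0 for x in range(0, 101, 10) for y in range(0, 101, 10)}
for p0, p1 in zip(fixes, fixes[1:]):
    bridge_density(grid, p0, p1, dt=100.0, sigma2=2.0, err2=25.0)
total = sum(grid.values())
ud = {cell: v / total for cell, v in grid.items()}  # utilization distribution
print(max(ud, key=ud.get))  # grid cell with the highest use
```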

  12. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
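
    To make the mapping concrete, below is a minimal value-iteration sketch for a toy MDP; in a measurement model the latent traits of interest would parameterize the rewards or transition probabilities, which are purely illustrative here.

```python
# Minimal value-iteration sketch for a two-state, two-action MDP.
# transitions[s][a] = list of (probability, next_state, reward).
# In an MDP measurement model, latent traits would parameterize the
# rewards or transition probabilities; values here are illustrative.

transitions = {
    0: {"stay": [(1.0, 0, 0.0)],
        "try":  [(0.7, 1, 1.0), (0.3, 0, -0.5)]},
    1: {"stay": [(1.0, 1, 0.5)],
        "try":  [(0.9, 1, 1.0), (0.1, 0, -1.0)]},
}

def value_iteration(trans, gamma=0.9, tol=1e-8):
    v = {s: 0.0 for s in trans}
    while True:
        delta = 0.0
        for s in trans:
            # Bellman backup: best expected return over actions.
            best = max(sum(p * (r + gamma * v[s2])
                           for p, s2, r in outcomes)
                       for outcomes in trans[s].values())
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            return v

print(value_iteration(transitions))
```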

  13. User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2015-01-01

    Resource allocation is one of the most important research topics in servers. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services are usually run on virtual machines of the cloud server. In addition, the cloud environment is commercialized, so economic factors should also be considered. To deal with the commercialization and virtualization of the cloud environment, we propose a user-utility-oriented queuing model for task scheduling. Firstly, we model task scheduling in the cloud environment as an M/M/1 queuing system. Secondly, we classify utility into time utility and cost utility, and build a linear programming model to maximize the total utility of both. Finally, we propose a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of the proposed model.
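
    The core quantities can be sketched compactly: an M/M/1 queue gives the expected sojourn time, and time utility and cost utility are combined linearly. The utility functions, weights and prices below are illustrative stand-ins for the paper's linear programming formulation.

```python
# Sketch: M/M/1 sojourn time feeding a linear combination of time
# utility and cost utility. The utility functions, weights and prices
# are illustrative stand-ins for the paper's linear program.

def sojourn_time(lam, mu):
    """Expected time in an M/M/1 system (waiting + service)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (mu - lam)

def total_utility(lam, mu, price_per_rate, w_time=0.6, w_cost=0.4):
    t = sojourn_time(lam, mu)
    time_utility = 1.0 / (1.0 + t)                    # shorter wait -> higher
    cost_utility = 1.0 / (1.0 + price_per_rate * mu)  # cheaper -> higher
    return w_time * time_utility + w_cost * cost_utility

# Pick the service rate (VM size) maximizing total utility for
# arrivals at 8 tasks/s and a price proportional to capacity.
best = max((total_utility(8.0, mu, price_per_rate=0.02), mu)
           for mu in [9.0, 10.0, 12.0, 16.0, 20.0])
print(f"best utility {best[0]:.3f} at service rate {best[1]} tasks/s")
```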

  14. Social Security Measures for Elderly Population in Delhi, India: Awareness, Utilization and Barriers.

    Science.gov (United States)

    Kohli, Charu; Gupta, Kalika; Banerjee, Bratati; Ingle, Gopal Krishna

    2017-05-01

    The world's elderly population is increasing at a fast pace. The number of elderly in India has increased by 54.77% in the last 15 years, and a number of social security measures have been taken by the Indian government. The aim was to assess awareness, utilization and barriers faced while utilizing social security schemes by the elderly at a secondary care hospital situated in a rural area of Delhi, India. A cross-sectional study was conducted among 360 individuals aged 60 years and above in a secondary care hospital situated in a rural area in Delhi. A pre-tested, semi-structured schedule prepared in the local language was used. Data were analysed using SPSS software (version 17.0). The chi-square test was used to test for statistical association between categorical variables; results were considered statistically significant if the p-value was less than 0.05. A majority of study subjects were female (54.2%), Hindu (89.7%), married (60.3%) and not engaged in any occupation (82.8%). Awareness of the Indira Gandhi National Old Age Pension Scheme (IGNOAPS) was present in 286 (79.4%) subjects and of the Annapurna scheme in 193 (53.6%). Among the 223 subjects below the poverty line, 179 (80.3%) were aware of IGNOAPS, while 112 (50.2%) were utilizing the scheme. There was no association of awareness with education status, occupation, religion, family type, marital status or caste (p>0.05). Corruption and tedious administrative formalities were the major barriers reported. Awareness generation, provision of information on how to approach the concerned authority, and ease of administrative procedures should be an integral part of any social security scheme or measure. In the present study, 79.4% of the elderly were aware of the pension scheme and 45% of eligible subjects were utilizing it. The major barriers reported in utilization of the schemes were corruption and tedious administrative procedures.

  15. Mathematical models utilized in the retrieval of displacement information encoded in fringe patterns

    Science.gov (United States)

    Sciammarella, Cesar A.; Lamberti, Luciano

    2016-02-01

    All techniques that measure displacements, whether in the range of visible optics or any other form of field method, require the presence of a carrier signal. A carrier signal is a wave form modulated (modified) by an input: the deformation of the medium. The carrier is tagged to the medium under analysis and deforms with it. The wave form must be known in both the unmodulated and the modulated conditions. There are two basic mathematical models that can be utilized to decode the information contained in the carrier, phase modulation or frequency modulation; the two are closely connected. Basic problems connected to the detection and recovery of displacement information that are common to all optical techniques are analyzed in this paper, focusing on the general theory common to all the methods independently of the type of signal utilized. The aspects discussed are those that have practical impact on the process of data gathering and data processing.
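
    The phase-modulation route can be illustrated in one dimension with Fourier-transform fringe analysis: isolate the carrier lobe in the spectrum, shift it to baseband, and read the deformation as the unwrapped phase. The carrier frequency and deformation profile below are synthetic assumptions.

```python
import numpy as np

# 1-D sketch of Fourier-transform fringe analysis: a cosine carrier is
# phase-modulated by the deformation; we isolate the +1-order carrier
# lobe, shift it to baseband and unwrap the phase. Signal is synthetic.

n = 512
x = np.arange(n)
f0 = 32 / n                            # carrier frequency (cycles/sample)
phi = 2.0 * np.sin(2 * np.pi * x / n)  # "deformation" phase to recover
fringe = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * x + phi)

spec = np.fft.fft(fringe)
k0 = 32                                # carrier bin
band = np.zeros(n, dtype=complex)
band[k0 - 16:k0 + 16] = spec[k0 - 16:k0 + 16]  # keep the +1-order lobe
analytic = np.fft.ifft(band)
wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))
recovered = np.unwrap(wrapped)
print(f"max phase error: {np.abs(recovered - phi).max():.2e} rad")
```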

  16. Utilizing Photogrammetry and Strain Gage Measurement to Characterize Pressurization of Inflatable Modules

    Science.gov (United States)

    Mohammed, Anil

    2011-01-01

    This paper focuses on integrating a large hatch penetration into inflatable modules of various constructions, and compares load predictions with test measurements. Strain was measured by utilizing photogrammetric methods and strain gages mounted to select clevises that interface with the structural webbings. Bench testing showed good correlation between strain data collected from an extensometer and photogrammetric measurements, even when the material transitioned from the low-load to the high-load strain region of the curve. The full-scale torus design module showed mixed results in the lower-load and high-strain regions as well. After thorough analysis of the photogrammetric measurements, strain gage measurements, and predicted loads, the photogrammetric measurements seem to be off by a factor of two.

  17. Business model innovation for sustainable energy: German utilities and renewable energy

    International Nuclear Information System (INIS)

    Richter, Mario

    2013-01-01

    The electric power sector stands at the beginning of a fundamental transformation process towards more sustainable production based on renewable energies. Consequently, electric utilities as incumbent actors face a massive challenge to find new ways of creating, delivering, and capturing value from renewable energy technologies. This study investigates utilities' business models for renewable energies by analyzing two generic business models based on a series of in-depth interviews with German utility managers. It is found that utilities have developed viable business models for large-scale utility-side renewable energy generation. At the same time, utilities lack adequate business models to commercialize small-scale customer-side renewable energy technologies. By combining the business model concept with innovation and organization theory, practical recommendations for utility managers and policy makers are derived. - Highlights: • The energy transition creates a fundamental business model challenge for utilities. • German utilities succeed in large-scale and fail in small-scale renewable generation. • Experiences from other industries are available to inform utility managers. • Business model innovation capabilities will be crucial to master the energy transition

  18. The utility of Earth system Models of Intermediate Complexity

    NARCIS (Netherlands)

    Weber, S.L.

    2010-01-01

    Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by

  19. A System Dynamics Approach to Modeling the Sensitivity of Inappropriate Emergency Department Utilization

    Science.gov (United States)

    Behr, Joshua G.; Diaz, Rafael

    Non-urgent Emergency Department utilization has been attributed with increasing congestion in the flow and treatment of patients and, by extension, conditions the quality of care and profitability of the Emergency Department. Interventions designed to divert populations to more appropriate care may be cautiously received by operations managers due to uncertainty about the impact an adopted intervention may have on the two values of congestion and profitability. System Dynamics (SD) modeling and simulation may be used to measure the sensitivity of these two, often-competing, values of congestion and profitability and, thus, provide an additional layer of information designed to inform strategic decision making.

  20. Local cerebral glucose utilization in the beagle puppy model of intraventricular hemorrhage

    International Nuclear Information System (INIS)

    Ment, L.R.; Stewart, W.B.; Duncan, C.C.

    1982-01-01

    Local cerebral glucose utilization has been measured by means of carbon-14 (14C) autoradiography with 2-deoxyglucose in the newborn beagle puppy model of intraventricular hemorrhage. Our studies demonstrate gray matter/white matter differentiation of uptake of 14C-2-deoxyglucose in the control pups, as would be expected from adult animal studies. However, there is a marked homogeneity of 14C-2-deoxyglucose uptake in all brain regions in the puppies with intraventricular hemorrhage, possibly indicating a loss of the known coupling between cerebral blood flow and metabolism in this neuropathological condition.

  1. An Index of Trauma Severity Based on Multiattribute Utility: An Illustration of Complex Utility Modeling.

    Science.gov (United States)

    1981-10-01

    measure for Central Nervous System is the Glasgow Coma Score (GCS), a scale of brain and spinal cord injury (Langfitt [1978]), and is itself an additive...concerns directly relating to the injury itself were identified. These were: 1. Ventilation Severity 2. Circulation Severity 3. Central Nervous System...interacting system within which these concerns represent interacting parts. Most trauma involves only one of these systems, but more than one may be

  2. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    In this report, we will present a descriptive and organizational framework for incremental and fundamental changes to regulatory and utility business models in the context of clean energy public policy goals. We will also discuss the regulated utility's role in providing value-added services that relate to distributed energy resources, identify the "openness" of customer information and utility networks necessary to facilitate change, and discuss the relative risks, and the shifting of risks, for utilities and customers.

  3. Emergency Preparedness Education for Nurses: Core Competency Familiarity Measured Utilizing an Adapted Emergency Preparedness Information Questionnaire.

    Science.gov (United States)

    Georgino, Madeline M; Kress, Terri; Alexander, Sheila; Beach, Michael

    2015-01-01

    The purpose of this project was to measure trauma nurses' improvement in familiarity with emergency preparedness and disaster response core competencies, as originally defined by the Emergency Preparedness Information Questionnaire, after a focused educational program. An adapted version of the Emergency Preparedness Information Questionnaire was utilized to measure nurses' familiarity with core competencies pertinent to first-responder capabilities. This project utilized a pre- and post-survey descriptive design and integrated education sessions into the preexisting, mandatory "Trauma Nurse Course" at a large level I trauma center. A total of 63 nurses completed the intervention during the May and September 2014 sessions. Overall, all 8 competencies demonstrated significant (P < .001; 98% confidence interval) improvements in familiarity. In conclusion, this pilot quality improvement project demonstrated a unique approach to educating nurses to be more ready and comfortable when treating victims of a disaster.

  4. Modeling energy flexibility of low energy buildings utilizing thermal mass

    DEFF Research Database (Denmark)

    Foteinaki, Kyriaki; Heller, Alfred; Rode, Carsten

    2016-01-01

    In the future energy system a considerable increase in the penetration of renewable energy is expected, challenging the stability of the system, as both production and consumption will have fluctuating patterns. Hence, the concept of energy flexibility will be necessary in order for the consumption...... to match the production patterns, shifting demand from on-peak hours to off-peak hours. Buildings could act as flexibility suppliers to the energy system, through load shifting potential, provided that the large thermal mass of the building stock could be utilized for energy storage. In the present study...... the load shifting potential of an apartment of a low energy building in Copenhagen is assessed, utilizing the heat storage capacity of the thermal mass when the heating system is switched off for relieving the energy system. It is shown that when using a 4-hour preheating period before switching off...

  5. Directional wave measurements and modelling

    Digital Repository Service at National Institute of Oceanography (India)

    Anand, N.M.; Nayak, B.U.; Bhat, S.S.; SanilKumar, V.

    Some of the results obtained from analysis of the monsoon directional wave data measured over 4 years in shallow waters off the west coast of India are presented. The directional spectrum computed from the time series data seems to indicate...

  6. Utilization of MatPIV program to different geotechnical models

    Science.gov (United States)

    Aklik, P.; Idinger, G.

    2009-04-01

    The Particle Image Velocimetry (PIV) technique is used to measure soil displacements. PIV has been used for many years in fluid mechanics, but for physical modeling in geotechnical engineering the technique is still relatively new. PIV has seen worldwide growth in soil mechanics over the last decade owing to developments in digital camera and laser technologies. The use of PIV is feasible provided the surface contains sufficient texture; a Cambridge group has shown that natural sand contains enough texture for applying PIV. In a texture-based approach, the only requirement is for any patch, big or small, to be sufficiently unique that statistical tracking of the patch is possible. In this paper, several soil mechanics models were investigated, such as retaining walls, slope failures, and foundations. The photographs were taken with a high-resolution digital camera, the soil displacements were evaluated with the free software MatPIV, and displacement graphics between pairs of images were obtained. The Nikon D60 digital camera has a 10.2 MP sensor and special properties that make it suitable for PIV applications: an airflow control system and image-sensor cleaning for protection against dust, Active D-Lighting for highlighted or shadowy areas while shooting, and an advanced three-point AF system for fast, efficient and precise autofocus. Its fast continuous shooting mode enables up to 100 JPEG images at three frames per second. Norm sand (DIN 1164) was used for all the models in a rectangular glass box. For every experiment, MatPIV was used to calculate the velocities from pairs of images. The MatPIV program was used in two ways, an easy way and a difficult way: in the easy way, the two images were processed with 64*64 pixel interrogation windows with 50% or 75% overlap, the calculation was performed with a single iteration through the images, and the result consisted of four
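
    The tracking step underlying any PIV evaluation, including MatPIV's, is patch cross-correlation: the displacement of an interrogation window is the location of the correlation peak between two exposures. Below is a minimal NumPy sketch of that single step, independent of MatPIV itself; the synthetic texture images are assumptions.

```python
import numpy as np

def patch_displacement(img0, img1):
    """Displacement (dy, dx) of img1 relative to img0 via the peak of
    their FFT-based circular cross-correlation (same-shape patches)."""
    a = img0 - img0.mean()
    b = img1 - img1.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    shifts = [p if p <= s // 2 else p - s
              for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

# Synthetic 64x64 "sand texture", shifted by (3, -2) pixels.
rng = np.random.default_rng(0)
img0 = rng.random((64, 64))
img1 = np.roll(img0, shift=(3, -2), axis=(0, 1))
print(patch_displacement(img0, img1))  # expect (3, -2)
```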

  7. Statistical properties of a utility measure of observer performance compared to area under the ROC curve

    Science.gov (United States)

    Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.

    2013-03-01

    The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false-positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology, in which all readers score all cases in all modalities. ROC curves from the studies were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader-averaged difference between modalities, to be roughly 2.0 times as large for EU as for AUC. The standard deviation across readers is roughly 1.4 times as large, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparisons across readers, the utility measure required on average 36% fewer readers to achieve 80% statistical power compared to AUC.
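
    For a concrete reading of the two endpoints: AUC integrates the whole ROC curve, whereas EU rewards only the neighbourhood of the clinically relevant operating point; for a concave curve this amounts to the maximum of TPF minus beta times FPF, with beta the iso-utility slope. The sketch below computes both figures of merit from a synthetic binormal ROC curve; the curve and the simplified normalization are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def auc(fpf, tpf):
    # Trapezoidal integration of the ROC curve.
    return float(np.sum((fpf[1:] - fpf[:-1]) * (tpf[1:] + tpf[:-1]) / 2))

def expected_utility(fpf, tpf, beta=1.03):
    """Utility-oriented figure of merit: for a concave ROC curve, the
    operating point where the slope equals the iso-utility slope beta
    maximizes TPF - beta*FPF (up to an affine transform fixed by
    prevalence and the task's utility structure)."""
    return float(np.max(tpf - beta * fpf))

# Synthetic binormal ROC curve with separation 1.5 (illustrative).
fpf = np.linspace(0.0, 1.0, 501)
tpf = norm.cdf(norm.ppf(fpf) + 1.5)
print(f"AUC = {auc(fpf, tpf):.3f}")  # analytic: Phi(1.5/sqrt(2)) ~ 0.856
print(f"EU  = {expected_utility(fpf, tpf):.3f}")
```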

  8. Rolling Resistance Measurement and Model Development

    DEFF Research Database (Denmark)

    Andersen, Lasse Grinderslev; Larsen, Jesper; Fraser, Elsje Sophia

    2015-01-01

    There is an increased focus worldwide on understanding and modeling rolling resistance because reducing the rolling resistance by just a few percent will lead to substantial energy savings. This paper reviews the state of the art of rolling resistance research, focusing on measuring techniques, surface and texture modeling, contact models, tire models, and macro-modeling of rolling resistance.

  9. Kinetic models of cell growth, substrate utilization and bio ...

    African Journals Online (AJOL)

    STORAGESEVER

    2008-05-02

    May 2, 2008 ... Aspergillus fumigatus. A simple model was proposed using the Logistic Equation for the growth, ... costs and also involved in less sophisticated fermentation ... apply and they are accurately proved that the model can express ...

  10. Utility of noninvasive transcutaneous measurement of postoperative hemoglobin in total joint arthroplasty patients.

    Science.gov (United States)

    Stoesz, Michael; Wood, Kristin; Clark, Wesley; Kwon, Young-Min; Freiberg, Andrew A

    2014-11-01

    This study prospectively evaluated the clinical utility of a noninvasive transcutaneous device for postoperative hemoglobin measurement in 100 total hip and knee arthroplasty patients. A protocol to measure hemoglobin noninvasively, prior to venipuncture, successfully avoided venipuncture in 73% of patients. In the remaining 27 patients, a total of 48 venipunctures were performed during the postoperative hospitalization period, for reasons including a transcutaneous hemoglobin measurement less than or equal to 9 g/dL (19), inability to obtain a transcutaneous hemoglobin measurement (8), clinical signs of anemia (3), and noncompliance with the study protocol (18). Such screening protocols may provide a convenient and cost-effective alternative to routine venipuncture for identifying patients at risk for blood transfusion after elective joint arthroplasty.

  11. Examining the Relationship between Technological Pedagogical Content Knowledge (TPACK) and Student Achievement Utilizing the Florida Value-Added Model

    Science.gov (United States)

    Farrell, Ivan K.; Hamed, Kastro M.

    2017-01-01

    Utilizing a correlational research design, we sought to examine the relationship between the technological pedagogical content knowledge (TPACK) of in-service teachers and student achievement measured with each individual teacher's Value-Added Model (VAM) score. The TPACK survey results and a teacher's VAM score were also examined, separately,…

  12. Utilization of coincidence criteria in absolute length measurements by optical interferometry in vacuum and air

    International Nuclear Information System (INIS)

    Schödel, R

    2015-01-01

    Traceability of length measurements to the international system of units (SI) can be realized by optical interferometry, making use of the well-known frequencies of monochromatic light sources listed in the Mise en Pratique for the realization of the metre. At some national metrology institutes, such as the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the absolute length of prismatic bodies (e.g. gauge blocks) is realized by so-called gauge-block interference comparators. At PTB, a number of such imaging phase-stepping interference comparators exist, including specialized vacuum interference comparators, each equipped with three highly stabilized laser light sources. The length of a material measure is expressed as a multiple of each wavelength. The large number of integer interference orders can be extracted by the method of exact fractions, in which the coincidence of the lengths resulting from the different wavelengths is utilized as a criterion. The unambiguous extraction of the integer interference orders is an essential prerequisite for correct length measurements. This paper critically discusses coincidence criteria and their validity for three modes of absolute length measurement: 1) measurements under vacuum, in which the wavelengths can be identified with the vacuum wavelengths; 2) measurements in air, in which the air refractive index is obtained from environmental parameters using an empirical equation; and 3) measurements in air, in which the air refractive index is obtained interferometrically by utilizing a vacuum cell placed along the measurement pathway. For case 3), which corresponds to PTB's Kösters-Comparator for long gauge blocks, the unambiguous determination of the integer interference orders related to the air refractive index could be improved by about a factor of ten when an 'overall dispersion value', suggested in this paper, is used as the coincidence criterion.
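
    The method of exact fractions can be sketched compactly: each wavelength yields an observed fractional fringe order, and one searches for integer orders whose implied lengths coincide within a tolerance. The wavelengths, fractions, search window and tolerance below are illustrative, not PTB's actual instrument values.

```python
# Sketch of the method of exact fractions: find integer interference
# orders N_i such that the lengths L = (N_i + f_i) * lambda_i / 2
# coincide for all wavelengths. All values below are illustrative.

def exact_fractions(wavelengths, fractions, l_nominal, window=50, tol=5e-9):
    """wavelengths in metres, fractions in [0, 1); l_nominal is a rough
    mechanical estimate of the length. Returns candidate lengths where
    the interference orders of all wavelengths coincide within tol."""
    lam0, f0 = wavelengths[0], fractions[0]
    n0_center = round(2 * l_nominal / lam0 - f0)
    candidates = []
    for n0 in range(n0_center - window, n0_center + window + 1):
        length = (n0 + f0) * lam0 / 2
        for lam, f in zip(wavelengths[1:], fractions[1:]):
            order = 2 * length / lam - f
            # Residual distance (in length) to the nearest integer order.
            if abs(order - round(order)) * lam / 2 > tol:
                break
        else:
            candidates.append(length)
    return candidates

lams = [633.0e-9, 612.0e-9, 543.0e-9]  # three stabilized lasers
true_l = 0.100000312                    # metres (illustrative)
fracs = [(2 * true_l / lam) % 1.0 for lam in lams]
print(exact_fractions(lams, fracs, l_nominal=0.1000))
```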

  13. Aerosol behaviour modeling and measurements

    Energy Technology Data Exchange (ETDEWEB)

    Gieseke, J A; Reed, L D [Batelle Memorial Institute, Columbus, OH (United States)

    1977-01-01

    Aerosol behavior within Liquid Metal Fast Breeder Reactor (LMFBR) containments is of critical importance since most of the radioactive species are expected to be associated with particulate forms and the mass of radiologically significant material leaked to the ambient atmosphere is directly related to the aerosol concentration airborne within the containment. Mathematical models describing the behavior of aerosols in closed environments, besides providing a direct means of assessing the importance of specific assumptions regarding accident sequences, will also serve as the basic tool with which to predict the consequences of various postulated accident situations. Consequently, considerable efforts have been recently directed toward the development of accurate and physically realistic theoretical aerosol behavior models. These models have accounted for various mechanisms affecting agglomeration rates of airborne particulate matter as well as particle removal rates from closed systems. In all cases, spatial variations within containments have been neglected and a well-mixed control volume has been assumed. Examples of existing computer codes formulated from the mathematical aerosol behavior models are the Brookhaven National Laboratory TRAP code, the PARDISEKO-II and PARDISEKO-III codes developed at Karlsruhe Nuclear Research Center, and the HAA-2, HAA-3, and HAA-3B codes developed by Atomics International. Because of their attractive short computation times, the HAA-3 and HAA-3B codes have been used extensively for safety analyses and are attractive candidates with which to demonstrate order of magnitude estimates of the effects of various physical assumptions. Therefore, the HAA-3B code was used as the nucleus upon which changes have been made to account for various physical mechanisms which are expected to be present in postulated accident situations and the latest of the resulting codes has been termed the HAARM-2 code. It is the primary purpose of the HAARM

  14. TE Wave Measurement and Modeling

    CERN Document Server

    Sikora, John P; Sonnad, Kiran G; Alesini, David; De Santis, Stefano

    2013-01-01

    In the TE wave method, microwaves are coupled into the beam-pipe and the effect of the electron cloud on these microwaves is measured. An electron cloud (EC) density can then be calculated from this measurement. There are two analysis methods currently in use. The first treats the microwaves as being transmitted from one point to another in the accelerator. The second, more recent method treats the beam-pipe as a resonant cavity. This paper will summarize the reasons for adopting the resonant TE wave analysis as well as give examples of resonant beam-pipe from CESRTA and DAΦNE. The results of bead-pull bench measurements will show some possible standing wave patterns, including a cutoff mode (evanescent) where the field decreases exponentially with distance from the drive point. We will outline other recent developments in the TE wave method, including VORPAL simulations of microwave resonances, as well as the simulation of transmission in the presence of both an electron cloud and magnetic fields.

  15. Context analysis for a new regulatory model for electric utilities in Brazil

    International Nuclear Information System (INIS)

    El Hage, Fabio S.; Rufín, Carlos

    2016-01-01

    This article examines what would have to change in the Brazilian regulatory framework in order to make utilities profit from energy efficiency and the integration of resources, instead of doing so from traditional consumption growth, as it happens at present. We argue that the Brazilian integrated electric sector resembles a common-pool resources problem, and as such it should incorporate, in addition to the centralized operation for power dispatch already in place, demand side management, behavioral strategies, and smart grids, attained through a new business and regulatory model for utilities. The paper proposes several measures to attain a more sustainable and productive electricity distribution industry: decoupling revenues from volumetric sales through a fixed maximum load fee, which would completely offset current disincentives for energy efficiency; the creation of a market for negawatts (saved megawatts) using the current Brazilian mechanism of public auctions for the acquisition of wholesale energy; and the integration of technologies, especially through the growth of unregulated products and services. Through these measures, we believe that Brazil could improve both energy security and overall sustainability of its power sector in the long run. - Highlights: • Necessary changes in the Brazilian regulatory framework towards energy efficiency. • How to incorporate demand side management, behavioral strategies, and smart grids. • Proposition of a market for negawatts at public auctions. • Measures to attain a more sustainable electricity distribution industry in Brazil.

  16. Validation of the SF-6D Health State Utilities Measure in Lower Extremity Sarcoma

    Directory of Open Access Journals (Sweden)

    Kenneth R. Gundle

    2014-01-01

    Aim. Health state utilities measures are preference-weighted patient-reported outcome (PRO) instruments that facilitate comparative effectiveness research. One such measure, the SF-6D, is generated from the Short Form 36 (SF-36). This report describes a psychometric evaluation of the SF-6D in a cross-sectional population of lower extremity sarcoma patients. Methods. Patients with lower extremity sarcoma from a prospective database who had completed the SF-36 and Toronto Extremity Salvage Score (TESS) were eligible for inclusion. Computed SF-6D health states were given preference weights based on a prior valuation. The primary outcome was the correlation between the SF-6D and TESS. Results. In 63 pairs of surveys in a lower extremity sarcoma population, the mean preference-weighted SF-6D score was 0.59 (95% CI 0.4–0.81). The distribution of SF-6D scores approximated a normal curve (skewness = 0.11). There was a positive correlation between the SF-6D and TESS (r = 0.75, P < 0.01). Respondents who reported walking aid use had lower SF-6D scores (0.53 versus 0.61, P = 0.03). Five respondents underwent amputation, with lower SF-6D scores that approached significance (0.48 versus 0.6, P = 0.06). Conclusions. The SF-6D health state utilities measure demonstrated convergent validity without evidence of ceiling or floor effects. The SF-6D is a health state utilities measure suitable for further research in sarcoma patients.

  17. Laser shaft alignment measurement model

    Science.gov (United States)

    Mo, Chang-tao; Chen, Changzheng; Hou, Xiang-lin; Zhang, Guoyu

    2007-12-01

    The laser beam's track on the photosensitive surface of a receiver will be a closed curve when the driving shaft and the driven shaft rotate with the same angular velocity and rotation direction. The coordinates of an arbitrary point on the curve are determined by the relative position of the two shafts. Based on this observation, a mathematical model of laser alignment is set up. Using a data acquisition system and a data processing model for a laser alignment meter with a single laser beam and a detector, and based on the installation parameters stored in the computer, the state parameters between the two shafts can be obtained by further calculation and correction. The correction data for the four under-chassis supports of the adjusted apparatus, moving in the horizontal and the vertical plane, can then be calculated, indicating how to move the apparatus to align the shafts.

  18. Expected utility without utility

    OpenAIRE

    Castagnoli, E.; Licalzi, M.

    1996-01-01

    This paper advances an interpretation of Von Neumann–Morgenstern's expected utility model for preferences over lotteries which does not require the notion of a cardinal utility over prizes and can be phrased entirely in the language of probability. According to it, the expected utility of a lottery can be read as the probability that this lottery outperforms another given independent lottery. The implications of this interpretation for some topics and models in decision theory are considered.

  19. Recent advances in modeling nutrient utilization in ruminants

    NARCIS (Netherlands)

    Kebreab, E.; Dijkstra, J.; Bannink, A.; France, J.

    2009-01-01

    Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, post-absorptive metabolism and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient

  20. Electric power bidding model for practical utility system

    Directory of Open Access Journals (Sweden)

    M. Prabavathi

    2018-03-01

    A competitive open market environment has been created by the restructuring of the electricity market. In the new competitive market, mostly a centrally operated pool with a power exchange has been introduced to match the offers from competing suppliers with the bids of the customers. In such an open-access environment, the formation of a bidding strategy is one of the most challenging and important tasks for electricity participants seeking to maximize their profit. To build bidding strategies for power suppliers and consumers in the restructured electricity market, a new mathematical framework is proposed in this paper. It is assumed that each participant submits several blocks of real power quantities along with their bidding prices. The effectiveness of the proposed method is tested on the Indian Utility 62-bus system and the IEEE 118-bus system. Keywords: Bidding strategy, Day-ahead electricity market, Market clearing price, Market clearing volume, Block bid, Intermediate value theorem
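
    The clearing mechanic for block bids can be sketched as follows: sort supply blocks by ascending price and demand blocks by descending price, then trade while the best remaining demand bid meets or exceeds the best remaining supply offer; the traded volume is the market clearing volume, and the last accepted supply price is one convention for the market clearing price. The block data and pricing convention are illustrative assumptions.

```python
# Sketch of block-bid market clearing: supply offers sorted ascending,
# demand bids sorted descending; trade while the demand price is at
# least the supply price. Block quantities and prices are illustrative.

def clear_market(supply, demand):
    """supply/demand: lists of (quantity_MW, price). Returns (MCV, MCP)."""
    supply = sorted(supply, key=lambda b: b[1])
    demand = sorted(demand, key=lambda b: -b[1])
    volume, i, j = 0.0, 0, 0
    s_qty, d_qty = supply[0][0], demand[0][0]
    mcp = None
    while i < len(supply) and j < len(demand) and demand[j][1] >= supply[i][1]:
        traded = min(s_qty, d_qty)
        volume += traded
        # Convention: clear at the marginal (last accepted) supply price.
        mcp = supply[i][1]
        s_qty -= traded
        d_qty -= traded
        if s_qty == 0:
            i += 1
            s_qty = supply[i][0] if i < len(supply) else 0.0
        if d_qty == 0:
            j += 1
            d_qty = demand[j][0] if j < len(demand) else 0.0
    return volume, mcp

supply = [(100, 20.0), (80, 25.0), (120, 32.0)]  # (MW, $/MWh)
demand = [(90, 35.0), (110, 28.0), (100, 22.0)]
print(clear_market(supply, demand))  # -> (180.0, 25.0)
```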

  1. Business Process Modelling for Measuring Quality

    NARCIS (Netherlands)

    Heidari, F.; Loucopoulos, P.; Brazier, F.M.

    2013-01-01

    Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as

  2. Symmetry evaluation for an interferometric fiber optic gyro coil utilizing a bidirectional distributed polarization measurement system.

    Science.gov (United States)

    Peng, Feng; Li, Chuang; Yang, Jun; Hou, Chengcheng; Zhang, Haoliang; Yu, Zhangjun; Yuan, Yonggui; Li, Hanyang; Yuan, Libo

    2017-07-10

    We propose a dual-channel measurement system for evaluating the optical path symmetry of an interferometric fiber optic gyro (IFOG) coil. Utilizing a bidirectional distributed polarization measurement system, the forward and backward transmission performances of an IFOG coil are characterized simultaneously by just a one-time measurement. The simple but practical configuration is composed of a bidirectional Mach-Zehnder interferometer and multichannel transmission devices connected to the IFOG coil under test. The static and dynamic temperature results of the IFOG coil reveal that its polarization-related symmetric properties can be effectively obtained with high accuracy. The optical path symmetry investigation is highly beneficial in monitoring and improving the winding technology of an IFOG coil and reducing the nonreciprocal effect of an IFOG.

  3. Dipole field measurement technique utilizing the Faraday rotation effect in polarization preserving optical fibers

    International Nuclear Information System (INIS)

    Haddock, C.; Tong, M.Y.M.

    1989-10-01

    TRIUMF is presently in the project definition stage of its proposed KAON factory. The facility will require approximately 300 dipole magnets. The rapid measurement of representative parameters of these magnets, in particular effective length, is one of the challenges to be met. In addition to the commissioning of a.c. magnetic field measurement systems based on established techniques, a project is underway to investigate an alternative method utilizing the Faraday rotation effect in polarization preserving optical fibers. It is shown that a fiber equivalent of a Faraday cell can be constructed by winding the fiber in such a way that the induced beat length L{sub p} is equal to (2n+1) times the bending circumference, where n is an integer. Background to the subject and preliminary results of the measurements are reported in this paper
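    The quoted winding condition can be turned into a small worked computation. Assuming the bending-induced beat length is known, the coil diameter satisfying L{sub p} = (2n+1) x circumference follows directly (the numbers below are hypothetical):

```python
import math

def winding_diameter(beat_length_m, n):
    """Coil diameter satisfying L_p = (2n+1) * circumference.

    beat_length_m: polarization beat length induced by bending (assumed known);
    n: any non-negative integer, per the condition quoted in the abstract.
    """
    circumference = beat_length_m / (2 * n + 1)
    return circumference / math.pi

# Hypothetical numbers: a 0.30 m induced beat length with n = 1
print(f"coil diameter: {winding_diameter(0.30, 1) * 100:.2f} cm")
```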

  4. Utility based maintenance analysis using a Random Sign censoring model

    International Nuclear Information System (INIS)

    Andres Christen, J.; Ruggeri, Fabrizio; Villa, Enrique

    2011-01-01

    Industrial systems subject to failures are usually inspected when there are evident signs of an imminent failure. Maintenance is therefore performed at a random time, somehow dependent on the failure mechanism. A competing risk model, namely a Random Sign model, is considered to relate failure and maintenance times. We propose a novel Bayesian analysis of the model and apply it to actual data from a water pump in an oil refinery. The design of an optimal maintenance policy is then discussed under a formal decision theoretic approach, analyzing the goodness of the current maintenance policy and making decisions about the optimal maintenance time.

  5. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    Science.gov (United States)

    Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  6. About the parametrizations utilized to perform magnetic moments measurements using the transient field technique

    Energy Technology Data Exchange (ETDEWEB)

    Gómez, A. M., E-mail: amgomezl-1@uqvirtual.edu.co [Programa de Física, Universidad del Quindío (Colombia); Torres, D. A., E-mail: datorresg@unal.edu.co [Physics Department, Universidad Nacional de Colombia, Bogotá (Colombia)

    2016-07-07

    The experimental study of nuclear magnetic moments using the Transient Field technique makes use of spin-orbit hyperfine interactions to generate strong magnetic fields, above the kilo-Tesla regime, capable of precessing the nuclear spin. A theoretical description of such magnetic fields is still an open research problem, and parametrizations remain a common way to address the lack of theoretical information. In this contribution, a review of the main parametrizations utilized in measurements of nuclear magnetic moments is presented, and the challenges of building a theoretical description from first principles are discussed.

  7. Indoor MIMO Channel Measurement and Modeling

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Andersen, Jørgen Bach

    2005-01-01

    Forming accurate models of the multiple input multiple output (MIMO) channel is essential both for simulation and for understanding the basic properties of the channel. This paper investigates different known models using measurements obtained with a 16x32 MIMO channel sounder for the 5.8 GHz band. The measurements were carried out in various indoor scenarios including both temporal and spatial aspects of channel changes. The models considered include the so-called Kronecker model, a model proposed by Weichselberger et al., and a model involving the full covariance matrix, the most...
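    Of the models named above, the Kronecker model is the simplest to state: channel realizations are drawn as H = R_rx^(1/2) G R_tx^(1/2) with G i.i.d. complex Gaussian, so transmit and receive correlations factor. A minimal sketch with hypothetical exponential correlation matrices (not the measured covariances from the campaign):

```python
import numpy as np

rng = np.random.default_rng(1)

def kronecker_channel(r_rx, r_tx, n_samples=1000):
    """Draw MIMO channel realizations under the Kronecker correlation model:
    H = R_rx^{1/2} G R_tx^{1/2}, with G i.i.d. complex Gaussian."""
    nr, nt = r_rx.shape[0], r_tx.shape[0]
    rx_sqrt = np.linalg.cholesky(r_rx)
    tx_sqrt = np.linalg.cholesky(r_tx)
    g = (rng.standard_normal((n_samples, nr, nt))
         + 1j * rng.standard_normal((n_samples, nr, nt))) / np.sqrt(2)
    return rx_sqrt @ g @ tx_sqrt.conj().T

# Hypothetical exponential correlation matrices for a 4x4 link
rho = 0.7
corr = lambda n: np.array([[rho ** abs(i - j) for j in range(n)] for i in range(n)])
H = kronecker_channel(corr(4), corr(4))
print(H.shape)  # (1000, 4, 4)
```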

  8. Measurement control program at model facility

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    A measurement control program for the model plant is described. The discussion includes the technical basis for such a program, the application of measurement control principles to each measurement, and the use of special experiments to estimate measurement error parameters for difficult-to-measure materials. The discussion also describes the statistical aspects of the program, and the documentation procedures used to record, maintain, and process the basic data

  9. Evaluating the performance and utility of regional climate models

    DEFF Research Database (Denmark)

    Christensen, Jens H.; Carter, Timothy R.; Rummukainen, Markku

    2007-01-01

    This special issue of Climatic Change contains a series of research articles documenting co-ordinated work carried out within a 3-year European Union project 'Prediction of Regional scenarios and Uncertainties for Defining European Climate change risks and Effects' (PRUDENCE). The main objective of the PRUDENCE project was to provide high resolution climate change scenarios for Europe at the end of the twenty-first century by means of dynamical downscaling (regional climate modelling) of global climate simulations. The first part of the issue comprises seven overarching PRUDENCE papers on: (1) the design of the model simulations and analyses of climate model performance, (2 and 3) evaluation and intercomparison of simulated climate changes, (4 and 5) specialised analyses of impacts on water resources and on other sectors including agriculture, ecosystems, energy, and transport, (6) investigation of extreme...

  10. On the Utility of Island Models in Dynamic Optimization

    DEFF Research Database (Denmark)

    Lissovoi, Andrei; Witt, Carsten

    2015-01-01

    A simple island model with λ islands and migration occurring after every τ iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (1+λ) EA if τ=1, i.e., migration occurs during every iteration. It is proved that even for an increased offspring population size up to λ = O(n^(1-ε)), the (1+λ) EA is still not able to track the optimum of Maze. If the migration interval is increased, the algorithm is able to track the optimum even for logarithmic λ. Finally, the relationship of τ, λ, and the ability of the island model to track the optimum is investigated more closely...
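    A generic sketch of the island model studied here, with λ islands each running a (1+1) EA and the best individual broadcast every τ iterations; the placeholder OneMax fitness stands in for the paper's dynamic Maze function:

```python
import random

def island_ea(fitness, n_bits, n_islands, tau, iterations, seed=0):
    """n_islands islands each run a (1+1) EA; every tau iterations the best
    individual is copied to all islands (a simple migration topology)."""
    rng = random.Random(seed)
    islands = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_islands)]
    for t in range(1, iterations + 1):
        for k, x in enumerate(islands):
            # standard bit mutation with rate 1/n_bits
            y = [b ^ (rng.random() < 1.0 / n_bits) for b in x]
            if fitness(y) >= fitness(x):
                islands[k] = y
        if t % tau == 0:  # migration step
            best = max(islands, key=fitness)
            islands = [list(best) for _ in islands]
    return max(islands, key=fitness)

onemax = lambda x: sum(x)  # placeholder; the paper analyses the dynamic Maze function
print(onemax(island_ea(onemax, n_bits=50, n_islands=4, tau=20, iterations=2000)))
```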

  11. Recursive inter-generational utility in global climate risk modeling

    Energy Technology Data Exchange (ETDEWEB)

    Minh, Ha-Duong [Centre International de Recherche sur l' Environnement et le Developpement (CIRED-CNRS), 75 - Paris (France); Treich, N. [Institut National de Recherches Agronomiques (INRA-LEERNA), 31 - Toulouse (France)

    2003-07-01

    This paper distinguishes relative risk aversion and resistance to inter-temporal substitution in climate risk modeling. Stochastic recursive preferences are introduced in a stylized numeric climate-economy model using preliminary IPCC 1998 scenarios. It shows that higher risk aversion increases the optimal carbon tax. Higher resistance to inter-temporal substitution alone has the same effect as increasing the discount rate, provided that the risk is not too large. We discuss implications of these findings for the debate upon discounting and sustainability under uncertainty. (author)

  12. Utilization-Based Modeling and Optimization for Cognitive Radio Networks

    Science.gov (United States)

    Liu, Yanbing; Huang, Jun; Liu, Zhangxiong

    The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues, one for the primary (licensed) users and one for the cognitive (unlicensed) users. The system state equations are derived from the underlying Markov process, and an optimization model for the system is proposed. The system performance is then evaluated numerically, supporting the soundness of the model, and the effects of different system parameters are discussed on the basis of the experimental results.

  13. User-owned utility models for rural electrification

    Energy Technology Data Exchange (ETDEWEB)

    Waddle, D.

    1997-12-01

    The author discusses the history of rural electric cooperatives (RECs) in the United States, and the broader question of whether such organizations can serve as a model for rural electrification in other countries. The author points out the features of such cooperatives which have given them stability and strength, and emphasizes that for such programs to succeed, many of these same features must be present. He argues that the cooperative models are not outdated, but that they need strong local support and a governmental structure that is supportive, or at the very least not hostile.

  14. Model measurements in the cryogenic National Transonic Facility - An overview

    Science.gov (United States)

    Holmes, H. K.

    1985-01-01

    In the operation of the National Transonic Facility (NTF), higher Reynolds numbers are obtained through the use of low operating temperatures and high pressures. Liquid nitrogen is used as the cryogenic medium, and temperatures in the range from -320 F to 160 F can be employed. A maximum pressure of 130 psi is specified, while the NTF design parameter for the Reynolds number is 120,000,000. In view of the new requirements on the measurement systems, major developments had to be undertaken in virtually all wind tunnel measurement areas and, in addition, some new measurement systems were needed. Attention is given to force measurement, pressure measurement, model attitude, model deformation, and the data system.

  15. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  16. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  17. Household time allocation model based on a group utility function

    NARCIS (Netherlands)

    Zhang, J.; Borgers, A.W.J.; Timmermans, H.J.P.

    2002-01-01

    Existing activity-based models typically assume an individual decision-making process. In household decision-making, however, interaction exists among household members and their activities during the allocation of the members' limited time. This paper, therefore, attempts to develop a new household

  18. Explaining Distortions in Utility Elicitation through the Rank-Dependent Model for Risky Choices

    NARCIS (Netherlands)

    P.P. Wakker (Peter); A.M. Stiggelbout (Anne)

    1995-01-01

    textabstractThe standard gamble (SG) method has been accepted as the gold standard for the elicitation of utility when risk or uncertainty is involved in decisions, and thus for the measurement of utility in medical decisions. Unfortunately, the SG method is distorted by a general dislike for

  19. Utilization of Software Tools for Uncertainty Calculation in Measurement Science Education

    International Nuclear Information System (INIS)

    Zangl, Hubert; Zine-Zine, Mariam; Hoermaier, Klaus

    2015-01-01

    Despite its importance, uncertainty is often neglected by practitioners in the design of systems, even in safety critical applications. Problems arising from uncertainty may then be identified only late in the design process and thus lead to additional costs. Although numerous tools exist to support uncertainty calculation, reasons for their limited usage in early design phases may be low awareness of the existence of the tools and insufficient training in their practical application. We present a teaching philosophy that addresses uncertainty from the very beginning of teaching measurement science, in particular with respect to the utilization of software tools. The developed teaching material is based on the GUM method and makes use of uncertainty toolboxes in the simulation environment. Based on examples in measurement science education, we discuss advantages and disadvantages of the proposed teaching philosophy and include feedback from students
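    A typical toolbox exercise of the kind described is Monte Carlo propagation in the spirit of GUM Supplement 1: sample each input from its assigned distribution, push the samples through the measurement model, and summarize the output. A minimal sketch with a hypothetical measurement model:

```python
import numpy as np

rng = np.random.default_rng(42)

def propagate(model, inputs, n=200_000):
    """GUM Supplement 1 style Monte Carlo propagation: sample each input
    from its assigned distribution and summarize the output quantity."""
    samples = model(*[draw(n) for draw in inputs])
    return samples.mean(), samples.std(ddof=1)

# Hypothetical measurement: power P = V^2 / R,
# with V ~ N(10.0 V, 0.05 V) and R ~ N(50.0 ohm, 0.2 ohm)
voltage = lambda n: rng.normal(10.0, 0.05, n)
resistance = lambda n: rng.normal(50.0, 0.2, n)

mean, std_u = propagate(lambda v, r: v**2 / r, [voltage, resistance])
print(f"P = {mean:.4f} W, standard uncertainty u(P) = {std_u:.4f} W")
```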

  20. [Thermal energy utilization analysis and energy conservation measures of fluidized bed dryer].

    Science.gov (United States)

    Xing, Liming; Zhao, Zhengsheng

    2012-07-01

    To propose measures for enhancing thermal energy utilization by analyzing the drying process and operating principle of fluidized bed dryers, in order to guide the optimization and upgrading of fluidized bed drying equipment. Through a systematic analysis of the drying process and operating principle of fluidized beds, the energy conservation law was adopted to calculate the thermal energy of dryers. The thermal energy of fluidized bed dryers is mainly used to make up for the thermal consumption of water evaporation (Qw), hot air leaving the outlet of the equipment (Qe), heating and drying of wet materials (Qm), and heat dissipation to the surroundings through hot air pipelines and cyclone separators. Effective measures to enhance thermal energy utilization of fluidized bed dryers are to reduce the heat Qe lost with the exhaust gas, recycle the heat of the dryer outlet air, insulate the drying towers, hot air pipes and cyclone separators, dehumidify the clean inlet air, and reasonably control drying time and air temperature. Technical parameters such as air supply rate, inlet air temperature and humidity, material temperature, and outlet temperature and humidity are set and controlled to effectively save energy during the drying process and reduce production costs.
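    The heat bookkeeping described above can be sketched directly. All operating figures below are hypothetical, and the wall-dissipation term through pipelines and separators is omitted for brevity:

```python
def dryer_heat_balance(m_water_kg_h, m_material_kg_h, cp_material_kj_kg_k,
                       t_in_c, t_out_c, m_air_kg_h, cp_air_kj_kg_k=1.01,
                       t_exhaust_c=60.0, t_ambient_c=25.0):
    """Rough bookkeeping of the terms named in the abstract (inputs are
    hypothetical): Qw evaporation, Qm material heating, Qe exhaust-air loss.
    All results in kJ/h."""
    latent_heat = 2260.0  # kJ/kg, water near 100 C
    q_w = m_water_kg_h * latent_heat
    q_m = m_material_kg_h * cp_material_kj_kg_k * (t_out_c - t_in_c)
    q_e = m_air_kg_h * cp_air_kj_kg_k * (t_exhaust_c - t_ambient_c)
    total = q_w + q_m + q_e
    return {"Qw": q_w, "Qm": q_m, "Qe": q_e,
            "useful_fraction": (q_w + q_m) / total}

print(dryer_heat_balance(m_water_kg_h=120, m_material_kg_h=400,
                         cp_material_kj_kg_k=1.8, t_in_c=20, t_out_c=45,
                         m_air_kg_h=6000))
```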

  1. The Effect of Geographic Units of Analysis on Measuring Geographic Variation in Medical Services Utilization

    Directory of Open Access Journals (Sweden)

    Agnus M. Kim

    2016-07-01

    Objectives: We aimed to evaluate the effect of geographic units of analysis on measuring geographic variation in medical services utilization. For this purpose, we compared geographic variations in the rates of eight major procedures in administrative units (districts) and new areal units organized based on the actual health care use of the population in Korea. Methods: To compare geographic variation across geographic units of analysis, we calculated the age-sex standardized rates of eight major procedures (coronary artery bypass graft surgery, percutaneous transluminal coronary angioplasty, surgery after hip fracture, knee-replacement surgery, caesarean section, hysterectomy, computed tomography scan, and magnetic resonance imaging scan) from the National Health Insurance database in Korea for 2013. Using the coefficient of variation, the extremal quotient, and the systematic component of variation, we measured geographic variation for these eight procedures in districts and new areal units. Results: Compared with districts, new areal units showed a reduction in geographic variation. Extremal quotients and inter-decile ratios for the eight procedures were lower in new areal units. While the coefficient of variation was lower for most procedures in new areal units, the pattern of change in the systematic component of variation between districts and new areal units differed among procedures. Conclusions: Geographic variation in medical service utilization can vary according to the geographic unit of analysis. To determine how geographic characteristics such as population size and the number of geographic units affect geographic variation, further studies are needed.
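    The three variation statistics used in the study are straightforward to compute from observed and expected counts per area. A minimal sketch (the SCV follows the McPherson-style formula; the input counts are hypothetical):

```python
import numpy as np

def variation_statistics(observed, expected):
    """Common small-area variation statistics: coefficient of variation (CV),
    extremal quotient (EQ), and a McPherson-style systematic component of
    variation (SCV), from observed and expected counts per area."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    rates = observed / expected  # standardized utilization ratios
    cv = rates.std(ddof=1) / rates.mean()
    eq = rates.max() / rates.min()
    k = len(observed)
    scv = 100.0 * (np.sum(((observed - expected) / expected) ** 2) / k
                   - np.sum(1.0 / expected) / k)
    return cv, eq, scv

# Hypothetical counts for five areas
print(variation_statistics([30, 45, 52, 28, 61], [40, 40, 50, 35, 55]))
```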

  2. Formal Definition of Measures for BPMN Models

    Science.gov (United States)

    Reynoso, Luis; Rolón, Elvira; Genero, Marcela; García, Félix; Ruiz, Francisco; Piattini, Mario

    Business process models are currently attaining more relevance, and more attention is therefore being paid to their quality. This situation led us to define a set of measures for the understandability of BPMN models, presented in a previous work. We focus on understandability since a model must be well understood before any changes are made to it. These measures were originally defined informally, in natural language. As is well known, natural language is ambiguous and may lead to misunderstandings and misinterpretation of the concepts captured by a measure and of the way in which the measure value is obtained. This has motivated us to provide a formal definition of the proposed measures using OCL (Object Constraint Language) on the BPMN (Business Process Modeling Notation) metamodel presented in this paper. The main advantages and lessons learned (obtained both from the current work and from previous work on the formal definition of other measures) are also summarized.

  3. Expected utility and catastrophic risk in a stochastic economy-climate model

    Energy Technology Data Exchange (ETDEWEB)

    Ikefuji, M. [Institute of Social and Economic Research, Osaka University, Osaka (Japan); Laeven, R.J.A.; Magnus, J.R. [Department of Econometrics and Operations Research, Tilburg University, Tilburg (Netherlands); Muris, C. [CentER, Tilburg University, Tilburg (Netherlands)

    2010-11-15

    In the context of extreme climate change, we ask how to conduct expected utility analysis in the presence of catastrophic risks. Economists typically model decision making under risk and uncertainty by expected utility with constant relative risk aversion (power utility); statisticians typically model economic catastrophes by probability distributions with heavy tails. Unfortunately, the expected utility framework is fragile with respect to heavy-tailed distributional assumptions. We specify a stochastic economy-climate model with power utility and explicitly demonstrate this fragility. We derive necessary and sufficient compatibility conditions on the utility function to avoid fragility and solve our stochastic economy-climate model for two examples of such compatible utility functions. We further develop and implement a procedure to learn the input parameters of our model and show that the model thus specified produces quite robust optimal policies. The numerical results indicate that higher levels of uncertainty (heavier tails) lead to less abatement and consumption, and to more investment, but this effect is not unlimited.
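    The fragility the authors demonstrate can be stated compactly. In my notation (not the paper's), with power utility and a consumption distribution placing heavy mass near zero:

```latex
% Sketch (notation mine): power utility
%   U(c) = c^{1-\gamma}/(1-\gamma),  \gamma > 1,
% paired with a consumption density f with f(c) \sim c^{k-1} as c \to 0.
\[
  \mathbb{E}\,[\,U(c)\,] \;=\; \int_0^{\infty} \frac{c^{1-\gamma}}{1-\gamma}\, f(c)\, \mathrm{d}c
\]
% Near zero the integrand behaves like c^{k-\gamma}, so the integral is
% finite only if k > \gamma - 1; otherwise expected utility is minus
% infinity. This is the incompatibility between power utility and
% heavy-tailed catastrophe models that the compatibility conditions rule out.
```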

  4. Measuring corporate social responsibility using composite indices: Mission impossible? The case of the electricity utility industry

    Directory of Open Access Journals (Sweden)

    Juan Diego Paredes-Gazquez

    2016-01-01

    Corporate social responsibility is a multidimensional concept that is often measured using diverse indicators. Composite indices can aggregate these single indicators into one measurement. This article aims to identify the key challenges in constructing a composite index for measuring corporate social responsibility. The process is illustrated by the construction of a composite index for measuring social outcomes in the electricity utility industry. The sample consisted of seventy-four companies from twenty-three different countries and one special administrative region operating in the industry in 2011. The findings show that (1) the unavailability of information about corporate social responsibility, (2) the particular characteristics of this information, and (3) the weighting of indicators are the main obstacles when constructing the composite index. We highlight that an effective composite index should have a clear objective, a solid theoretical background and a robust structure. In practice, researchers should reconsider how they use composite indices to measure corporate social responsibility, as more transparency and stringency are needed when constructing these tools.

  5. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Various Standard Model measurements have been performed in proton-proton collisions at centre-of-mass energies of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  6. Asset transformation and the challenges to servitize a utility business model

    International Nuclear Information System (INIS)

    Helms, Thorsten

    2016-01-01

    The traditional energy utility business model is under pressure, and energy services are expected to play an important role in the energy transition. Experts and scholars argue that utilities need to innovate their business models and transform from commodity suppliers to service providers. The transition from a product-oriented, capital-intensive business model based on tangible assets towards a service-oriented, expense-intensive business model based on intangible assets may present great managerial and organizational challenges. Little research exists about such transitions for capital-intensive commodity providers, and particularly energy utilities, where the challenges to servitize are expected to be greatest. This qualitative paper explores the barriers to servitization within selected Swiss and German utility companies through a series of interviews with utility managers. One of them is ‘asset transformation’, the shift from tangible to intangible assets as the major input factor for the value proposition, which is proposed as a driver of the complexity of business model transitions. Managers need to carefully manage those challenges and find ways to operate new service and established utility business models side by side. Policy makers can support the transition of utilities through more favorable regulatory frameworks for energy services and by supporting the exchange of knowledge in the industry. - Highlights: •The paper analyses the expected transformation of utilities into service providers. •Service and utility business models possess very different attributes. •The former is based on intangible, the latter on tangible assets. •The transformation into a service provider involves great challenges. •Asset transformation is proposed as a barrier to business model innovation.

  7. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  8. ASPEN+ and economic modeling of equine waste utilization for localized hot water heating via fast pyrolysis

    Science.gov (United States)

    ASPEN Plus based simulation models have been developed to design a pyrolysis process for the on-site production and utilization of pyrolysis oil from equine waste at the Equine Rehabilitation Center at Morrisville State College (MSC). The results indicate that utilization of all available Equine Reh...

  9. Decision modelling tools for utilities in the deregulated energy market

    Energy Technology Data Exchange (ETDEWEB)

    Makkonen, S. [Process Vision Oy, Helsinki (Finland)

    2005-07-01

    This thesis examines the impact of the deregulation of the energy market on decision making and optimisation in utilities and demonstrates how decision support applications can solve specific tasks encountered in this context. The themes of the thesis are presented in different frameworks in order to clarify the complex decision making and optimisation environment where new sources of uncertainty arise due to the convergence of energy markets, the globalisation of energy business and increasing competition. This thesis reflects the changes in the decision making and planning environment of European energy companies during the period from 1995 to 2004. It also follows the development of computational performance and the evolution of energy information systems during the same period. Specifically, this thesis consists of studies at several levels of the decision making hierarchy, ranging from top-level strategic decision problems to specific optimisation algorithms. On the other hand, the studies also follow the progress of the liberalised energy market from the monopolistic era to the fully competitive market with new trading instruments and issues like emissions trading. This thesis suggests that there is an increasing need for optimisation and multiple criteria decision making methods, and that new approaches based on the use of operations research are welcome as deregulation proceeds and uncertainties increase. Technically, the optimisation applications presented are based on Lagrangian relaxation techniques and the dedicated Power Simplex algorithm supplemented with stochastic scenario analysis for decision support, a heuristic method to allocate common benefits and potential losses of coalitions of power companies, and an advanced Branch-and-Bound algorithm to solve nonconvex optimisation problems efficiently. The optimisation problems are part of the operational and tactical decision making process that has become very complex in recent years. Similarly

  10. Fiscal 1996 coal production/utilization technology promotion subsidy/clean coal technology promotion business/regional model survey. Study report on `Environmental load reduction measures: feasibility study of a coal utilization eco/energy supply system`; 1996 nendo sekitan seisan riyo gijutsu shinkohi hojokin clean coal technology suishin jigyo chiiki model chosa. `Kankyo fuka teigen taisaku sekitan riyo eko energy kyokyu system no kanosei chosa` chosa hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Oil demand is expected to grow substantially in the future, and the combined use of coal with combustibles such as hulls, bagasse and waste is considered from the standpoint of effective energy use. A regional model survey was conducted on measures to reduce environmental loads in which mixed-fuel combustion of coal with other energy sources is the core. The domestic production of hulls is 2.4-3.0 tons/year, with a heating value of 3,500 kcal/kg. If hulls can be processed into a long-term storable form (for example, mixed with low grade coal), they can serve as a stably supplied fuel. Bagasse is produced at 100 million tons/year, with a heating value of 2,500 kcal/kg. Among wastes, waste tires, plastics, refuse and sludge pose many problems in terms of price and environment, but each has a heating value in the range of 3,000-10,000 kcal/kg. As for coal combustion, pollution regulations are strict, and more advanced processing technology is needed. The technology of mixed combustion of coal with other fuels has not advanced beyond the developmental level. Though this technology is somewhat more expensive than coal-only combustion, it is viable. 38 refs., 32 figs., 65 tabs.

  11. Radio propagation measurement and channel modelling

    CERN Document Server

    Salous, Sana

    2013-01-01

    While there are numerous books describing modern wireless communication systems that contain overviews of radio propagation and radio channel modelling, there are none that contain detailed information on the design, implementation and calibration of radio channel measurement equipment, the planning of experiments and the in-depth analysis of measured data. The book begins with an explanation of the fundamentals of radio wave propagation and progresses through a series of topics, including the measurement of radio channel characteristics, radio channel sounders and measurement strategies

  12. The determination of chromium-50 in human blood and its utilization for blood volume measurements

    International Nuclear Information System (INIS)

    Zeisler, R.; Young, I.

    1986-01-01

    Possible relationships between insufficient blood volume increases during pregnancy and infant mortality could be established with an adequate measurement procedure. An accurate and precise technique for blood volume measurements has been found in the isotope dilution technique using chromium-51 as a label for red blood cells. However, in a study involving pregnant women, only stable isotopes can be used for labeling. Stable chromium-50 can be determined in total blood samples before and after dilution experiments by neutron activation analysis (NAA) or mass spectrometry. However, both techniques may be affected by insufficient sensitivity and contamination problems at the inherently low natural chromium concentrations to be measured in the blood. NAA procedures involving irradiations with highly thermalized neutrons at a fluence rate of 2x10{sup 13} n/(cm{sup 2}·s) and low background gamma spectrometry are applied to the analysis of total blood. Natural levels of chromium-50 in human and animal blood have been found to be <0.1 ng/mL, i.e., total chromium levels of <3 ng/mL. Based on the NAA procedure, a new approach to the blood volume measurement via chromium-50 isotope dilution has been developed which utilizes the ratio of the induced activities of chromium-51 to iron-59 in three blood samples taken from each individual, namely blank, labeled and diluted labeled blood. (author)
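    The ratio-based dilution idea admits a compact sketch. The formula below is the generic isotope-dilution mass balance, not the paper's exact operational equation, and all numbers are hypothetical:

```python
def blood_volume_by_dilution(v_label_ml, ratio_blank, ratio_labeled, ratio_diluted):
    """Minimal isotope-dilution sketch using the activity-ratio idea from the
    abstract (51Cr/59Fe ratios in blank, labeled, and diluted samples).

    Generic mass balance, assuming the ratio is proportional to tracer
    concentration:
        (R_labeled - R_blank) * v_label = (R_diluted - R_blank) * V_total
    =>  V_total = v_label * (R_labeled - R_blank) / (R_diluted - R_blank)
    """
    return v_label_ml * (ratio_labeled - ratio_blank) / (ratio_diluted - ratio_blank)

# Hypothetical numbers: 10 mL of labeled blood re-injected and fully mixed
print(f"{blood_volume_by_dilution(10.0, 0.002, 1.50, 0.0045):.0f} mL")
```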

  13. Viscosity estimation utilizing flow velocity field measurements in a rotating magnetized plasma

    International Nuclear Information System (INIS)

    Yoshimura, Shinji; Tanaka, Masayoshi Y.

    2008-01-01

    The importance of viscosity in determining plasma flow structures has been widely recognized. In laboratory plasmas, however, viscosity measurements have seldom been performed. In this paper we present and discuss a method for estimating the effective plasma kinematic viscosity from flow velocity field measurements. Imposing steady and axisymmetric conditions, we derive the expression for the radial flow velocity from the azimuthal component of the ion fluid equation. The expression contains the kinematic viscosity, the vorticity of the azimuthal rotation and its derivative, the collision frequency, the azimuthal flow velocity and the ion cyclotron frequency. Therefore all quantities except the viscosity are given, provided that the flow field can be measured. We applied this method to a rotating magnetized argon plasma produced by the Hyper-I device. The flow velocity field measurements were carried out using a directional Langmuir probe installed in a tilting motor drive unit. The inward radial ion flow, which is not driven in collisionless inviscid plasmas, was clearly observed. As a result, we found an anomalous viscosity whose value is two orders of magnitude larger than the classical one. (author)

  14. Utilization of FEM model for steel microstructure determination

    Science.gov (United States)

    Kešner, A.; Chotěborský, R.; Linda, M.; Hromasová, M.

    2018-02-01

    Agricultural tools used in soil processing are worn by an abrasive wear mechanism caused by hard mineral particles in the soil. The wear rate is influenced by the mechanical characteristics of the tool material and by the mineral particle content of the soil. The mechanical properties of steel can be adjusted by heat treatment technology, which leads to different microstructures. Experimental determination of these effects is very expensive; numerical methods such as FEM allow the microstructure to be predicted at low cost, but every numerical model needs to be verified. The aim of this work is to show a procedure for predicting the microstructure of steel for agricultural tools. Material characterizations of 51CrV4 grade steel, such as the TTT diagram, heat capacity, heat conduction and other physical properties, were used for the numerical simulation. The relationship between the microstructure predicted by FEM and the real microstructure after heat treatment shows a good correlation.

  15. Viable business models for public utilities; Zukunftsfaehige Geschaeftsmodelle fuer Stadtwerke

    Energy Technology Data Exchange (ETDEWEB)

    Gebhardt, Andreas; Weiss, Claudia [Buelow und Consorten GmbH, Hamburg (Germany)

    2013-04-15

    Small suppliers are faced with mounting pressures from an increasingly complex regulatory regime and a market that rewards size. Many have been able to adapt to the new framework conditions by successively optimizing existing activities. However, when change takes hold of all stages of the value chain it is no longer enough to merely modify one's previous strategies. It rather becomes necessary to review one's business model for its sustainability, take stock of the company's competencies and set priorities along the value chain. This is where a network-oriented focussing strategy can assist in ensuring efficient delivery of services in core areas while enabling the company to present itself on the market with a full range of services.

  16. A customer satisfaction model for a utility service industry

    Science.gov (United States)

    Jamil, Jastini Mohd; Nawawi, Mohd Kamal Mohd; Ramli, Razamin

    2016-08-01

    This paper explores the effect of Image, Customer Expectation, Perceived Quality and Perceived Value on Customer Satisfaction, and investigates the effect of Image and Customer Satisfaction on Customer Loyalty for a mobile phone provider in Malaysia. The results of this research are based on data gathered online from international students at one of the public universities in Malaysia. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the data collected on the international students' perceptions. The results show that Image and Perceived Quality have a significant impact on Customer Satisfaction. Image and Customer Satisfaction were also found to be significantly related to Customer Loyalty. However, no significant impact was found of Customer Expectation on Customer Satisfaction, of Perceived Value on Customer Satisfaction, or of Customer Expectation on Perceived Value. We hope that the findings may assist the mobile phone provider in the production and promotion of their services.

  17. BWR Fuel Assemblies Physics Analysis Utilizing 3D MCNP Modeling

    International Nuclear Information System (INIS)

    Chiang, Ren-Tai; Williams, John B.; Folk, Ken S.

    2008-01-01

    MCNP is used to model a partially controlled BWR fresh-fuel four-assembly (2x2) system for better understanding of BWR fuel behavior and for benchmarking production codes. The impact of the GE14 plenum regions on the axial power distribution is observed by comparing against the GE13 axial power distribution: the GE14 relative power is lower than the GE13 relative power at the 15th and 16th nodes due to the presence of the plenum regions in GE14 fuel in these two nodes. The segmented rod power distribution study indicates that the azimuthally dependent power distribution is very significant for the fuel rods next to the water gap in the uncontrolled portion. (authors)

  18. BWR Fuel Assemblies Physics Analysis Utilizing 3D MCNP Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Ren-Tai [University of Florida, Gainesville, Florida 32611 (United States); Williams, John B.; Folk, Ken S. [Southern Nuclear Company, Birmingham, Alabama 35242 (United States)

    2008-07-01

    MCNP is used to model a partially controlled BWR fresh-fuel four-assembly (2x2) system for better understanding of BWR fuel behavior and for benchmarking production codes. The impact of the GE14 plenum regions on the axial power distribution is observed by comparing against the GE13 axial power distribution: the GE14 relative power is lower than the GE13 relative power at the 15th and 16th nodes due to the presence of the plenum regions in GE14 fuel in these two nodes. The segmented rod power distribution study indicates that the azimuthally dependent power distribution is very significant for the fuel rods next to the water gap in the uncontrolled portion. (authors)

  19. Computer model for estimating electric utility environmental noise

    International Nuclear Information System (INIS)

    Teplitzky, A.M.; Hahn, K.J.

    1991-01-01

    This paper reports on a computer code for estimating environmental noise emissions from the operation and construction of electric power plants. The computer code (Model) predicts octave band sound power levels for power plant operation and construction activities on the basis of the equipment operating characteristics, and calculates off-site sound levels for each noise source and for an entire plant. Estimated noise levels are presented either as A-weighted sound level contours around the power plant or as octave band levels at user defined receptor locations. Calculated sound levels can be compared with user designated noise criteria, and the program can assist the user in analyzing alternative noise control strategies

  20. Biomimetic peptide-based models of [FeFe]-hydrogenases: utilization of phosphine-containing peptides

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Souvik [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Nguyen, Thuy-Ai D. [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Gan, Lu [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Jones, Anne K. [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA

    2015-01-01

    Peptide-based models for [FeFe]-hydrogenases were synthesized utilizing unnatural phosphine-amino acids, and their electrocatalytic properties were investigated in mixed aqueous-organic solvents.

  1. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models built from real error data collected on a multiprocessor system are described, along with the model development process from the raw error data to the estimation of cumulative reward. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation, in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
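    The modeling point, that non-exponential holding times force a semi-Markov description, is easy to illustrate by simulation. A minimal sketch with a hypothetical 3-state normal/error/recovery model and Weibull holding times:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_semi_markov(P, holding_samplers, state0, horizon):
    """Simulate a semi-Markov process: transitions follow matrix P, but the
    holding time in each state comes from an arbitrary (here Weibull)
    distribution rather than the exponential a Markov chain would force."""
    t, state, history = 0.0, state0, []
    while t < horizon:
        dwell = holding_samplers[state]()
        history.append((state, t, min(dwell, horizon - t)))
        t += dwell
        state = rng.choice(len(P), p=P[state])
    return history

# Hypothetical 3-state model: normal / error / recovery
P = np.array([[0.0, 1.0, 0.0],
              [0.3, 0.0, 0.7],
              [1.0, 0.0, 0.0]])
samplers = [lambda: rng.weibull(1.5) * 100.0,   # normal operation
            lambda: rng.weibull(0.8) * 1.0,     # error handling
            lambda: rng.weibull(1.2) * 5.0]     # recovery
up_time = sum(d for s, _, d in simulate_semi_markov(P, samplers, 0, 10_000.0) if s == 0)
print(f"availability ~ {up_time / 10_000.0:.3f}")
```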

  2. Factors Impacting Student Service Utilization at Ontario Colleges: Key Performance Indicators as a Measure of Success: A Niagara College View

    Science.gov (United States)

    Veres, David

    2015-01-01

    Student success at Ontario colleges is significantly influenced by the utilization of student services. At Niagara College there has been a significant investment in student services as a strategy to support student success. Utilizing existing KPI data, this quantitative research project is aimed at measuring factors that influence both the use of…

  3. Double-label autoradiographic deoxyglucose method for sequential measurement of regional cerebral glucose utilization

    Energy Technology Data Exchange (ETDEWEB)

    Redies, C; Diksic, M; Evans, A C; Gjedde, A; Yamamoto, Y L

    1987-08-01

    A new double-label autoradiographic glucose analog method for the sequential measurement of altered regional cerebral metabolic rates for glucose in the same animal is presented. This method is based on the sequential injection of two boluses of glucose tracer labeled with two different isotopes (short-lived {sup 18}F and long-lived {sup 3}H, respectively). An operational equation is derived which allows the determination of glucose utilization for the time period before the injection of the second tracer; this equation corrects for accumulation and loss of the first tracer from the metabolic pool occurring after the injection of the second tracer. An error analysis of this operational equation is performed. The double-label deoxyglucose method is validated in the primary somatosensory ('barrel') cortex of the anesthetized rat. Two different rows of whiskers were stimulated sequentially in each rat; the two periods of stimulation were each preceded by an injection of glucose tracer. After decapitation, dried brain slices were first exposed, in direct contact, to standard X-ray film and then to uncoated, 'tritium-sensitive' film. Results show that the double-label deoxyglucose method proposed in this paper allows the quantification and complete separation of glucose utilization patterns elicited by two different stimulations sequentially applied in the same animal.

  4. Cost-utility model of rasagiline in the treatment of advanced Parkinson's disease in Finland.

    Science.gov (United States)

    Hudry, Joumana; Rinne, Juha O; Keränen, Tapani; Eckert, Laurent; Cochran, John M

    2006-04-01

    The economic burden of Parkinson's disease (PD) is high, especially in patients experiencing motor fluctuations. Rasagiline has demonstrated efficacy against symptoms of PD in early and advanced stages of the disease. To assess the cost-utility of rasagiline and entacapone as adjunctive therapies to levodopa versus standard levodopa care in PD patients with motor fluctuations in Finland, a 2-year probabilistic Markov model with 3 health states was used: "25% or less off-time/day," "greater than 25% off-time/day," and "dead." Off-time represents time awake with poor or absent motor function. Model inputs included transition probabilities from randomized clinical trials, utilities from a preference measurement study, and costs and resources from a Finnish cost-of-illness study. Effectiveness measures were quality-adjusted life years (QALYs) and the number of months spent with 25% or less off-time/day. Uncertainty around parameters was taken into account by Monte Carlo simulations. Over 2 years from a societal perspective, rasagiline or entacapone as adjunctive therapies to levodopa showed greater effectiveness than levodopa alone at no additional cost. Benefits after 2 years were 0.13 (95% CI 0.08 to 0.17) additional QALYs and 5.2 (3.6 to 6.7) additional months for rasagiline, and 0.12 (0.08 to 0.17) QALYs and 5.1 (3.5 to 6.6) months for entacapone, both in adjunct to levodopa compared with levodopa alone. The results of this study support the use of rasagiline and entacapone as adjunctive cost-effective alternatives to levodopa alone in PD patients with motor fluctuations in Finland. With a different mode of action, rasagiline is a valuable therapeutic alternative to entacapone at no additional charge to society.
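    The structure of such a model is easy to sketch: a cohort is pushed through a 3-state transition matrix each cycle, and state occupancy is weighted by utilities to accumulate QALYs. All inputs below are hypothetical placeholders, not the published Finnish model parameters:

```python
import numpy as np

def markov_qalys(transition, utilities, cycles, cycle_length_years, start):
    """Generic 3-state Markov cohort model of the kind described above."""
    state = np.asarray(start, dtype=float)
    qalys = 0.0
    for _ in range(cycles):
        state = state @ transition             # advance the cohort one cycle
        qalys += float(state @ utilities) * cycle_length_years
    return qalys

# States: <=25% off-time/day, >25% off-time/day, dead (monthly cycles, 2 years)
adjunct   = np.array([[0.95, 0.04, 0.01], [0.10, 0.89, 0.01], [0, 0, 1]])
levodopa  = np.array([[0.90, 0.09, 0.01], [0.05, 0.94, 0.01], [0, 0, 1]])
utilities = np.array([0.80, 0.60, 0.0])  # hypothetical preference weights
start = np.array([0.5, 0.5, 0.0])

gain = (markov_qalys(adjunct, utilities, 24, 1 / 12, start)
        - markov_qalys(levodopa, utilities, 24, 1 / 12, start))
print(f"incremental QALYs over 2 years ~ {gain:.3f}")
```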

  5. Modelling of limestone injection for SO2 capture in a coal fired utility boiler

    International Nuclear Information System (INIS)

    Kovacik, G.J.; Reid, K.; McDonald, M.M.; Knill, K.

    1997-01-01

    A computer model was developed for simulating furnace sorbent injection for SO{sub 2} capture in a full scale utility boiler using TASCFlow(TM) computational fluid dynamics (CFD) software. The model makes use of a computational grid of the superheater section of a tangentially fired utility boiler. The computer simulations are three dimensional so that the temperature and residence time distribution in the boiler could be realistically represented. Results of calculations of simulated sulphur capture performance of limestone injection in a typical utility boiler operation were presented

  6. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    Many regulators, utilities, customer groups, and other stakeholders are reevaluating existing regulatory models and the roles and financial implications for electric utilities in the context of today’s environment of increasing distributed energy resource (DER) penetrations, forecasts of significant T&D investment, and relatively flat or negative utility sales growth. When this is coupled with predictions about fewer grid-connected customers (i.e., customer defection), there is growing concern about the potential for serious negative impacts on the regulated utility business model. Among states engaged in these issues, the range of topics under consideration is broad. Most of these states are considering whether approaches that have been applied historically to mitigate the impacts of previous “disruptions” to the regulated utility business model (e.g., energy efficiency) as well as to align utility financial interests with increased adoption of such “disruptive technologies” (e.g., shareholder incentive mechanisms, lost revenue mechanisms) are appropriate and effective in the present context. A handful of states are presently considering more fundamental changes to regulatory models and the role of regulated utilities in the ownership, management, and operation of electric delivery systems (e.g., New York “Reforming the Energy Vision” proceeding).

  7. Co-firing straw and coal in a 150-MWe utility boiler: in situ measurements

    DEFF Research Database (Denmark)

    Hansen, P. F.B.; Andersen, Karin Hedebo; Wieck-Hansen, K.

    1998-01-01

    A 2-year demonstration program is carried out by the Danish utility I/S Midtkraft at a 150-MWe PF-boiler unit reconstructed for co-firing straw and coal. As a part of the demonstration program, a comprehensive in situ measurement campaign was conducted during the spring of 1996 in collaboration with the Technical University of Denmark. Six sample positions have been established between the upper part of the furnace and the economizer. The campaign included in situ sampling of deposits on water/air-cooled probes, sampling of fly ash, flue gas and gas phase alkali metal compounds, and aerosols, as well ... deposition propensities and high temperature corrosion during co-combustion of straw and coal in PF-boilers. Danish full scale results from co-firing straw and coal, the test facility and test program, and the potential theoretical support from the Technical University of Denmark are presented in this paper...

  8. Attofarad resolution capacitance-voltage measurement of nanometer scale field effect transistors utilizing ambient noise

    International Nuclear Information System (INIS)

    Gokirmak, Ali; Inaltekin, Hazer; Tiwari, Sandip

    2009-01-01

    A high resolution capacitance-voltage (C-V) characterization technique, enabling direct measurement of electronic properties at the nanoscale in devices such as nanowire field effect transistors (FETs) through the use of random fluctuations, is described. The minimum noise level required for achieving sub-aF (10{sup -18} F) resolution, the leveraging of stochastic resonance, and the effect of higher levels of noise are illustrated through simulations. The non-linear ΔC{sub gate-source/drain}-V{sub gate} response of FETs is utilized to determine the inversion layer capacitance (C{sub inv}) and carrier mobility. The technique is demonstrated by extracting the carrier concentration and effective electron mobility in a nanoscale Si FET with C{sub inv} = 60 aF.

  9. Measuring the Capacity Utilization of Public District Hospitals in Tunisia: Using Dual Data Envelopment Analysis Approach

    Directory of Open Access Journals (Sweden)

    Chokri Arfa

    2017-01-01

    Background: Public district hospitals (PDHs) in Tunisia are not operating at full plant capacity and underutilize their operating budget. Methods: Individual PDH capacity utilization (CU) is measured for 2000 and 2010 using a dual data envelopment analysis (DEA) approach with shadow price input and output restrictions. CU is estimated for 101 of 105 PDHs in 2000 and 94 of 105 PDHs in 2010. Results: On average, unused capacity is estimated at 18% in 2010 vs. 13% in 2000. 26% of PDHs underutilized their operating budget in 2010 vs. 21% in 2000. Conclusion: Inadequate supply, health care quality and the lack of operating budget should be tackled to reduce unmet users' needs and the bypassing of the PDHs, and thus to increase their CU.
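    For readers unfamiliar with DEA, the envelopment-form linear program at its core is compact. The sketch below implements the plain output-oriented CCR model with hypothetical hospital data; the paper's dual formulation with shadow-price restrictions adds constraints on the multipliers but follows the same pattern:

```python
import numpy as np
from scipy.optimize import linprog

def output_oriented_ccr(X, Y, j0):
    """Output-oriented CCR efficiency for unit j0 (envelopment form):
    max phi  s.t.  X @ lam <= x_j0,  Y @ lam >= phi * y_j0,  lam >= 0.
    X: inputs (m x n), Y: outputs (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    # variables: [phi, lam_1..lam_n]; maximize phi -> minimize -phi
    c = np.concatenate(([-1.0], np.zeros(n)))
    A_inputs = np.hstack([np.zeros((m, 1)), X])   # X lam <= x_j0
    A_outputs = np.hstack([Y[:, [j0]], -Y])       # phi y_j0 - Y lam <= 0
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.concatenate([X[:, j0], np.zeros(s)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]  # phi >= 1; a utilization-style score is 1 / phi

# Hypothetical data: 2 inputs (beds, budget), 1 output (inpatient days), 4 hospitals
X = np.array([[100, 80, 120, 90], [5.0, 4.2, 6.1, 4.8]])
Y = np.array([[20000, 19000, 21000, 15000]])
print([round(1 / output_oriented_ccr(X, Y, j), 3) for j in range(4)])
```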

  10. On how access to an insurance market affects investments in safety measures, based on the expected utility theory

    International Nuclear Information System (INIS)

    Bjorheim Abrahamsen, Eirik; Asche, Frank

    2011-01-01

    This paper focuses on how access to an insurance market should influence investments in safety measures in accordance with the ruling paradigm for decision-making under uncertainty-the expected utility theory. We show that access to an insurance market in most situations will influence investments in safety measures. For an expected utility maximizer, an overinvestment in safety measures is likely if access to an insurance market is ignored, while an underinvestment in safety measures is likely if insurance is purchased without paying attention to the possibility for reducing the probability and/or consequences of an accidental event by safety measures.
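    The trade-off can be written as a stylized program (notation mine, not the paper's): wealth w, safety spending s that lowers the accident probability p(s), loss L, and indemnity I bought at premium π(I):

```latex
% Stylized expected-utility program (notation mine, not the paper's).
\[
  \max_{s,\,I}\; p(s)\,U\!\big(w - s - \pi(I) - L + I\big)
              + \big(1-p(s)\big)\,U\!\big(w - s - \pi(I)\big)
\]
% With insurance available, part of the loss is transferred to the insurer,
% so the marginal benefit of s falls; solving the problem without the
% insurance terms therefore tends to overstate the optimal safety spend,
% which is the overinvestment result described above.
```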

  11. Utility of sonographic measurement of the common extensor tendon in patients with lateral epicondylitis.

    Science.gov (United States)

    Lee, Min Hee; Cha, Jang Gyu; Jin, Wook; Kim, Byung Sung; Park, Jai Soung; Lee, Hae Kyung; Hong, Hyun Sook

    2011-06-01

    The purpose of this article is to evaluate prospectively the utility of sonographic measurements of the common extensor tendon for diagnosing lateral epicondylitis. Forty-eight patients with documented lateral epicondylitis and 63 healthy volunteers were enrolled and underwent ultrasound of the elbow joint. The common extensor tendon overlying the bony landmark was scanned transversely, and the cross-sectional area and the maximum thickness were measured. Clinical examination was used as the reference standard in the diagnosis of lateral epicondylitis. Data from the patient and control groups were compared with established optimal diagnostic criteria for lateral epicondylitis using receiver operating characteristic curves. Qualitative evaluation with grayscale ultrasound was also performed on patients and healthy volunteers. The common extensor tendon was significantly thicker in patients with lateral epicondylitis than in control subjects. For qualitative evaluation with grayscale ultrasound, overall sensitivity, specificity, and accuracy values in the diagnosis of lateral epicondylitis were 76.5%, 76.2%, and 76.3%, respectively. The quantitative sonographic measurements had an excellent diagnostic performance for lateral epicondylitis, as well as good or excellent interreader agreement. A common extensor tendon cross-sectional area greater than or equal to 32 mm{sup 2} and a thickness of 4.2 mm correlated well with the presence of lateral epicondylitis. However, further prospective study is necessary to determine whether quantitative ultrasound with these cutoff values can improve the accuracy of the diagnosis of lateral epicondylitis.

  12. Model SH intelligent instrument for thickness measuring

    International Nuclear Information System (INIS)

    Liu Juntao; Jia Weizhuang; Zhao Yunlong

    1995-01-01

    The authors introduce the Model SH intelligent instrument for thickness measurement, which uses the principle of beta back-scattering, and describe its application range, features, principle of operation, system design, calibration and specifications

  13. Smart Kinesthetic Measurement Model in Dance Composition

    OpenAIRE

    Triana, Dinny Devi

    2017-01-01

    This research aimed to discover an assessment model that could measure kinesthetic intelligence in arranging a dance from several related variables, both direct and indirect. The research method was a qualitative method using path analysis to determine the direct and indirect variables; therefore, the dominant variable supporting the measurement model of kinesthetic intelligence in arranging dance could be discovered. The population used was the students of the art ...

  14. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are based only on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs is compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs are examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model is discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless

  15. A generalized measurement model to quantify health: the multi-attribute preference response model.

    Science.gov (United States)

    Krabbe, Paul F M

    2013-01-01

    After 40 years of deriving metric values for health status or health-related quality of life, the effective quantification of subjective health outcomes is still a challenge. Here, two of the best measurement tools, the discrete choice and the Rasch model, are combined to create a new model for deriving health values. First, existing techniques to value health states are briefly discussed followed by a reflection on the recent revival of interest in patients' experience with regard to their possible role in health measurement. Subsequently, three basic principles for valid health measurement are reviewed, namely unidimensionality, interval level, and invariance. In the main section, the basic operation of measurement is then discussed in the framework of probabilistic discrete choice analysis (random utility model) and the psychometric Rasch model. It is then shown how combining the main features of these two models yields an integrated measurement model, called the multi-attribute preference response (MAPR) model, which is introduced here. This new model transforms subjective individual rank data into a metric scale using responses from patients who have experienced certain health states. Its measurement mechanism largely prevents biases such as adaptation and coping. Several extensions of the MAPR model are presented. The MAPR model can be applied to a wide range of research problems. If extended with the self-selection of relevant health domains for the individual patient, this model will be more valid than existing valuation techniques.
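
    Both building blocks named above have simple closed forms; a sketch, with hypothetical numbers, of the Rasch response probability and the logit (random utility) choice probability that the MAPR model combines:

      # Rasch: P(respondent with ability theta endorses item with difficulty b).
      # Logit random utility: P(state A chosen over B given values v_a, v_b).
      import math

      def rasch_prob(theta, b):
          return 1.0 / (1.0 + math.exp(-(theta - b)))

      def choice_prob(v_a, v_b):
          return math.exp(v_a) / (math.exp(v_a) + math.exp(v_b))

      print(rasch_prob(theta=0.5, b=-0.2))     # respondent a bit above item difficulty
      print(choice_prob(v_a=0.8, v_b=0.6))     # state A preferred with probability > 0.5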

  16. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily.
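
    A minimal sketch of the bias-and-precision idea described above: regress measured control-standard results on their certified values, read systematic bias from the fitted line and precision from the residual scatter (data are hypothetical):

      # Fit measured = a + b * known; (a, b) capture systematic bias, and the
      # residual standard deviation estimates the measurement system's precision.
      import numpy as np

      known    = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # certified standard levels
      measured = np.array([1.1, 2.1, 5.3, 10.2, 20.6])   # lab results for those standards

      slope, intercept = np.polyfit(known, measured, 1)
      residuals = measured - (intercept + slope * known)
      precision = residuals.std(ddof=2)                  # two fitted parameters

      print(f"measured = {intercept:.3f} + {slope:.3f} * known, precision sd = {precision:.3f}")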

  17. Measuring Model Rocket Engine Thrust Curves

    Science.gov (United States)

    Penn, Kim; Slaton, William V.

    2010-01-01

    This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…
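
    Once force-time samples are logged, the motor's total impulse follows from numerical integration of the thrust curve; a sketch with a synthetic stand-in for the force-probe data:

      # Integrate a logged force-time curve (trapezoidal rule) for total impulse.
      import numpy as np

      t = np.linspace(0.0, 0.8, 200)                  # s, sample times
      f = 30.0 * np.exp(-((t - 0.1) / 0.15) ** 2)     # N, synthetic thrust samples

      impulse = np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(t))
      print(f"total impulse {impulse:.2f} N*s, peak thrust {f.max():.1f} N")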

  18. A framework for estimating health state utility values within a discrete choice experiment: modeling risky choices.

    Science.gov (United States)

    Robinson, Angela; Spencer, Anne; Moffatt, Peter

    2015-04-01

    There has been recent interest in using the discrete choice experiment (DCE) method to derive health state utilities for use in quality-adjusted life year (QALY) calculations, but challenges remain. We set out to develop a risk-based DCE approach to derive utility values for health states that allowed 1) utility values to be anchored directly to normal health and death and 2) worse than dead health states to be assessed in the same manner as better than dead states. Furthermore, we set out to estimate alternative models of risky choice within a DCE model. A survey was designed that incorporated a risk-based DCE and a "modified" standard gamble (SG). Health state utility values were elicited for 3 EQ-5D health states assuming "standard" expected utility (EU) preferences. The DCE model was then generalized to allow for rank-dependent expected utility (RDU) preferences, thereby allowing for probability weighting. A convenience sample of 60 students was recruited and data collected in small groups. Under the assumption of "standard" EU preferences, the utility values derived within the DCE corresponded fairly closely to the mean results from the modified SG. Under the assumption of RDU preferences, the utility values estimated are somewhat lower than under the assumption of standard EU, suggesting that the latter may be biased upward. Applying the correct model of risky choice is important whether a modified SG or a risk-based DCE is deployed. It is, however, possible to estimate a probability weighting function within a DCE and estimate "unbiased" utility values directly, which is not possible within a modified SG. We conclude by setting out the relative strengths and weaknesses of the 2 approaches in this context. © The Author(s) 2014.
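
    A sketch of the two risky-choice models being compared, using the Tversky-Kahneman one-parameter probability weighting function as an illustrative stand-in for whatever form the authors estimated (parameter and gamble values hypothetical):

      # EU vs. rank-dependent utility for a binary gamble: probability p of the
      # better outcome u_hi, otherwise u_lo.
      def eu_value(p, u_hi, u_lo):
          return p * u_hi + (1 - p) * u_lo

      def weight(p, gamma=0.61):
          # Inverse-S probability weighting function (illustrative choice).
          return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

      def rdu_value(p, u_hi, u_lo):
          w = weight(p)
          return w * u_hi + (1 - w) * u_lo

      p, u_hi, u_lo = 0.9, 1.0, 0.0        # 90% chance of full health, 10% chance of death
      print(eu_value(p, u_hi, u_lo))       # 0.90
      print(rdu_value(p, u_hi, u_lo))      # below 0.90: the 10% tail is overweighted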

  19. Settings in Social Networks : a Measurement Model

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A.B.

    2003-01-01

    A class of statistical models is proposed that aims to recover latent settings structures in social networks. Settings may be regarded as clusters of vertices. The measurement model is based on two assumptions. (1) The observed network is generated by hierarchically nested latent transitive

  20. Models Used for Measuring Customer Engagement

    Directory of Open Access Journals (Sweden)

    Mihai TICHINDELEAN

    2013-12-01

    The purpose of the paper is to define and measure customer engagement as a forming element of relationship marketing theory. In the first part of the paper, the authors review the marketing literature regarding the concept of customer engagement and summarize the main models for measuring it. One probability model (Pareto/NBD) and one parametric model (RFM), specific to the customer acquisition phase, are theoretically detailed. The second part of the paper is an application of the RFM model; the authors demonstrate that there is no statistically significant variation within the clusters formed on two different data sets (training and test set) if the cluster centroids of the training set are used as initial cluster centroids for the test set.
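
    A sketch of the RFM step described above: score customers on recency, frequency, and monetary value, cluster the standardized scores, and reuse the training-set centroids to initialize clustering of the test set, mirroring the paper's stability check (data and cluster count hypothetical):

      # RFM scoring + k-means; initializing the test-set clustering with the
      # training-set centroids mirrors the paper's stability check.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      # Hypothetical customers: [recency in days, frequency, monetary value].
      rfm_train = np.array([[10, 12, 500.0], [200, 1, 40.0], [30, 8, 320.0],
                            [150, 2, 60.0], [5, 20, 900.0], [90, 4, 150.0]])
      rfm_test = np.array([[12, 11, 480.0], [180, 1, 55.0], [25, 9, 300.0]])

      scaler = StandardScaler().fit(rfm_train)
      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaler.transform(rfm_train))
      km_test = KMeans(n_clusters=2, init=km.cluster_centers_, n_init=1).fit(
          scaler.transform(rfm_test))

      print("train clusters:", km.labels_, "test clusters:", km_test.labels_)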

  1. The Utility of Using a Near-Infrared (NIR) Camera to Measure Beach Surface Moisture

    Science.gov (United States)

    Nelson, S.; Schmutz, P. P.

    2017-12-01

    Surface moisture content is an important factor that must be considered when studying aeolian sediment transport in a beach environment. A few different instruments and procedures are available for measuring surface moisture content (i.e. moisture probes, LiDAR, and gravimetric moisture data from surface scrapings); however, these methods can be inaccurate, costly, and inapplicable, particularly in the field. Near-infrared (NIR) spectral band imagery is another technique used to obtain moisture data. NIR imagery has been predominately used through remote sensing and has yet to be used for ground-based measurements. Dry sand reflects infrared radiation given off by the sun and wet sand absorbs IR radiation. All things considered, this study assesses the utility of measuring surface moisture content of beach sand with a modified NIR camera. A traditional point and shoot digital camera was internally modified with the placement of a visible light-blocking filter. Images were taken of three different types of beach sand at controlled moisture content values, with sunlight as the source of infrared radiation. A technique was established through trial and error by comparing resultant histogram values using Adobe Photoshop with the various moisture conditions. The resultant IR absorption histogram values were calibrated to actual gravimetric moisture content from surface scrapings of the samples. Overall, the results illustrate that the NIR spectrum modified camera does not provide the ability to adequately measure beach surface moisture content. However, there were noted differences in IR absorption histogram values among the different sediment types. Sediment with darker quartz mineralogy provided larger variations in histogram values, but the technique is not sensitive enough to accurately represent low moisture percentages, which are of most importance when studying aeolian sediment transport.

  2. Markowitz portfolio optimization model employing fuzzy measure

    Science.gov (United States)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine risk and return. The original mean-variance model serves as a benchmark and is compared with fuzzy mean-variance models in which returns are modeled by specific types of fuzzy numbers. The fuzzy approach gives better performance than the mean-variance approach. Numerical examples employing Malaysian share market data illustrate these models.
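
    The classical benchmark the authors start from has a closed-form minimum-variance solution; a sketch with hypothetical return data (the fuzzy extension replaces these crisp mean and covariance estimates with fuzzy numbers):

      # Minimum-variance fully invested portfolio: w = S^-1 1 / (1' S^-1 1),
      # built from sample mean and covariance estimates.
      import numpy as np

      returns = np.array([[0.02, 0.01, 0.03],   # rows = periods, cols = assets
                          [0.01, 0.02, 0.00],
                          [0.03, 0.00, 0.02],
                          [0.00, 0.01, 0.01]])

      mu = returns.mean(axis=0)
      cov = np.cov(returns, rowvar=False)
      ones = np.ones(len(mu))
      w = np.linalg.solve(cov, ones)
      w = w / (w @ ones)                        # weights sum to one

      print("weights:", np.round(w, 3), " expected return:", round(float(w @ mu), 4))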

  3. Utility values associated with advanced or metastatic non-small cell lung cancer: data needs for economic modeling.

    Science.gov (United States)

    Brown, Jacqueline; Cook, Keziah; Adamski, Kelly; Lau, Jocelyn; Bargo, Danielle; Breen, Sarah; Chawla, Anita

    2017-04-01

    Cost-effectiveness analyses often inform healthcare reimbursement decisions. The preferred measure of effectiveness is the quality adjusted life year (QALY) gained, where the quality of life adjustment is measured in terms of utility. Areas covered: We assessed the availability and variation of utility values for health states associated with advanced or metastatic non-small cell lung cancer (NSCLC) to identify values appropriate for cost-effectiveness models assessing alternative treatments. Our systematic search of six electronic databases (January 2000 to August 2015) found the current literature to be sparse in terms of utility values associated with NSCLC, identifying 27 studies. Utility values were most frequently reported over time and by treatment type, and less frequently by disease response, stage of disease, adverse events or disease comorbidities. Expert commentary: In response to rising healthcare costs, payers increasingly consider the cost-effectiveness of novel treatments in reimbursement decisions, especially in oncology. As the number of therapies available to treat NSCLC increases, cost-effectiveness analyses will play a key role in reimbursement decisions in this area. Quantifying the relationship between health and quality of life for NSCLC patients via utility values is an important component of assessing the cost effectiveness of novel treatments.

  4. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
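
    A minimal sketch of the mixed-effects alternative to the last-time-point t-test, fitted with statsmodels on synthetic long-format trial data (variable names and effect sizes are invented):

      # Random-intercept model: y ~ treat * time with subjects as groups; the
      # treat:time coefficient assesses the treatment effect across the whole
      # time spectrum, unlike a t-test at the final visit alone.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      rows = []
      for s in range(30):
          treat = s % 2
          b = rng.normal(0.0, 1.0)                       # subject random intercept
          for tt in range(4):
              y = 10 + 0.5 * tt + 1.0 * treat * tt + b + rng.normal(0.0, 0.5)
              rows.append({"subject": s, "treat": treat, "time": tt, "y": y})
      df = pd.DataFrame(rows)

      fit = smf.mixedlm("y ~ treat * time", data=df, groups=df["subject"]).fit()
      print(fit.params["treat:time"], fit.pvalues["treat:time"])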

  5. Utilizing Multidimensional Measures of Race in Education Research: The Case of Teacher Perceptions.

    Science.gov (United States)

    Irizarry, Yasmiyn

    2015-10-01

    Education scholarship on race using quantitative data analysis consists largely of studies on the black-white dichotomy and, more recently, on the experiences of students within conventional racial/ethnic categories (white, Hispanic/Latina/o, Asian, black). Despite substantial shifts in the racial and ethnic composition of American children, studies continue to overlook the diverse racialized experiences of students of Asian and Latina/o descent, the racialization of immigration status, and the educational experiences of Native American students. This study provides one possible strategy for developing multidimensional measures of race using large-scale datasets and demonstrates the utility of multidimensional measures for examining educational inequality, using teacher perceptions of student behavior as a case in point. With data from the first-grade wave of the Early Childhood Longitudinal Study, Kindergarten Cohort of 1998-1999, I examine differences in teacher ratings of Externalizing Problem Behaviors and Approaches to Learning across fourteen racialized subgroups at the intersections of race, ethnicity, and immigrant status. Results show substantial subgroup variation in teacher perceptions of problem and learning behaviors, while also highlighting key points of divergence and convergence within conventional racial/ethnic categories.

  6. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2004-12-01

    An SN (sinusoids plus noise) model is a spectral model in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes, and phases. The remaining non-periodic components are represented by filtered noise. The sinusoidal model utilizes physical properties of musical instruments, and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in compression, transformation, separation of sounds, etc. The designed system is based on methods used in SN modeling. We have proposed a model that achieves good results in audio perception. Although many systems do not save the phases of the sinusoids, phases are important for better modelling of transients, for the computation of the residual, and, last but not least, for stereo signals. One of the fundamental properties of the proposed system is its ability to reconstruct the signal not only in amplitude but also in phase.
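
    A one-frame sketch of the analysis/synthesis idea on a toy signal: estimate the strongest spectral peak's frequency, amplitude, and phase, resynthesize that sinusoid, and treat the remainder as the noise part (a real system would track many partials frame by frame):

      # One-frame sinusoid + noise decomposition via the FFT; 437.5 Hz is chosen
      # to sit exactly on a frequency bin so the peak estimates are clean.
      import numpy as np

      fs, n = 8000, 2048
      t = np.arange(n) / fs
      x = 0.8 * np.sin(2 * np.pi * 437.5 * t) + 0.05 * np.random.randn(n)

      win = np.hanning(n)
      spec = np.fft.rfft(x * win)
      k = np.argmax(np.abs(spec))                   # strongest spectral peak
      freq = k * fs / n
      amp = 2.0 * np.abs(spec[k]) / win.sum()
      phase = np.angle(spec[k])                     # keeping phase aids transients/stereo

      sinusoid = amp * np.cos(2 * np.pi * freq * t + phase)
      residual = x - sinusoid                       # the "noise" part of the SN model
      print(freq, round(amp, 3), round(float(np.var(residual)), 5))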

  7. Development of a Deterministic Optimization Model for Design of an Integrated Utility and Hydrogen Supply Network

    International Nuclear Information System (INIS)

    Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon

    2014-01-01

    Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by the companies in the network. A network either produces materials directly to satisfy demand or purchases them from outside, depending on demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks are typical major networks in a large-scale industrial complex. Many studies have focused on minimizing total cost or optimizing network structure, but little research has tried to build an integrated model connecting the utility network and the hydrogen network. In this study, a deterministic mixed-integer linear programming model is developed for integrating the utility network and the hydrogen network. A steam methane reforming process is necessary for combining the two networks: steam vented from the utility network is the raw material of the steam methane reforming process, and the hydrogen produced enters the hydrogen network to fulfill its needs. The proposed model suggests an optimized configuration and blueprint for the integrated network and calculates the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and whose data underlie various published studies. In the case study, the integrated network model yields better solutions than previous results obtained by studying the utility network and the hydrogen network individually

  8. Optimization of a dual-fuel heating system utilizing an EMS to maintain persistence of measures

    International Nuclear Information System (INIS)

    Wolpert, J.S.; Wolpert, S.B.; Martin, G.

    1993-01-01

    An older small office building was subjected to a program substituting gas for electric heat to reduce energy cost and improve comfort; after approximately one year the program was permanently instituted, with the installation of an energy management system (EMS) the following year. This paper presents a description of the facility, its usage patterns, and the measures taken to introduce the fuel-switching program. The impacts on energy usage, cost, and comfort are also reported. The program was initiated by a preliminary audit of the facility conducted by the service contractor in conjunction with the area gas wholesaler. During the audit it was observed that the heating set points for the gas-fired equipment were kept fairly low. This resulted from the desire to keep the cooling set point low and the use of auto-changeover thermostats. As a consequence, the system used gas heat to bring zones up to 68-70 degrees, with the majority of the zones then relying on their electric heat to reach the 73-75 degree range. In addition to raising energy costs, this approach generated numerous comfort complaints. As a further electric penalty, the low cooling set point resulted in a heavy reliance on electric heat (reheat) all summer. The basis of the proposed strategy was to reduce the heavy usage of electric heat by making the building comfortable through heavier reliance on gas heat. This was tested by raising the heating set points for the RTUs. The success of this approach, along with the comfort considerations and the desire for further savings, led to the installation of an EMS, which allowed further refinements of the control strategy, briefly described here. When completed, the fuel switching led to an increase in annual gas costs of 125% with a corresponding decrease in electric costs of nearly 30%, for an annual utility cost savings of over 19%

  9. The Utility of Personality Measures in the Admissions Process at the United States Naval Academy

    Science.gov (United States)

    2002-06-01

    ISFP, INFP, INTP, ESTP, ESFP, ENFP, ENTP, ESTJ, ESFJ, ENFJ, and ENTJ. Within each pair of personality indicators a scale score is determined. Each ... applicants if the reader is willing to make the assumption that the relationship between personality measures (MBTI & PHQ) and attrition is the same ... relationship between them. 1. Tinto's Student Integration Model: Vincent Tinto's work is accepted as the basis for the modern study of college attrition

  10. What’s Needed from Climate Modeling to Advance Actionable Science for Water Utilities?

    Science.gov (United States)

    Barsugli, J. J.; Anderson, C. J.; Smith, J. B.; Vogel, J. M.

    2009-12-01

    “…perfect information on climate change is neither available today nor likely to be available in the future, but … over time, as the threats climate change poses to our systems grow more real, predicting those effects with greater certainty is non-discretionary. We’re not yet at a level at which climate change projections can drive climate change adaptation.” (Testimony of WUCA Staff Chair David Behar to the House Committee on Science and Technology, May 5, 2009) To respond to this challenge, the Water Utility Climate Alliance (WUCA) has sponsored a white paper titled “Options for Improving Climate Modeling to Assist Water Utility Planning for Climate Change.” This report concerns how investments in the science of climate change, and in particular climate modeling and downscaling, can best be directed to help make climate projections more actionable. The meaning of “model improvement” can be very different depending on whether one is talking to a climate model developer or to a water manager trying to incorporate climate projections into planning. We first surveyed the WUCA members on present and potential uses of climate model projections and on climate inputs to their various system models. Based on those surveys and on subsequent discussions, we identified four dimensions along which improvement in modeling would make the science more “actionable”: improved model agreement on change in key parameters; a narrowed range of model projections; projections at spatial and temporal scales that match water utilities’ system models; and projections that match water utility planning horizons. With these goals in mind we developed four options for improving global-scale climate modeling and three options for improving downscaling, which will be discussed. However, there does not seem to be a single investment, the proverbial “magic bullet,” that will substantially reduce the range of model projections at the scales at which utility

  11. Dispersion modeling of accidental releases of toxic gases - utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-09-01

    Several air dispersion models are available for prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI), and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, and real-time risk assessment and management. The research project RETOMOD (reference scenario calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Viennese fire brigade, OMV Refining & Marketing GmbH, and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program of the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were 1. a sensitivity study and optimization of the meteorological input for modeling the hazard areas (human exposure) during accidental toxic releases, and 2. a comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. For the purpose of our study the following models were tested and compared: ALOHA (Areal Locations of Hazardous Atmospheres, EPA), MEMPLEX (Keudel av-Technik GmbH), Trace (SAFER Systems), Breeze (Trinity Consultants), and SAM (engineering office Lohmeyer). A set of reference scenarios for chlorine, ammonia, butane, and petrol was run with the models above in order to predict and estimate human exposure during the event. Furthermore, the application of the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), in case of toxic release was

  12. Clinical Utility of Noninvasive Method to Measure Specific Gravity in the Pediatric Population.

    Science.gov (United States)

    Hall, Jeanine E; Huynh, Pauline P; Mody, Ameer P; Wang, Vincent J

    2018-04-01

    Clinicians rely on any combination of signs and symptoms, clinical scores, or invasive procedures to assess the hydration status of children. Noninvasive tests to evaluate for dehydration in the pediatric population are appealing. The objective of our study was to assess the utility of measuring the specific gravity of tears compared to the specific gravity of urine and the clinical assessment of dehydration. We conducted a prospective cohort convenience-sample study in a pediatric emergency department at a tertiary care children's hospital. We approached parents/guardians of children aged 6 months to 4 years undergoing transurethral catheterization for evaluation of urinary tract infection for enrollment. We collected tears and urine for measurement of tear specific gravity (TSG) and urine specific gravity (USG), respectively. Treating physicians completed dehydration assessment forms to assess hydration status. Among the 60 participants included, the mean TSG was 1.0183 (SD = 0.007) and the mean USG was 1.0186 (SD = 0.0083). TSG and USG were positively correlated with each other (Pearson correlation = 0.423, p = 0.001). Clinical dehydration scores ranged from 0 to 3, with 87% assigned a score of 0 by physician assessment. Mean numbers of episodes of vomiting and diarrhea in a 24-hour period were 2.2 (SD = 3.9) and 1.5 (SD = 3.2), respectively. Sixty-two percent of parents reported decreased oral intake. TSG measurements yielded results similar to USG. Further studies are needed to determine whether TSG can be used as a noninvasive method of dehydration assessment in children. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Utilizing Data Mining for Predictive Modeling of Colorectal Cancer using Electronic Medical Records

    NARCIS (Netherlands)

    Hoogendoorn, M.; Moons, L.G.; Numans, M.E.; Sips, R.J.

    2014-01-01

    Colorectal cancer (CRC) is a relatively common cause of death around the globe. Predictive models for the development of CRC could be highly valuable and could facilitate an early diagnosis and increased survival rates. Currently available predictive models are improving, but do not fully utilize

  14. Local Stability Conditions for Two Types of Monetary Models with Recursive Utility

    OpenAIRE

    Miyazaki, Kenji; Utsunomiya, Hitoshi

    2009-01-01

    This paper explores local stability conditions for money-in-utility-function (MIUF) and transaction-costs (TC) models with recursive utility. A monetary variant of the Brock-Gale condition provides a theoretical justification for the comparative statics analysis. One sufficient condition for local stability is increasing marginal impatience (IMI) in consumption and money. However, this does not deny the possibility of decreasing marginal impatience (DMI). The local stability with DMI is mor...

  15. Cross-bridge blocker BTS permits direct measurement of SR Ca2+ pump ATP utilization in toadfish swimbladder muscle fibers.

    Science.gov (United States)

    Young, Iain S; Harwood, Claire L; Rome, Lawrence C

    2003-10-01

    Because the major processes involved in muscle contraction require rapid utilization of ATP, measurement of ATP utilization can provide important insights into the mechanisms of contraction. It is necessary, however, to differentiate between the contribution made by cross-bridges and that of the sarcoplasmic reticulum (SR) Ca2+ pumps. Specific and potent SR Ca2+ pump blockers have been used in skinned fibers to permit direct measurement of cross-bridge ATP utilization. Up to now, there was no analogous cross-bridge blocker. Recently, N-benzyl-p-toluene sulfonamide (BTS) was found to suppress force generation at micromolar concentrations. We tested whether BTS could be used to block cross-bridge ATP utilization, thereby permitting direct measurement of SR Ca2+ pump ATP utilization in saponin-skinned fibers. At 25 microM, BTS virtually eliminates force and cross-bridge ATP utilization, while having no effect on SR pump ATP utilization. Hence, we used BTS to make some of the first direct measurements of ATP utilization of intact SR over a physiological range of [Ca2+] at 15 degrees C. Curve fits to SR Ca2+ pump ATP utilization vs. pCa indicate a much lower Hill coefficient (1.49) than that describing cross-bridge force generation vs. pCa (approximately 5). Furthermore, we found that BTS also effectively eliminates force generation in bundles of intact swimbladder muscle, suggesting that it will be an important tool for studying integrated SR function during normal motor behavior.

  16. User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs

    Science.gov (United States)

    Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.

    2008-01-01

    This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are:
    * CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows;
    * DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file;
    * MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and
    * MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.

  17. Realized Beta GARCH: A Multivariate GARCH Model with Realized Measures of Volatility and CoVolatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Voev, Valeri

    We introduce a multivariate GARCH model that utilizes and models realized measures of volatility and covolatility. The realized measures extract information contained in high-frequency data that is particularly beneficial during periods with variation in volatility and covolatility. Applying the ...

  18. Assessment of the biophysical impacts of utility-scale photovoltaics through observations and modelling

    Science.gov (United States)

    Broadbent, A. M.; Georgescu, M.; Krayenhoff, E. S.; Sailor, D.

    2017-12-01

    Utility-scale solar power plants are a rapidly growing component of the solar energy sector. Utility-scale photovoltaic (PV) solar power generation in the United States has increased by 867% since 2012 (EIA, 2016). This expansion is likely to continue as the cost of PV technologies decreases. While most agree that solar power can decrease greenhouse gas emissions, the biophysical effects of PV systems on the surface energy balance (SEB), and the implications for surface climate, are not well understood. To our knowledge, there has never been a detailed observational study of SEB at a utility-scale solar array. This study presents data from an eddy covariance observational tower temporarily placed above a utility-scale PV array in southern Arizona. Comparison of the PV SEB with a reference (unmodified) site shows that solar panels can alter the SEB and near-surface climate. The SEB observations are used to develop and validate a new and more complete PV SEB model. In addition, the PV model is compared to simpler PV modelling methods. The simpler PV models produce results that differ from our newly developed model and cannot capture the more complex processes that influence the PV SEB. Finally, hypothetical scenarios of PV expansion across the continental United States (CONUS) were developed using various spatial mapping criteria. The CONUS simulations of PV expansion reveal regional variability in the biophysical effects of PV expansion. The study presents the first rigorous and validated simulations of the biophysical effects of utility-scale PV arrays.

  19. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) we find that model color–magnitude diagrams cannot be reliably used to infer masses, as they do not accurately reproduce the colors of ultracool dwarfs of known mass; (2) effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K; and (3) for the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy is needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  20. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst's work is to create generally understandable, explicit, and error-free models. If a process is properly described, the created models can be used as input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be mathematically described, and that it will therefore be possible to create a tool that helps process analysts design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that this area has already been the subject of research investigation in the past. Thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed scientific publications and existing quality measures do not reflect all important attributes of business process model clarity, simplicity, and completeness. Therefore, it would be appropriate to add new measures of quality.

  1. A stochastic model for quantum measurement

    International Nuclear Information System (INIS)

    Budiyono, Agung

    2013-01-01

    We develop a statistical model of microscopic stochastic deviation from classical mechanics based on a stochastic process with a transition probability that is assumed to be given by an exponential distribution of infinitesimal stationary action. We apply the statistical model to stochastically modify a classical mechanical model for the measurement of physical quantities reproducing the prediction of quantum mechanics. The system+apparatus always has a definite configuration at all times, as in classical mechanics, fluctuating randomly following a continuous trajectory. On the other hand, the wavefunction and quantum mechanical Hermitian operator corresponding to the physical quantity arise formally as artificial mathematical constructs. During a single measurement, the wavefunction of the whole system+apparatus evolves according to a Schrödinger equation and the configuration of the apparatus acts as the pointer of the measurement so that there is no wavefunction collapse. We will also show that while the outcome of each single measurement event does not reveal the actual value of the physical quantity prior to measurement, its average in an ensemble of identical measurements is equal to the average of the actual value of the physical quantity prior to measurement over the distribution of the configuration of the system. (paper)

  2. Experimental and numerical investigation of the flow measurement method utilized in the steam generator of HTR-PM

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shiming; Ren, Cheng; Sun, Yangfei [Institute of Nuclear and New Energy Technology of Tsinghua University, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Beijing 100084 (China); Tu, Jiyuan [Institute of Nuclear and New Energy Technology of Tsinghua University, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Beijing 100084 (China); School of Aerospace, Mechanical & Manufacturing Engineering, RMIT University, Melbourne, VIC 3083 (Australia); Yang, Xingtuan, E-mail: yangxt107@sina.com [Institute of Nuclear and New Energy Technology of Tsinghua University, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Beijing 100084 (China)

    2016-08-15

    Highlights: • The flow confluence process in the steam generator is very important for HTR-PM. • The complicated flow in the unique pipeline configuration is studied by both experimental and numerical methods. • The pressure uniformity at the bottom of the model was tested to evaluate the accuracy of the experimental results. • Flow separation and secondary flow are described to explain the nonuniformity of the flow distribution. - Abstract: The helium flow measurement method is very important for the design of HTR-PM. Water experiments and numerical simulations with a 1/5-scale model are conducted to investigate the flow measurement method utilized in the steam generator of HTR-PM. Pressure information at specific locations of the 90° elbows, with a diameter of 46.75 mm and a radius ratio of 1.5, is measured to evaluate the flow rate in the riser-pipes. Pressure uniformity at the bottom of the experimental apparatus is tested to evaluate the influence of equipment error on the final experimental results. Numerical results obtained using the realizable k–ε model are compared with the experimental data. The results reveal that flow oscillation does not occur in the confluence system. For every single riser-pipe, the flow is stable despite the nonuniformity of the flow distribution. The average flow rates of the two pipe series show good repeatability regardless of increases and decreases in the average velocity. In the header box, the flows out of the riser-pipes meet, distorting the pressure distribution; the nonuniformity of the flow distribution becomes more significant as the Reynolds number increases.

  3. Measuring and modeling water imbibition into tuff

    International Nuclear Information System (INIS)

    Peters, R.R.; Klavetter, E.A.; George, J.T.; Gauthier, J.H.

    1986-01-01

    Yucca Mountain (Nevada) is being investigated as a potential site for a high-level-radioactive-waste repository. The site combines a partially saturated hydrologic system and a stratigraphy of fractured, welded and nonwelded tuffs. The long time scale for site hydrologic phenomena makes their direct measurement prohibitive. Also, modeling is difficult because the tuffs exhibit widely varying, and often highly nonlinear hydrologic properties. To increase a basic understanding of both the hydrologic properties of tuffs and the modeling of flow in partially saturated regimes, the following tasks were performed, and the results are reported: (1) Laboratory Experiment: Water imbibition into a cylinder of tuff (taken from Yucca Mountain drill core) was measured by immersing one end of a dry sample in water and noting its weight at various times. The flow of water was approximately one-dimensional, filling the sample from bottom to top. (2) Computer Simulation: The experiment was modeled using TOSPAC (a one-dimensional, finite-difference computer program for simulating water flow in partially saturated, fractured, layered media) with data currently considered for use in site-scale modeling of a repository in Yucca Mountain. The measurements and the results of the modeling are compared. Conclusions are drawn with respect to the accuracy of modeling transient flow in a partially saturated, porous medium using a one-dimensional model and currently available hydrologic-property data

  4. Review of utility values for economic modeling in type 2 diabetes.

    Science.gov (United States)

    Beaudet, Amélie; Clegg, John; Thuresson, Per-Olof; Lloyd, Adam; McEwan, Phil

    2014-06-01

    Economic analysis in type 2 diabetes mellitus (T2DM) requires an assessment of the effect of a wide range of complications. The objective of this article was to identify a set of utility values consistent with the National Institute for Health and Care Excellence (NICE) reference case and to critically discuss and illustrate challenges in creating such a utility set. A systematic literature review was conducted to identify studies reporting utility values for relevant complications. The methodology of each study was assessed for consistency with the NICE reference case. A suggested set of utility values applicable to modeling was derived, giving preference to studies reporting multiple complications and correcting for comorbidity. The review considered 21 relevant diabetes complications. A total of 16,574 articles were identified; after screening, 61 articles were assessed for methodological quality. Nineteen articles met NICE criteria, reporting utility values for 20 of 21 relevant complications. For renal transplant, because no articles meeting NICE criteria were identified, two articles using other methodologies were included. Index value estimates for T2DM without complication ranged from 0.711 to 0.940. Utility decrement associated with complications ranged from 0.014 (minor hypoglycemia) to 0.28 (amputation). Limitations associated with the selection of a utility value for use in economic modeling included variability in patient recruitment, heterogeneity in statistical analysis, large variability around some point estimates, and lack of recent data. A reference set of utility values for T2DM and its complications in line with NICE requirements was identified. This research illustrates the challenges associated with systematically selecting utility data for economic evaluations. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. Assessment and Utility of Frailty Measures in Critical Illness, Cardiology, and Cardiac Surgery.

    Science.gov (United States)

    Rajabali, Naheed; Rolfson, Darryl; Bagshaw, Sean M

    2016-09-01

    Frailty is a clearly emerging theme in acute care medicine, with obvious prognostic and health resource implications. "Frailty" is a term used to describe a multidimensional syndrome of loss of homeostatic reserves that gives rise to a vulnerability to adverse outcomes after relatively minor stressor events. This is conceptually simple, yet there has been little consensus on the operational definition. The gold standard method to diagnose frailty remains a comprehensive geriatric assessment; however, a variety of validated physical performance measures, judgement-based tools, and multidimensional scales are being applied in critical care, cardiology, and cardiac surgery settings, including open cardiac surgery and transcatheter aortic valve replacement. Frailty is common among patients admitted to the intensive care unit and correlates with an increased risk for adverse events, increased resource use, and less favourable patient-centred outcomes. Analogous findings have been described across selected acute cardiology and cardiac surgical settings, in particular those that commonly intersect with critical care services. The optimal methods for screening and diagnosing frailty across these settings remain an active area of investigation. Routine assessment for frailty conceivably has numerous purported benefits for patients, families, health care providers, and health administrators through better informed decision-making regarding treatments or goals of care, prognosis for survival, expectations for recovery, risk of complications, and expected resource use. In this review, we discuss the measurement of frailty and its utility in patients with critical illness and in cardiology and cardiac surgery settings. Copyright © 2016 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  6. A simple method for measuring glucose utilization of insulin-sensitive tissues by using the brain as a reference

    International Nuclear Information System (INIS)

    Namba, Hiroki; Nakagawa, Keiichi; Iyo, Masaomi; Fukushi, Kiyoshi; Irie, Toshiaki

    1994-01-01

    A simple method, without measurement of the plasma input function, to obtain semiquantitative values of glucose utilization in tissues other than the brain with radioactive deoxyglucose is reported. The brain, in which glucose utilization is essentially insensitive to plasma glucose and insulin concentrations, was used as an internal reference. The effects of graded doses of oral glucose loading (0.5, 1 and 2 mg/g body weight) on insulin-sensitive tissues (heart, muscle and fat tissue) were studied in the rat. By using the brain-reference method, dose-dependent increases in glucose utilization were clearly shown in all the insulin-sensitive tissues examined. The method seems to be of value for measurement of glucose utilization using radioactive deoxyglucose and positron emission tomography in the heart or other insulin-sensitive tissues, especially during glucose loading. (orig.)

  7. Modeling the Dynamic Interrelations between Mobility, Utility, and Land Asking Price

    Science.gov (United States)

    Hidayat, E.; Rudiarto, I.; Siegert, F.; Vries, W. D.

    2018-02-01

    Limited and insufficient information about the dynamic interrelations among mobility, utility, and land asking price is the main reason to conduct this research. Several studies, with several approaches and variables, have been conducted so far to model land price. However, most of these models appear to generate primarily static land prices. Thus, research is required to compare, design, and validate models that calculate and/or compare the interrelated changes of mobility, utility, and land price. The applied method is a combination of literature review, expert interviews, and statistical analysis. The result is a newly improved mathematical model, consisting of 12 appropriate variables, that has been validated and is suitable for the case study location. This model can be implemented in the city of Salatiga, the case study location, to support better land-use planning and mitigate uncontrolled urban growth.

  8. THE BUSINESS MODEL AND FINANCIAL ASSETS MEASUREMENT

    OpenAIRE

    NICULA Ileana

    2012-01-01

    The paper analyses some aspects of the implementation of IFRS 9, namely the relationship between the business model approach and asset classification and measurement. It does not discuss cash flow characteristics, another important aspect of asset classification, or reclassifications. The business model is related to certain characteristics of banks (opaqueness, leverage ratio, compliance with capital and sound liquidity requirements, and risk management) and to Special Purpose...

  9. Adolescent idiopathic scoliosis screening for school, community, and clinical health promotion practice utilizing the PRECEDE-PROCEED model

    Directory of Open Access Journals (Sweden)

    Wyatt Lawrence A

    2005-11-01

    Abstract Background: Screening for adolescent idiopathic scoliosis (AIS) is a commonly performed procedure for school children during the high-risk years. The PRECEDE-PROCEED (PP) model is a health promotion planning model that has not been utilized for the clinical diagnosis of AIS. The purpose of this research is to study AIS in the school-age population using the PP model and to assess its relevance for community, school, and clinical health promotion. Methods: MEDLINE was utilized to locate AIS data. Studies were screened for relevance and applicability under the auspices of the PP model. Where data were unavailable, expert opinion was utilized based on consensus. Results: The social assessment of quality of life is limited, with few studies addressing the long-term effects of AIS. Epidemiologically, AIS is the most common form of scoliosis and the leading orthopedic problem in children. Behavioral/environmental studies focus on discovering etiologic relationships, yet these data are confounded because AIS is not behavioral, although illness and parenting health behaviors can be appreciated. The educational diagnosis is confounded because AIS is an orthopedic disorder and not behavioral. The administrative/policy diagnosis is hindered in that scoliosis screening programs are not considered cost-effective, although policies are determined in some schools because 26 states mandate school scoliosis screening. There exists potential error with the Adam's test. The most widely used measure in the PP model, the Health Belief Model, has not been utilized in any AIS research. Conclusion: The PP model is a useful tool for a comprehensive study of a particular health concern. This research showed where gaps in AIS research exist, suggesting that there may be problems with the implementation of school screening. Until these research disparities are filled, implementation of AIS screening by school, community, and clinical health promotion will be compromised. Lack of data and perceived importance by

  10. Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2016-12-01

    We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models that include gaze than in those that do not.
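
    A sketch of the underlying inference step: a plain hidden Markov model forward filter over discretized gaze observations, standing in for the abstract hidden Markov machinery of the paper (states, observation regions, and all probabilities are hypothetical):

      # Forward filtering: P(task state | gaze observations so far).
      import numpy as np

      A = np.array([[0.90, 0.10],      # state transitions (reach -> grasp)
                    [0.05, 0.95]])
      B = np.array([[0.70, 0.30],      # P(gaze region | state): object vs. target
                    [0.20, 0.80]])
      pi = np.array([0.8, 0.2])        # initial state belief

      def forward_filter(obs):
          beliefs = []
          belief = pi * B[:, obs[0]]
          belief = belief / belief.sum()
          beliefs.append(belief)
          for o in obs[1:]:
              belief = (A.T @ belief) * B[:, o]
              belief = belief / belief.sum()
              beliefs.append(belief)
          return beliefs

      for b in forward_filter([0, 0, 1, 1, 1]):    # gaze shifts from object to target
          print(np.round(b, 3))                    # belief moves toward "grasp"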

  11. Measurement of Laser Weld Temperatures for 3D Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  12. DIAMOND: A model of incremental decision making for resource acquisition by electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Gettings, M.; Hirst, E.; Yourstone, E.

    1991-02-01

    Uncertainty is a major issue facing electric utilities in planning and decision making. Substantial uncertainties exist concerning future load growth; the lifetimes and performances of existing power plants; the construction times, costs, and performances of new resources being brought online; and the regulatory and economic environment in which utilities operate. This report describes a utility planning model that focuses on frequent and incremental decisions. The key features of this model are its explicit treatment of uncertainty, frequent user interaction with the model, and the ability to change prior decisions. The primary strength of this model is its representation of the planning and decision-making environment that utility planners and executives face. Users interact with the model after every year or two of simulation, which provides an opportunity to modify past decisions as well as to make new decisions. For example, construction of a power plant can be started one year, and if circumstances change, the plant can be accelerated, mothballed, canceled, or continued as originally planned. Similarly, the marketing and financial incentives for demand-side management programs can be changed from year to year, reflecting the short lead time and small unit size of these resources. This frequent user interaction with the model, an operational game, should build greater understanding and insights among utility planners about the risks associated with different types of resources. The model is called DIAMOND, the Decision Impact Assessment Model. It consists of four submodels: FUTURES, FORECAST, SIMULATION, and DECISION. It runs on any IBM-compatible PC and requires no special software or hardware. 19 refs., 13 figs., 15 tabs.

  13. The elastic body model: a pedagogical approach integrating real time measurements and modelling activities

    International Nuclear Information System (INIS)

    Fazio, C; Guastella, I; Tarantino, G

    2007-01-01

    In this paper, we describe a pedagogical approach to elastic body motion based on measurements of the contact times between a metallic rod and small bodies colliding with it, and on modelling of the experimental results using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities were developed in the context of the laboratory on mechanical wave propagation of the two-year graduate teacher education programme of the University of Palermo. Some considerations about observed modifications in trainee teachers' attitudes towards utilizing experiments and modelling are discussed.

  14. Utilizing The Synergy of Airborne Backscatter Lidar and In-Situ Measurements for Evaluating CALIPSO

    Directory of Open Access Journals (Sweden)

    Tsekeri Alexandra

    2016-01-01

    Full Text Available Airborne campaigns dedicated to satellite validation are crucial for effective global aerosol monitoring. CALIPSO is currently the only active remote sensing satellite mission acquiring vertical profiles of the aerosol backscatter and extinction coefficients. Here we present a method for CALIPSO evaluation that combines lidar and in-situ airborne measurements. The limitations of the method relate mainly to the in-situ instrumentation capabilities and the hydration modelling. We also discuss the future implementation of our method in the ICE-D campaign (Cape Verde, August 2015).

  15. Model project to promote cultivation and utilization of renewable resources. Modellvorhaben zur Foerderung des Anbaus und der Verwertung nachwachsender Rohstoffe

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    This revised report on the model projects presents individual projects and measures that complement each other and, in their totality, document an advanced state of development. Moreover, it shows that the basic challenge of a model project, especially in the field of the energy use of biomass, can be met by marrying agriculture to power utilities. Thus, projects are under way in which the cultivation of China reed and its utilization in combined heat and power plants will complement each other in the future. Further questions not represented in the research programme of Lower Saxony are dealt with at the federal level, so that the field of renewable resources may currently be considered comprehensively covered. (orig./EF).

  16. Utility of local health registers in measuring perinatal mortality: a case study in rural Indonesia.

    Science.gov (United States)

    Burke, Leona; Suswardany, Dwi Linna; Michener, Keryl; Mazurki, Setiawaty; Adair, Timothy; Elmiyati, Catur; Rao, Chalapati

    2011-03-17

    Perinatal mortality is an important indicator of obstetric and newborn care services. Although the vast majority of global perinatal mortality is estimated to occur in developing countries, there is a critical paucity of reliable data at the local level to inform health policy, plan health care services, and monitor their impact. This paper explores the utility of information from village health registers to measure perinatal mortality at the sub-district level in a rural area of Indonesia. A retrospective pregnancy cohort for 2007 was constructed by triangulating data from antenatal care, birth, and newborn care registers in a sample of villages in three rural sub-districts in Central Java, Indonesia. For each pregnancy, birth outcome and first-week survival were traced and recorded from the different registers, as available. Additional local death records were consulted to verify perinatal mortality, or to identify deaths not recorded in the health registers. Analyses were performed to assess the quality of register data and to measure perinatal mortality rates. Qualitative research was conducted to explore knowledge and practices of village midwives in register maintenance and reporting of perinatal mortality. Field activities were conducted in 23 villages, covering a total of 1759 deliveries that occurred in 2007. Perinatal mortality outcomes were 23 stillbirths and 15 early neonatal deaths, resulting in a perinatal mortality rate of 21.6 per 1000 live births in 2007. Stillbirth rates for the study population were about four times the rates reported in the routine Maternal and Child Health program information system. Inadequate awareness and supervision, and alternate workload were cited by local midwives as factors resulting in inconsistent data reporting. Local maternal and child health registers are a useful source of information on perinatal mortality in rural Indonesia. Suitable training, supervision, and quality control, in conjunction with computerisation to
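
    The reported rate follows directly from the counts given above; a quick check (using total deliveries as the denominator, as the abstract's figures imply):

        # Reproducing the reported perinatal mortality rate from the counts above.
        stillbirths, early_neonatal_deaths, deliveries = 23, 15, 1759
        rate = (stillbirths + early_neonatal_deaths) / deliveries * 1000
        print(f"Perinatal mortality rate: {rate:.1f} per 1000")  # -> 21.6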

  17. Causal Measurement Models: Can Criticism Stimulate Clarification?

    Science.gov (United States)

    Markus, Keith A.

    2016-01-01

    In their 2016 work, Aguirre-Urreta et al. provided a contribution to the literature on causal measurement models that enhances clarity and stimulates further thinking. Aguirre-Urreta et al. presented a form of statistical identity involving mapping onto the portion of the parameter space involving the nomological net, relationships between the…

  18. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out over the past two years concerning different ways of achieving high field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low-power models concern the switched-power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  19. Interpreting, measuring, and modeling soil respiration

    Science.gov (United States)

    Michael G. Ryan; Beverly E. Law

    2005-01-01

    This paper reviews the role of soil respiration in determining ecosystem carbon balance, and the conceptual basis for measuring and modeling soil respiration. We developed it to provide background and context for this special issue on soil respiration and to synthesize the presentations and discussions at the workshop. Soil respiration is the largest component of...

  20. Using DORIS measurements for ionosphere modeling

    Science.gov (United States)

    Dettmering, Denise; Schmidt, Michael; Limberger, Marco

    2013-04-01

    Nowadays, most of the ionosphere models used in geodesy are based on terrestrial GNSS measurements and describe the Vertical Total Electron Content (VTEC) depending on longitude, latitude, and time. Since modeling the height distribution of the electrons is difficult due to the measurement geometry, the VTEC maps are based on the assumption of a single-layer ionosphere. Moreover, the accuracy of the VTEC maps differs between regions of the Earth, because the GNSS stations are unevenly distributed over the globe and some regions (especially the ocean areas) are not well covered by observations. To overcome the unsatisfying measurement geometry of the terrestrial GNSS measurements and to take advantage of the different sensitivities of other space-geodetic observation techniques, we work on the development of multi-dimensional models of the ionosphere from the combination of modern space-geodetic satellite techniques. Our approach consists of a given background model and an unknown correction part expanded in terms of B-spline functions. Different space-geodetic measurements are used to estimate the unknown model coefficients. In order to take into account the different accuracy levels of the observations, a Variance Component Estimation (VCE) is applied. We have already proven the usefulness of radio occultation data from space-borne GPS receivers and of two-frequency altimetry data. Currently, we test the capability of DORIS observations to derive ionospheric parameters such as VTEC. Although DORIS was primarily designed for precise orbit computation of satellites, it can be used as a tool to study the Earth's ionosphere. The DORIS ground beacons are almost globally distributed and the system is on board various Low Earth Orbiters (LEOs) with different orbit heights, such as Jason-2, Cryosat-2, and HY-2. The latest generation of DORIS receivers directly provides phase measurements on two frequencies. In this contribution, we test the DORIS
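
    As a rough illustration of the estimation scheme described above (unknown correction coefficients estimated from heterogeneous observation groups, with variance components iterated until stable), the following sketch runs a simplified Helmert-type VCE on synthetic data. The design matrices, group sizes, and noise levels are invented; real GNSS/DORIS processing is far more involved.

        import numpy as np

        rng = np.random.default_rng(0)
        n_coef = 4                          # stand-in for B-spline coefficients
        x_true = rng.normal(size=n_coef)

        def make_group(n, sigma):
            """One observation technique: design matrix and noisy observations."""
            A = rng.normal(size=(n, n_coef))
            y = A @ x_true + rng.normal(scale=sigma, size=n)
            return A, y

        groups = [make_group(200, 0.5), make_group(50, 2.0)]  # e.g. GNSS, DORIS
        var = [1.0, 1.0]                    # initial variance components

        for _ in range(20):                 # iterate VCE until components stabilize
            N = sum(A.T @ A / v for (A, y), v in zip(groups, var))
            b = sum(A.T @ y / v for (A, y), v in zip(groups, var))
            x = np.linalg.solve(N, b)
            # Update each component from its residuals (simplified redundancy:
            # observations minus an equal share of the unknowns per group).
            var = [float((y - A @ x) @ (y - A @ x) / (len(y) - n_coef / len(groups)))
                   for (A, y) in groups]

        print("estimated coefficients:", x)
        print("estimated variance components:", var)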

  1. ATLAS MDT neutron sensitivity measurement and modeling

    International Nuclear Information System (INIS)

    Ahlen, S.; Hu, G.; Osborne, D.; Schulz, A.; Shank, J.; Xu, Q.; Zhou, B.

    2003-01-01

    The sensitivity of the ATLAS precision muon detector element, the Monitored Drift Tube (MDT), to fast neutrons has been measured using a 5.5 MeV Van de Graaff accelerator. The major mechanism of neutron-induced signals in the drift tubes is the elastic collisions between the neutrons and the gas nuclei. The recoil nuclei lose kinetic energy in the gas and produce the signals. By measuring the ATLAS drift tube neutron-induced signal rate and the total neutron flux, the MDT neutron signal sensitivities were determined for different drift gas mixtures and for different neutron beam energies. We also developed a sophisticated simulation model to calculate the neutron-induced signal rate and signal spectrum for ATLAS MDT operation configurations. The calculations agree with the measurements very well. This model can be used to calculate the neutron sensitivities for different gaseous detectors and for neutron energies above those available to this experiment

  2. Nonclassical measurement errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals...... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...... that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical...

  3. Consumer preferences for alternative fuel vehicles: Comparing a utility maximization and a regret minimization model

    International Nuclear Information System (INIS)

    Chorus, Caspar G.; Koetse, Mark J.; Hoen, Anco

    2013-01-01

    This paper presents a utility-based and a regret-based model of consumer preferences for alternative fuel vehicles, based on a large-scale stated-choice experiment held among company car leasers in the Netherlands. Estimation and application of random utility maximization and random regret minimization discrete choice models show that while the two models achieve almost identical fit with the data and differ only marginally in terms of predictive ability, they generate rather different choice-probability simulations and policy implications. The most eye-catching difference between the two models is that the random regret minimization model accommodates a compromise effect, as it assigns relatively high choice probabilities to alternative fuel vehicles that perform reasonably well on each dimension instead of having a strong performance on some dimensions and a poor performance on others. - Highlights: • Utility- and regret-based models of preferences for alternative fuel vehicles. • Estimation based on a stated-choice experiment among Dutch company car leasers. • Models generate rather different choice probabilities and policy implications. • Regret-based model accommodates a compromise effect
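
    The contrast between the two decision rules can be sketched in a few lines. The attribute values and weights below are illustrative, not the paper's estimates; the regret function is the ln(1 + exp(.)) form commonly used in the random regret minimization literature.

        import numpy as np

        X = np.array([[0.8, 0.2],    # alternative A: strong on dim 1, weak on dim 2
                      [0.2, 0.8],    # alternative B: the mirror image
                      [0.5, 0.5]])   # alternative C: the compromise
        beta = np.array([1.0, 1.0])

        # RUM: utility is linear in attributes; logit choice probabilities.
        U = X @ beta
        p_rum = np.exp(U) / np.exp(U).sum()

        # RRM: regret of i sums ln(1 + exp(beta_m * (x_jm - x_im))) over
        # competitors j and attributes m; lower regret is better.
        R = np.array([sum(np.log1p(np.exp(beta[m] * (X[j, m] - X[i, m])))
                          for j in range(len(X)) if j != i
                          for m in range(X.shape[1]))
                      for i in range(len(X))])
        p_rrm = np.exp(-R) / np.exp(-R).sum()

        print("RUM:", p_rum)   # all three alternatives tie at 1/3
        print("RRM:", p_rrm)   # the compromise C receives a relatively higher share

    With these symmetric inputs the utilities are identical, so RUM splits the market evenly, while RRM favours the compromise alternative C, which is exactly the effect highlighted in the abstract.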

  4. Modeling Water Utility Investments and Improving Regulatory Policies using Economic Optimisation in England and Wales

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2012-12-01

    Water utilities in England and Wales are regulated natural monopolies called 'water companies'. Water companies must obtain periodic regulatory approval for all investments (new supply infrastructure or demand management measures). Both water companies and their regulators use results from least-economic-cost capacity expansion optimisation models to develop or assess water supply investment plans. This presentation first describes the formulation of a flexible supply-demand planning capacity expansion model for water system planning. The model uses a mixed integer linear programming (MILP) formulation to choose the least-cost schedule of future supply schemes (reservoirs, desalination plants, etc.), demand management (DM) measures (leakage reduction, water efficiency and metering options) and bulk transfers. Decisions include which schemes to implement, when to do so, how to size schemes and how much to use each scheme during each year of an n-year planning horizon (typically 30 years). In addition to capital and operating (fixed and variable) costs, the estimated social and environmental costs of schemes are considered. Each proposed scheme is costed discretely at one or more capacities following regulatory guidelines. The model uses a node-link network structure: water demand nodes are connected to supply and demand management (DM) options (represented as nodes) or to other demand nodes (transfers). Yields from existing and proposed schemes are estimated separately using detailed water resource system simulation models evaluated over the historical period. The model simultaneously considers multiple demand scenarios to ensure demands are met at required reliability levels; use levels of each scheme are evaluated for each demand scenario and weighted by scenario likelihood so that operating costs are accurately evaluated. Multiple interdependency relationships between schemes (pre-requisites, mutual exclusivity, start dates, etc.) can be accounted for by
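
    A hedged sketch of the scheme-selection core of such a MILP, here using the open-source PuLP library: binary build decisions, a capacity constraint, and one interdependency. All scheme names, capacities, and costs are invented, and the real model adds yields, transfers, multiple demand scenarios, and a multi-year horizon.

        import pulp

        schemes = {                      # capacity (Ml/d), capital + fixed cost
            "reservoir":    (60, 120.0),
            "desalination": (40, 150.0),
            "leakage_fix":  (25,  40.0),
            "metering":     (15,  30.0),
        }
        demand_gap = 70  # extra capacity needed to meet the reliability target

        prob = pulp.LpProblem("capacity_expansion", pulp.LpMinimize)
        build = {s: pulp.LpVariable(f"build_{s}", cat="Binary") for s in schemes}

        # Objective: minimize total cost of the selected schemes.
        prob += pulp.lpSum(cost * build[s] for s, (cap, cost) in schemes.items())
        # Capacity constraint: selected schemes must close the demand gap.
        prob += pulp.lpSum(cap * build[s] for s, (cap, cost) in schemes.items()) >= demand_gap
        # Example interdependency: metering only if leakage reduction is also done.
        prob += build["metering"] <= build["leakage_fix"]

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print({s: int(v.value()) for s, v in build.items()},
              "total cost:", pulp.value(prob.objective))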

  5. Utility of silicone filtering for diffusive model CO2 sensors in field experiments

    Directory of Open Access Journals (Sweden)

    Shinjiro Ohkubo

    2013-05-01

    Full Text Available Installing a diffusive model CO2 sensor in the soil is a direct and useful method for observing the time variation of gas-phase CO2 concentration in soil, and it requires no bulky measurement system. A hydrophobic silicone filter prevents water infiltration; therefore, a sensor whose detection element is covered with a silicone filter can be durable in the field even when inundated (e.g. farmland during snowmelt, wetland with varying water level). The utility of a diffusive model CO2 sensor covered with a silicone filter was examined in laboratory and field experiments. Applying the silicone filter delays the response to changes in ambient CO2 concentration, because its gas permeability is lower than that of other conventionally used filter materials such as polytetrafluoroethylene. Theoretically, apart from the precision of the sensor itself, the diurnal variation of soil-gas CO2 concentration can be calculated from the data series obtained with a silicone-covered sensor with negligible error; the error is estimated at approximately 1% of the diurnal amplitude in most cases for a 10-min logging interval. Abrupt changes, such as those during a rainfall event, cause a larger gap between calculated and real values, but the proportion of this gap to the extent of the abrupt increase was extremely small (0.43% for a 10-min logging interval). For accurate estimation, a smoothly varying data series must be prepared as input; using a moving average or fitting a curve can be useful when the sensor or data logger has low resolution. Estimating the gas permeability coefficient is crucial for the calculation and can be done through laboratory experiments. This study revealed the possibility of evaluating the time variation of soil-gas CO2 concentration by installing a silicone-covered diffusive sensor in an inundated field.
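
    A minimal sketch of the correction implied above, assuming the silicone-covered sensor behaves as a first-order system with time constant tau, so the ambient concentration is approximately the reading plus tau times its time derivative. All numbers are synthetic; tau would in practice be estimated from the filter's gas permeability in the laboratory.

        import numpy as np

        tau = 30.0                        # minutes; illustrative time constant
        t = np.arange(0, 24 * 60, 10.0)   # 10-min logging interval over one day
        ambient = 1000 + 300 * np.sin(2 * np.pi * t / (24 * 60))  # ppm, synthetic

        # Simulate what the silicone-covered sensor would record (lagged, damped):
        # dC_sensor/dt = (C_ambient - C_sensor) / tau, integrated by Euler steps.
        sensor = np.empty_like(ambient)
        sensor[0] = ambient[0]
        for i in range(1, len(t)):
            dt = t[i] - t[i - 1]
            sensor[i] = sensor[i - 1] + dt / tau * (ambient[i - 1] - sensor[i - 1])

        # Invert: C_ambient ~= C_sensor + tau * dC_sensor/dt (smooth the series
        # first, e.g. with a moving average, if logger resolution is coarse).
        recovered = sensor + tau * np.gradient(sensor, t)
        print("max recovery error (ppm):", np.abs(recovered - ambient)[5:].max())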

  6. Utilization of arterial blood gas measurements in a large tertiary care hospital.

    Science.gov (United States)

    Melanson, Stacy E F; Szymanski, Trevor; Rogers, Selwyn O; Jarolim, Petr; Frendl, Gyorgy; Rawn, James D; Cooper, Zara; Ferrigno, Massimo

    2007-04-01

    We describe the patterns of utilization of arterial blood gas (ABG) tests in a large tertiary care hospital. To our knowledge, no hospital-wide analysis of ABG test utilization has been published. We analyzed 491 ABG tests performed during 24 two-hour intervals, representative of different staff shifts throughout the 7-day week. The clinician ordering each ABG test was asked to fill out a utilization survey. The most common reasons for requesting an ABG test were changes in ventilator settings (27.6%), respiratory events (26.4%), and routine (25.7%). Of the results, approximately 79% were expected, and a change in patient management (eg, a change in ventilator settings) occurred in 42% of cases. Many ABG tests were ordered as part of a clinical routine or to monitor parameters that can be assessed clinically or through less invasive testing. Implementation of practice guidelines may prove useful in controlling test utilization and in decreasing costs.

  7. Do generic utility measures capture what is important to the quality of life of people with multiple sclerosis?

    OpenAIRE

    Kuspinar, Ayse; Mayo, Nancy E

    2013-01-01

    Purpose The three most widely used utility measures are the Health Utilities Index Mark 2 and 3 (HUI2 and HUI3), the EuroQol-5D (EQ-5D) and the Short-Form-6D (SF-6D). In line with guidelines for economic evaluation from agencies such as the National Institute for Health and Clinical Excellence (NICE) and the Canadian Agency for Drugs and Technologies in Health (CADTH), these measures are currently being used to evaluate the cost-effectiveness of different interventions in MS. However, the cha...

  8. Implications of Model Structure and Detail for Utility Planning: Scenario Case Studies Using the Resource Planning Model

    Energy Technology Data Exchange (ETDEWEB)

    Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hale, Elaine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-04-01

    In this report, we analyze the impacts of model configuration and detail in capacity expansion models, computational tools used by utility planners looking to find the least cost option for planning the system and by researchers or policy makers attempting to understand the effects of various policy implementations. The present analysis focuses on the importance of model configurations — particularly those related to capacity credit, dispatch modeling, and transmission modeling — to the construction of scenario futures. Our analysis is primarily directed toward advanced tools used for utility planning and is focused on those impacts that are most relevant to decisions with respect to future renewable capacity deployment. To serve this purpose, we develop and employ the NREL Resource Planning Model to conduct a case study analysis that explores 12 separate capacity expansion scenarios of the Western Interconnection through 2030.

  9. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors in which coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

  10. The Utility of the UTAUT Model in Explaining Mobile Learning Adoption in Higher Education in Guyana

    Science.gov (United States)

    Thomas, Troy Devon; Singh, Lenandlar; Gaffar, Kemuel

    2013-01-01

    In this paper, we compare the utility of modified versions of the unified theory of acceptance and use of technology (UTAUT) model in explaining mobile learning adoption in higher education in a developing country and evaluate the size and direction of the impacts of the UTAUT factors on behavioural intention to adopt mobile learning in higher…

  11. IAPCS: A COMPUTER MODEL THAT EVALUATES POLLUTION CONTROL SYSTEMS FOR UTILITY BOILERS

    Science.gov (United States)

    The IAPCS model, developed by U.S. EPA's Air and Energy Engineering Research Laboratory and made available to the public through the National Technical Information Service, can be used by utility companies, architectural and engineering companies, and regulatory agencies at all l...

  12. An integrated utility-based model of conflict evaluation and resolution in the Stroop task.

    Science.gov (United States)

    Chuderski, Adam; Smolen, Tomasz

    2016-04-01

    Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both a weakly learned but goal-relevant response tendency and a strongly learned but goal-irrelevant one, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model is proposed. The model uses three crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in service of conflict resolution. Their complex, dynamic interaction led to the replication of 18 experimental effects, the largest data set explained to date by a single Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding, as reported in the extant Stroop literature.

  13. Utilizing the PREPaRE Model When Multiple Classrooms Witness a Traumatic Event

    Science.gov (United States)

    Bernard, Lisa J.; Rittle, Carrie; Roberts, Kathy

    2011-01-01

    This article presents an account of how the Charleston County School District responded to an event by utilizing the PREPaRE model (Brock, et al., 2009). The acronym, PREPaRE, refers to a range of crisis response activities: P (prevent and prepare for psychological trauma), R (reaffirm physical health and perceptions of security and safety), E…

  14. Entropy-optimal weight constraint elicitation with additive multi-attribute utility models

    NARCIS (Netherlands)

    Valkenhoef , van Gert; Tervonen, Tommi

    2016-01-01

    We consider the elicitation of incomplete preference information for the additive utility model in terms of linear constraints on the weights. Eliciting incomplete preferences using holistic pair-wise judgments is convenient for the decision maker, but selecting the best pair-wise comparison is

  15. Utilization of neutrons in nuclear data measurements and bulk sample analysis

    International Nuclear Information System (INIS)

    Jonah, S. A.

    1995-01-01

    Experimental investigations were carried out with neutrons in the fields of neutron data measurements and bulk sample analysis. The neutron interactions on which the investigations are based, together with some salient features of the neutron sources employed, are enumerated. Excitation cross-section curves and the isomeric cross-section ratio of the 58 Ni(n,p) 58 Co(m,g) reaction over the neutron energy range between 5 and 15 MeV were determined using the activation analysis technique in combination with high-resolution gamma spectroscopy. Characteristics of the incident neutrons produced via the D-T reaction of a neutron generator and the D-D reaction of a cyclotron were determined experimentally to account for the contributing effects of background neutrons, especially in the 5-13 MeV neutron energy range, where existing data are scanty and rather discrepant. The measured data agree well with calculations using nuclear models but deviate significantly from the recommended data based on the existing literature. The measured σ act and σ m /σ g data made it possible to determine the cross-section curve for the 58 Ni(n,p) 58 Co m reaction. Furthermore, the flux density distributions of thermal and primary fast neutrons in different configurations of bulk samples consisting of water, graphite and coal, together with their attenuation characteristics, were determined by the activation analysis and pulse-height response spectrometry techniques. From the results obtained, an experimental geometry has been proposed for on-line elemental analysis of coal and other minerals. Similarly, the total hydrogen content and the O+C/H atomic ratio in household and motor oils, as well as in crude oil samples of different origins, were measured by an improved experimental arrangement based on the thermal neutron reflection technique. A detection limit of 0.12 wt% was obtained for hydrogen, indicating the possible adaptation of this technique for quality control of petroleum products

  16. Mathematical model of radon activity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Paschuk, Sergei A.; Correa, Janine N.; Kappke, Jaqueline; Zambianchi, Pedro, E-mail: sergei@utfpr.edu.br, E-mail: janine_nicolosi@hotmail.com [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil); Denyak, Valeriy, E-mail: denyak@gmail.com [Instituto de Pesquisa Pele Pequeno Principe, Curitiba, PR (Brazil)

    2015-07-01

    Present work describes a mathematical model that quantifies the time-dependent amount of {sup 222}Rn and {sup 220}Rn altogether and their activities within an ionization chamber, such as the AlphaGUARD, which is used to measure the activity concentration of Rn in soil gas. The differential equations take into account three main processes, namely: the injection of Rn into the cavity of the detector by the air pump, including the effect of the travel time Rn takes to reach the chamber; Rn release by the air exiting the chamber; and radioactive decay of Rn within the chamber. The developed code quantifies the activity of the {sup 222}Rn and {sup 220}Rn isotopes separately. Following the standard methodology to measure Rn activity in soil gas, the air pump usually is turned off over a period of time in order to avoid the influx of Rn into the chamber. Since {sup 220}Rn has a short half-life, approximately 56 s, the model shows that after 7 minutes the activity concentration of this isotope is null; consequently, the measured activity refers to {sup 222}Rn only. Furthermore, the model also addresses the activity of {sup 220}Rn and {sup 222}Rn progeny, which, being metals, represent a potential risk of ionization chamber contamination that could increase the background of further measurements. Some preliminary comparison of experimental data and theoretical calculations is presented. The obtained transient and steady-state solutions could be used for planning of Rn in soil gas measurements as well as for accuracy assessment of obtained results, together with efficiency evaluation of the chosen measurement procedure. (author)
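
    The balance described above can be sketched, for both isotopes, as a pair of ordinary differential equations: flushing into and out of the chamber while the pump runs, plus radioactive decay. The chamber volume, flow rate, soil-gas concentrations, and pump schedule below are illustrative, not calibration constants of an actual AlphaGUARD.

        import numpy as np
        from scipy.integrate import solve_ivp

        V = 0.62          # chamber volume, litres (illustrative)
        q = 1.0           # pump flow rate, litres per minute (illustrative)
        lam222 = np.log(2) / (3.82 * 24 * 60)   # 222Rn decay constant, 1/min
        lam220 = np.log(2) / (56.0 / 60)        # 220Rn decay constant, 1/min
        c_soil = (2000.0, 2000.0)               # soil-gas activity conc., Bq/m3
        t_pump_off = 10.0                       # pump runs for the first 10 min

        def rhs(t, c):
            """Flushing (while pumping) plus decay, for 222Rn and 220Rn."""
            pumping = q / V if t < t_pump_off else 0.0
            return [pumping * (c_soil[0] - c[0]) - lam222 * c[0],
                    pumping * (c_soil[1] - c[1]) - lam220 * c[1]]

        sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0],
                        dense_output=True, max_step=0.25)
        for t in (10.0, 17.0):   # at pump-off and 7 minutes later
            c222, c220 = sol.sol(t)
            print(f"t={t:4.1f} min  222Rn={c222:7.1f}  220Rn={c220:7.1f}")

    Seven minutes after pump-off the {sup 220}Rn term has decayed through roughly 7.5 half-lives (a factor of about 180), reproducing the abstract's point that the remaining signal is effectively {sup 222}Rn only.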

  17. Modeling of reactive chemical transport of leachates from a utility fly-ash disposal site

    International Nuclear Information System (INIS)

    Apps, J.A.; Zhu, M.; Kitanidis, P.K.; Freyberg, D.L.; Ronan, A.D.; Itakagi, S.

    1991-04-01

    Fly ash from fossil-fuel power plants is commonly slurried and pumped to disposal sites. The utility industry is interested in finding out whether any hazardous constituents might leach from the accumulated fly ash and contaminate ground and surface waters. To evaluate the significance of this problem, a representative site was selected for modeling. FASTCHEM, a computer code developed for the Electric Power Research Institute, was utilized for the simulation of the transport and fate of the fly-ash leachate. The chemical evolution of the leachate was modeled as it migrated along streamtubes defined by the flow model. The modeling predicts that most of the leachate seeps through the dam confining the ash pond. With the exception of ferrous, manganous, sulfate and small amounts of nickel ions, all other dissolved constituents are predicted to discharge at environmentally acceptable concentrations

  18. FIM measurement properties and Rasch model details.

    Science.gov (United States)

    Wright, B D; Linacre, J M; Smith, R M; Heinemann, A W; Granger, C V

    1997-12-01

    To summarize, we take issue with the criticisms of Dickson & Köhler for two main reasons: 1. Rasch analysis provides a model from which to approach the analysis of the FIM, an ordinal scale, as an interval scale. The existence of examples of items or individuals which do not fit the model does not disprove the overall efficacy of the model; and 2. the principal components analysis of FIM motor items as presented by Dickson & Köhler tends to undermine rather than support their argument. Their own analyses produce a single major factor explaining between 58.5 and 67.1% of the variance, depending upon the sample, with secondary factors explaining much less variance. Finally, analysis of item response, or latent trait, is a powerful method for understanding the meaning of a measure. However, it presumes that item scores are accurate. Another concern is that Dickson & Köhler do not address the issue of reliability of scoring the FIM items on which they report, a critical point in comparing results. The Uniform Data System for Medical Rehabilitation (UDSMR) expends extensive effort in training the clinicians of subscribing facilities to score items accurately. This is followed by a credentialing process: Phase 1 involves testing individual clinicians who are submitting data to determine whether they have achieved mastery over the use of the FIM instrument; Phase 2 involves examining the data for outlying values. When Dickson & Köhler investigate more carefully the application of the Rasch model to their FIM data, they will discover that the results presented in their paper support rather than contradict their application of the Rasch model! This paper is typical of supposed refutations of Rasch model applications. Dickson & Köhler will find that idiosyncrasies in their data and misunderstandings of the Rasch model are the only basis for a claim to have disproven the relevance of the model to FIM data. The Rasch model is a mathematical theorem (like

  19. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    Science.gov (United States)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms, SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail + SNP, and (3) Gail + SNP + BI-RADS. Then we generated ROC curves for the three models. After assigning utility values to each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on the ROC curves to achieve the maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved the sensitivity of the Gail model (0.276 vs. 0.147) and reduced its specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel way to evaluate prediction models in the realm of radiogenomics.
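
    The operating-point selection described above reduces to maximizing expected utility along the ROC curve. The sketch below uses a toy ROC curve, an assumed prevalence, and illustrative utility values, not the study's fitted models.

        import numpy as np

        fpr = np.linspace(0, 1, 101)
        tpr = fpr ** 0.35                # a concave toy ROC curve
        prevalence = 0.01                # assumed probability of disease

        # Utilities per outcome on a 0-1 scale: TP, FP, FN, TN (illustrative).
        u_tp, u_fp, u_fn, u_tn = 0.85, 0.95, 0.0, 1.0

        # Expected utility at each operating point, weighted by prevalence.
        eu = (prevalence * (tpr * u_tp + (1 - tpr) * u_fn)
              + (1 - prevalence) * (fpr * u_fp + (1 - fpr) * u_tn))
        best = np.argmax(eu)
        print(f"MEU={eu[best]:.4f} at sensitivity={tpr[best]:.2f}, "
              f"specificity={1 - fpr[best]:.2f}")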

  20. Utility of Social Modeling in Assessment of a State's Propensity for Nuclear Proliferation

    International Nuclear Information System (INIS)

    Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.

    2011-01-01

    This report is the third and final report in a set of three documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development NA-22 Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems, and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.

  1. Utility of Social Modeling in Assessment of a State’s Propensity for Nuclear Proliferation

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.

    2011-06-01

    This report is the third and final report in a set of three documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development NA-22 Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems, and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.

  2. Target-oriented utility theory for modeling the deterrent effects of counterterrorism

    International Nuclear Information System (INIS)

    Bier, Vicki M.; Kosanoglu, Fuat

    2015-01-01

    Optimal resource allocation in security has been a significant challenge for critical infrastructure protection. Numerous studies use game theory as the method of choice, because an attacker can often observe the defender's investment in security and adapt his choice of strategies accordingly. However, most of these models do not explicitly consider deterrence, with the result that they may lead to wasted resources if less investment would be sufficient to deter an attack. In this paper, we assume that the defender is uncertain about the level of defensive investment that would deter an attack, and use target-oriented utility theory to optimize the level of defensive investment, taking into account the probability of deterrence. - Highlights: • We propose a target-oriented utility model for attacker deterrence. • We model attack deterrence as a function of attacker success probability. • We compare the target-oriented utility model with a conventional game-theoretical model. • Results show that our model yields a better value of the defender's objective function. • Results support that defending series systems is more difficult than defending parallel systems

  3. Radiation budget measurement/model interface

    Science.gov (United States)

    Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.

    1983-01-01

    This final report includes research results from the period February 1981 through November 1982. Two new results combine to form the final portion of this work: the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model, and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set at Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.

  4. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems, and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive, since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
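
    A rough sketch of the idea, under the assumption that the "6-D" correspondence search concatenates position and normal, so nearest neighbours are found under a weighted position-plus-normal metric. The weights and all data below are invented, and the brute-force search stands in for Metro's surface sampling and spatial indexing.

        import numpy as np

        def visual_closeness(P, NP, Q, NQ, w_pos=1.0, w_normal=0.3):
            """P, Q: (n,3) sample points; NP, NQ: (n,3) unit normals.
            Returns maximum and mean one-sided weighted distance from P to Q."""
            dmax, dsum = 0.0, 0.0
            for p, n in zip(P, NP):
                # nearest neighbour of (p, n) in Q under the weighted 6-D metric
                d = np.sqrt(w_pos * np.sum((Q - p) ** 2, axis=1)
                            + w_normal * np.sum((NQ - n) ** 2, axis=1))
                dmin = d.min()
                dmax = max(dmax, dmin)
                dsum += dmin
            return dmax, dsum / len(P)

        # Toy usage: a sampled plane versus a slightly perturbed copy.
        rng = np.random.default_rng(1)
        P = rng.uniform(0, 1, (500, 3)); P[:, 2] = 0
        NP = np.tile([0.0, 0.0, 1.0], (500, 1))
        Q = P + rng.normal(scale=0.01, size=P.shape)
        NQ = NP.copy()
        print(visual_closeness(P, NP, Q, NQ))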

  5. Why environmental and resource economists should care about non-expected utility models

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, W. Douglass; Woodward, Richard T. [Department of Agricultural Economics, Texas A and M University (United States)

    2008-01-15

    Experimental and theoretical analysis has shown that the conventional expected utility (EU) and subjective expected utility (SEU) models, which are linear in probabilities, have serious limitations in certain situations. We argue here that these limitations are often highly relevant to the work that environmental and natural resource economists do. We discuss some of the experimental evidence and alternatives to the SEU. We consider the theory used, the problems studied, and the methods employed by resource economists. Finally, we highlight some recent work that has begun to use some of the alternatives to the EU and SEU frameworks and discuss areas where much future work is needed. (author)

  6. Measurements and numerical simulations for optimization of the combustion process in a utility boiler

    Energy Technology Data Exchange (ETDEWEB)

    Vikhansky, A.; Bar-Ziv, E. [Ben-Gurion Univ. of the Negev, Dept. of Biotechnology and Environmental Engineering, Beer-Sheva (Israel); Chudnovsky, B.; Talanker, A. [Israel Electric Corp. (IEC),, Mechanical Systems Div., Haifa (Israel); Eddings, E.; Sarofim, A. [Reaction Engineering International, Salt Lake City, UT (United States); Utah Univ., Dept. of Chemical and Fuel Engineering, Salt Lake City, UT (United States)

    2004-07-01

    A three-dimensional computational fluid dynamics code was used to analyse the performance of a 550 MW pulverized-coal, opposite-wall-fired boiler (of IEC) at different operation modes. The main objective of this study was to prove that connecting plant measurements with three-dimensional furnace modelling is a cost-effective method for design, optimization and problem solving in power plant operation. Heat flux results from calculations were compared with measurements in the boiler and showed good agreement. Consequently, the code was used to study hydrodynamic aspects of the mixing of air and flue gases in the upper part of the boiler. It was demonstrated that effective mixing between flue gases and overfire air is of essential importance for CO reburning. Based on this complementary experimental-numerical effort, IEC is considering the possibility of improving the boiler performance by replacing the existing OFA nozzles with nozzles providing higher penetration depth of the air jets, with the aim of ensuring proper mixing to achieve better CO reburning. (Author)

  7. Measurements and numerical simulations for optimization of the combustion process in a utility boiler

    Energy Technology Data Exchange (ETDEWEB)

    A. Vikhansky; E. Bar-Ziv; B. Chudnovsky; A. Talanker; E. Eddings; A. Sarofim [Ben-Gurion University of the Negev, Beer-Sheva (Israel). Department of Biotechnology and Environmental Engineering

    2004-04-01

    A three-dimensional computational fluid dynamics code was used to analyse the performance of a 550 MW pulverized-coal, opposite-wall-fired boiler of the Israel Electric Corporation (IEC) at different operation modes. The main objective of this study was to prove that connecting plant measurements with three-dimensional furnace modelling is a cost-effective method for design, optimization and problem solving in power plant operation. Heat flux results from calculations were compared with measurements in the boiler and showed good agreement. Consequently, the code was used to study hydrodynamic aspects of the mixing of air and flue gases in the upper part of the boiler. It was demonstrated that effective mixing between flue gases and overfire air is of essential importance for CO reburning. Based on this complementary experimental-numerical effort, IEC is considering the possibility of improving the boiler performance by replacing the existing OFA nozzles with nozzles providing higher penetration depth of the air jets, with the aim of ensuring proper mixing to achieve better CO reburning. 7 refs., 7 figs., 1 tab.

  8. Left ventricular fluid dynamics in heart failure: echocardiographic measurement and utilities of vortex formation time.

    Science.gov (United States)

    Poh, Kian Keong; Lee, Li Ching; Shen, Liang; Chong, Eric; Tan, Yee Leng; Chai, Ping; Yeo, Tiong Cheng; Wood, Malissa J

    2012-05-01

    In clinical heart failure (HF), inefficient propagation of blood through the left ventricle (LV) may result from suboptimal vortex formation (VF) ability of the LV during early diastole. We aim to (i) validate the echocardiographically derived adapted vortex formation time (VFTa) in control subjects and (ii) examine its utility in both systolic and diastolic HF. Transthoracic echocardiography was performed in 32 normal subjects and in 130 patients who were hospitalized with HF [91 with reduced ejection fraction (rEF) and 39 with preserved ejection fraction (pEF)]. In addition to biplane left ventricular ejection fraction (LVEF) and conventional parameters, the Tei index and tissue Doppler (TD) indices were measured. VFTa was obtained using the formula: 4 × (1 - β)/π × α³ × LVEF, where β is the fraction of total transmitral diastolic stroke volume contributed by atrial contraction (assessed by the time velocity integrals of the mitral E- and A-waves) and α is the biplane end-diastolic volume (EDV)(1/3) divided by the mitral annular diameter during early diastole. VFTa was correlated with demographic and cardiac parameters, and with a composite clinical endpoint comprising cardiac death and repeat hospitalization for HF. Mean VFTa was 2.67 ± 0.8 in control subjects and was reduced in HF: 2.21 ± 0.8 in HF with preserved EF and 1.25 ± 0.6 in HF with reduced EF (P < 0.001). VFTa correlated with TD early diastolic myocardial velocities (E', septal, r = 0.46; lateral, r = 0.43), systolic myocardial velocities (S', septal, r = 0.47; lateral, r = 0.41), and inversely with the Tei index (r = -0.41); all Ps < 0.001. Sixty-two HF patients (49%) met the composite endpoint. VFTa of <1.32 was associated with significantly reduced event-free survival (Kaplan-Meier log-rank = 16.3, P = 0.0001) and predicted the endpoint with a sensitivity and specificity of 65 and 72%, respectively. VFTa, a dimensionless index incorporating LV geometry and systolic and diastolic parameters, may be useful in the diagnosis and prognosis of HF.
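
    The VFTa formula quoted above is straightforward to evaluate; a sketch with illustrative inputs (note that 1 mL = 1 cm³, so α is dimensionless and EDV/annulus³ gives α³ directly):

        import math

        tvi_e, tvi_a = 14.0, 8.0   # mitral E- and A-wave time velocity integrals, cm
        edv = 120.0                # biplane end-diastolic volume, mL
        annulus = 3.0              # early-diastolic mitral annular diameter, cm
        lvef = 0.60                # biplane ejection fraction (fraction, not %)

        beta = tvi_a / (tvi_e + tvi_a)       # atrial share of diastolic inflow
        alpha = edv ** (1 / 3) / annulus     # geometric ratio
        vfta = 4 * (1 - beta) / math.pi * alpha ** 3 * lvef
        print(f"beta={beta:.2f} alpha={alpha:.2f} VFTa={vfta:.2f}")  # ~2.16 here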

  9. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available The dynamic deployment technology of the virtual machine is one of the current cloud computing research focuses. Traditional methods mainly act after service performance has already degraded, and therefore lag. To solve this problem, a new prediction model of CPU utilization is constructed in this paper. The new model provides a reference for the VM dynamic deployment process, allowing deployment to be completed before service performance degrades; in this way it not only ensures quality of service but also improves server performance and resource utilization. The new prediction method of CPU utilization based on the ARIMA-BP neural network comprises four parts: preprocessing the collected data; building the ARIMA-BP neural network prediction model; correcting the nonlinear residuals of the time series with the BP prediction algorithm; and obtaining the final prediction results by analyzing the above data comprehensively.
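
    A minimal sketch of this hybrid scheme, using statsmodels' ARIMA for the linear part and a small scikit-learn multilayer perceptron standing in for the BP network on the residuals; the series is synthetic and the orders, lag window, and network size are illustrative:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        t = np.arange(400)
        cpu = (50 + 20 * np.sin(2 * np.pi * t / 48)      # daily-like cycle
               + 5 * np.sin(2 * np.pi * t / 7) ** 3      # nonlinear component
               + rng.normal(scale=2, size=t.size))       # noise (synthetic data)

        train, test = cpu[:350], cpu[350:]
        arima = ARIMA(train, order=(2, 0, 1)).fit()      # linear structure
        resid = arima.resid

        # Train the network to predict the next residual from the last k residuals.
        k = 6
        Xr = np.array([resid[i:i + k] for i in range(len(resid) - k)])
        yr = resid[k:]
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(Xr, yr)

        linear_fc = arima.forecast(steps=len(test))
        resid_fc = net.predict(resid[-k:].reshape(1, -1))[0]  # one-step correction
        hybrid_first = linear_fc[0] + resid_fc               # multi-step would iterate
        print("first test point:", test[0], "hybrid forecast:", round(hybrid_first, 2))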

  11. Validation of Storm Water Management Model Storm Control Measures Modules

    Science.gov (United States)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.

  12. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  13. Thermal effects in shales: measurements and modeling

    International Nuclear Information System (INIS)

    McKinstry, H.A.

    1977-01-01

    Research is reported concerning thermal and physical measurements and theoretical modeling relevant to the storage of radioactive wastes in shale. Reference thermal conductivity measurements are made at atmospheric pressure in a commercial apparatus. Equipment for permeability measurements has been developed and is being extended with respect to measurement ranges. Thermal properties of shales are being determined as a function of temperature and pressure. Apparatus was developed to measure shales in two different experimental configurations. In the first, a 15-mm-diameter disk of the material is measured by a steady-state technique, using a reference material to measure the heat flow within the system. The sample is sandwiched between two disks of a reference material (single-crystal quartz is being used initially). The heat flow is determined twice, across each reference, to verify that steady-state conditions prevail; when the two temperature drops indicate equal heat flow, the thermal conductivity of the sample can be calculated from the temperature difference between its two faces. The second technique determines the effect of temperature on a water-saturated shale at a larger scale. The cylindrical shale (or siltstone) specimens being studied (large for a laboratory sample) are heated electrically at the center and contained in a pressure vessel that maintains a fixed water pressure around them. The temperature is monitored at many points within the shale sample. The sample dimensions are 25 cm diameter, 20 cm long. A microcomputer system has been constructed to monitor 16 thermocouples and record the variation of the temperature distribution with time
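
    The steady-state comparative calculation implied by the first configuration is short once the two reference drops agree: the quartz references give the heat flux, and the sample conductivity follows from its own thickness and temperature drop. All numbers below are illustrative.

        # Comparative (reference-sandwich) conductivity calculation; the disk
        # area cancels because the same heat flux crosses every layer.
        k_ref = 6.2        # W/(m K), single-crystal quartz (illustrative value)
        L_ref = 0.010      # m, reference disk thickness
        L_s = 0.003        # m, shale disk thickness
        dT_ref = 4.0       # K, drop across each reference (equal -> steady state)
        dT_s = 9.0         # K, drop across the shale sample

        q = k_ref * dT_ref / L_ref          # heat flux through the stack, W/m2
        k_sample = q * L_s / dT_s           # Fourier's law solved for the sample
        print(f"heat flux {q:.0f} W/m2, shale conductivity {k_sample:.2f} W/(m K)")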

  14. USING RESPIROMETRY TO MEASURE HYDROGEN UTILIZATION IN SULFATE REDUCING BACTERIA IN THE PRESENCE OF COPPER AND ZINC

    Science.gov (United States)

    A respirometric method has been developed to measure hydrogen utilization by sulfate reducing bacteria (SRB). One application of this method has been to test the inhibitory effects of metals on the SRB culture used in a novel acid mine drainage treatment technology. As a control param...

  15. SEE Action Guide for States: Evaluation, Measurement, and Verification Frameworks – Guidance for Energy Efficiency Portfolios Funded by Utility Customers

    Energy Technology Data Exchange (ETDEWEB)

    Li, Michael [Dept. of Energy (DOE), Washington DC (United States); Dietsch, Niko [US Environmental Protection Agency (EPA), Cincinnati, OH (United States)]

    2018-01-01

    This guide describes frameworks for evaluation, measurement, and verification (EM&V) of utility customer-funded energy efficiency programs. The authors reviewed multiple frameworks across the United States and gathered input from experts to prepare this guide. It presents both the contents of an EM&V framework and the processes used to develop and update such frameworks.

  16. The utilization of cranial models created using rapid prototyping techniques in the development of models for navigation training.

    Science.gov (United States)

    Waran, V; Pancharatnam, Devaraj; Thambinayagam, Hari Chandran; Raman, Rajagopal; Rathinam, Alwin Kumar; Balakrishnan, Yuwaraj Kumar; Tung, Tan Su; Rahman, Z A

    2014-01-01

    Navigation in neurosurgery has expanded rapidly; however, suitable models to train end users on the myriad software and hardware that come with these systems are lacking. Utilizing three-dimensional (3D) industrial rapid prototyping processes, we have been able to create models using actual computed tomography (CT) data from patients with pathology and use these models to simulate a variety of commonly performed neurosurgical procedures with navigation systems. The aim was to assess the possibility of utilizing models created from CT scan datasets obtained from patients with cranial pathology to simulate common neurosurgical procedures using navigation systems. Three patients with pathology were selected (hydrocephalus, right frontal cortical lesion, and midline clival meningioma). CT scan data acquired following an image-guidance surgery protocol, in DICOM format, were used with a rapid prototyping machine to create the necessary printed models with the corresponding pathology embedded. The registration, planning, and navigation capabilities of two navigation systems, using a variety of software and hardware provided by these platforms, were assessed. We were able to register all models accurately using both navigation systems and perform the necessary simulations as planned. Models with pathology created by 3D rapid prototyping techniques accurately reflect data of actual patients and can be used in the simulation of neurosurgical operations using navigation systems.

  17. The Utilization of University Students as an Effective Measure for Reducing STIs among Teens

    Science.gov (United States)

    Spain, Adam

    2017-01-01

    Nearly 50% of all new sexually transmitted infections were found in teen and young adult populations in 2015, with the number of new infections expected to keep rising. This study evaluated the knowledge and opinions of university students to determine if changes should be made to the current sexual health education curricula utilized in high…

  18. Utilization Measurement: Focusing on the "U" in "D & U." Special Report.

    Science.gov (United States)

    Southwest Educational Development Lab., Austin, TX.

    One of a series of booklets on disability research, this paper is intended as an introduction to the role of evaluation in the utilization process. Its purpose is to help disability researchers grasp the importance of incorporating a focus on assessing use into plans for disseminating research outcomes. The paper begins by examining basic…

  19. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition.
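
    The inference step of a segmentation model of this kind can be illustrated with a small dynamic program. The sketch below is not the authors' Bayesian learner (which infers its lexicon from unsegmented input); it assumes a fixed toy unigram lexicon with made-up probabilities and recovers the highest-probability segmentation of an unsegmented utterance:

      import math

      # Toy unigram lexicon with hypothetical probabilities.
      lexicon = {"the": 0.2, "dog": 0.1, "cat": 0.1, "sat": 0.1, "on": 0.1, "mat": 0.1}
      logp = {w: math.log(p) for w, p in lexicon.items()}

      def segment(s, max_len=6):
          """Viterbi-style DP: best[i] = max log-probability of segmenting s[:i]."""
          n = len(s)
          best = [float("-inf")] * (n + 1)
          best[0], back = 0.0, [0] * (n + 1)
          for i in range(1, n + 1):
              for j in range(max(0, i - max_len), i):
                  w = s[j:i]
                  if w in logp and best[j] + logp[w] > best[i]:
                      best[i], back[i] = best[j] + logp[w], j
          words, i = [], n
          while i > 0:                       # walk the backpointers
              words.append(s[back[i]:i])
              i = back[i]
          return words[::-1]

      print(segment("thedogsatonthemat"))   # ['the', 'dog', 'sat', 'on', 'the', 'mat']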

  20. On the Path to SunShot - Utility Regulatory Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-05-01

    Net-energy metering (NEM) with volumetric retail electricity pricing has enabled rapid proliferation of distributed photovoltaics (DPV) in the United States. However, this transformation is raising concerns about the potential for higher electricity rates and cost-shifting to non-solar customers, reduced utility shareholder profitability, reduced utility earnings opportunities, and inefficient resource allocation. Although DPV deployment in most utility territories remains too low to produce significant impacts, these concerns have motivated real and proposed reforms to utility regulatory and business models, with profound implications for future DPV deployment. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy’s SunShot Initiative. As such, the report focuses on a subset of a broader range of reforms underway in the electric utility sector. Drawing on original analysis and existing literature, we analyze the significance of DPV’s financial impacts on utilities and non-solar ratepayers under current NEM rules and rate designs, the projected effects of proposed NEM and rate reforms on DPV deployment, and alternative reforms that could address utility and ratepayer concerns while supporting continued DPV growth. We categorize reforms into one or more of four conceptual strategies. Understanding how specific reforms map onto these general strategies can help decision makers identify and prioritize options for addressing specific DPV concerns that balance stakeholder interests.

  1. Advances in fluid modeling and turbulence measurements

    International Nuclear Information System (INIS)

    Wada, Akira; Ninokata, Hisashi; Tanaka, Nobukazu

    2002-01-01

    The contents of this book cover four fields: Environmental Fluid Mechanics; Industrial Fluid Mechanics; Fundamentals of Fluid Mechanics; and Turbulence Measurements. Environmental Fluid Mechanics includes free surface flows in channels, rivers, seas, and estuaries. It also discusses wind engineering issues, ocean circulation models and dispersion problems in atmospheric, water and ground water environments. In Industrial Fluid Mechanics, fluid phenomena in energy exchanges, modeling of turbulent two- or multi-phase flows, swirling flows, flows in combustors, variable density flows and reacting flows, flows in turbo-machines, pumps and piping systems, and fluid-structure interaction are discussed. In Fundamentals of Fluid Mechanics, progress in modeling turbulent flows and heat/mass transfers, computational fluid dynamics/numerical techniques, parallel computing algorithms, and applications of chaos/fractal theory in turbulence are reported. In Turbulence Measurements, experimental studies of turbulent flows, experimental and post-processing techniques, and quantitative and qualitative flow visualization techniques are discussed. Separate abstracts were presented for 15 of the papers in this issue. The remaining 89 were considered outside the subject scope of INIS. (J.P.N.)

  2. Extending the Utility of the Parabolic Approximation in Medical Ultrasound Using Wide-Angle Diffraction Modeling.

    Science.gov (United States)

    Soneson, Joshua E

    2017-04-01

    Wide-angle parabolic models are commonly used in geophysics and underwater acoustics but have seen little application in medical ultrasound. Here, a wide-angle model for continuous-wave high-intensity ultrasound beams is derived, which approximates the diffraction process more accurately than the commonly used Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation without increasing implementation complexity or computing time. A method for preventing the high spatial frequencies often present in source boundary conditions from corrupting the solution is presented. Simulations of shallowly focused axisymmetric beams using both the wide-angle and standard parabolic models are compared to assess the accuracy with which they model diffraction effects. The wide-angle model proposed here offers improved focusing accuracy and less error throughout the computational domain than the standard parabolic model, offering a facile method for extending the utility of existing KZK codes.
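
    The difference between the standard parabolic and a wide-angle treatment is easiest to see in the plane-wave (angular-spectrum) picture, where the parabolic model replaces the exact axial wavenumber sqrt(k^2 - kx^2) by its quadratic expansion k - kx^2/(2k). The sketch below (water-like parameters assumed; this is not the paper's specific scheme) shows how fast the paraxial phase error grows with propagation angle:

      import numpy as np

      c0, f = 1500.0, 1.0e6            # assumed sound speed (m/s) and frequency (Hz)
      k = 2 * np.pi * f / c0           # wavenumber (rad/m)
      kx = np.linspace(0, 0.5 * k, 6)  # transverse wavenumbers up to a 30 degree angle

      kz_exact = np.sqrt(k**2 - kx**2)   # exact axial wavenumber
      kz_parax = k - kx**2 / (2 * k)     # standard parabolic approximation

      for kxi, kze, kzp in zip(kx, kz_exact, kz_parax):
          angle = np.degrees(np.arcsin(kxi / k))
          print(f"angle {angle:5.1f} deg: axial-wavenumber error {(kzp - kze) / k:.2e} (per k)")
      # Wide-angle models approximate sqrt(k^2 - kx^2) more closely (for example
      # with rational expansions), which is what improves focusing accuracy for
      # shallowly focused beams.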

  3. Federal and State Structures to Support Financing Utility-Scale Solar Projects and the Business Models Designed to Utilize Them

    Energy Technology Data Exchange (ETDEWEB)

    Mendelsohn, M.; Kreycik, C.

    2012-04-01

    Utility-scale solar projects have grown rapidly in number and size over the last few years, driven in part by strong renewable portfolio standards (RPS) and federal incentives designed to stimulate investment in renewable energy technologies. This report provides an overview of such policies, as well as the project financial structures they enable, based on industry literature, publicly available data, and questionnaires conducted by the National Renewable Energy Laboratory (NREL).

  4. Standing Height and its Estimation Utilizing Foot Length Measurements in Adolescents from Western Region in Kosovo

    Directory of Open Access Journals (Sweden)

    Stevo Popović

    2017-10-01

    The purpose of this research is to examine standing height in both Kosovan genders in the Western Region, as well as its association with foot length as an alternative for estimating standing height. A total of 664 individuals (338 male and 326 female) participated in this research. The anthropometric measurements were taken according to the ISAK protocol. The relationships between body height and foot length were determined using simple correlation coefficients at a ninety-five percent confidence interval. A comparison of means of standing height and foot length between genders was performed using a t-test. A linear regression analysis was then carried out to examine the extent to which foot length can reliably predict standing height. Results showed that Western Kosovan males are 179.71±6.00 cm tall with a foot length of 26.73±1.20 cm, while Western Kosovan females are 166.26±5.23 cm tall with a foot length of 23.66±1.06 cm. Both genders make Western Kosovans a tall group, slightly taller than the general Kosovan population. Moreover, foot length reliably predicts standing height in both genders, though not as reliably as arm span. This study also confirms the necessity of developing separate height models for each region in Kosovo, as the results for Western Kosovans do not correspond to the general values.
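
    As an illustration of the regression step reported above, the sketch below fits height on foot length for synthetic data generated around the reported male means; the correlation (r = 0.6) and all generated values are assumptions, not the study's data or coefficients:

      import numpy as np

      rng = np.random.default_rng(0)
      n, r = 338, 0.6                      # sample size from the paper; r assumed
      foot = rng.normal(26.73, 1.20, n)    # around reported male foot length (cm)
      height = (179.71 + r * (6.00 / 1.20) * (foot - 26.73)
                + rng.normal(0, 6.00 * np.sqrt(1 - r**2), n))

      slope, intercept = np.polyfit(foot, height, 1)   # least-squares line
      print(f"height ~ {intercept:.1f} + {slope:.2f} * foot_length")
      print(f"sample correlation r = {np.corrcoef(foot, height)[0, 1]:.2f}")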

  5. Beta measurements and modeling the Tevatron

    International Nuclear Information System (INIS)

    Gelfand, N.M.

    1993-06-01

    The Tevatron collider now operates with two low-β (β*=0.25--0.5 m) interaction regions, denoted B0 and D0. This lattice allows independent operation of the interaction regions, which required modification of the previous collider lattice used in 1988--89. In order to see how well the lattice conforms to the design, measurements of the β function have been carried out at 15 locations in the new Tevatron collider lattice. Agreement between the measurements and a computer model for the Tevatron, based on the design, can be obtained only if the strengths of the gradients in the quadrupoles of the low-β triplet are allowed to differ from their design values. It is also observed that the lattice is very sensitive to the precise values of the gradients in these magnets.

  6. Electrostatic sensor modeling for torque measurements

    Directory of Open Access Journals (Sweden)

    M. Mika

    2017-09-01

    Torque load measurements play an important part in various engineering applications, for instance in the automotive industry, where the drive torque of a motor has to be determined. A widely used measuring method is strain gauges: a thin flexible foil, which supports a metallic pattern, is glued to the surface of the object the torque is applied to. In case of a deformation due to the torque load, the change in electrical resistance is measured, and in combination with the constitutive equations the applied torque load is determined from this change. The creep of the glue and the foil material, together with their temperature and humidity dependence, may become an obstacle for some applications (Kapralov and Fesenko, 1984). Thus, optical and magnetic, as well as capacitive, sensors have been introduced. This paper discusses the general idea behind an electrostatic capacitive sensor based on a simple draft of an exemplary measurement setup. For better understanding, an electrostatic, geometrical and mechanical model of this setup has been developed.

  7. Electrostatic sensor modeling for torque measurements

    Science.gov (United States)

    Mika, Michał; Dannert, Mirjam; Mett, Felix; Weber, Harry; Mathis, Wolfgang; Nackenhorst, Udo

    2017-09-01

    Torque load measurements play an important part in various engineering applications, for instance in the automotive industry, where the drive torque of a motor has to be determined. A widely used measuring method is strain gauges: a thin flexible foil, which supports a metallic pattern, is glued to the surface of the object the torque is applied to. In case of a deformation due to the torque load, the change in electrical resistance is measured, and in combination with the constitutive equations the applied torque load is determined from this change. The creep of the glue and the foil material, together with their temperature and humidity dependence, may become an obstacle for some applications (Kapralov and Fesenko, 1984). Thus, optical and magnetic, as well as capacitive, sensors have been introduced. This paper discusses the general idea behind an electrostatic capacitive sensor based on a simple draft of an exemplary measurement setup. For better understanding, an electrostatic, geometrical and mechanical model of this setup has been developed.
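
    The idea behind such a capacitive sensor can be sketched with the simplest possible electrostatic model: a parallel-plate capacitor whose gap changes with the torsion angle. All geometry and stiffness values below are hypothetical, and the paper's own model is considerably more detailed:

      import numpy as np

      eps0 = 8.854e-12        # vacuum permittivity (F/m)
      A = 1.0e-4              # assumed electrode area (m^2)
      d0 = 100e-6             # assumed nominal gap (m)
      k_theta = 50.0          # assumed torsional stiffness (N.m/rad)
      r = 0.01                # assumed lever arm mapping twist to gap change (m)

      def capacitance(torque):
          """Small-angle model: the gap closes in proportion to the twist."""
          theta = torque / k_theta       # twist angle (rad)
          return eps0 * A / (d0 - r * theta)

      for T in (0.0, 0.05, 0.10):        # applied torques (N.m)
          print(f"T = {T:4.2f} N.m -> C = {capacitance(T) * 1e12:6.2f} pF")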

  8. Academic Self-Concept: Modeling and Measuring for Science

    Science.gov (United States)

    Hardy, Graham

    2014-08-01

    In this study, the author developed a model to describe academic self-concept (ASC) in science and validated an instrument for its measurement. Unlike previous models of science ASC, which envisage science as a homogenous single global construct, this model took a multidimensional view by conceiving science self-concept as possessing distinctive facets including conceptual and procedural elements. In the first part of the study, data were collected from 1,483 students attending eight secondary schools in England, through the use of a newly devised Secondary Self-Concept Science Instrument, and structural equation modeling was employed to test and validate a model. In the second part of the study, the data were analysed within the new self-concept framework to examine learners' ASC profiles across the domains of science, with particular attention paid to age- and gender-related differences. The study found that the proposed science self-concept model exhibited robust measures of fit and construct validity, which were shown to be invariant across gender and age subgroups. The self-concept profiles were heterogeneous in nature with the component relating to self-concept in physics, being surprisingly positive in comparison to other aspects of science. This outcome is in stark contrast to data reported elsewhere and raises important issues about the nature of young learners' self-conceptions about science. The paper concludes with an analysis of the potential utility of the self-concept measurement instrument as a pedagogical device for science educators and learners of science.

  9. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Background: Computational models of protein structures have proved useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results: For our dataset of 615 search models, using the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. On the contrary, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a “clustering MQAP”, was used. Conclusions: Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict whether an MR search with a template-based model for a given template is likely to find the correct solution.

  10. Model measurements for the switched power linac

    International Nuclear Information System (INIS)

    Aronson, S.; Caspers, F.; Haseroth, H.; Knott, J.; Willis, W.

    1987-01-01

    To study some aspects of the structure of the switched power linac (or wakefield transformer), a scaled-up model with 2.4 m diameter has been built. Measurements were performed with real-time and synthetic pulses with spectral components up to 5 GHz. Results are obtained for the achievable transformer ratio as a function of the spectral composition of the pulses and for the influence of discrete feeding at the circumference of the transformer disk. The effects of asymmetric feeding in space and time were also investigated experimentally, as well as the influence of the central geometry.

  11. Measures for increased nutrition and utilization of non-conventional food resources during disasters in Africa.

    Science.gov (United States)

    Nur, I M

    1999-01-01

    The basic causes of the poor performance of the food and agricultural sector in the different parts of Africa are external, internal, and natural. The general recession in the Continent limits the capacity of the respective countries to import food to supplement inadequate domestic production and supplies. There are a number of nutritious food resources, both cultivated and gathered, in the different ecological zones of Africa whose production and consumption can be increased to ensure adequate food security and a nutritious diet, especially during disasters. These food resources include cereals, legumes, fruits, vegetables, fish, and insects. They are already available over wide geographical areas in Africa but are either unutilized or utilized only to a limited extent. Therefore, strategies to increase food supply, eradicate hunger and malnutrition, and keep people alive in times of disasters should prioritize the cultivation and consumption of non-conventional food resources in the respective communities and countries.

  12. Concave utility, transaction costs, and risk in measuring discounting of delayed rewards.

    Science.gov (United States)

    Kirby, Kris N; Santiesteban, Mariana

    2003-01-01

    Research has consistently found that the decline in the present values of delayed rewards as delay increases is better fit by hyperbolic than by exponential delay-discounting functions. However, concave utility, transaction costs, and risk each could produce hyperbolic-looking data, even when the underlying discounting function is exponential. In Experiments 1 (N = 45) and 2 (N = 103), participants placed bids indicating their present values of real future monetary rewards in computer-based 2nd-price auctions. Both experiments suggest that utility is not sufficiently concave to account for the superior fit of hyperbolic functions. Experiment 2 provided no evidence that the effects of transaction costs and risk are large enough to account for the superior fit of hyperbolic functions.
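
    The model comparison at issue is easy to sketch: fit both candidate discounting functions to present-value data and compare residuals. The data below are hypothetical (chosen to follow the hyperbolic pattern typically reported), not the auction bids from these experiments:

      import numpy as np
      from scipy.optimize import curve_fit

      D = np.array([1, 7, 30, 90, 180, 365], dtype=float)   # delays (days)
      V = np.array([99, 93, 77, 53, 36, 21], dtype=float)   # present values of $100

      def hyperbolic(D, k):
          return 100.0 / (1.0 + k * D)

      def exponential(D, k):
          return 100.0 * np.exp(-k * D)

      for name, f in (("hyperbolic", hyperbolic), ("exponential", exponential)):
          k, _ = curve_fit(f, D, V, p0=[0.01])
          sse = np.sum((V - f(D, k[0])) ** 2)     # sum of squared residuals
          print(f"{name:11s} k = {k[0]:.4f}, SSE = {sse:.1f}")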

  13. The waiting time distribution as a graphical approach to epidemiologic measures of drug utilization

    DEFF Research Database (Denmark)

    Hallas, J; Gaist, D; Bjerrum, L

    1997-01-01

    The waiting time distribution is presented as a graphical approach that effectively conveys some essential utilization parameters for a drug. The waiting time distribution for a group of drug users is a charting of their first prescription presentations within a specified time window. For a drug used for chronic treatment, most current users will be captured at the beginning of the window. After a few months, the graph will be dominated by new, incident users. As examples, we present waiting time distributions for insulin, ulcer drugs, systemic corticosteroids, antidepressants, and disulfiram. Appropriately analyzed and interpreted, the waiting time distributions can provide information about the period prevalence, point prevalence, incidence, duration of use, seasonality, and rate of prescription renewal or relapse for specific drugs. Each of these parameters has a visual correlate. The waiting time distributions may be an informative supplement to conventional drug utilization…
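
    Constructing a waiting time distribution from claims data amounts to charting each person's first dispensing within the window. A minimal pandas sketch with invented rows:

      import pandas as pd

      # Hypothetical prescription claims: one row per dispensing.
      claims = pd.DataFrame({
          "patient": [1, 1, 1, 2, 2, 3, 4, 4, 5],
          "date": pd.to_datetime([
              "2020-01-10", "2020-04-02", "2020-07-15",   # prevalent user
              "2020-02-20", "2020-05-11",                 # prevalent user
              "2020-08-03",                               # incident user
              "2020-01-05", "2020-09-30",                 # prevalent user
              "2020-11-17",                               # incident user
          ]),
      })

      # First presentation per patient, binned by month: prevalent users pile
      # up at the start of the window; later months are dominated by new users.
      first = claims.groupby("patient")["date"].min()
      print(first.dt.to_period("M").value_counts().sort_index())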

  14. Information as a Measure of Model Skill

    Science.gov (United States)

    Roulston, M. S.; Smith, L. A.

    2002-12-01

    Physicist Paul Davies has suggested that rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation. This scoring rule measures the information deficit or "ignorance" of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance", as determined from the logarithmic scoring rule, thus demonstrating a one-to-one correspondence between data compression and gambling returns. Thus information theory provides a way to think about model evaluation that is both philosophically satisfying and practically oriented. P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley, 1990; J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
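
    The logarithmic scoring rule itself is a one-liner; the sketch below scores three made-up forecasts of an event that actually occurred, in bits of "ignorance":

      import numpy as np

      def ignorance(p_outcome):
          """Logarithmic score: information deficit, in bits, of a forecaster
          who assigned probability p_outcome to what actually happened."""
          return -np.log2(p_outcome)

      print(ignorance(0.8))   # sharp and right: 0.32 bits
      print(ignorance(0.5))   # coin-flip climatology: 1.00 bit
      print(ignorance(0.1))   # confident and wrong: 3.32 bits
      # Averaged over many forecasts, lower ignorance means better compression
      # of the observations and, per Kelly, faster growth of a gambler's bankroll.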

  15. On the use of prior information in modelling metabolic utilization of energy in growing pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Jørgensen, Henry; Fernández, José Adalberto

    2011-01-01

    Construction of models that provide a realistic representation of metabolic utilization of energy in growing animals tends to be over-parameterized because data generated from individual metabolic studies are often sparse. In the Bayesian framework prior information can enter the data analysis… It was assumed that the measurements (…, PD and LD) made on a given pig at a given time followed a multivariate normal distribution. Two different equation systems were adopted from Strathe et al. (2010), generating the expected values in the multivariate normal distribution. Non-informative prior distributions were assigned for all model parameters…, kp and kf, respectively. Utilizing both sets of priors showed that the maintenance component was sensitive to the statement of prior belief and, hence, that the estimate of 0.91 MJ kg^-0.60 d^-1 (95% CI: 0.78; 1.09) should be interpreted with caution. It was shown that boars were superior in depositing…

  16. The utilization of electronic computers for bone density measurements with iodine 125 profile scanner

    International Nuclear Information System (INIS)

    Reiners, C.

    1974-01-01

    The utilization of electronic computers in the determination of the mineral content of bone with the ¹²⁵I profile scanner offers many advantages. The computer considerably lessens the work-intensive routine evaluation. It enables the direct calculation of the attenuation coefficients, which means greater accuracy and correctness of the results compared to the former 'graphical' method, as approximations are eliminated and reference errors are avoided.

  17. A Steam Utility Network Model for the Evaluation of Heat Integration Retrofits – A Case Study of an Oil Refinery

    Directory of Open Access Journals (Sweden)

    Sofie Marton

    2017-12-01

    This paper presents a real industrial example in which the steam utility network of a refinery is modelled in order to evaluate potential Heat Integration retrofits proposed for the site. A refinery typically has the flexibility to optimize the operating strategy of the steam system depending on the operation of the main processes. This paper presents a few examples of Heat Integration retrofit measures from a case study of a large oil refinery. In order to evaluate expected changes in fuel and electricity imports to the refinery after implementation of the proposed retrofits, a steam system model has been developed. The model has been tested and validated with steady-state data from three different operating scenarios and can be used to evaluate how changes to steam balances at different pressure levels would affect overall steam balances, the generation of shaft power in turbines, and the consumption of fuel gas.

  18. Modeling the development and utilization of bioenergy and exploring the environmental economic benefits

    International Nuclear Information System (INIS)

    Song, Junnian; Yang, Wei; Higano, Yoshiro; Wang, Xian’en

    2015-01-01

    Highlights: • A complete bioenergy flow is designed to industrialize bioenergy utilization. • An input–output optimization simulation model is developed. • Energy supply and demand and bioenergy industries' development are optimized. • Carbon tax and subsidies are endogenously derived by the model. • Environmental economic benefits of bioenergy utilization are explored dynamically. Abstract: This paper outlines a complete bioenergy flow incorporating bioresource procurement, feedstock supply, conversion technologies and energy consumption to industrialize the development and utilization of bioenergy. An input–output optimization simulation model is developed to introduce bioenergy industries into the regional socioeconomy and the energy production and consumption system, and to dynamically explore the economic, energy and environmental benefits. A 16-term simulation from 2010 to 2025 is performed in scenarios preset on the basis of bioenergy industries, a carbon tax-subsidization policy and distinct levels of greenhouse gas emission constraints. An empirical study is conducted to validate and apply the model. In the optimal scenario, both industrial development and energy supply and demand are optimized, contributing to an 8.41% average gross regional product growth rate and a 39.9% reduction in accumulative greenhouse gas emissions compared with the base scenario. By 2025 the consumption ratio of bioenergy in total primary energy could be increased from 0.5% to 8.2%, and the energy self-sufficiency rate from 57.7% to 77.9%. A dynamic carbon tax rate and the extent to which bioenergy industrial development could be promoted are also elaborated. Regional economic development and greenhouse gas mitigation can potentially be promoted simultaneously by bioenergy utilization and a proper greenhouse gas emission constraint. The methodology presented is capable of introducing new industries or policies related to energy planning and detecting the best tradeoffs of…

  19. Changes in fibrinogen availability and utilization in an animal model of traumatic coagulopathy

    DEFF Research Database (Denmark)

    Hagemo, Jostein S; Jørgensen, Jørgen; Ostrowski, Sisse R

    2013-01-01

    Impaired haemostasis following shock and tissue trauma is frequently detected in the trauma setting. These changes occur early, and are associated with increased mortality. The mechanism behind trauma-induced coagulopathy (TIC) is not clear. Several studies highlight the crucial role of fibrinogen in posttraumatic haemorrhage. This study explores the coagulation changes in a swine model of early TIC, with emphasis on fibrinogen levels and utilization of fibrinogen.

  20. Utility Function for modeling Group Multicriteria Decision Making problems as games

    OpenAIRE

    Alexandre Bevilacqua Leoneti

    2016-01-01

    To assist in the decision making process, several multicriteria methods have been proposed. However, the existing methods assume a single decision-maker and do not consider decision under risk, which is better addressed by Game Theory. Hence, the aim of this research is to propose a Utility Function that makes it possible to model Group Multicriteria Decision Making problems as games. The advantage of using Game Theory for solving Group Multicriteria Decision Making problems is to evaluate th...

  1. A novel GUI modeled fuzzy logic controller for a solar powered energy utilization scheme

    International Nuclear Information System (INIS)

    Altas, I. H.; Sharaf, A. M.

    2007-01-01

    Photovoltaic (PVA) solar powered electrical systems comprise different components and subsystems to be controlled separately. Since the generated solar power depends on uncontrollable environmental conditions, extra caution is required to design controllers that handle unpredictable events and maintain efficient load-matching power. In this study, a photovoltaic (PV) solar array model is developed for the Matlab/Simulink GUI environment and controlled using a fuzzy logic controller (FLC), which is also developed for the GUI environment. The FLC is used to hold the DC load bus voltage at a constant value as well as to control the speed of a PMDC motor as one of the loads being fed. The FLC designed using the Matlab/Simulink GUI environment has flexible design criteria so that it can easily be modified and extended for controlling different systems. The proposed FLC is used in three different parts of the PVA stand-alone utilization scheme: speed control of the PMDC load, control of the DC load bus voltage, and maximum power point tracking (MPPT) control, which is used to operate the PVA at its available maximum power as the solar insolation and ambient temperature change. This paper presents a study of a stand-alone photovoltaic energy utilization system feeding a DC and AC hybrid electric load, fully controlled by a novel and simple on-line fuzzy logic based dynamic search, detection and tracking controller that ensures maximum power point operation under excursions in solar insolation, ambient temperature and electric load variations. The maximum power point (MPP) search and detection algorithm is fully dynamic in nature and operates without any required direct measurement or forecasted PV array information about the irradiation and temperature. An added search sensitivity measure is defined and also used in the MPP search algorithm to sense and dynamic response for…
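
    The paper's controller is fuzzy-logic based; purely as an illustration of what any MPPT search must accomplish, the sketch below runs the classic perturb-and-observe hill-climb on an invented single-peak P-V curve (peak near 17 V; every constant is hypothetical):

      # Generic perturb-and-observe MPPT on a toy PV curve (not the paper's
      # fuzzy search; the curve and all constants are made up).
      def pv_power(v):
          return max(0.0, v * (3.0 - 0.00346 * v ** 2))   # single peak at 17 V

      v, dv, p_prev = 12.0, 0.2, 0.0
      for _ in range(100):
          p = pv_power(v)
          if p < p_prev:       # power dropped, so reverse the perturbation
              dv = -dv
          p_prev = p
          v += dv
      print(f"settled near v = {v:.1f} V, p = {pv_power(v):.1f} W")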

  2. Classifier utility modeling and analysis of hypersonic inlet start/unstart considering training data costs

    Science.gov (United States)

    Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen

    2011-11-01

    Start/unstart detection is one of the most important issues for hypersonic inlets and is also the foundation of protection control of the scramjet. Inlet start/unstart detection can be treated as a standard pattern classification problem, and the training sample costs have to be considered in classifier modeling, as CFD numerical simulations and wind tunnel experiments of hypersonic inlets both cost time and money. To solve this problem, the CFD simulation of the inlet is studied as a first step, and the simulation results provide the training data for pattern classification of hypersonic inlet start/unstart. Then classifier modeling technology and maximum classifier utility theories are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of the hypersonic inlet start/unstart classifier can be obtained by the maximum classifier utility theories.
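
    A minimal version of the classification step, with synthetic stand-ins for the CFD-generated training data (the feature choice, the toy labeling rule and all constants are assumptions):

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      n = 200
      # Hypothetical features per CFD run: [back-pressure ratio, inflow Mach].
      X = np.column_stack([rng.uniform(1, 10, n), rng.uniform(4, 7, n)])
      y = (X[:, 1] - 0.5 * X[:, 0] > 2.5).astype(int)   # toy start/unstart rule

      clf = SVC(kernel="rbf", C=1.0).fit(X[:150], y[:150])   # train on 150 runs
      print("held-out accuracy:", clf.score(X[150:], y[150:]))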

  3. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    In the smart grid, large consumers can procure electricity energy from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide its energy procurement strategy under risks such as price fluctuations in the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, generation companies under bilateral contracts, the options market and self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. In order to measure the risks from price fluctuations and power quality, expected utility and entropy are employed. Consequently, the expected utility and entropy decision-making model is presented, which helps large consumers to maximize their expected profit from electricity procurement while properly limiting its volatility. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
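
    The two evaluation quantities the model combines can be sketched directly. Scenario profits, probabilities, the utility curvature and the suggested weighting are all hypothetical here, not the paper's case-study numbers:

      import numpy as np

      profit = np.array([120.0, 95.0, 60.0])   # scenario profits, assumed
      prob = np.array([0.5, 0.3, 0.2])         # scenario probabilities, assumed

      def utility(x, a=0.01):
          return 1.0 - np.exp(-a * x)          # risk-averse exponential utility

      expected_utility = np.sum(prob * utility(profit))
      entropy = -np.sum(prob * np.log2(prob))  # outcome dispersion (risk), bits
      print(f"expected utility = {expected_utility:.3f}, entropy = {entropy:.3f} bits")
      # A procurement strategy scores well when expected utility is high and
      # entropy is low, e.g. score = w1 * expected_utility - w2 * entropy.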

  4. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
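
    For a two-state-per-unit simplification, the Monte Carlo check of a series system takes only a few lines. Exponential failure and repair rates are assumed for compactness (part of the article's point is that simulation also handles non-exponential cases), and the closed-form steady-state availability validates the run:

      import numpy as np

      rng = np.random.default_rng(42)
      lam = np.array([1e-3, 2e-3])   # assumed failure rates (1/h), units 1 and 2
      mu = np.array([1e-1, 5e-2])    # assumed repair rates (1/h)

      def simulate(horizon=1e5):
          """Continuous-time simulation of two units in series."""
          t, up_time = 0.0, 0.0
          up = np.array([True, True])
          while t < horizon:
              rates = np.where(up, lam, mu)      # each unit's next-event rate
              dt = rng.exponential(1.0 / rates.sum())
              if up.all():                       # series system: up iff both up
                  up_time += min(dt, horizon - t)
              t += dt
              i = rng.choice(2, p=rates / rates.sum())
              up[i] = not up[i]                  # that unit fails or is repaired
          return up_time / horizon

      analytic = np.prod(mu / (lam + mu))        # steady-state series availability
      print(f"simulated {simulate():.4f} vs analytic {analytic:.4f}")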

  5. Business Model Innovation for Local Energy Management: A Perspective from Swiss Utilities

    Energy Technology Data Exchange (ETDEWEB)

    Facchinetti, Emanuele, E-mail: emanuele.facchinetti@hslu.ch [Lucerne Competence Center for Energy Research, Lucerne University of Applied Science and Arts, Horw (Switzerland); Eid, Cherrelle [Faculty of Technology, Policy and Management, Delft University of Technology, Delft (Netherlands); Bollinger, Andrew [Urban Energy Systems Laboratory, EMPA, Dübendorf (Switzerland); Sulzer, Sabine [Lucerne Competence Center for Energy Research, Lucerne University of Applied Science and Arts, Horw (Switzerland)

    2016-08-04

    The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing potential local energy management (LEM) stakeholders and policy makers with a conceptual framework guiding LEM business model innovation. The main determinants characterizing LEM concepts and impacting their business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the LEM business model solution space is analyzed based on semi-structured interviews with managers of Swiss utility companies. The collected managers’ preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to LEM.

  6. Business model innovation for Local Energy Management: a perspective from Swiss utilities

    Directory of Open Access Journals (Sweden)

    Emanuele Facchinetti

    2016-08-01

    The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing potential Local Energy Management stakeholders and policy makers with a conceptual framework guiding Local Energy Management business model innovation. The main determinants characterizing Local Energy Management concepts and impacting their business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the Local Energy Management business model solution space is analyzed based on semi-structured interviews with managers of Swiss utility companies. The collected managers’ preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to Local Energy Management.

  7. Business Model Innovation for Local Energy Management: A Perspective from Swiss Utilities

    International Nuclear Information System (INIS)

    Facchinetti, Emanuele; Eid, Cherrelle; Bollinger, Andrew; Sulzer, Sabine

    2016-01-01

    The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing potential local energy management (LEM) stakeholders and policy makers with a conceptual framework guiding LEM business model innovation. The main determinants characterizing LEM concepts and impacting their business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the LEM business model solution space is analyzed based on semi-structured interviews with managers of Swiss utility companies. The collected managers’ preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to LEM.

  8. Modeling and optimization of a utility system containing multiple extractions steam turbines

    International Nuclear Information System (INIS)

    Luo, Xianglong; Zhang, Bingjian; Chen, Ying; Mo, Songping

    2011-01-01

    Complex turbines with multiple controlled and/or uncontrolled extractions are widely used in the processing industry and in cogeneration plants to provide steam at different levels, electric power, and driving power. To characterize their thermodynamic behavior under varying conditions, nonlinear mathematical models are developed based on energy balances, thermodynamic principles, and semi-empirical equations. First, the complex turbine is decomposed at the controlled extraction stages into several simple turbines modeled in series. The turbine hardware model (THM) concept is applied to predict the isentropic efficiency of the decomposed simple turbines, and Stodola's formulation is used to simulate the uncontrolled extraction steam parameters. The thermodynamic properties of steam and water are regressed through linearization or piece-wise linearization. Second, the simulated results of the proposed model are compared with the data in the working condition diagram provided by the manufacturer over a wide range of operations. The simulation results deviate little from the diagram data; the maximum modeling error is 0.87% among the seven operation conditions compared. Last, the optimization model of a utility system containing multiple extraction turbines is established and a detailed case is analyzed. Compared with the conventional operation strategy, a maximum of 5.47% of the total operation cost is saved using the proposed optimization model. -- Highlights: → We develop a complete simulation model for steam turbines with multiple extractions. → We test the simulation model using the performance data of commercial turbines. → The simulation error of electric power generation is no more than 0.87%. → We establish a utility system operational optimization model. → The optimal operation scheme yields cost savings of 5.47%.
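
    Stodola's formulation, mentioned above for the uncontrolled extractions, reduces in its simplest "ellipse law" form to a square-root pressure relation. The design-point numbers below are hypothetical, and the usual inlet-temperature correction is omitted:

      import numpy as np

      def stodola_flow(p_in, p_out, m_d, p_in_d, p_out_d):
          """Off-design mass flow of a turbine section scaled from its design
          point via Stodola's ellipse law (temperature term omitted)."""
          return m_d * np.sqrt((p_in**2 - p_out**2) / (p_in_d**2 - p_out_d**2))

      # Assumed design point: 40 t/h through a section expanding 10 bar -> 4 bar.
      m = stodola_flow(p_in=9.0, p_out=4.0, m_d=40.0, p_in_d=10.0, p_out_d=4.0)
      print(f"off-design flow = {m:.1f} t/h")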

  9. Flavor release measurement from gum model system.

    Science.gov (United States)

    Ovejero-López, Isabel; Haahr, Anne-Mette; van den Berg, Frans; Bredie, Wender L P

    2004-12-29

    Flavor release from a mint-flavored chewing gum model system was measured by atmospheric pressure chemical ionization mass spectrometry (APCI-MS) and sensory time-intensity (TI). A data analysis method for handling the individual curves from both methods is presented. The APCI-MS data are ratio-scaled using the signal from acetone in the breath of subjects. Next, APCI-MS and sensory TI curves are smoothed by low-pass filtering. Principal component analysis of the individual curves is used to display graphically the product differentiation by APCI-MS or TI signals. It is shown that differences in gum composition can be measured by both instrumental and sensory techniques, providing comparable information. The peppermint oil level (0.5-2% w/w) in the gum influenced both the retronasal concentration and the perceived peppermint flavor. The effect of the sweeteners (sorbitol or xylitol) is less apparent. Sensory adaptation and sensitivity differences of human perception versus APCI-MS detection might explain the divergence between the two dynamic measurement methods.
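
    The curve-handling pipeline described above (smooth each individual curve, then map the set of curves into a low-dimensional space) can be sketched with synthetic rise-and-decay curves; a moving-average filter stands in for the low-pass filter used, and none of the numbers are the study's data:

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      t = np.linspace(0, 10, 200)                     # time (min)
      # Twelve synthetic assessor curves: rise-and-decay shape plus noise.
      curves = np.array([rng.uniform(0.8, 1.2) * t * np.exp(-t / rng.uniform(2, 4))
                         + rng.normal(0, 0.02, t.size) for _ in range(12)])

      kernel = np.ones(9) / 9                         # moving average as low-pass
      smoothed = np.array([np.convolve(c, kernel, mode="same") for c in curves])

      scores = PCA(n_components=2).fit_transform(smoothed)   # one point per curve
      print(scores.round(2))   # products/assessors can then be compared in 2-D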

  10. Inservice inspection a preventative measure for a utility to improve availability

    International Nuclear Information System (INIS)

    Hausermann, R.

    1985-01-01

    The wish for everlasting good performance of a machine is as old as the dream of inventing the perpetuum mobile. In the real technical world, the condition of the material used in a structure or machine depends greatly on the fabrication process, the environment, and the loads experienced during the equipment's period of use. This paper discusses how a well-balanced maintenance strategy coupled with NDT (Non-Destructive Testing) can achieve both the utility's goal of high plant availability and the authority's goal of high safety readiness. The NDT aspect is discussed in more detail.

  11. Measurement of the thyroid's iodine absorption utilizing a minimal ¹³¹I dose

    Energy Technology Data Exchange (ETDEWEB)

    Paz A, B.; Villegas A, J.; Delgado B, C. (Universidad Nacional San Agustin de Arequipa (Peru). Departamento de Bioquimica)

    1981-03-01

    We utilized a minimal dose of ¹³¹I, thus limiting the contact of the thyroid tissue with the isotopic material, to determine the absorption of ¹³¹I by the thyroid from 6 to 24 hours in 90 pupils of the locality of Arequipa. The average rates of absorption at 6 and 24 hours are 24.15% and 35.42%, respectively, with standard deviations of 6.93% and 9.61%. No significant differences were found between these results and those of adults in any of the tests undertaken.

  12. The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.

    Science.gov (United States)

    Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot

    2006-05-01

    Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model, the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU that emerged from an extensive multidisciplinary review of the literature include leadership, emotional intelligence, person, message, empowered workplace and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development and suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.

  13. Capacitor Voltages Measurement and Balancing in Flying Capacitor Multilevel Converters Utilizing a Single Voltage Sensor

    DEFF Research Database (Denmark)

    Farivar, Glen; Ghias, Amer M. Y. M.; Hredzak, Branislav

    2017-01-01

    This paper proposes a new method for measuring capacitor voltages in multilevel flying capacitor (FC) converters that requires only one voltage sensor per phase leg. Multiple dc voltage sensors traditionally used to measure the capacitor voltages are replaced with a single voltage sensor at the ac side of the phase leg. The proposed method is subsequently used to balance the capacitor voltages using only the measured ac voltage. The operation of the proposed measurement and balancing method is independent of the number of converter levels. Experimental results are presented for a five-level FC…

  14. A Proposed Conceptual Model to Measure Unwarranted Practice Variation

    National Research Council Canada - National Science Library

    Barr, Andrew M

    2007-01-01

    .... Employing a unit of analysis of the U.S. Army healthcare system and utilizing research by Wennberg and the Institute of Medicine, a model describing healthcare quality in terms of unwarranted practice variation and healthcare outcomes...

  15. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tom, Nathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-03

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  16. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar; Tom, Nathan

    2017-09-01

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
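
    A minimal receding-horizon sketch of the multiterm idea (maximize absorbed power while penalizing actuator loads) on an invented one-degree-of-freedom buoy. This is not the report's WEC formulation: the dynamics, weights and bounds are all hypothetical, and the wave-force forecast is simply taken as given:

      import numpy as np
      from scipy.optimize import minimize

      # Toy heaving body: m*x'' = F_wave + u - b*x' - k*x (assumed values).
      m, b, k = 1000.0, 200.0, 5000.0
      dt, H = 0.1, 20                              # time step (s), horizon length
      F_wave = 800.0 * np.sin(2 * np.pi * 0.1 * np.arange(H) * dt)  # forecast

      def rollout(u):
          """Simulate the horizon under PTO force sequence u; return velocities."""
          x, v, vs = 0.0, 0.0, []
          for i in range(H):
              a = (F_wave[i] + u[i] - b * v - k * x) / m
              v += a * dt
              x += v * dt
              vs.append(v)
          return np.array(vs)

      def objective(u, w_load=1e-4):
          vs = rollout(u)
          absorbed = -np.sum(u * vs) * dt           # energy extracted by the PTO
          return -absorbed + w_load * np.sum(u**2)  # trade power against loads

      res = minimize(objective, np.zeros(H), method="L-BFGS-B",
                     bounds=[(-2000.0, 2000.0)] * H)
      print("first PTO command of the horizon:", round(res.x[0], 1), "N")
      # In receding-horizon use only this first command is applied before
      # re-solving with an updated (e.g. Kalman/autoregressive) forecast.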

  17. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Directory of Open Access Journals (Sweden)

    Chaudhari Monica

    2012-07-01

    Background: About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods: Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002-2006. Dental and medical records from WDS and GH were linked for enrolees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results: We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all) and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36) (p < 0.005 for all). Conclusions: Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes.
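
    The hurdle structure itself (a binary "any use" part plus a count part among users) is compact to sketch. This is not the paper's matched-cohort analysis: a plain Poisson stands in for a zero-truncated count model, and the data and effect sizes are synthetic:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 2000
      diabetes = rng.integers(0, 2, n).astype(float)
      X = sm.add_constant(diabetes)

      # Hurdle part 1: any dental visit? (toy effect: diabetes lowers the odds)
      p_any = 1 / (1 + np.exp(-(1.0 - 0.3 * diabetes)))
      any_visit = rng.binomial(1, p_any)
      logit = sm.Logit(any_visit, X).fit(disp=0)

      # Hurdle part 2: visit counts among users only (all counts >= 1 here).
      visits = rng.poisson(np.exp(0.8 + 0.1 * diabetes)) + 1
      users = any_visit == 1
      count = sm.Poisson(visits[users], X[users]).fit(disp=0)

      print("odds ratio, any visit:  ", np.exp(logit.params[1]).round(2))
      print("rate ratio, visit count:", np.exp(count.params[1]).round(2))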

  18. A Review of Acculturation Measures and Their Utility in Studies Promoting Latino Health

    Science.gov (United States)

    Wallace, Phyllis M.; Pomery, Elizabeth A.; Latimer, Amy E.; Martinez, Josefa L.; Salovey, Peter

    2010-01-01

    The authors reviewed the acculturation literature with the goal of identifying measures used to assess acculturation in Hispanic populations in the context of studies of health knowledge, attitudes, and behavior change. Twenty-six acculturation measures were identified and summarized. As the Hispanic population continues to grow in the United…

  19. Psychosocial Predictors of Adverse Events in Heart Failure: The Utility of Multiple Measurements

    Science.gov (United States)

    2015-09-17

    [Record consists of extraction fragments from the thesis front matter and reference list. Recoverable information: Samantha Leigh Wronski, "Psychosocial Predictors of Adverse Events in Heart Failure: The Utility of Multiple Measurements", Master of Science in Medical Psychology thesis, Uniformed Services University of the Health Sciences, approved 17 September 2015.]

  20. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    Science.gov (United States)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

    Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc. are a common problem in environmental models including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem Thompson et al. (2006) introduced frequency dependent nudging where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging which makes the method more robust for non-linear biological models. Then we assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
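
    The contrast between conventional and frequency dependent nudging can be shown schematically. In the toy below, a running mean of the misfit stands in for the low-frequency band (the authors use a proper band-pass formulation around the mean and annual cycle), and the "model", its bias and its 20-day eddies are all invented:

      import numpy as np

      dt, n = 1.0, 3650                                  # daily steps, ten years
      t = np.arange(n) * dt
      obs = 10 + 3 * np.sin(2 * np.pi * t / 365)         # observations: mean + seasonal

      def run(gamma, freq_dependent, window=90):
          """Toy model with a +4 bias and its own fast eddies, nudged to obs."""
          x, hist, out = obs[0] + 4.0, [], []
          for i in range(n):
              eddy = np.sin(2 * np.pi * t[i] / 20)          # model's fast variability
              tendency = 0.2 * ((obs[i] + 4.0 + eddy) - x)  # biased internal dynamics
              misfit = x - obs[i]
              hist.append(misfit)
              if freq_dependent:
                  misfit = np.mean(hist[-window:])          # nudge slow misfit only
              x = x + dt * (tendency - gamma * misfit)
              out.append(x)
          return np.array(out)

      for fd in (False, True):
          x = run(gamma=0.2, freq_dependent=fd)
          hf = x - np.convolve(x, np.ones(61) / 61, mode="same")  # fast component
          print(f"freq_dependent={fd}: mean bias {np.mean(x - obs):+.2f}, "
                f"fast-variability std {hf[100:-100].std():.2f}")
      # Conventional nudging damps the model's fast variability along with the
      # bias; nudging only the slow misfit reduces bias comparably while leaving
      # the fast dynamics essentially untouched.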

  1. Energy Utilization Evaluation of Carbon Performance in Public Projects by FAHP and Cloud Model

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-07-01

    Full Text Available With the low-carbon economy advocated all over the world, how to use energy reasonably and efficiently in public projects has become a major issue. It has raised several open questions, including which method is more reasonable for evaluating the energy utilization of carbon performance in public projects when the evaluation information is fuzzy, whether an indicator system can be constructed, and which indicators have more impact on carbon performance. This article aims to solve these problems. We propose a new carbon performance evaluation system for energy utilization based on project processes (design, construction, and operation). Fuzzy Analytic Hierarchy Process (FAHP) is used to aggregate the indicator weights, and a cloud model is incorporated when the indicator values are fuzzy. Finally, we apply our indicator system to a case study of the Xiangjiang River project in China, which demonstrates the applicability and efficiency of our method.

  2. Brain metabolism in autism. Resting cerebral glucose utilization rates as measured with positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Rumsey, J.M.; Duara, R.; Grady, C.; Rapoport, J.L.; Margolin, R.A.; Rapoport, S.I.; Cutler, N.R.

    1985-05-01

    The cerebral metabolic rate for glucose was studied in ten men (mean age = 26 years) with well-documented histories of infantile autism and in 15 age-matched normal male controls using positron emission tomography and (F-18) 2-fluoro-2-deoxy-D-glucose. Positron emission tomography was completed during rest, with reduced visual and auditory stimulation. While the autistic group as a whole showed significantly elevated glucose utilization in widespread regions of the brain, there was considerable overlap between the two groups. No brain region showed a reduced metabolic rate in the autistic group. Significantly more autistic, as compared with control, subjects showed extreme relative metabolic rates (ratios of regional metabolic rates to whole brain rates and asymmetries) in one or more brain regions.

  3. Brain metabolism in autism. Resting cerebral glucose utilization rates as measured with positron emission tomography

    International Nuclear Information System (INIS)

    Rumsey, J.M.; Duara, R.; Grady, C.; Rapoport, J.L.; Margolin, R.A.; Rapoport, S.I.; Cutler, N.R.

    1985-01-01

    The cerebral metabolic rate for glucose was studied in ten men (mean age = 26 years) with well-documented histories of infantile autism and in 15 age-matched normal male controls using positron emission tomography and (F-18) 2-fluoro-2-deoxy-D-glucose. Positron emission tomography was completed during rest, with reduced visual and auditory stimulation. While the autistic group as a whole showed significantly elevated glucose utilization in widespread regions of the brain, there was considerable overlap between the two groups. No brain region showed a reduced metabolic rate in the autistic group. Significantly more autistic, as compared with control, subjects showed extreme relative metabolic rates (ratios of regional metabolic rates to whole brain rates and asymmetries) in one or more brain regions

  4. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kukkonen, I.; Suppala, I. [Geological Survey of Finland, Espoo (Finland)

    1999-01-01

    In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into 'active' drill hole methods and 'passive' indirect methods utilizing other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat-producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite perfectly conducting cylindrical probe in a homogeneous medium, and the solution for a line source of heat in a medium. Using both forward and inverse modelling, a theoretical measurement system was analysed with the aim of finding the basic parameters for construction of a practical measurement system. The results indicate that thermal conductivity can be relatively well estimated with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, the three-dimensional conduction effects were investigated to find out the magnitude of the axial 'leak' of heat in long-duration experiments. The radius of influence of a drill hole measurement is mainly dependent on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: the probe length 1.5-2 m, heating power 5-20 W/m, temperature recording with 5-7 sensors placed along the probe, and

  5. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements

    International Nuclear Information System (INIS)

    Kukkonen, I.; Suppala, I.

    1999-01-01

    In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into 'active' drill hole methods and 'passive' indirect methods utilizing other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat-producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite perfectly conducting cylindrical probe in a homogeneous medium, and the solution for a line source of heat in a medium. Using both forward and inverse modelling, a theoretical measurement system was analysed with the aim of finding the basic parameters for construction of a practical measurement system. The results indicate that thermal conductivity can be relatively well estimated with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, the three-dimensional conduction effects were investigated to find out the magnitude of the axial 'leak' of heat in long-duration experiments. The radius of influence of a drill hole measurement is mainly dependent on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: the probe length 1.5-2 m, heating power 5-20 W/m, temperature recording with 5-7 sensors placed along the probe, and

  6. Spring constant measurement using a MEMS force and displacement sensor utilizing paralleled piezoresistive cantilevers

    Science.gov (United States)

    Kohyama, Sumihiro; Takahashi, Hidetoshi; Yoshida, Satoru; Onoe, Hiroaki; Hirayama-Shoji, Kayoko; Tsukagoshi, Takuya; Takahata, Tomoyuki; Shimoyama, Isao

    2018-04-01

    This paper reports on a method to measure a spring constant on site using a micro electro mechanical systems (MEMS) force and displacement sensor. The proposed sensor consists of a force-sensing cantilever and a displacement-sensing cantilever. Each cantilever is composed of two beams with a piezoresistor on the sidewall for measuring the in-plane lateral directional force and displacement. The force resolution and displacement resolution of the fabricated sensor were less than 0.8 µN and 0.1 µm, respectively. We measured the spring constants of two types of hydrogel microparticles to demonstrate the effectiveness of the proposed sensor, obtaining values of approximately 4.3 N/m and 15.1 N/m. The results indicated that the proposed sensor is effective for on-site spring constant measurement.
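
    A spring constant from such paired force-displacement readings is just the slope of F versus x; the sketch below uses invented readings chosen to land near the ~4.3 N/m particle (note that µN/µm is numerically identical to N/m).

    ```python
    # Least-squares slope of force vs. displacement gives the spring constant.
    import numpy as np

    force_uN = np.array([1.1, 2.0, 3.2, 4.1, 5.0])              # measured force, uN
    displacement_um = np.array([0.26, 0.47, 0.74, 0.95, 1.16])  # indentation, um

    k, intercept = np.polyfit(displacement_um, force_uN, 1)     # uN/um == N/m
    print(f"spring constant ~ {k:.1f} N/m")
    ```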

  7. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938

  8. Utilization of an electronic portal imaging device for measurement of dynamic wedge data

    International Nuclear Information System (INIS)

    Elder, Eric S.; Miner, Marc S.; Butker, Elizabeth K.; Sutton, Danny S.; Davis, Lawrence W.

    1996-01-01

    Purpose/Objective: Due to the motion of the collimator during dynamic wedge treatments, the conventional method of collecting comprehensive wedge data with a water tank and a scanning ionization chamber is obsolete. It is the objective of this work to demonstrate the use of an electronic portal imaging device (EPID) and software to accomplish this task. Materials and Methods: A Varian Clinac® 2300 C/D, equipped with a PortalVision™ EPID and Dosimetry Research Mode experimental software, was used to produce the radiation field. The Dosimetry Research Mode experimental software allows for a band of 10 of 256 high-voltage electrodes to be continuously read and averaged by the 256 electrometer electrodes. The file that is produced contains data relating to the integrated ionization at each of the 256 points, essentially the cross-plane beam profile. Software was developed using Microsoft C++ to reformat the data for import into a Microsoft Excel spreadsheet, allowing for easy mathematical manipulation and graphical display. Beam profiles were measured by the EPID with a 100 cm TSD for various field sizes. Each field size was measured open, steel wedged, and dynamically wedged. Scanning ionization chamber measurements performed in a water tank were compared to the open and steel-wedged fields. Ionization chamber measurements taken in a water tank were compared with the dynamically wedged measurements. For the EPID measurements the depth was varied using Gammex RMI Solid Water™ placed directly above the EPID sensitive volume. Bolus material was placed between the Solid Water™ and the EPID to avoid an air gap. Results: Comparison of EPID measurements with those from an ion chamber in a water tank showed a discrepancy of ∼5%. Scans were successfully obtained for open, steel wedged and dynamically wedged beams. Software has been developed to allow for easy graphical display of beam profiles. Conclusions: Measurement of dynamic wedge data proves to be easily

  9. A laboratory-calibrated model of coho salmon growth with utility for ecological analyses

    Science.gov (United States)

    Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.

    2018-01-01

    We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
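
    A minimal sketch of a ration-scaled Ratkowsky temperature response inside an allometric growth update; the functional form follows the Ratkowsky square-root model cited above, but every parameter value here is a placeholder rather than the meta-analysis estimate.

    ```python
    # Ratkowsky-type temperature term times an allometric mass term.
    import numpy as np

    def ratkowsky_growth(mass_g, temp_c, ration=1.0,
                         d=0.02, b=0.4, t_low=2.0, t_high=25.0, g=0.3):
        """Daily growth increment (g/day): Ratkowsky square-root temperature
        response, squared, times mass**b, scaled by food availability."""
        sqrt_rate = d * (temp_c - t_low) * (1 - np.exp(g * (temp_c - t_high)))
        return ration * sqrt_rate**2 * mass_g**b

    mass = 1.0                          # g, juvenile coho (assumed start mass)
    for _ in range(30):                 # one month at 12 C on full ration
        mass += ratkowsky_growth(mass, 12.0)
    print(f"mass after 30 days: {mass:.2f} g")
    ```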

  10. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

    This research attempts to evaluate emission measurement of on-road vehicles. In this regard, the research develops a failure-probability model of the vehicle emission test for passenger cars utilizing a binomial logit model. The model focuses on failure of the CO and HC emission tests for the gasoline car category and the opacity emission test for the diesel-fuel car category as dependent variables, with vehicle age, engine size, brand and type of the cars as independent variables. In order to imp...
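
    The abstract's failure-probability logit can be sketched as follows; the synthetic data and coefficients are invented, with vehicle age and engine size standing in for the study's covariates.

    ```python
    # Binomial logit of emission-test failure on synthetic vehicle data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 800
    age = rng.uniform(1, 20, n)                  # vehicle age, years
    engine = rng.uniform(1.0, 3.0, n)            # engine size, litres
    logit_p = -4.0 + 0.25 * age + 0.3 * engine   # assumed true coefficients
    fail = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(np.column_stack([age, engine]))
    model = sm.Logit(fail, X).fit(disp=0)
    print(model.params)      # failure log-odds per year of age / per litre
    ```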

  11. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    Science.gov (United States)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, guided by a combination of short hindcast simulations; such tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices for the reduction of large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context along with assessment in

  12. Do generic utility measures capture what is important to the quality of life of people with multiple sclerosis?

    Science.gov (United States)

    Kuspinar, Ayse; Mayo, Nancy E

    2013-04-25

    The three most widely used utility measures are the Health Utilities Index Mark 2 and 3 (HUI2 and HUI3), the EuroQol-5D (EQ-5D) and the Short-Form-6D (SF-6D). In line with guidelines for economic evaluation from agencies such as the National Institute for Health and Clinical Excellence (NICE) and the Canadian Agency for Drugs and Technologies in Health (CADTH), these measures are currently being used to evaluate the cost-effectiveness of different interventions in MS. However, the challenge of using such measures in people with a specific health condition, such as MS, is that they may not capture all of the domains that are impacted upon by the condition. If important domains are missing from the generic measures, the value derived will be higher than the real impact, creating invalid comparisons across interventions and populations. Therefore, the objective of this study is to estimate the extent to which generic utility measures capture important domains that are affected by MS. The available study population consisted of men and women who had been registered after 1994 in three participating MS clinics in Greater Montreal, Quebec, Canada. Subjects were first interviewed on an individualized measure of quality of life (QOL) called the Patient Generated Index (PGI). The domains identified with the PGI were then classified and grouped together using the World Health Organization's International Classification of Functioning, Disability and Health (ICF), and mapped onto the HUI2, HUI3, EQ-5D and SF-6D. A total of 185 persons with MS were interviewed on the PGI. The sample was relatively young (mean age 43) and predominantly female. Both men and women had mild disability, with a median Expanded Disability Status Scale (EDSS) score of 2. The top 10 domains that patients identified as most affected by their MS were: work (62%), fatigue (48%), sports (39%), social life (28%), relationships (23%), walking/mobility (22%), cognition (21%), balance (14%), housework (12

  13. Radiation damage studies on natural and synthetic rock salt utilizing measurements made during electron irradiation

    International Nuclear Information System (INIS)

    Swyler, K.J.; Levy, P.W.

    1977-01-01

    The numerous radiation damage effects which will occur in the rock salt surrounding radioactive waste disposal canisters are being investigated with unique apparatus for making optical and other measurements during 1 to 3 MeV electron irradiation. This equipment consists of a computer-controlled double-beam spectrophotometer which simultaneously records 256-point absorption and radioluminescence spectra, in either the 200 to 400 or 400 to 800 nm region, every 40 seconds. Most often the measurements commence as the irradiation is started and continue after it is terminated. This procedure provides information on the kinetics and other details of the damage formation process and, when the irradiation is terminated, on both the transient and stable damage components. The exposure rates may be varied between 10^2 or 10^3 to more than 10^8 rad per hour and the sample temperature maintained between 25 and 800 or 900 °C. Although this project was started recently, measurements have been made on synthetic NaCl and on natural rock salt from two disposal sites and two mines. Both unstrained and purposely strained samples have been used. Most recently, measurements at temperatures between 25 and 200 °C have been started. The few measurements completed to date indicate that the damage formation kinetics in natural rock salt are quite different from those observed in synthetic NaCl.

  14. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Design and construction of safety devices utilizing methods of measurement and control engineering

    Energy Technology Data Exchange (ETDEWEB)

    Greiner, B; Weidlich, S

    1982-08-01

    This article considers a proposed concept for the design and construction of measurement and control devices for the safety of chemical plants, with the aim of preventing danger to persons and the environment, as well as damage. Such measurement and control devices are generally employed when primary measures adopted for plant safety, such as safety valves, collection vessels, etc., are not applicable or are insufficient by themselves. The concept takes account of the new sheet no. 3 of the VDI/VDE code draft 2180 "Safety of chemical engineering plant" and proposes a further subdivision of class A into safety classes A0, A1, and A2. Overall, on the basis of the measures for raising the availability of measurement and control equipment presented in this article, it is possible to make a selection appropriate to the potential danger involved. The proposed procedure should not, however, be regarded as a rigid scheme but rather as leading to a systematic view and as supporting decisions resting on sound operating experience.

  16. Exposure to electromagnetic fields from smart utility meters in GB; part I) laboratory measurements.

    Science.gov (United States)

    Peyman, Azadeh; Addison, Darren; Mee, Terry; Goiceanu, Cristian; Maslanyj, Myron; Mann, Simon

    2017-05-01

    Laboratory measurements of electric fields have been carried out around examples of smart meter devices used in Great Britain. The aim was to quantify exposure of people to radiofrequency signals emitted from smart meter devices operating at 2.4 GHz, and then to compare this with international (ICNIRP) health-related guidelines and with exposures from other telecommunication sources such as mobile phones and Wi-Fi devices. The angular distribution of the electric fields from a sample of 39 smart meter devices was measured in a controlled laboratory environment. The angular direction where the power density was greatest was identified and the equivalent isotropically radiated power was determined in the same direction. Finally, measurements were carried out as a function of distance at the angles where maximum field strengths were recorded around each device. The maximum equivalent power density measured during transmission around smart meter devices at 0.5 m and beyond was 15 mW/m², with an estimated maximum duty factor of only 1%. One outlier device had a maximum power density of 91 mW/m². All power density measurements reported in this study were well below the 10 W/m² ICNIRP reference level for the general public. Bioelectromagnetics. 2017;38:280-294. © 2017 Crown copyright. BIOELECTROMAGNETICS © 2017 Wiley Periodicals, Inc.
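
    Assuming far-field behaviour, the reported power densities can be sanity-checked from an EIRP via S = EIRP / (4πd²), scaled by the duty factor; the 50 mW EIRP below is an assumed figure chosen only because it reproduces roughly 15 mW/m² at 0.5 m.

    ```python
    # Far-field power density from an assumed EIRP, with duty-factor scaling.
    import math

    eirp_w = 0.05      # hypothetical smart-meter EIRP at 2.4 GHz, watts
    duty = 0.01        # ~1% transmit duty factor from the study
    for d in (0.5, 1.0, 2.0):
        s = eirp_w / (4 * math.pi * d**2)      # W/m^2 at distance d
        print(f"{d} m: peak {s*1e3:.1f} mW/m^2, "
              f"duty-averaged {s*duty*1e3:.3f} mW/m^2")
    ```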

  17. Utilization of a pressure sensor guidewire to measure bileaflet mechanical valve gradients: hemodynamic and echocardiographic sequelae.

    Science.gov (United States)

    Doorey, Andrew J; Gakhal, Mandip; Pasquale, Michael J

    2006-04-01

    Suspected prosthetic valve dysfunction is a difficult clinical problem, because of the high risk of repeat valvular surgery. Echocardiographic measurements of prosthetic valvular dysfunction can be misleading, especially with bileaflet valves. Direct measurement of trans-valvular gradients is problematic because of potentially serious catheter entrapment issues. We report a case in which a high-fidelity pressure sensor angioplasty guidewire was used to cross prosthetic mitral and aortic valves in a patient, with hemodynamic and echocardiographic assessment. This technique was safe and effective, refuting the inaccurate non-invasive tests that over-estimated the aortic valvular gradient.

  18. Coevaporation of Y, BaF2, and Cu utilizing a quadrupole mass spectrometer as a rate measuring probe

    International Nuclear Information System (INIS)

    Hudner, J.; Oestling, M.; Ohlsen, H.; Stolt, L.

    1991-01-01

    An ultrahigh vacuum coevaporator equipped with three sources for the preparation of Y-BaF2-Cu-O thin films is described. Evaporation rates of Y, BaF2, and Cu were controlled using a quadrupole mass spectrometer operating in a multiplexed mode. To evaluate the method, depositions were performed using different source configurations and evaporation rates. Utilizing Rutherford backscattering spectrometry, absolute values of the actual evaporation rates were determined. It was observed that the mass-spectrometer sensitivity is highest for Y, followed by BaF2 (BaF+ is the measured ion) and Cu. A partial pressure of oxygen during evaporation of Y, BaF2, and Cu affected mainly the rate of Y. It is shown that the mass spectrometer can be utilized to precisely control the film composition.

  19. Full utilization of silt density index (SDI) measurements for seawater pre-treatment

    KAUST Repository

    Wei, Chunhai; Laborie, Stéphanie; Ben Aïm, Roger M.; Amy, Gary L.

    2012-01-01

    according to the standard protocol for SDI measurement, in which two kinds of 0.45μm membranes of different material and seawater samples from the Mediterranean including raw seawater and seawater pre-treated by coagulation followed by sand filtration (CSF

  20. Utility of telomere length measurements for age determination of humpback whales

    NARCIS (Netherlands)

    Olsen, Morten T.; Robbins, Jooke; Bérubé, Martine; Rew, Mary Beth; Palsboll, Per

    2014-01-01

    This study examines the applicability of telomere length measurements by quantitative PCR as a tool for minimally invasive age determination of free-ranging cetaceans. We analysed telomere length in skin samples from 28 North Atlantic humpback whales (Megaptera novaeangliae), ranging from 0 to 26 years of age.

  1. UTILIZING THE PAKS METHOD FOR MEASURING ACROLEIN AND OTHER ALDEHYDES IN DEARS

    Science.gov (United States)

    Acrolein is a hazardous air pollutant of high priority due to its high irritation potency and other potential adverse health effects. However, a reliable method is currently unavailable for measuring airborne acrolein at typical environmental levels. In the Detroit Exposure and A...

  2. High magnetic field measurement utilizing Faraday rotation in SF11 glass in simplified diagnostics.

    Science.gov (United States)

    Dey, Premananda; Shukla, Rohit; Venkateswarlu, D

    2017-04-01

    With the commercialization of powerful solid-state lasers as pointer lasers, it is becoming simpler nowadays to launch and receive polarized light in free space for polarimetric applications. Additionally, because of the high power of such laser diodes, the alignment of the received light on the small sensor area of a photodiode with a high-bandwidth response is also greatly simplified. A plastic sheet polarizer taken from the spectacles of a 3D television (commercially available) is simply implemented as an analyzer before the photo-receiver. SF11 glass is used as the magneto-optic modulating medium for the measurement of the magnetic field. A magnetic field of magnitude more than 8 T, generated by a solenoid, has been measured using this simple assembly. A Verdet constant of 12.46 rad/T-m was obtained at a wavelength of 672 nm for the SF11 glass. The complete measurement system is a cost-effective solution.
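
    The Verdet constant comes straight from the Faraday rotation relation θ = V·B·L. The rotation angle and sample length below are assumed values picked to be consistent with the quoted 12.46 rad/T-m at the reported 8 T field.

    ```python
    # Verdet constant from a Faraday rotation measurement: V = theta / (B * L).
    theta_rad = 0.997    # assumed measured polarization rotation, rad
    b_tesla = 8.0        # peak solenoid field from the abstract
    l_m = 0.01           # assumed SF11 sample length, 10 mm
    verdet = theta_rad / (b_tesla * l_m)
    print(f"Verdet constant ~ {verdet:.2f} rad/(T*m)")   # ~12.46
    ```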

  3. An Information Theoretic Approach for Measuring Data Discovery and Utilization During Analytical and Decision Making Processes

    Science.gov (United States)

    2015-07-31

    and make the expected decision outcomes. The scenario is based around a scripted storyboard where an organized crime network is operating in a city to...interdicted by law enforcement to disrupt the network. The scenario storyboard was used to develop a probabilistic vehicle traffic model in order to

  4. Model Predictive Control for Integrating Traffic Control Measures

    NARCIS (Netherlands)

    Hegyi, A.

    2004-01-01

    Dynamic traffic control measures, such as ramp metering and dynamic speed limits, can be used to better utilize the available road capacity. Due to the increasing traffic volumes and the increasing number of traffic jams the interaction between the control measures has increased such that local

  5. Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies

    Directory of Open Access Journals (Sweden)

    José Pinto Casquilho

    2017-02-01

    Full Text Available The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered as a potential safeguard to the provision of services and externalities not accounted in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies respectively incorporating rarity factors associated to Gini-Simpson and Shannon measures. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions relative to the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using Shannon weighted entropy and a square root utility function, corresponding to a risk-averse behavior associated to the precautionary principle linked to safeguarding landscape diversity, anchoring for ecosystem services provision and other externalities. Further developments are suggested, mainly those relative to the hypothesis that the decision models here outlined could be used to revisit the stability-complexity debate in the field of ecological studies.
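
    One plausible reading of the objectives described above, in miniature: a rarity-weighted (1 − p) Shannon entropy as the diversity term and a square-root (risk-averse) utility of economic value. The proportions, values, and exact weighting are illustrative assumptions, not the paper's specification.

    ```python
    # Toy weighted-entropy objective for a landscape composition.
    import numpy as np

    p = np.array([0.45, 0.30, 0.15, 0.10])   # land-cover proportions (assumed)
    v = np.array([1.0, 0.8, 0.5, 0.3])       # per-unit economic value (assumed)

    shannon_w = -np.sum((1 - p) * p * np.log(p))     # rarity-weighted Shannon
    gini_simpson_w = np.sum((1 - p) * p * (1 - p))   # rarity-weighted Gini-Simpson
    economic = np.sum(p * v)

    # risk-averse objective: square-root utility of value times diversity
    objective = np.sqrt(economic) * shannon_w
    print(shannon_w, gini_simpson_w, objective)
    ```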

  6. On the Path to SunShot. Utility Regulatory and Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Miller, John [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sigrin, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Reiter, Emerson [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cory, Karlynn [National Renewable Energy Lab. (NREL), Golden, CO (United States); McLaren, Joyce [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Darghouth, Naim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    Net-energy metering (NEM) has helped drive the rapid growth of distributed PV (DPV) but has raised concerns about electricity cost shifts, utility financial losses, and inefficient resource allocation. These concerns have motivated real and proposed reforms to utility regulatory and business models. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy's SunShot Initiative. Most of the reforms to date address NEM concerns by reducing the benefits provided to DPV customers and thus constraining DPV deployment. Eliminating NEM nationwide, by compensating exports of PV electricity at wholesale rather than retail rates, could cut cumulative DPV deployment by 20% in 2050 compared with a continuation of current policies. This would slow the PV cost reductions that arise from larger scale and market certainty. It could also thwart achievement of the SunShot deployment goals even if the initiative's cost targets are achieved. This undesirable prospect is stimulating the development of alternative reform strategies that address concerns about distributed PV compensation without inordinately harming PV economics and growth. These alternatives fall into the categories of facilitating higher-value DPV deployment, broadening customer access to solar, and aligning utility profits and earnings with DPV. Specific strategies include utility ownership and financing of DPV, community solar, distribution network operators, services-driven utilities, performance-based incentives, enhanced utility system planning, pricing structures that incentivize high-value DPV configurations, and decoupling and other ratemaking reforms that reduce regulatory lag. These approaches represent near- and long-term solutions for preserving the legacy of the SunShot Initiative.

  7. Stability of Teacher Value-Added Rankings across Measurement Model and Scaling Conditions

    Science.gov (United States)

    Hawley, Leslie R.; Bovaird, James A.; Wu, ChaoRong

    2017-01-01

    Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One issue includes the sensitivity of model results across different outcome measures. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the…

  8. Heat Loss Measurements in Buildings Utilizing a U-value Meter

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    Heating of buildings in Denmark accounts for approximately 40% of the entire national energy consumption. For this reason, a reduction of heat losses from building envelopes is of great importance in order to reach the Bologna CO2 emission reduction targets. Upgrading the energy performance of buildings is a topic of huge global interest these years. Not only is heating in the temperate and arctic regions important; air conditioning and mechanical ventilation in the tropical countries also contribute to an enormous energy consumption and corresponding CO2 emission. In order to establish the best basis for upgrading the energy performance, it is important to measure the heat losses at different locations on a building facade, in order to optimize the energy performance. The author has invented a U-value meter, enabling measurements of heat transfer coefficients. The meter has been used...
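
    In essence, a U-value meter divides the measured heat flux through the envelope by the indoor-outdoor temperature difference, U = q / (T_in - T_out); the readings below are illustrative.

    ```python
    # U-value (heat transfer coefficient) from flux and temperature readings.
    q_w_per_m2 = 8.4             # heat flux through the wall, W/m^2 (assumed)
    t_in_c, t_out_c = 20.0, 2.0  # indoor / outdoor temperatures, C (assumed)
    u_value = q_w_per_m2 / (t_in_c - t_out_c)
    print(f"U ~ {u_value:.2f} W/(m^2*K)")   # 0.47 here
    ```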

  9. Radon and radon daughter measurements and methods utilized by EPA's Eastern Environmental Radiation Facility

    International Nuclear Information System (INIS)

    Phillips, C.R.

    1977-01-01

    The Eastern Environmental Radiation Facility (EERF), Office of Radiation Programs, has the responsibility for conducting the Environmental Protection Agency's study of the radiological impact of the phosphate industry. Numerous measurements in structures constructed on land reclaimed from phosphate mining showed that working levels in these structures range from 0.001 to 0.9 WL. Sampling is performed by drawing air through a 0.8 micrometer pore size, 25 mm diameter filter at a flow rate of 10 to 15 liters/minute for 5 to 20 minutes, depending on the daughter levels anticipated. The detection system consists of a ruggedized silicon surface barrier detector (450 mm², 100 micrometer depletion) connected through an appropriate preamplifier-amplifier to a 1024-channel multichannel analyzer. Other measurement methods are also discussed.

  10. Utility of telomere length measurements for age determination of humpback whales

    Directory of Open Access Journals (Sweden)

    Morten Tange Olsen

    2014-12-01

    Full Text Available This study examines the applicability of telomere length measurements by quantitative PCR as a tool for minimally invasive age determination of free-ranging cetaceans. We analysed telomere length in skin samples from 28 North Atlantic humpback whales (Megaptera novaeangliae), ranging from 0 to 26 years of age. The results suggested a significant correlation between telomere length and age in humpback whales. However, telomere length was highly variable among individuals of similar age, suggesting that telomere length measured by quantitative PCR is an imprecise determinant of age in humpback whales. The observed variation in individual telomere length was found to be a function of both experimental and biological variability, with the latter perhaps reflecting patterns of inheritance, resource-allocation trade-offs, and stochasticity of the marine environment.

  11. Modeling and Analysis Compute Environments, Utilizing Virtualization Technology in the Climate and Earth Systems Science domain

    Science.gov (United States)

    Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.

    2010-12-01

    Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archiveable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.

  12. Evaluation of utility monitoring and preoperational hydrothermal modeling at three nuclear power plant sites

    International Nuclear Information System (INIS)

    Marmer, G.J.; Policastro, A.J.

    1977-01-01

    This paper evaluates the preoperational hydrothermal modeling and operational monitoring carried out by utilities at three nuclear-power-plant sites using once-through cooling. Our work was part of a larger study to assess the environmental impact of operating plants for the Nuclear Regulatory Commission (NRC) and the suitability of the NRC Environmental Technical Specifications (Tech Specs) as set up for these plants. The study revealed that the plume mappings at the Kewaunee, Zion, and Quad Cities sites were generally satisfactory in terms of delineating plume size and other characteristics. Unfortunately, monitoring was not carried out during the most critical periods, when the largest plume sizes would be expected. At Kewaunee and Zion, preoperational predictions using analytical models were found to be rather poor. At Kewaunee (surface discharge), the Pritchard Model underestimated plume size in the near field but grossly overestimated the plume's far-field extent. Moreover, lake-level variations affected plume dispersion, yet were not considered in preoperational predictions. At Zion (submerged discharge), the Pritchard Model was successful only in special, simple cases (single-unit operation, no stratification, no reversing currents, no recirculation). Due to neglect of the above-mentioned phenomena, the model underpredicted plume size. At Quad Cities (submerged discharge), the undistorted laboratory model predicted plume dispersion for low river flows. These low-flow predictions appear to be reasonable extrapolations of the field data acquired at higher flows.

  13. Utilization of the research and measurement reactor Braunschweig for neutron metrology

    International Nuclear Information System (INIS)

    Alberts, W.G.

    1982-01-01

    The objectives of the Physikalisch-Technische Bundesanstalt (PTB) with regard to neutron metrology are briefly described. The use of the PTB's Research and Measuring Reactor as a neutron source for metrological purposes is discussed. Reference neutron beams are described which serve as irradiation facilities for the calibration of detectors for radiation protection purposes in the frame of the legal metrology work of the PTB. (orig.) [de]

  14. Multidimensional inverse heat conduction problem: optimization of sensor locations and utilization of thermal-strain measurements

    International Nuclear Information System (INIS)

    Blanc, Gilles

    1996-01-01

    This work is devoted to the solution of the inverse multidimensional heat conduction problem. The first part is the determination of a methodology for determining the minimum number of sensors and the best sensor locations. The method is applied to a 2D problem, but the extension to 3D problems is quite obvious. This methodology is based on the study of the rate of representation. This new concept allows one to determine the quantity and the quality of the information obtained from the various sensors. The rate of representation is a useful tool for experimental design. It can be determined very quickly by the transposed matrix method. This approach was validated with an experimental set-up. The second part is the development of a method that uses thermal strain measurements instead of temperature measurements to estimate the unknown thermal boundary conditions. We showed that this new sensor has two advantages in comparison with classical temperature measurements: higher frequencies can be estimated, and a smaller number of sensors can be used for 2D problems. The main weakness is, presently, the fact that the method can only be applied to beams. The results obtained from the numerical simulations were validated by the analysis of experimental data obtained on an experimental set-up especially designed and built for this study. (author) [fr]

  15. INVESTIGATION OF QUANTIFICATION OF FLOOD CONTROL AND WATER UTILIZATION EFFECT OF RAINFALL INFILTRATION FACILITY BY USING WATER BALANCE ANALYSIS MODEL

    OpenAIRE

    文, 勇起; BUN, Yuki

    2013-01-01

    In recent years, much flood damage and drought attributable to urbanization have occurred. At present, infiltration facilities are suggested as a solution to these problems. Based on this background, the purpose of this study is the quantification of the flood control and water utilization effects of rainfall infiltration facilities using a water balance analysis model. Key Words: flood control, water utilization, rainfall infiltration facility

  16. Modeling and design of light powered biomimicry micropump utilizing transporter proteins

    Science.gov (United States)

    Liu, Jin; Sze, Tsun-Kay Jackie; Dutta, Prashanta

    2014-11-01

    The creation of compact micropumps to provide steady flow has been an on-going challenge in the field of microfluidics. We present a mathematical model for a micropump utilizing bacteriorhodopsin and sugar transporter proteins. This micropump utilizes transporter proteins as a method to drive fluid flow by converting light energy into chemical potential. The fluid flow through a microchannel is simulated using the Nernst-Planck, Navier-Stokes, and continuity equations. Numerical results show that the micropump is capable of generating usable pressure. Design parameters influencing the performance of the micropump are investigated, including membrane fraction, lipid proton permeability, illumination, and channel height. The results show that there is a substantial membrane-fraction region at which fluid flow is maximized. The use of lipids with low membrane proton permeability allows illumination to be used as a method to turn the pump on and off. This capability allows the micropump to be activated and shut off remotely without bulky support equipment. This modeling work provides new insights on mechanisms potentially useful for fluidic pumping in self-sustained bio-mimic microfluidic pumps. This work is supported in part by the National Science Foundation Grant CBET-1250107.

  17. Cost utility analysis of endoscopic biliary stent in unresectable hilar cholangiocarcinoma: decision analytic modeling approach.

    Science.gov (United States)

    Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln

    2014-01-01

    Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about its cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in unresectable complex (Bismuth type II-IV) HCA patients. A decision-analytic (Markov) model was used to evaluate the cost and quality-adjusted life years (QALYs) of endoscopic biliary drainage in unresectable HCA. Costs of treatment and utilities of each Markov state were retrieved from hospital charges and from unresectable HCA patients at a tertiary-care hospital in Thailand, respectively. Transition probabilities were derived from the international literature. Base-case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent. The incremental cost per additional QALY gained is 192,650 baht (US$ 6,318). From probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, or 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared to plastic stent drainage in unresectable complex HCA.
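
    The reported ICER is simply the cost difference divided by the QALY difference between the two stents; the cost and QALY figures below are invented, chosen only so the ratio reproduces the study's 192,650 baht/QALY base case.

    ```python
    # Incremental cost-effectiveness ratio: delta cost / delta QALY.
    cost_metal, cost_plastic = 119_530, 81_000   # baht, illustrative
    qaly_metal, qaly_plastic = 0.85, 0.65        # illustrative
    icer = (cost_metal - cost_plastic) / (qaly_metal - qaly_plastic)
    print(f"ICER ~ {icer:,.0f} baht/QALY")       # 192,650 here
    ```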

  18. Analytic model comparing the cost utility of TVT versus duloxetine in women with urinary stress incontinence.

    Science.gov (United States)

    Jacklin, Paul; Duckett, Jonathan; Renganathan, Arasee

    2010-08-01

    The purpose of this study was to assess the cost-utility of duloxetine versus tension-free vaginal tape (TVT) as a second-line treatment for urinary stress incontinence. A Markov model was used to compare cost-utility based on a 2-year follow-up period. Quality-adjusted life year (QALY) estimation was performed by assuming a disutility rate of 0.05. Under base-case assumptions, although duloxetine was the cheaper option, TVT gave a considerably higher QALY gain. When a longer follow-up period was considered, TVT had an incremental cost-effectiveness ratio (ICER) of £7,710 ($12,651) at 10 years. If the QALY gain from cure was 0.09, then the ICERs for duloxetine and TVT would both fall within the indicative National Institute for Health and Clinical Excellence willingness-to-pay threshold at 2 years, but TVT would be the cost-effective option, having extended dominance over duloxetine. This model suggests that TVT is a cost-effective treatment for stress incontinence.

  19. Utilization of building information modeling in infrastructure’s design and construction

    Science.gov (United States)

    Zak, Josef; Macadam, Helen

    2017-09-01

    Building Information Modeling (BIM) is a concept that has gained its place in the design, construction and maintenance of buildings in the Czech Republic during recent years. This paper describes the usage, applications and potential benefits and disadvantages connected with the implementation of BIM principles in the preparation and construction of infrastructure projects. Part of the paper describes the status of BIM implementation in the Czech Republic, with a review of several virtual design and construction practices there. Examples of best practice are presented from current infrastructure projects. The paper further summarizes experiences with new technologies gained from the application of BIM-related workflows. The focus is on BIM model utilization for machine control systems on site, quality assurance, quality management and construction management.

  20. Development of Nonlinear Flight Mechanical Model of High Aspect Ratio Light Utility Aircraft

    Science.gov (United States)

    Bahri, S.; Sasongko, R. A.

    2018-04-01

    The implementation of a Flight Control Law (FCL) in an aircraft Electronic Flight Control System (EFCS) aims to reduce pilot workload, and can also enhance control performance during missions that require long-endurance flight and high-accuracy maneuvers. In the development of an FCL, a quantitative representation of the aircraft dynamics is needed to describe the aircraft's dynamic characteristics and to serve as the basis of the FCL design. Hence, a six-degree-of-freedom nonlinear model of light utility aircraft dynamics, also called the nonlinear Flight Mechanical Model (FMM), is constructed. This paper shows the construction of the FMM from its mathematical formulation, the architecture design of the FMM, the trimming process and simulations. The verification of the FMM is done by analysis of aircraft behaviour in selected trimmed conditions.

  1. Utility of a human-mouse xenograft model and in vivo near-infrared fluorescent imaging for studying wound healing.

    Science.gov (United States)

    Shanmugam, Victoria K; Tassi, Elena; Schmidt, Marcel O; McNish, Sean; Baker, Stephen; Attinger, Christopher; Wang, Hong; Shara, Nawar; Wellstein, Anton

    2015-12-01

    To study the complex cellular interactions involved in wound healing, it is essential to have an animal model that adequately mimics the human wound microenvironment. Currently available murine models are limited because wound contraction introduces bias into wound surface area measurements. The purpose of this study was to demonstrate the utility of a human-mouse xenograft model for studying human wound healing. Normal human skin was harvested from elective abdominoplasty surgery, xenografted onto athymic nude (nu/nu) mice, and allowed to engraft for 3 months. The graft was then wounded using a 2-mm punch biopsy. Wounds were harvested on sequential days to allow tissue-based markers of wound healing to be followed sequentially. On the day of wound harvest, mice were injected with XenoLight RediJect cyclooxygenase-2 (COX-2) probe and imaged according to package instructions. Immunohistochemistry confirms that this human-mouse xenograft model is effective for studying human wound healing in vivo. Additionally, in vivo fluorescent imaging for inducible COX-2 demonstrated upregulation from baseline to day 4 (P = 0.03) with return to baseline levels by day 10, paralleling the reepithelialisation of the wound. This human-mouse xenograft model, combined with in vivo fluorescent imaging, provides a useful mechanism for studying molecular pathways of human wound healing. © 2013 The Authors. International Wound Journal © 2013 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  2. Measurement of ERP Utilization Level of Enterprises: The Sample of Province Aydın

    Directory of Open Access Journals (Sweden)

    Özel Sebetci

    2014-06-01

    Full Text Available The aim of this study is to measure the ERP usage level of enterprises in Aydın. Data were obtained from 83 enterprises in Aydın via questionnaires. Data analysis showed that the enterprises mostly had a high level of computer integration and production technology. However, it was found that these enterprises did not use ERP systems at high levels. Correlations between enterprise size (by number of employees), company revenue in 2012 and ERP usage levels were established by chi-square tests. Correlation analysis showed that there was a significant positive correlation between ERP characteristics and the strategic advantages attributed to ERP.
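
    The chi-square test of independence used here can be reproduced with scipy; the 3x3 contingency table below is fabricated (its counts merely sum to the 83 surveyed enterprises), so the statistic is illustrative only.

    ```python
    # Chi-square test: enterprise size vs. ERP usage level (fabricated counts).
    import numpy as np
    from scipy.stats import chi2_contingency

    #                 ERP: none  partial  full
    table = np.array([[22,        6,      2],    # small enterprises
                      [14,       12,      5],    # medium
                      [ 4,        8,     10]])   # large
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")
    ```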

  3. Measurement and modelling in anthropo-radiometry

    International Nuclear Information System (INIS)

    Carlan, Loic de

    2011-01-01

    In this HDR (accreditation to supervise research) report, the author gives an overview of his research activities, summarizes his research thesis (a feasibility study of an actinide measurement system for the lungs), and presents a research report on the different aspects of anthropo-radiometric measurement: context (principles, significance, sampling phantoms), development of digital phantoms (software presentation and validation), interface development and validation, application to actinide measurement in the lung, and taking biokinetic data into account in anthropo-radiometric measurement.

  4. Utilizing photon number parity measurements to demonstrate quantum computation with cat-states in a cavity

    Science.gov (United States)

    Petrenko, A.; Ofek, N.; Vlastakis, B.; Sun, L.; Leghtas, Z.; Heeres, R.; Sliwa, K. M.; Mirrahimi, M.; Jiang, L.; Devoret, M. H.; Schoelkopf, R. J.

    2015-03-01

    Realizing a working quantum computer requires overcoming the many challenges that come with coupling large numbers of qubits to perform logical operations. These include improving coherence times, achieving high gate fidelities, and correcting for the inevitable errors that will occur throughout the duration of an algorithm. While impressive progress has been made in all of these areas, the difficulty of combining these ingredients to demonstrate an error-protected logical qubit, composed of many physical qubits, still remains formidable. With its large Hilbert space, superior coherence properties, and single dominant error channel (single photon loss), a superconducting 3D resonator acting as a resource for a quantum memory offers a hardware-efficient alternative to multi-qubit codes [Leghtas et al., PRL 2013]. Here we build upon recent work on cat-state encoding [Vlastakis et al., Science 2013] and photon-parity jumps [Sun et al., 2014] by exploring the effects of sequential measurements on a cavity state. Employing a transmon qubit dispersively coupled to two superconducting resonators in a cQED architecture, we explore further the application of parity measurements to characterizing such a hybrid qubit/cat-state architecture. In so doing, we demonstrate the promise of integrating cat states as central constituents of future quantum codes.

  5. Smith-Purcell experiment utilizing a field-emitter array cathode: measurements of radiation

    International Nuclear Information System (INIS)

    Ishizuka, H.; Kawamura, Y.; Yokoo, K.; Shimawaki, H.; Hosono, A.

    2001-01-01

    Smith-Purcell (SP) radiation at wavelengths of 350-750 nm was produced in a tabletop experiment using a field-emitter array (FEA) cathode. The electron gun was 5 cm long, and a 25 mm × 25 mm holographic replica grating was placed behind the slit provided in the anode. A regulated DC power supply accelerated electron beams in excess of 10 μA up to 45 keV, while a small Van de Graaff generator accelerated smaller currents to higher energies. The grating had a 0.556 μm period, 30 deg. blaze and a 0.2 μm thick aluminum coating. Spectral characteristics of the radiation were measured both manually and automatically; in the latter case, the spectrometer was driven by a stepping motor to scan the wavelength, and AD-converted signals from a photomultiplier tube were processed by a personal computer. The measurement, made at 80 deg. relative to the electron beam, showed good agreement with theoretical wavelengths of the SP radiation. Diffraction orders were -2 and -3 for beam energies higher than 45 keV, -3 to -5 at 15-25 keV, and -2 to -4 in between. The experiment has thus provided evidence for the practical applicability of FEAs to compact radiation sources.
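
    The theoretical wavelengths referred to follow from the Smith-Purcell dispersion relation λ = (d/|n|)(1/β − cos θ); the sketch below evaluates it at the experiment's stated geometry (0.556 μm grating period, 80 deg. observation angle), as an illustrative cross-check rather than a reproduction of the authors' analysis.

    ```python
    import numpy as np

    def sp_wavelength(order, beam_kev, d_um=0.556, theta_deg=80.0):
        """Smith-Purcell wavelength: lambda = (d/|n|) * (1/beta - cos(theta))."""
        gamma = 1.0 + beam_kev / 511.0          # electron rest energy ~511 keV
        beta = np.sqrt(1.0 - 1.0 / gamma**2)
        return d_um / abs(order) * (1.0 / beta - np.cos(np.radians(theta_deg)))

    for n in (-2, -3):
        print(f"45 keV, order {n}: {1e3 * sp_wavelength(n, 45.0):.0f} nm")
    # Orders -2 and -3 at 45 keV land in the visible band (350-750 nm),
    # consistent with the spectral range reported in the experiment.
    ```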

  6. Smith-Purcell experiment utilizing a field-emitter array cathode measurements of radiation

    CERN Document Server

    Ishizuka, H; Yokoo, K; Shimawaki, H; Hosono, A

    2001-01-01

    Smith-Purcell (SP) radiation at wavelengths of 350-750 nm was produced in a tabletop experiment using a field-emitter array (FEA) cathode. The electron gun was 5 cm long, and a 25 mm × 25 mm holographic replica grating was placed behind the slit provided in the anode. A regulated DC power supply accelerated electron beams in excess of 10 μA up to 45 keV, while a small Van de Graaff generator accelerated smaller currents to higher energies. The grating had a 0.556 μm period, 30 deg. blaze and a 0.2 μm thick aluminum coating. Spectral characteristics of the radiation were measured both manually and automatically; in the latter case, the spectrometer was driven by a stepping motor to scan the wavelength, and AD-converted signals from a photomultiplier tube were processed by a personal computer. The measurement, made at 80 deg. relative to the electron beam, showed good agreement with theoretical wavelengths of the SP radiation. Diffraction orders were -2 and -3 for beam energies higher than 45 keV, -3 to -5 ...

  7. Cost/schedule performance measurement system utilized on the Fast Flux Test Facility project

    International Nuclear Information System (INIS)

    Brown, R.K.; Frost, R.A.; Zimmerman, F.M.

    1976-01-01

    An Earned Value-Integrated Cost/Schedule Performance Measurement System has been applied to a major nonmilitary nuclear design and construction project. This system is similar to the Department of Defense Cost/Schedule Performance Measurement System. The project is the Fast Flux Test Facility (a Fuels and Materials test reactor for the Liquid Metal Fast Breeder Reactor Program) being built at the Hanford Engineering Development Laboratory, Richland, Washington, by Westinghouse Hanford Company for the U.S. Energy Research and Development Administration. Because the project was well into the construction phase when the Earned Value System was being considered, it was decided that the principles of DOD's Cost/Schedule Control System Criteria would be applied to the extent possible, but no major changes in accounting practices or management systems were imposed. Implementation of this system enabled the following questions to be answered: For work performed, how do actual costs compare with the budget for that work? What is the impact of cost and schedule variances at an overall project level composed of different kinds of activities? Without the Earned Value system, these questions could be answered in a qualitative, subjective manner at best.

  8. Lack of utility of measuring serum bilirubin concentration in distinguishing perforation status of pediatric appendicitis.

    Science.gov (United States)

    Bonadio, William; Bruno, Santina; Attaway, David; Dharmar, Logesh; Tam, Derek; Homel, Peter

    2017-06-01

    Pediatric appendicitis is a common, potentially serious condition. Determining perforation status is crucial to planning effective management. The objective was to determine the efficacy of serum total bilirubin concentration (STBC) in distinguishing perforation status in children with appendicitis. Retrospective review of 257 children with appendicitis who received an abdominal CT scan and measurement of STBC. There were 109 with perforation vs 148 without perforation. Although elevated STBC was significantly more common in those with (36%) vs without perforation (22%), the mean difference in elevated values between groups (0.1 mg/dL) was clinically insignificant. Higher degrees of hyperbilirubinemia (>2 mg/dL) were rarely encountered (5%). Predictive values for elevated STBC in distinguishing perforation outcome were imprecise (sensitivity 38.5%, specificity 78.4%, PPV 56.8%, NPV 63.4%). ROC curve analysis of multiple clinical and other laboratory factors for predicting perforation status was not enhanced by adding the STBC variable. Specific analysis of those with perforated appendicitis and a percutaneously-drained intra-abdominal abscess that was culture-positive for Escherichia coli showed an identical rate of STBC elevation compared to all with perforation. The routine measurement of STBC does not accurately distinguish perforation status in children with appendicitis, nor discern the infecting organism in those with perforation and intra-abdominal abscess. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. A novel transition radiation detector utilizing superconducting microspheres for measuring the energy of relativistic high-energy charged particles

    International Nuclear Information System (INIS)

    Yuan, Luke C.L.; Chen, C.P.; Huang, C.Y.; Lee, S.C.; Waysand, G.; Perrier, P.; Limagne, D.; Jeudy, V.; Girard, T.

    2000-01-01

    A novel transition radiation detector (TRD) utilizing superheated superconducting microspheres of tin of 22-26, 27-32 and 32-38 μm in diameter, respectively, has been constructed which is capable of accurately measuring the energy of relativistic high-energy charged particles. The test was conducted in a high-energy electron beam facility at the CERN PS in the energy range of 1-10 GeV, showing an energy dependence of the TR X-ray photons produced and hence of the value γ = E/mc² of the charged particle.

  10. Mapping to Estimate Health-State Utility from Non-Preference-Based Outcome Measures: An ISPOR Good Practices for Outcomes Research Task Force Report.

    Science.gov (United States)

    Wailoo, Allan J; Hernandez-Alava, Monica; Manca, Andrea; Mejia, Aurelio; Ray, Joshua; Crawford, Bruce; Botteman, Marc; Busschbach, Jan

    2017-01-01

    Economic evaluation conducted in terms of cost per quality-adjusted life-year (QALY) provides information that decision makers find useful in many parts of the world. Ideally, clinical studies designed to assess the effectiveness of health technologies would include outcome measures that are directly linked to health utility to calculate QALYs. Often this does not happen, and even when it does, clinical studies may be insufficient for a cost-utility assessment. Mapping can solve this problem. It uses an additional data set to estimate the relationship between outcomes measured in clinical studies and health utility. This bridges the evidence gap between available evidence on the effect of a health technology in one metric and the requirement for decision makers to express it in a different one (QALYs). In 2014, ISPOR established a Good Practices for Outcomes Research Task Force for mapping studies. This task force report provides recommendations to analysts undertaking mapping studies, those that use the results in cost-utility analysis, and those that need to critically review such studies. The recommendations cover all areas of mapping practice: the selection of data sets for the mapping estimation, model selection and performance assessment, reporting standards, and the use of results including the appropriate reflection of variability and uncertainty. This report is unique because it takes an international perspective, is comprehensive in its coverage of the aspects of mapping practice, and reflects the current state of the art. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5×2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of the IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.

  12. Effects of atmospheric variability on energy utilization and conservation. [Space heating energy demand modeling; Program HEATLOAD

    Energy Technology Data Exchange (ETDEWEB)

    Reiter, E.R.; Johnson, G.R.; Somervell, W.L. Jr.; Sparling, E.W.; Dreiseitly, E.; Macdonald, B.C.; McGuirk, J.P.; Starr, A.M.

    1976-11-01

    Research conducted between 1 July 1975 and 31 October 1976 is reported. A "physical-adaptive" model of the space-conditioning demand for energy and its response to changes in weather regimes was developed. This model includes parameters pertaining to engineering factors of building construction, to weather-related factors, and to socio-economic factors. Preliminary testing of several components of the model on the city of Greeley, Colorado, yielded most encouraging results. Other components, especially those pertaining to socio-economic factors, are still under development. Expansion of model applications to different types of structures and larger regions is presently underway. A CRT-display model for energy demand within the conterminous United States also has passed preliminary tests. A major effort was expended to obtain disaggregated data on energy use from utility companies throughout the United States. The study of atmospheric variability revealed that the 22- to 26-day vacillation in the potential and kinetic energy modes of the Northern Hemisphere is related to the behavior of the planetary long-waves, and that the midwinter dip in zonal available potential energy is reflected in the development of blocking highs. Attempts to classify weather patterns over the eastern and central United States have proceeded satisfactorily to the point where testing of our method for longer time periods appears desirable.

  13. On the utility of land surface models for agricultural drought monitoring

    Directory of Open Access Journals (Sweden)

    W. T. Crow

    2012-09-01

    Full Text Available The lagged rank cross-correlation between model-derived root-zone soil moisture estimates and remotely sensed vegetation indices (VI) is examined between January 2000 and December 2010 to quantify the skill of various soil moisture models for agricultural drought monitoring. Examined modeling strategies range from a simple antecedent precipitation index to the application of modern land surface models (LSMs) based on complex water and energy balance formulations. A quasi-global evaluation of lagged VI/soil moisture cross-correlation suggests that, when globally averaged across the entire annual cycle, soil moisture estimates obtained from complex LSMs provide little added skill (<5% in relative terms) in anticipating variations in vegetation condition relative to a simplified water accounting procedure based solely on observed precipitation. However, larger amounts of added skill (5–15% in relative terms) can be identified when focusing exclusively on the extra-tropical growing season and/or utilizing soil moisture values acquired by averaging across a multi-model ensemble.
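
    A lagged rank cross-correlation of the kind used here can be sketched in a few lines; the series and the built-in two-month lead below are placeholders, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Placeholder monthly series standing in for root-zone soil moisture
    # and a remotely sensed vegetation index; the VI lags soil moisture
    # by two months by construction.
    soil_moisture = rng.normal(size=132)                    # 11 years of months
    vi = np.roll(soil_moisture, 2) + rng.normal(scale=0.5, size=132)

    def lagged_rank_xcorr(x, y, lag):
        """Spearman rank correlation between x(t) and y(t + lag)."""
        if lag > 0:
            x, y = x[:-lag], y[lag:]
        rho, p = spearmanr(x, y)
        return rho, p

    for lag in range(4):
        rho, p = lagged_rank_xcorr(soil_moisture, vi, lag)
        print(f"lag {lag} months: rho = {rho:+.2f} (p = {p:.3f})")
    ```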

  14. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Energy Technology Data Exchange (ETDEWEB)

    Yang, S.H.; Son, J.E.; Lee, S.D.; Cho, S.I.; Ashtiani-Araghi, A.; Rhee, J.Y.

    2016-11-01

    If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE. (Author)

  15. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Directory of Open Access Journals (Sweden)

    Seung-Hwan Yang

    2016-03-01

    Full Text Available If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE.

  16. Improved utilization of ADAS-cog assessment data through item response theory based pharmacometric modeling.

    Science.gov (United States)

    Ueckert, Sebastian; Plan, Elodie L; Ito, Kaori; Karlsson, Mats O; Corrigan, Brian; Hooker, Andrew C

    2014-08-01

    This work investigates improved utilization of ADAS-cog data (the primary outcome measure in trials of mild and moderate Alzheimer's disease (AD)) by combining pharmacometric modeling and item response theory (IRT). A baseline IRT model characterizing the ADAS-cog was built based on data from 2,744 individuals. Pharmacometric methods were used to extend the baseline IRT model to describe longitudinal ADAS-cog scores from an 18-month clinical study with 322 patients. The sensitivity of the ADAS-cog items in different patient populations, as well as the power to detect a drug effect relative to total-score-based methods, was assessed with the IRT-based model. IRT analysis was able to describe both total and item-level baseline ADAS-cog data. Longitudinal data were also well described. Differences in the information content of the item-level components could be quantitatively characterized and ranked for mild cognitive impairment and mild AD populations. Based on clinical trial simulations with a theoretical drug effect, the IRT method demonstrated a significantly higher power to detect a drug effect compared to the traditional method of analysis. A combined framework of IRT and pharmacometric modeling permits a more effective and precise analysis than total-score-based methods and therefore increases the value of ADAS-cog data.
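
    Item response theory ties each item's response probability to a latent severity; a minimal two-parameter logistic sketch is shown below, with hypothetical discrimination/difficulty values rather than the fitted ADAS-cog parameters.

    ```python
    import numpy as np

    def icc_2pl(theta, a, b):
        """Two-parameter logistic item characteristic curve: probability
        of a given response at latent ability/severity theta."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
        p = icc_2pl(theta, a, b)
        return a**2 * p * (1.0 - p)

    # Hypothetical items: (discrimination a, difficulty b).
    items = [(1.8, -1.0), (1.2, 0.0), (0.7, 1.5)]
    theta = np.linspace(-3, 3, 7)
    for a, b in items:
        print(f"a={a}, b={b}:", np.round(item_information(theta, a, b), 3))
    # Items carrying high information in a given theta range are the most
    # sensitive ones for that population (e.g., MCI vs mild AD), which is
    # the kind of ranking the IRT analysis above quantifies.
    ```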

  17. Brain in flames – animal models of psychosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Mattei D

    2015-05-01

    Full Text Available Daniele Mattei, Regina Schweibold, Susanne A Wolf (Department of Cellular Neuroscience, Max-Delbrueck-Center for Molecular Medicine, Berlin, Germany; Department of Neurosurgery, Helios Clinics, Berlin, Germany) Abstract: The neurodevelopmental hypothesis of schizophrenia posits that schizophrenia is a psychopathological condition resulting from aberrations in neurodevelopmental processes caused by a combination of environmental and genetic factors which proceed long before the onset of clinical symptoms. Many studies discuss an immunological component in the onset and progression of schizophrenia. We here review studies utilizing animal models of schizophrenia with manipulations of genetic, pharmacologic, and immunological origin. We focus on the immunological component to bridge the studies in terms of evaluation and treatment options for negative, positive, and cognitive symptoms. Throughout the review we link certain aspects of each model to the situation in human schizophrenic patients. In conclusion we suggest a combination of existing models to better represent the human situation. Moreover, we emphasize that animal models represent defined single or multiple symptoms or hallmarks of a given disease. Keywords: inflammation, schizophrenia, microglia, animal models

  18. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
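
    The core trick in chance-constrained programming is replacing a probabilistic constraint with a deterministic equivalent; the sketch below shows the standard single-level normal-distribution case with invented numbers, not the paper's bi-random formulation.

    ```python
    from scipy.stats import norm

    # Chance constraint: Pr(emissions <= cap) >= alpha, where the cap is
    # normally distributed with mean mu_b and standard deviation sigma_b.
    # Deterministic equivalent: emissions <= mu_b + sigma_b * Phi^{-1}(1 - alpha).
    mu_b, sigma_b = 100.0, 8.0      # hypothetical CO2 cap statistics
    alpha = 0.95                    # required satisfaction probability

    deterministic_cap = mu_b + sigma_b * norm.ppf(1.0 - alpha)
    print(f"emissions must stay below {deterministic_cap:.1f} units")
    # At alpha = 0.95 the usable cap shrinks below the mean cap, reflecting
    # the safety margin the chance constraint buys under uncertainty.
    ```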

  19. Dispersion modeling of accidental releases of toxic gases - Comparison of the models and their utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-04-01

    In the case of accidental release of hazardous gases into the atmosphere, emergency responders need a reliable and fast tool to assess the possible consequences and apply the optimal countermeasures. For hazard prediction and simulation of the hazard zones, a number of air dispersion models are available. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for displaying the results; they are easy to use and can operate fast and effectively during stress situations. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. There are also possibilities for directly coupling the models to automatic meteorological stations, in order to avoid uncertainties in the model output due to insufficient or incorrect meteorological data. Another key problem in coping with accidental toxic releases is the relatively wide spectrum of regulations and threshold values, like IDLH, ERPG, AEGL, MAK etc., and the different criteria for their application. Since the particular emergency responders and organizations require different regulations and values for their purposes, it is quite difficult to predict the individual hazard areas. A number of research studies and investigations have addressed the problem; in any case, the final decision is up to the authorities. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and

  20. Computational model of precision grip in Parkinson’s disease: A Utility based approach

    Directory of Open Access Journals (Sweden)

    Ankur eGupta

    2013-12-01

    Full Text Available We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Fellows et al., 1998; Ingvarsson et al., 1997). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest a contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals into decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning, with the significant difference that action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the precision grip results from normal subjects and PD patients accurately (Fellows et al., 1998; Ingvarsson et al., 1997). To our knowledge this is the first model of precision grip in PD conditions.
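
    Utility-based action selection of this general kind can be pictured as choosing actions by a risk-adjusted value rather than the mean value alone; the sketch below is a generic illustration with invented value estimates and weighting constants, not the paper's Basal Ganglia model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def risk_adjusted_utility(q_mean, q_var, risk_weight):
        """Utility = expected value minus a penalty on outcome spread.
        risk_weight > 0 models risk aversion; < 0 models risk seeking."""
        return q_mean - risk_weight * np.sqrt(q_var)

    def select_action(q_mean, q_var, risk_weight, beta=3.0):
        """Softmax selection over risk-adjusted utilities."""
        u = beta * risk_adjusted_utility(q_mean, q_var, risk_weight)
        p = np.exp(u - np.max(u))
        p /= p.sum()
        return rng.choice(len(p), p=p)

    # Three candidate grip-force levels with hypothetical value estimates.
    q_mean = np.array([0.6, 0.8, 0.9])
    q_var = np.array([0.01, 0.05, 0.20])
    for rw in (0.0, 1.0):        # risk-neutral vs risk-averse selection
        picks = [select_action(q_mean, q_var, rw) for _ in range(1000)]
        print(f"risk_weight={rw}: choice counts {np.bincount(picks, minlength=3)}")
    ```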

  1. Measures of metacognition on signal-detection theoretic models.

    Science.gov (United States)

    Barrett, Adam B; Dienes, Zoltan; Seth, Anil K

    2013-12-01

    Analyzing metacognition, specifically knowledge of accuracy of internal perceptual, memorial, or other knowledge states, is vital for many strands of psychology, including determining the accuracy of feelings of knowing and discriminating conscious from unconscious cognition. Quantifying metacognitive sensitivity is however more challenging than quantifying basic stimulus sensitivity. Under popular signal-detection theory (SDT) models for stimulus classification tasks, approaches based on Type II receiver-operating characteristic (ROC) curves or Type II d-prime risk confounding metacognition with response biases in either the Type I (classification) or Type II (metacognitive) tasks. A new approach introduces meta-d': The Type I d-prime that would have led to the observed Type II data had the subject used all the Type I information. Here, we (a) further establish the inconsistency of the Type II d-prime and ROC approaches with new explicit analyses of the standard SDT model and (b) analyze, for the first time, the behavior of meta-d' under nontrivial scenarios, such as when metacognitive judgments utilize enhanced or degraded versions of the Type I evidence. Analytically, meta-d' values typically reflect the underlying model well and are stable under changes in decision criteria; however, in relatively extreme cases, meta-d' can become unstable. We explore bias and variance of in-sample measurements of meta-d' and supply MATLAB code for estimation in general cases. Our results support meta-d' as a useful measure of metacognition and provide rigorous methodology for its application. Our recommendations are useful for any researchers interested in assessing metacognitive accuracy. PsycINFO Database Record (c) 2014 APA, all rights reserved.
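
    For readers unfamiliar with the SDT quantities involved, Type I d′ is computed from hit and false-alarm rates; the sketch below shows that base computation only (meta-d′ itself requires fitting the full model, as in the authors' MATLAB code, which is not reproduced here).

    ```python
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Type I sensitivity d' = z(hit rate) - z(false-alarm rate).
        A log-linear correction avoids infinite z-scores at rates of 0 or 1."""
        n_signal = hits + misses
        n_noise = false_alarms + correct_rejections
        hr = (hits + 0.5) / (n_signal + 1.0)
        far = (false_alarms + 0.5) / (n_noise + 1.0)
        return norm.ppf(hr) - norm.ppf(far)

    # Hypothetical counts from a yes/no detection task.
    print(f"d' = {d_prime(70, 30, 20, 80):.2f}")
    ```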

  2. Utilization of the Rutherford backscattering technique for precise measurements of thickness by an alternative procedure

    International Nuclear Information System (INIS)

    Chagas, E.F.

    1985-01-01

    This technique was used to determine the thickness of targets on very thick substrates, as this type of target is used a lot in nuclear physics, and especially in γ spectroscopy. The difficulties introduced in this case occur because of the appearance of a small peak on a very high continuous background, due to the fact that the atomic number of the substrate is bigger than that of the targets. These difficulties are overcome using an alternative procedure to determine precisely the energy loss of the beam while crossing the target in question. Targets of ⁵⁹Co, ⁴⁶,⁴⁸,⁵⁰Ti and ¹⁰B on substrates of Pd and Ta, with thicknesses between 30 μg/cm² and 500 μg/cm², were measured with a precision of 5%. The biggest sources of imprecision are the values of dE/dx. (Author) [pt

  3. Meteorological utilization of measurements of the artificial radioactivity on the air and precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Neuwirth, R

    1955-01-01

    German, French, and American measurements of rainfall and air activity are evaluated. For that purpose, trajectories from the experimental grounds for bomb tests in Nevada to Western Germany are drawn. By means of intermediate values, possibilities for testing the air paths, at first only sketched, are given. The so-called deposit spaces and meridional circulations, which are significant particularly in divergence regions, prove to be of especial importance. The mechanism of activation of precipitation is discussed. A connection between the activity of precipitation and air masses could only be found in individual cases. But it seems that semitropical air masses possess a higher specific activity in comparison with polar air masses.

  4. Differential Absorption Lidar (DIAL) Measurements of Atmospheric Water Vapor Utilizing Robotic Aircraft

    Science.gov (United States)

    Hoang, Ngoc; DeYoung, Russell J.; Prasad, Coorg R.; Laufer, Gabriel

    1998-01-01

    A new unpiloted air vehicle (UAV) based water vapor DIAL system will be described. This system is expected to offer lower operating costs, longer test durations and severe-weather capability. A new high-efficiency, compact, lightweight, diode-pumped, tunable Cr:LiSAF laser will be developed to meet the UAV's constraints on payload weight, physical size and cooling capacity. Similarly, a new receiver system using a single-mirror telescope and an avalanche photodiode (APD) will be developed. Projected UAV parameters are expected to allow operation at altitudes up to 20 km, an endurance of 24 hrs and a speed of 400 km/hr. At these conditions, measurements of water vapor with an uncertainty of 2-10%, a vertical resolution of 200 m and a horizontal resolution of 10 km will be possible.

  5. Radiation transmission type pipe wall thinning detection device and measuring instruments utilizing ionizing radiation

    International Nuclear Information System (INIS)

    Higashi, Yasuhiko

    2009-01-01

    We developed a device to detect the thinning of pipes through heat insulation in power plants and similar facilities, even while the plant is in operation. It is necessary to test many parts of many pipes for pipe wall thinning management, but this is difficult within the limited time of the routine test. The device consists of a detector and a radiation source, and can measure pipes (less than 500 mm in external diameter, less than 50 mm in thickness) with 1.6% reproducibility (in a few minutes' measurement), based on the attenuation rate. Operation is easy and effective without removing the heat insulation. We will expand this thinning detection system and contribute to the safety of the plant. (author)

  6. Measuring similarity between business process models

    NARCIS (Netherlands)

    Dongen, van B.F.; Dijkman, R.M.; Mendling, J.

    2007-01-01

    Quality aspects become increasingly important when business process modeling is used in a large-scale enterprise setting. In order to facilitate storage without redundancy and efficient retrieval of relevant process models in model databases, it is required to develop a theoretical understanding

  7. The emperor’s new measurement model

    NARCIS (Netherlands)

    Zand Scholten, A.; Maris, G.; Borsboom, D.

    2011-01-01

    In this article the author discusses Professor Stephen M. Humphry's critical attitude with respect to psychometric modeling. The author criticizes Humphry's model, stating that the model is theoretically interesting but cannot be tested as it is not identified. The author also states that Humphry's

  8. A numerical model for ultrasonic measurements of swelling and mechanical properties of a swollen PVA hydrogel.

    Science.gov (United States)

    Lohakan, M; Jamnongkan, T; Pintavirooj, C; Kaewpirom, S; Boonsang, S

    2010-08-01

    This paper presents a numerical model for the evaluation of the mechanical properties of a relatively thin hydrogel. The model utilizes a system identification method to evaluate the acoustical parameters from ultrasonic measurement data. The model involves the calculation of a forward model based on ultrasonic wave propagation incorporating the diffraction effect. Ultrasonic measurements of a hydrogel are also performed in reflection mode. A Nonlinear Least Squares (NLS) algorithm is employed to minimize the difference between the results from the model and the experimental data. The acoustical parameters associated with the model are adjusted to achieve the minimum error. As a result, the parameters of PVA hydrogels, namely thickness, density, ultrasonic attenuation coefficient and dispersion velocity, are effectively determined. In order to validate the model, conventional density measurements of the hydrogels were also performed. Copyright (c) 2010 Elsevier B.V. All rights reserved.
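
    Parameter estimation of this kind is a standard nonlinear least-squares problem; the sketch below fits a toy exponential-attenuation forward model to synthetic data, not the paper's diffraction-corrected wave propagation model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)

    def forward_model(params, f):
        """Toy forward model: echo amplitude vs frequency f for a thin layer,
        with amplitude A, attenuation coefficient alpha and thickness d."""
        A, alpha, d = params
        return A * np.exp(-alpha * f * d)

    # Synthetic "measurement": true parameters plus noise (placeholder data).
    f = np.linspace(1.0, 10.0, 50)             # MHz
    true = (1.0, 0.08, 2.0)                    # A, alpha (1/(MHz*mm)), d (mm)
    measured = forward_model(true, f) + rng.normal(scale=0.01, size=f.size)

    residuals = lambda p: forward_model(p, f) - measured
    fit = least_squares(residuals, x0=(0.5, 0.01, 1.0), bounds=(0, np.inf))
    print("estimated A, alpha, d:", np.round(fit.x, 3))
    ```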

  9. Measurement of cardiac troponin I utilizing a point of care analyzer in healthy alpacas.

    Science.gov (United States)

    Blass, Keith A; Kraus, Marc S; Rishniw, Mark; Mann, Sabine; Mitchell, Lisa M; Divers, Thomas J

    2011-12-01

    Myocardial disease in camelids is poorly characterized. Nutritional (selenium deficiency) and toxic (ionophore toxicity) myocardial diseases have been reported in camelids. Diagnosis and management of these and other myocardial diseases might be enhanced by evaluating cardiac troponin I (cTnI) concentrations. No information about cTnI reference intervals in camelids is currently available. (A) To determine cTnI concentrations obtained using a point of care i-STAT®1 analyzer (Heska Corporation) in healthy alpacas; (B) to compare alpaca cTnI concentrations between heparinized whole blood and plasma samples and between 2 different storage conditions (4 °C for 24 h or -80 °C for 30 days); (C) to examine assay reproducibility using the i-STAT®1. Twenty-three healthy alpacas were evaluated. Blood and plasma samples were analyzed by the i-STAT®1 within 1 h of collection. Aliquots of plasma were stored at either 4 °C for 24 h or -80 °C for 30 days, and then analyzed. Assay reproducibility was determined by comparing 2 plasma or whole blood cTnI concentrations measured on the same sample over a 10 min period. Analyzer-specific plasma cTnI concentrations in clinically normal alpacas were low, and plasma and whole blood concentrations showed good agreement. Storage did not affect cTnI concentrations (p > 0.75). Plasma cTnI concentrations had a coefficient of repeatability of 0.02 ng/mL. The i-STAT®1 can measure cTnI in alpacas on both plasma and whole blood and provides similar values for both samples. Storage at 4 °C for 24 h or -80 °C for 30 days does not affect estimates of plasma cTnI. Evaluation of cTnI might be of value in assessing cardiac disease in this species. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Full utilization of silt density index (SDI) measurements for seawater pre-treatment

    KAUST Repository

    Wei, Chunhai

    2012-07-01

    In order to clarify the fouling mechanism during silt density index (SDI) measurements of seawater in the seawater reverse osmosis (SWRO) desalination process, 11 runs were conducted under constant-pressure (207 kPa) dead-end filtration mode according to the standard protocol for SDI measurement, in which two kinds of 0.45 μm membranes of different materials and seawater samples from the Mediterranean, including raw seawater and seawater pre-treated by coagulation followed by sand filtration (CSF) and coagulation followed by microfiltration (CMF) technologies, were tested. Fouling mechanisms based on the constant-pressure filtration equation were fully analyzed. For all runs, only t/(V/A) vs. t showed very good linearity (correlation coefficient R² > 0.99) from the first moment of the filtration, indicating that standard blocking rather than cake filtration was the dominant fouling mechanism during the entire filtration process. The very low concentration of suspended solids rejected by MF at 0.45 μm in seawater was the main reason why a cake layer was not formed. High turbidity removal during filtration indicated that organic colloids retained on and/or adsorbed in membrane pores governed the filtration process (i.e., standard blocking) due to the important contribution of organic substances to seawater turbidity in this study. Therefore the standard blocking coefficient k_s, i.e., the slope of t/(V/A) vs. t, could be used as a good fouling index for seawater because it showed good linearity with feed seawater turbidity. The correlation of SDI with k_s and feed seawater quality indicated that SDI could be reliably used for seawater with low fouling potential (SDI_15min < 5) like the pre-treated seawater in this study. From both k_s and SDI, the order of fouling potential was raw seawater > seawater pre-treated by CSF > seawater pre-treated by CMF, indicating the better performance of CMF than CSF. © 2012 Elsevier B.V.
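
    The standard-blocking diagnostic used here is a linear fit of t/(V/A) against t; a minimal sketch with synthetic filtration data (not the study's measurements) is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic constant-pressure filtration record: time t (s) and
    # cumulative permeate volume per membrane area V/A (m^3/m^2).
    t = np.linspace(60, 900, 15)
    ks_true, q0 = 2.0e-3, 1.0e-4      # hypothetical blocking coeff. and initial flux
    v_per_a = t / (ks_true / 2.0 * t + 1.0 / q0)
    v_per_a *= 1.0 + rng.normal(scale=0.01, size=t.size)   # measurement noise

    # Standard blocking law (Hermia): t/(V/A) = (k_s/2) * t + 1/Q0, so a
    # straight line in t confirms standard blocking and its slope gives k_s.
    slope, intercept = np.polyfit(t, t / v_per_a, 1)
    print(f"k_s = {2.0 * slope:.2e}, Q0 = {1.0 / intercept:.2e}")
    ```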

  11. Interaural multiple frequency tympanometry measures: clinical utility for unilateral conductive hearing loss.

    Science.gov (United States)

    Norrix, Linda W; Burgan, Briana; Ramirez, Nicholas; Velenovsky, David S

    2013-03-01

    Tympanometry is a routine clinical measurement of the acoustic immittance of the ear as a function of ear canal air pressure. The 226 Hz tympanogram can provide clinical evidence for conditions such as a tympanic membrane perforation, Eustachian tube dysfunction, middle ear fluid, and ossicular discontinuity. Multiple frequency tympanometry, using a range of probe tone frequencies from low to high, has been shown to be more sensitive than a single probe tone tympanogram in distinguishing between mass- and stiffness-related middle ear pathologies (Colletti, 1975; Funasaka et al, 1984; Van Camp et al, 1986). In this study we obtained normative measures of middle ear resonance by using multiple probe tone frequency tympanometry. Ninety percent ranges for middle ear resonance and for interaural differences were calculated. In a mixed design, normative data were collected from both ears of male and female adults. Twelve male and 12 female adults with normal hearing and normal middle ear function participated in the study. Multiple frequency tympanograms were recorded with a commercially available immittance instrument (GSI Tympstar) to obtain estimates of middle ear resonant frequency (RF) using ΔB, positive tail, and negative tail methods. Data were analyzed using three-way mixed analyses of variance with gender as a between-subject variable and ear and method as within-subject variables. T-tests were performed, using the Bonferroni adjustment, to determine significant differences between means. Using the positive and negative tail methods, a wide range of approximately 500 Hz was found for middle ear resonance in adults with normal hearing and normal middle ear function. The difference in RF between an individual's ears is small, with 90% ranges of approximately ±200 Hz, indicating that the right-ear RF typically falls within about 200 Hz above or below the left-ear RF. This was true for both negative and positive tail methods. Ninety percent ranges were

  12. A decision modeling for phasor measurement unit location selection in smart grid systems

    Science.gov (United States)

    Lee, Seung Yup

    As a key technology for enhancing the smart grid system, the Phasor Measurement Unit (PMU) provides synchronized phasor measurements of voltages and currents of the wide-area electric power grid. While its application brings various benefits, one of the critical issues in utilizing PMUs is the optimal selection of installation sites. The main aim of this research is to develop a decision support system which can be used in resource allocation tasks for smart grid system analysis. As an effort to suggest a robust decision model and standardize the decision modeling process, a harmonized modeling framework, which considers the operational circumstances of components, is proposed in connection with a deterministic approach utilizing integer programming. With the results obtained from the optimal PMU placement problem, the advantages and potential of the harmonized modeling process are assessed and discussed.
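
    Optimal PMU placement is commonly cast as a set-cover style integer program: minimize the number of PMUs such that every bus is observed by a PMU at itself or at a neighboring bus. A minimal sketch on a hypothetical 5-bus network (not the system studied here) follows.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint

    # Hypothetical 5-bus network: entry (i, j) is 1 if a PMU at bus j
    # observes bus i (a PMU sees its own bus and all adjacent buses).
    observes = np.array([
        [1, 1, 0, 0, 0],
        [1, 1, 1, 0, 1],
        [0, 1, 1, 1, 0],
        [0, 0, 1, 1, 1],
        [0, 1, 0, 1, 1],
    ])

    n = observes.shape[0]
    res = milp(
        c=np.ones(n),                                   # minimize the PMU count
        constraints=LinearConstraint(observes, lb=1),   # each bus observed >= once
        integrality=np.ones(n),                         # integer placements
    )
    print("place PMUs at buses:", np.flatnonzero(res.x > 0.5))
    ```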

  13. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Full Text Available Abstract. Background: Bayesian Network (BN) is a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results: We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses a Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge. In this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation, a Markov Chain Monte Carlo sampling algorithm is adopted, which samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including that from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge, BN modeling is not always better than a random selection, demonstrating the necessity in network modeling of supplementing the gene expression data with additional information. Conclusions: Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves the performance.
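
    The prior-integration step can be pictured as a naïve-Bayes combination of independent evidence sources into a posterior odds of functional linkage; the likelihood ratios below are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    def posterior_linkage_prob(prior_prob, likelihood_ratios):
        """Naive-Bayes combination: posterior odds = prior odds * product of
        per-source likelihood ratios P(evidence | linked) / P(evidence | unlinked)."""
        prior_odds = prior_prob / (1.0 - prior_prob)
        post_odds = prior_odds * np.prod(likelihood_ratios)
        return post_odds / (1.0 + post_odds)

    # Hypothetical gene pair: weak prior, supported by PubMed co-citation
    # (LR = 8) and GO semantic similarity (LR = 3).
    p = posterior_linkage_prob(prior_prob=0.05, likelihood_ratios=[8.0, 3.0])
    print(f"posterior linkage probability = {p:.2f}")
    # Such a probability could then set the edge's copy number in the
    # candidate-edge reservoir sampled by the MCMC network search.
    ```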

  14. Electrical Maxwell demon and Szilard engine utilizing Johnson noise, measurement, logic and control.

    Directory of Open Access Journals (Sweden)

    Laszlo Bela Kish

    Full Text Available We introduce a purely electrical version of Maxwell's demon which does not involve mechanically moving parts such as trapdoors, etc. It consists of a capacitor, resistors, amplifiers, logic circuitry and electronically controlled switches and uses thermal noise in resistors (Johnson noise) to pump heat. The only types of energy of importance in this demon are electrical energy and heat. We also demonstrate an entirely electrical version of Szilard's engine, i.e., an information-controlled device that can produce work by employing thermal fluctuations. The only moving part is a piston that executes work, and the engine has purely electronic controls and it is free of the major weakness of the original Szilard engine in not requiring removal and repositioning the piston at the end of the cycle. For both devices, the energy dissipation in the memory and other binary informatics components are insignificant compared to the exponentially large energy dissipation in the analog part responsible for creating new information by measurement and decision. This result contradicts the view that the energy dissipation in the memory during erasure is the most essential dissipation process in a demon. Nevertheless the dissipation in the memory and information processing parts is sufficient to secure the Second Law of Thermodynamics.

  15. Electrical Maxwell demon and Szilard engine utilizing Johnson noise, measurement, logic and control.

    Science.gov (United States)

    Kish, Laszlo Bela; Granqvist, Claes-Göran

    2012-01-01

    We introduce a purely electrical version of Maxwell's demon which does not involve mechanically moving parts such as trapdoors, etc. It consists of a capacitor, resistors, amplifiers, logic circuitry and electronically controlled switches and uses thermal noise in resistors (Johnson noise) to pump heat. The only types of energy of importance in this demon are electrical energy and heat. We also demonstrate an entirely electrical version of Szilard's engine, i.e., an information-controlled device that can produce work by employing thermal fluctuations. The only moving part is a piston that executes work, and the engine has purely electronic controls and it is free of the major weakness of the original Szilard engine in not requiring removal and repositioning the piston at the end of the cycle. For both devices, the energy dissipation in the memory and other binary informatics components are insignificant compared to the exponentially large energy dissipation in the analog part responsible for creating new information by measurement and decision. This result contradicts the view that the energy dissipation in the memory during erasure is the most essential dissipation process in a demon. Nevertheless the dissipation in the memory and information processing parts is sufficient to secure the Second Law of Thermodynamics.

  17. Compressor Part I: Measurement and Design Modeling

    Directory of Open Access Journals (Sweden)

    Thomas W. Bein

    1999-01-01

    The method used to design the 125-ton compressor is first reviewed, and some related performance curves are predicted based on a quasi-3D method. In addition to an overall performance measurement, a series of instruments were installed on the compressor to identify where the measured performance differs from the predicted performance. The measurement techniques for providing the diagnostic flow parameters are also described briefly. Part II of this paper provides predictions of flow details in the areas of the compressor where there were differences between the measured and predicted performance.

  18. Utility Function and Optimum Consumption in the models with Habit Formation and Catching up with the Joneses

    OpenAIRE

    Naryshkin, Roman; Davison, Matt

    2009-01-01

    This paper analyzes popular time-nonseparable utility functions that describe "habit formation" consumer preferences, comparing current consumption with the time-averaged past consumption of the same individual, and "catching up with the Joneses" (CuJ) models, comparing individual consumption with a cross-sectional average consumption level. Few of these models give reasonable optimum consumption time series. We introduce theoretically justified utility specifications leading to a plausible cons...

  19. Functional outcome measures in a surgical model of hip osteoarthritis in dogs

    OpenAIRE

    Little, Dianne; Johnson, Stephen; Hash, Jonathan; Olson, Steven A.; Estes, Bradley T.; Moutos, Franklin T.; Lascelles, B. Duncan X.; Guilak, Farshid

    2016-01-01

    Background The hip is one of the most common sites of osteoarthritis in the body, second only to the knee in prevalence. However, current animal models of hip osteoarthritis have not been assessed using many of the functional outcome measures used in orthopaedics, a characteristic that could increase their utility in the evaluation of therapeutic interventions. The canine hip shares similarities with the human hip, and functional outcome measures are well documented in veterinary medicine, pr...

  20. Model of sustainable utilization of organic solids waste in Cundinamarca, Colombia

    Directory of Open Access Journals (Sweden)

    Solanyi Castañeda Torres

    2017-05-01

    Full Text Available Introduction: This article presents a proposal for a model for the utilization of organic solid waste in the department of Cundinamarca, which responds to the need for a tool to support decision-making in the planning and management of organic solid waste. Objective: To develop an approximation of a conceptual, technical and mathematical optimization model to support decision-making in order to minimize environmental impacts. Materials and methods: A descriptive study was applied, since some fundamental characteristics of the homogeneous phenomenon under study are presented; the study is also considered quasi-experimental. The calculation of the model for plants in the department is based on three axes (environmental, economic and social) that are present in the general optimization equation. Results: A model for harnessing organic solid waste through the biological treatment techniques of aerobic composting and vermiculture is obtained, optimizing the system through the savings in greenhouse gas emissions released into the atmosphere and the reduction of the overall cost of final disposal of organic solid waste in sanitary landfills. Based on the economic principle of utility, which determines the environmental feasibility and sustainability of the department's organic solid waste treatment plants, organic fertilizers such as compost and humus capture carbon and nitrogen, reducing tons of CO2.

  1. Analysis on misconducts and inappropriate practices by Japan's Nuclear Power Utilities and Assessment of their corrective measures

    International Nuclear Information System (INIS)

    Torikai, Seishi; Ozawa, Michihiro; Kanegae, Naomichi; Tani, Masaaki; Miyakoshi, Naoki; Madarame, Haruki

    2010-01-01

    On March 30, 2007, Japan's electric utilities reported the results of a complete review of their power-generating units to the Nuclear and Industrial Safety Agency of the Ministry of Economy, Trade, and Industry (METI). The Ethics Committee of the Atomic Energy Society of Japan (AESJ) then recommended an assessment method to analyze the seriousness of the problems from multiple perspectives in order to support the public's understanding of the reported problems. Accordingly, the Ethics Committee conducted the assessment. The assessment considered each reported problem associated with nuclear power-generating units and the preventive measures completed between June 2007 and September 2008 (corrective measures continued beyond that period). The results were presented at the autumn conferences of AESJ in 2007 and 2008, and are discussed in this report. (author)

  2. Utilized social support and self-esteem mediate the relationship between perceived social support and suicide ideation. A test of a multiple mediator model.

    Science.gov (United States)

    Kleiman, Evan M; Riskind, John H

    2013-01-01

    While perceived social support has received considerable research attention as a protective factor against suicide ideation, little attention has been given to the mechanisms that mediate its effects. We integrated two theoretical models, Joiner's (2005) interpersonal theory of suicide and Leary's (Leary, Tambor, Terdal, & Downs, 1995) sociometer theory of self-esteem, to investigate two hypothesized mechanisms: utilization of social support and self-esteem. Specifically, we hypothesized that individuals must utilize the social support they perceive, which results in increased self-esteem, which in turn buffers them from suicide ideation. Participants were 172 college students who completed measures of social support, self-esteem, and suicide ideation. Tests of simple mediation indicate that utilization of social support and self-esteem may each individually help to mediate the perceived social support/suicide ideation relationship. Additionally, a test of multiple mediators using bootstrapping supported the hypothesized multiple-mediator model. The use of a cross-sectional design limited our ability to find true cause-and-effect relationships. Results suggested that utilized social support and self-esteem both operate as individual mediators in the perceived social support/suicide ideation relationship. Results further suggested, in a comprehensive model, that perceived social support buffers suicide ideation through utilization of social support and increases in self-esteem.
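
    Mediation tests of this kind typically bootstrap the indirect effect a·b; a compact single-mediator sketch on synthetic data (not the study's measures) is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic data: X = perceived support, M = utilized support, Y = ideation.
    n = 172
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)                 # a-path
    y = -0.4 * m + 0.1 * x + rng.normal(size=n)      # b-path plus direct effect c'

    def indirect_effect(x, m, y):
        """a * b from two OLS fits: M ~ X, then Y ~ M + X."""
        a = np.polyfit(x, m, 1)[0]
        X = np.column_stack([m, x, np.ones_like(x)])
        b = np.linalg.lstsq(X, y, rcond=None)[0][0]
        return a * b

    boots = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)                  # resample cases with replacement
        boots.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"indirect effect 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
    # A CI excluding zero supports mediation through the proposed mechanism.
    ```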

  3. Radiation budget measurement/model interface research

    Science.gov (United States)

    Vonderhaar, T. H.

    1981-01-01

    The NIMBUS 6 data were analyzed to form an up-to-date climatology of the Earth radiation budget as a basis for numerical model definition studies. Global maps depicting infrared emitted flux, net flux and albedo from processed NIMBUS 6 data for July 1977 are presented. Zonal averages of net radiation flux for April, May, and June and zonal mean emitted flux and net flux for the December-to-January period are also presented. The development of two models is reported. The first is a statistical dynamical model with vertical and horizontal resolution. The second model is a two-level global linear balance model. The results of time integration of the model up to 120 days, to simulate the January circulation, are discussed. Average zonal wind, meridional wind component, vertical velocity, and moisture budget are among the parameters addressed.

  4. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    Science.gov (United States)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem of investing in financial assets is to choose a portfolio weighting that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that asset returns follow a certain distribution and that portfolio risk is measured using the Value-at-Risk (VaR). The Mean-VaR portfolio optimization is carried out using a matrix-algebra approach together with the Lagrange multiplier and Kuhn-Tucker methods. The result of the modeling is a weight-vector equation that depends on the mean return vector of the assets, the unit vector, the covariance matrix of asset returns, and a risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the analysis of the return data of the five stocks, the composition of the weight vector and the efficient surface of the portfolio were obtained. The weight-vector composition and the efficient-surface charts can serve as a guide for investors when making investment decisions.
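
    The closed-form flavor of such solutions can be illustrated with the classical risk-tolerance form of mean-variance weights under a budget constraint; the sketch below uses invented five-asset inputs and the textbook mean-variance solution, not the paper's exact Mean-VaR formula.

    ```python
    import numpy as np

    def mean_variance_weights(mu, sigma, tau):
        """Maximize tau * w'mu - 0.5 * w'Sigma w subject to sum(w) = 1.
        Closed form via a Lagrange multiplier on the budget constraint."""
        ones = np.ones_like(mu)
        s_inv_mu = np.linalg.solve(sigma, mu)
        s_inv_1 = np.linalg.solve(sigma, ones)
        lam = (tau * ones @ s_inv_mu - 1.0) / (ones @ s_inv_1)
        return tau * s_inv_mu - lam * s_inv_1

    # Invented monthly return statistics for five stocks.
    mu = np.array([0.012, 0.009, 0.015, 0.007, 0.011])
    sigma = np.diag([0.04, 0.03, 0.06, 0.02, 0.05]) ** 2 + 0.0002

    for tau in (0.0, 0.5, 2.0):      # tau = 0 gives the minimum-variance portfolio
        w = mean_variance_weights(mu, sigma, tau)
        print(f"tau={tau}: weights {np.round(w, 2)}, sum={w.sum():.2f}")
    ```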

  5. Measuring Change with the Rating Scale Model.

    Science.gov (United States)

    Ludlow, Larry H.; And Others

    The Rehabilitation Research and Development Laboratory at the United States Veterans Administration Hines Hospital is engaged in a long-term evaluation of blind rehabilitation. One aspect of the evaluation project focuses on the measurement of attitudes toward blindness. Our aim is to measure changes in attitudes toward blindness from…

  6. Migration Flows: Measurement, Analysis and Modeling

    NARCIS (Netherlands)

    Willekens, F.J.; White, Michael J.

    2016-01-01

    This chapter is an introduction to the study of migration flows. It starts with a review of major definition and measurement issues. Comparative studies of migration are particularly difficult because different countries define migration differently and measurement methods are not harmonized.

  7. Explaining regional variations in health care utilization between Swiss cantons using panel econometric models.

    Science.gov (United States)

    Camenzind, Paul A

    2012-03-13

    In spite of a detailed, nation-wide legislative framework, there are large cantonal disparities in the quantities of health care services consumed in Switzerland. In this study, the most important factors of influence causing these regional disparities are determined. The findings can also be productive for discussing the containment of health care consumption in other countries. Based on the literature, relevant factors causing geographic disparities in quantities and costs in western health care systems are identified. Using a selected set of these factors, individual panel econometric models are estimated to explain the variation in utilization of each of the six largest health care service groups (general practitioners, specialist doctors, hospital inpatient, hospital outpatient, medication, and nursing homes) in Swiss mandatory health insurance (MHI). The main data source is 'Datenpool santésuisse', a database of Swiss health insurers. For all six health care service groups, significant factors influencing utilization frequency over time and across cantons are found. A greater supply of service providers tends to be strongly related to per capita consumption of MHI services. On the demand side, older populations and higher population densities are the clearest driving factors. Strategies to contain consumption and costs in health care should include several elements. In the federalist Swiss system, the structure of regional health care supply appears to generate significant effects. However, driving factors on the demand side (e.g., social deprivation) and financing instruments (e.g., high deductibles) should also be considered.
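
    The kind of model described above can be sketched as a two-way fixed-effects panel regression. The following is a minimal illustration with synthetic data; the variable names (density, share65, visits_pc) are assumptions, not the study's actual covariates:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        cantons, years = [f"c{i}" for i in range(26)], range(2004, 2011)
        df = pd.DataFrame([{"canton": c, "year": y,
                            "density": rng.uniform(100, 600),    # providers per 100k
                            "share65": rng.uniform(0.12, 0.20)}  # share aged 65+
                           for c in cantons for y in years])
        # Synthetic utilization: canton effects plus supply- and demand-side drivers.
        canton_fx = {c: rng.normal(0, 0.3) for c in cantons}
        df["visits_pc"] = (2 + 0.002 * df.density + 6 * df.share65
                           + df.canton.map(canton_fx) + rng.normal(0, 0.1, len(df)))
        # Canton and year fixed effects absorb time-invariant traits and common shocks.
        fit = smf.ols("visits_pc ~ density + share65 + C(canton) + C(year)", df).fit(
            cov_type="cluster",
            cov_kwds={"groups": df["canton"].astype("category").cat.codes})
        print(fit.params[["density", "share65"]])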

  8. Integrating utilization-focused evaluation with business process modeling for clinical research improvement.

    Science.gov (United States)

    Kagan, Jonathan M; Rosas, Scott; Trochim, William M K

    2010-10-01

    New discoveries in basic science are creating extraordinary opportunities to design novel biomedical preventions and therapeutics for human disease. But the clinical evaluation of these new interventions is, in many instances, being hindered by a variety of legal, regulatory, policy and operational factors, few of which enhance research quality, the safety of study participants or research ethics. With the goal of helping increase the efficiency and effectiveness of clinical research, we have examined how the integration of utilization-focused evaluation with elements of business process modeling can reveal opportunities for systematic improvements in clinical research. Using data from the NIH global HIV/AIDS clinical trials networks, we analyzed the absolute and relative times required to traverse defined phases associated with specific activities within the clinical protocol lifecycle. Using simple median durations and Kaplan-Meier survival analysis, we show how such time-based analyses can provide a rationale for the prioritization of research process analysis and re-engineering, as well as a means for statistically assessing the impact of policy modifications, resource utilization, re-engineered processes and best practices. Successfully applied, this approach can help researchers be more efficient in capitalizing on new science to speed the development of improved interventions for human disease.
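
    A minimal sketch of the time-based analysis described: median phase durations with right-censoring handled by a Kaplan-Meier fit (using the lifelines package; the duration values are hypothetical):

        import numpy as np
        from lifelines import KaplanMeierFitter

        # Hypothetical durations (days) for one protocol-lifecycle phase; protocols
        # still in the phase at analysis time are right-censored.
        durations = np.array([112, 87, 203, 154, 96, 310, 178, 143, 260, 199])
        completed = np.array([1, 1, 1, 1, 1, 0, 1, 1, 0, 1])  # 0 = still open

        kmf = KaplanMeierFitter()
        kmf.fit(durations, event_observed=completed, label="protocol approval phase")
        print("median days to completion:", kmf.median_survival_time_)
        # kmf.plot_survival_function() would show the fraction of protocols still
        # in the phase as a function of elapsed time.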

  9. The comparison of environmental effects on michelson and fabry-perot interferometers utilized for the displacement measurement.

    Science.gov (United States)

    Wang, Yung-Cheng; Shyu, Lih-Horng; Chang, Chung-Ping

    2010-01-01

    The optical structure of typical commercial interferometers, e.g., Michelson interferometers, is based on a non-common optical path. Such interferometers suffer from environmental effects because different phase changes are induced in the different optical paths, and consequently the measurement precision is significantly influenced by tiny variations in environmental conditions. Fabry-Perot interferometers, which feature common optical paths, are insensitive to environmental disturbances. This is advantageous for precision displacement measurements under ordinary environmental conditions. To verify and analyze this influence, displacement measurements with the two types of interferometers, i.e., a self-fabricated Fabry-Perot interferometer and a commercial Michelson interferometer, were performed and compared under various environmental disturbance scenarios. Under several test conditions, the self-fabricated Fabry-Perot interferometer was clearly less sensitive to environmental disturbances than the commercial Michelson interferometer. Experimental results show that the errors induced by environmental disturbances in the Fabry-Perot interferometer are one fifth of those in the Michelson interferometer. This proves that an interferometer with a common optical path structure is much more independent of environmental disturbances than one with a non-common optical path structure, which is beneficial for interferometers utilized for precision displacement measurements in ordinary measurement environments.
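
    For context, displacement in a two-beam interferometer is recovered from fringe counts as d = N*lambda/2, with the wavelength corrected for the refractive index of air. The sketch below, with assumed index values, shows how a small environmental shift in the index rescales the whole measurement:

        def displacement_from_fringes(fringe_count, vacuum_wavelength_nm, n_air=1.000271):
            """Michelson-style displacement: each fringe is half a wavelength in air."""
            wavelength_air = vacuum_wavelength_nm / n_air
            return fringe_count * wavelength_air / 2.0  # nm

        # A 1e-6 shift in refractive index (roughly a 1 degC temperature change)
        # rescales the full measurement, here by ~3 nm over ~3.2 mm.
        d0 = displacement_from_fringes(10_000, 632.99)
        d1 = displacement_from_fringes(10_000, 632.99, n_air=1.000272)
        print(f"{d0:.1f} nm vs {d1:.1f} nm -> difference {d0 - d1:.1f} nm")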

  10. Measurements and properties of ice particles and carbon dioxide bubbles in aqueous mixture utilizing optical techniques

    Science.gov (United States)

    Diallo, Amadou O.

    Optical techniques are used to determine the size, shape, and many other properties of particles ranging from the micro- to the nano-scale; these techniques have endless applications. This research is based on a project assigned by a vendor that wishes to remain anonymous. A Leica optical microscope and a dark-field polarizing metallurgical microscope are used to determine the size and count of ice crystals (the vendor's products) over multiple time frames. Since temperature influences the symmetry of ice, the shape is subject to change at room temperature (300 K), and the atmospheric pressure exerted on the ice crystals varies. The ice crystals are in a mixture of water, electrolytes, and carbon dioxide. With optical spectroscopy (Qpod2) and Spectra Suite, the optical density of the ice crystals is established from absorbance and transmission measurements. The optical density, also referred to here as absorption, is plotted with respect to frequency (GHz), wavelength (nm), or Raman shift (1/cm), showing the light colliding with the ice particles and CO2. Depending on the peak positions, it is possible to profile the ice crystal sizes using mean distribution plots. The absorbance wavelengths expected for ice lie in the visible range, those of the water molecules in the ultraviolet (UV) range, and those of CO2 in the infrared (IR) region. It is also possible to obtain the reflection and transmission output as a percentage change over wavelengths ranging from 200 to 1100 nm. The refractive index of the ice can be correlated to the density based on the optical acoustic theorem or Mie scattering theory. The viscosities of the ice crystals, and of the solutions from which the ice crystals are made, are recorded with an SV-10 viscometer. The baseline viscosity is used as a reference and set lower than that of the ice crystals. The zeta potential of the particles present in the mixture is approximated by first finding the viscosity of the

  11. Utility of the PRE-DELIRIC delirium prediction model in a Scottish ICU cohort.

    Science.gov (United States)

    Paton, Lia; Elliott, Sara; Chohan, Sanjiv

    2016-08-01

    The PREdiction of DELIRium for Intensive Care (PRE-DELIRIC) model reliably predicts at 24 h the development of delirium during intensive care admission. However, the model does not take account of alcohol misuse, which has a high prevalence in Scottish intensive care patients. We used the PRE-DELIRIC model to calculate the risk of delirium for patients in our ICU from May to July 2013. These patients were screened for delirium on each day of their ICU stay using the Confusion Assessment Method for ICU (CAM-ICU). Outcomes were ascertained from the national ICU database. In the 39 patients screened daily, the risk of delirium given by the PRE-DELIRIC model was positively associated with prevalence of delirium, length of ICU stay and mortality. The PRE-DELIRIC model can therefore be usefully applied to a Scottish cohort with a high prevalence of substance misuse, allowing preventive measures to be targeted.
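
    The underlying form of such a prediction model is a logistic regression mapping admission covariates to a 24 h delirium risk. The sketch below illustrates only the mechanics; the intercept and weights are placeholders, not the published PRE-DELIRIC coefficients:

        import math

        def delirium_risk(age, apache_ii, urgent_admission, morphine_use):
            """Logistic risk model with hypothetical coefficients."""
            lp = (-4.0 + 0.03 * age + 0.05 * apache_ii
                  + 0.6 * urgent_admission + 0.4 * morphine_use)
            return 1.0 / (1.0 + math.exp(-lp))

        # Risk for a hypothetical 72-year-old urgent admission on morphine.
        print(f"{delirium_risk(72, 24, 1, 1):.0%}")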

  12. 4M Overturned Pyramid (MOP) Model Utilization: Case Studies on Collision in Indonesian and Japanese Maritime Traffic Systems (MTS)

    OpenAIRE

    Wanginingastuti Mutmainnah; Masao Furusho

    2016-01-01

    The 4M Overturned Pyramid (MOP) model is a new model, proposed by the authors, to characterize MTS; it adopts an epidemiological model that determines the causes of accidents, including not only active failures but also latent failures and barriers. The model is still being developed. One use of the MOP model is characterizing accidents in MTS, i.e., collisions in Indonesia and Japan, as described in this paper. The aim of this paper is to show the characteristics of ship collision accidents...

  13. In-House Communication Support System Based on the Information Propagation Model Utilizes Social Network

    Science.gov (United States)

    Takeuchi, Susumu; Teranishi, Yuuichi; Harumoto, Kaname; Shimojo, Shinji

    Almost all companies now utilize computer networks to support speedier and more effective in-house information-sharing and communication. However, existing systems are designed to support communication only within the same department. Therefore, in our research, we propose an in-house communication support system based on the “Information Propagation Model (IPM).” The IPM is proposed to realize word-of-mouth communication in a social network and to support information-sharing on the network. By applying the system in a real company, we found that information could be exchanged between different and unrelated departments, and such exchanges of information could help to build new relationships between users who are far apart in the social network.

  14. Mathematical model of a utility firm. Final technical report, Part I

    Energy Technology Data Exchange (ETDEWEB)

    1983-08-21

    Utility companies are in the predicament of having to make forecasts, and draw up plans for the future, in an increasingly fluid and volatile socio-economic environment. The project being reported is intended to contribute to an understanding of the economic and behavioral processes that take place within a firm and outside it. Three main topics are treated. The first is the representation of the characteristics of the members of an organization, to the extent that those characteristics seem pertinent to the processes of interest. The second is the appropriate management of the processes of change by an organization. The third deals with the competitive striving towards an economic equilibrium among the members of a society at large, on the theory that this process might be modeled in a way similar to that used for the intra-organizational processes. This volume covers mainly the first topic.

  15. Prediction of Adequate Prenatal Care Utilization Based on the Extended Parallel Process Model.

    Science.gov (United States)

    Hajian, Sepideh; Imani, Fatemeh; Riazi, Hedyeh; Salmani, Fatemeh

    2017-10-01

    Pregnancy complications are one of the major public health concerns. One of the main causes of preventable complications is the absence or inadequate provision of prenatal care. The present study was conducted to investigate whether the constructs of the Extended Parallel Process Model can predict the utilization of prenatal care services. This longitudinal prospective study was conducted on 192 pregnant women selected through multi-stage sampling of health facilities in Qeshm, Hormozgan province, from April to June 2015. Participants were followed up from the first half of pregnancy until childbirth to assess adequate or inadequate/non-utilization of prenatal care services. Data were collected using the structured Risk Behavior Diagnosis Scale. The analysis of the data was carried out in SPSS-22 using one-way ANOVA, linear regression and logistic regression analysis. The level of significance was set at 0.05. In total, 178 pregnant women with a mean age of 25.31±5.42 completed the study. Perceived self-efficacy (OR=25.23) predicted adequate utilization of prenatal care. Husband's occupation in the labor market (OR=0.43; P=0.02), unwanted pregnancy (OR=0.352), and caring for minors or the elderly at home (OR=0.35; P=0.045) were associated with lower odds of receiving prenatal care. The model showed that when the perceived efficacy of prenatal care services overcame the perceived threat, the likelihood of prenatal care usage increased. This study identified some modifiable factors associated with prenatal care usage by women, providing key targets for appropriate clinical interventions.

  16. Modeling and optimization of processes for clean and efficient pulverized coal combustion in utility boilers

    Directory of Open Access Journals (Sweden)

    Belošević Srđan V.

    2016-01-01

    Pulverized coal-fired power plants should provide higher efficiency of energy conversion, flexibility in terms of boiler loads and fuel characteristics, and reduced emission of pollutants such as nitrogen oxides. Modification of the combustion process is a cost-effective technology for NOx control. For optimization of complex processes, such as turbulent reactive flow in coal-fired furnaces, mathematical modeling is regularly used. The NOx emission reduction by combustion modifications in the 350 MWe Kostolac B boiler furnace, tangentially fired by pulverized Serbian lignite, is investigated in this paper. Numerical experiments were performed with an in-house developed three-dimensional differential comprehensive combustion code, including a fuel- and thermal-NO formation/destruction reaction model. The code was developed to be easily used by engineering staff for process analysis in boiler units. A broad range of operating conditions was examined, such as fuel and preheated air distribution over the burners and tiers, operation mode of the burners, grinding fineness and quality of coal, boiler loads, cold air ingress, recirculation of flue gases, water-wall ash deposition, and the combined effect of different parameters. The predictions show that a NOx emission reduction of up to 30% can be achieved by proper combustion organization in the case-study furnace, with flame position control. The impact of the combustion modifications on boiler operation was evaluated by boiler thermal calculations, suggesting that the facility should be controlled within narrow limits of the operating parameters. Such a complex approach to pollutant control enables the evaluation of alternative solutions for efficient and low-emission operation of utility boiler units. [Project of the Ministry of Science of the Republic of Serbia, No. TR-33018: Increase in energy and ecology efficiency of processes in pulverized coal-fired furnace and optimization of utility steam boiler air preheater by using in

  17. Modeling and estimation of measurement errors

    International Nuclear Information System (INIS)

    Neuilly, M.

    1998-01-01

    Anyone in charge of making measurements is aware of the inaccuracy of the results, however carefully the measurements are handled. Sensitivity, accuracy and reproducibility define the significance of a result. The use of statistical methods is one of the important tools for improving the quality of measurement. The accuracy achieved by these methods revealed the small difference in the isotopic composition of uranium ore that led to the discovery of the Oklo fossil reactor. This book is dedicated to scientists and engineers interested in measurement, whatever their field of investigation. Experimental results are presented as random variables, and their probability laws are approximated by the normal law, the Poisson law or Pearson distributions. The impact of one or more parameters on the total error can be evaluated by setting up factorial designs and by using analysis-of-variance methods. This method is also used in intercomparison procedures between laboratories and to detect any abnormal shift in a series of measurements. (A.C.)

  18. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the natively measured peptides has become a computational task. A limitation of existing computationally driven protein quantification methods is that most ignore protein variation, such as alternative splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that lie outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study of mouse plasma samples and demonstrate that BP-Quant achieves accuracy similar to the current state-of-the-art methods at proteoform identification, with significantly better specificity. BP-Quant is available as MATLAB and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.
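
    The signature idea can be illustrated in miniature: give each peptide of a protein a -1/0/+1 signature from a two-group test and flag peptides that disagree with the dominant pattern. This is a simplified illustration of the concept, not the BP-Quant algorithm itself:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        group_a = rng.normal(0, 1, (6, 8))        # 6 peptides x 8 samples, condition A
        shift = np.array([1.5] * 4 + [0.0] * 2)   # last two peptides do not respond
        group_b = group_a + shift[:, None]        # condition B

        def signature(a_row, b_row, alpha=0.05):
            """-1/0/+1 signature of a peptide from a two-sample t-test."""
            t, p = stats.ttest_ind(b_row, a_row)
            return 0 if p > alpha else int(np.sign(t))

        sigs = [signature(a, b) for a, b in zip(group_a, group_b)]
        print(sigs)  # e.g. [1, 1, 1, 1, 0, 0]: two peptides off the dominant pattern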

  19. Evaluation of remedial alternative of a LNAPL plume utilizing groundwater modeling

    International Nuclear Information System (INIS)

    Johnson, T.; Way, S.; Powell, G.

    1997-01-01

    The TIMES model was utilized to evaluate remedial options for a large LNAPL spill that was impacting the North Platte River in Glenrock, Wyoming. LNAPL was found discharging into the river from the adjoining alluvial aquifer. Subsequent investigations discovered an 18 hectare plume extended across the alluvium and into a sandstone bedrock outcrop to the south of the river. The TIMES model was used to estimate the LNAPL volume and to evaluate options for optimizing LNAPL recovery. Data collected from recovery and monitoring wells were used for model calibration. A LNAPL volume of 5.5 million L was estimated, over 3.0 million L of which is in the sandstone bedrock. An existing product recovery system was evaluated for its effectiveness. Three alternative recovery scenarios were also evaluated to aid in selecting the most cost-effective and efficient recovery system for the site. An active wellfield hydraulically upgradient of the existing recovery system was selected as most appropriate to augment the existing system in recovering LNAPL efficiently

  20. Pharmacokinetic/pharmacodynamic modeling of cardiac toxicity in human acute overdoses: utility and limitations.

    Science.gov (United States)

    Mégarbane, Bruno; Aslani, Arsia Amir; Deye, Nicolas; Baud, Frédéric J

    2008-05-01

    Hypotension, cardiac failure, QT interval prolongation, dysrhythmias, and conduction disturbances are common complications of overdoses with cardiotoxicants. Pharmacokinetic/pharmacodynamic (PK/PD) relationships are useful to assess diagnosis, prognosis, and treatment efficacy in acute poisonings. The objective here is to review the utility and limits of PK/PD studies of cardiac toxicity, with discussion of various models, mainly those obtained in digitalis, cyanide, venlafaxine and citalopram poisonings. A sigmoidal E(max) model appears adequate to represent the PK/PD relationships in cardiotoxic poisonings. PK/PD correlations investigate the discrepancies between the time course of the effect magnitude and that of the evolving concentrations. They may help in understanding the mechanisms of occurrence as well as disappearance of a cardiotoxic effect. When data are sparse, population-based PK/PD modeling using computer-intensive algorithms is helpful to estimate population mean values of PK parameters as well as their individual variability. Further PK/PD studies are needed in medical toxicology to allow understanding of the meaning of blood toxicant concentrations in acute poisonings and thus improve management.
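
    The sigmoidal Emax model mentioned here has the form E(C) = E0 + Emax*C^n / (EC50^n + C^n). A minimal fitting sketch with synthetic concentration-effect data (not values from the cited poisonings):

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid_emax(conc, e0, emax, ec50, hill):
            """Sigmoidal Emax PK/PD model: effect as a function of concentration."""
            return e0 + emax * conc**hill / (ec50**hill + conc**hill)

        conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])       # toxicant concentration
        effect = np.array([2, 5, 12, 28, 41, 48, 50]) \
                 + np.random.default_rng(2).normal(0, 1, 7)  # e.g. QT prolongation

        params, _ = curve_fit(sigmoid_emax, conc, effect, p0=[0, 50, 5, 1])
        print(dict(zip(["E0", "Emax", "EC50", "hill"], params.round(2))))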

  1. Optimal parametric modelling of measured short waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the importance of selecting a suitable sampling interval for better estimates of parametric modelling and also for better statistical representation. Implementation of the above algorithms in a structural monitoring system has the potential advantage of storing...

  2. Preferences Under Uncertainty and the Deficiencies of the Expected Utility Model

    OpenAIRE

    Murat Tasdemir

    2007-01-01

    In economics, the prevailing framework for explaining preferences under uncertainty is Expected Utility theory. Despite its widespread use, Expected Utility theory is not free from problems. Experimental and empirical work shows that, in real life, the choices of individuals among risky alternatives conflict with the axioms of Expected Utility theory. This study, in the light of experimental studies, investigates the problems with Expected Utility theory regarding the individua...

  3. Health-related quality of life of cataract patients: cross-cultural comparisons of utility and psychometric measures.

    Science.gov (United States)

    Lee, Jae Eun; Fos, Peter J; Zuniga, Miguel A; Kastl, Peter R; Sung, Jung Hye

    2003-07-01

    This study was conducted to assess the presence and/or absence of cross-cultural differences or similarities between Korean and United States cataract patients. A systematic assessment was performed using utility and psychometric measures in the study population. A cross-sectional study design was used to examine the comparison of preoperative outcomes measures in cataract patients in Korea and the United States. Study subjects were selected using non-probabilistic methods and included 132 patients scheduled for cataract surgery in one eye. Subjects were adult cataract patients at Samsung and Kunyang General Hospital in Seoul, Korea, and Tulane University Hospital and Clinics in New Orleans, Louisiana. Preoperative utility was assessed using the verbal rating scale and standard reference gamble techniques. Current preoperative health status was assessed using the SF-36 and VF-14 surveys. Current preoperative Snellen visual acuity was used as a clinical measure of vision status. Korean patients were more likely to be younger (p = 0.001), less educated (p = 0.001), and to have worse Snellen visual acuity (p = 0.002) than United States patients. Multivariate analysis of variance (MANOVA) revealed that in contrast to Korean patients, United States patients were assessed to have higher scoring in general health, vitality, VF-14, and verbal rating for visual health. This higher scoring trend persisted after controlling for age, gender, education and Snellen visual acuity. The difference in health-related quality of life (HRQOL) between the two countries was quite clear, especially in the older age and highly educated group. Subjects in Korea and the United States were significantly different in quality of life, functional status and clinical outcomes. Subjects in the United States had more favorable health outcomes than those in Korea. These differences may be caused by multiple factors, including country-specific differences in economic status, health care system

  4. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used

  5. Measuring and modelling the structure of chocolate

    Science.gov (United States)

    Le Révérend, Benjamin J. D.; Fryer, Peter J.; Smart, Ian; Bakalis, Serafim

    2015-01-01

    The cocoa butter present in chocolate exists in six different polymorphs. To achieve the desired crystal form (βV), traditional chocolate manufacturers use relatively slow cooling. A model was developed to describe the temperature of chocolate products during processing as well as the crystal structure of cocoa butter throughout the process. A set of ordinary differential equations describes the kinetics of fat crystallisation. The parameters were obtained by fitting the model to a set of DSC curves. The heat transfer equations were coupled to the kinetic model and solved using commercially available CFD software. A method using single-crystal XRD was developed, using a novel subtraction method, to quantify the cocoa butter structure in chocolate directly, and the results were compared to those predicted by the model. The model was proven to predict phase-change temperature during processing accurately (±1°C). Furthermore, it was possible to correctly predict phase changes and polymorphous transitions. The good agreement between the model and experimental data on the model geometry allows better design and control of industrial processes.
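
    The crystallisation kinetics enter as ordinary differential equations for the degree of crystallinity under an imposed cooling profile. The sketch below is a simplified stand-in with an undercooling-driven first-order rate law and invented constants, not the paper's fitted ODE system:

        import numpy as np
        from scipy.integrate import solve_ivp

        def kinetics(t, X, cooling_rate=2 / 60.0, T_start=45.0, T_melt=34.0):
            """dX/dt = k(T) * (1 - X) with k proportional to undercooling."""
            T = max(T_start - cooling_rate * t, 12.0)  # imposed cooling, deg C
            k = 2e-4 * max(T_melt - T, 0.0)            # simplistic rate law
            return k * (1.0 - X)

        sol = solve_ivp(kinetics, (0, 3600), [0.0], max_step=5.0)
        print(f"degree of crystallinity after 1 h: {sol.y[0, -1]:.2f}")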

  6. Modelling of landfill gas adsorption with bottom ash for utilization of renewable energy

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Chen

    2011-10-06

    Energy crisis, environmental pollution and climate change are serious challenges to people worldwide. In the 21st century, humanity has turned to researching new technologies of renewable energy, so as to slow down global warming and develop society in an environmentally sustainable manner. Landfill gas, produced by biodegradable municipal solid waste in landfills, is a renewable energy source. In this work, landfill gas utilization for energy generation is introduced. Landfill gas is able to produce hydrogen by steam reforming reactions; a steam reformer is part of the fuel cell system. A sewage plant in Cologne, Germany, has successfully run a phosphoric acid fuel cell power station on biogas for more than 50,000 hours. Landfill gas thus may be used as fuel for electricity generation via fuel cell systems. For the purpose of explaining the possibility of landfill gas utilization via fuel cells, the thermodynamics of landfill gas steam reforming are discussed by simulations. In practice, methane-rich gas can be obtained by landfill gas purification and upgrading. This work investigates experimentally a new method for upgrading: landfill gas adsorption with bottom ash. Bottom ash is a by-product of municipal solid waste incineration; some of its physical and chemical properties are analysed in this work. The landfill gas adsorption experimental data show that bottom ash can be used as a potential adsorbent for landfill gas adsorption to remove CO{sub 2}. In addition, the alkalinity of bottom ash eluate can be reduced in these adsorption processes. Therefore, the interactions between landfill gas and bottom ash can be explained by a series of reactions. Furthermore, a conceptual model involving landfill gas adsorption with bottom ash is developed. In this thesis, the parameters of the landfill gas adsorption equilibrium equations are obtained by fitting experimental data. On the other hand, these functions can be deduced with a theoretical approach
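
    The equilibrium fitting mentioned at the end is commonly done with an isotherm such as the Langmuir form q = q_max*b*p / (1 + b*p). A hedged sketch with invented data points (not the thesis measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(p, q_max, b):
            """Langmuir isotherm: loading q as a function of partial pressure p."""
            return q_max * b * p / (1.0 + b * p)

        p = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.0])       # bar CO2
        q = np.array([0.21, 0.37, 0.58, 0.80, 0.98, 1.03])  # mmol/g adsorbed

        (q_max, b), _ = curve_fit(langmuir, p, q, p0=[1.5, 2.0])
        print(f"q_max = {q_max:.2f} mmol/g, b = {b:.2f} 1/bar")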

  7. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission, from commercial space tourism to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting-edge rocketry, with the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level, starting with identifying and defining basic terminology, and then building up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of

  8. Achievable ADC Performance by Postcorrection Utilizing Dynamic Modeling of the Integral Nonlinearity

    Directory of Open Access Journals (Sweden)

    Peter Händel

    2008-03-01

    There is a need for a universal dynamic model of analog-to-digital converters (ADCs) intended for postcorrection. However, it is complicated to fully describe the properties of an ADC with a single model. An alternative is to split the ADC model into different components, where each component has unique properties. In this paper, a model based on three components is used, and a performance analysis for each component is presented. Each component can be postcorrected individually, by the method that best suits the application. The purpose of postcorrection of an ADC is to improve performance. Hence, for each component, expressions for the potential improvement have been developed. The measures of performance are total harmonic distortion (THD) and signal to noise and distortion ratio (SINAD), and to some extent spurious-free dynamic range (SFDR).
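
    Both figures of merit are typically estimated from the spectrum of a sampled sine wave. A rough sketch (Hann window, a few leakage bins per tone, and a crude quantizer standing in for the ADC; the bin-summing choices are arbitrary):

        import numpy as np

        def thd_sinad(signal, fs, f0, n_harmonics=5):
            """Estimate THD and SINAD (dB) of a sampled sine from its FFT."""
            n = len(signal)
            spec = np.abs(np.fft.rfft(signal * np.hanning(n))) / n
            freqs = np.fft.rfftfreq(n, 1 / fs)
            def peak_power(f):
                k = np.argmin(np.abs(freqs - f))
                return np.sum(spec[max(k - 2, 0):k + 3] ** 2)  # tone + leakage bins
            p_fund = peak_power(f0)
            p_harm = sum(peak_power(m * f0) for m in range(2, n_harmonics + 2))
            p_total = np.sum(spec**2)
            thd = 10 * np.log10(p_harm / p_fund)
            sinad = 10 * np.log10(p_fund / (p_total - p_fund))
            return thd, sinad

        fs, f0, n = 1e6, 10e3, 2**14
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f0 * t) + 1e-3 * np.sin(2 * np.pi * 2 * f0 * t)
        x = np.round(x * 2048) / 2048   # 12-bit-style quantizer as a stand-in ADC
        print("THD %.1f dB, SINAD %.1f dB" % thd_sinad(x, fs, f0))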

  9. Modeling Late-State Serpentinization on Enceladus and Implications for Methane-Utilizing Microbial Metabolisms

    Science.gov (United States)

    Hart, R.; Cardace, D.

    2017-12-01

    Modeling investigations of Enceladus and other icy satellites have included physicochemical properties (Sohl et al., 2010; Glein et al., 2015; Neveu et al., 2015), geophysical prospects of serpentinization (Malamud and Prialnik, 2016; Vance et al., 2016), and aqueous geochemistry across different antifreeze fluid-rock scenarios (Neveu et al., 2017). To more effectively evaluate the habitability of Enceladus, in the context of recent observations (Waite et al., 2017), we model the potential bioenergetic pathways that would be thermodynamically favorable at the interface of the hydrothermal water-rock reactions resulting from the late-stage serpentinization (>90% serpentinized) hypothesized on Enceladus. Building on previous geochemical model outputs for Enceladus (Neveu et al., 2017) and bioenergetic modeling (as in Amend and Shock, 2001; Cardace et al., 2015), we present a model of late-stage serpentinization possible at the water-rock interface of Enceladus, and report changing activities of chemical species related to microbial methane utilization over the course of serpentinization, using the Geochemist's Workbench REACT code [modified extended Debye-Hückel (Helgeson, 1969) with the thermodynamic database of SUPCRT92 (Johnson et al., 1992)]. Using a model protolith speculated to exist at Enceladus's water-rock boundary, constrained by analytical data for terrestrial analog subsurface serpentinites of the Coast Range Ophiolite (Lower Lake, CA, USA) mélange rocks, we deduce evolving habitability conditions as the model protolith reacts with feasible, though hypothetical, planetary ocean chemistries (from Glein et al., 2015, and Neveu et al., 2017). Major components of the modeled oceans, Na-Cl, Mg-Cl, and Ca-Cl, show shifts in the feasibility of CO2-CH4-H2-driven microbial habitability occurring early in the reaction progress, with methanogenesis being bioenergetically favored. Methanotrophy was favored late in the reaction progress of some Na-Cl systems and in the

  10. Innovative practice model to optimize resource utilization and improve access to care for high-risk and BRCA+ patients.

    Science.gov (United States)

    Head, Linden; Nessim, Carolyn; Usher Boyd, Kirsty

    2017-02-01

    Bilateral prophylactic mastectomy (BPM) has demonstrated breast cancer risk reduction in high-risk/BRCA+ patients. However, the priority of active cancers coupled with inefficient use of operating room (OR) resources presents challenges in offering BPM in a timely manner. To address these challenges, a rapid access prophylactic mastectomy and immediate reconstruction (RAPMIR) program was innovated. The purpose of this study was to evaluate RAPMIR with regards to access to care and efficiency. We retrospectively reviewed the cases of all high-risk/BRCA+ patients having had BPM between September 2012 and August 2014. Patients were divided into 2 groups: those managed through the traditional model and those managed through the RAPMIR model. RAPMIR leverages 2 concurrently running ORs, with surgical oncology and plastic surgery moving between rooms to complete 3 combined BPMs with immediate reconstruction in addition to 1-2 independent cases each operative day. RAPMIR eligibility criteria included high-risk/BRCA+ status; BPM with immediate, implant-based reconstruction; and day surgery candidacy. Wait times, case volumes and patient throughput were measured and compared. There were 16 traditional patients and 13 RAPMIR patients. Mean wait time (days from referral to surgery) for RAPMIR was significantly shorter than for the traditional model (165.4 v. 309.2 d, p = 0.027). Daily patient throughput (4.3 v. 2.8), plastic surgery case volume (3.7 v. 1.6) and surgical oncology case volume (3.0 v. 2.2) were significantly greater in the RAPMIR model than the traditional model (p = 0.003, p < 0.001 and p = 0.015, respectively). A multidisciplinary model with optimized scheduling has the potential to improve access to care and optimize resource utilization.

  11. A gentle introduction to Rasch measurement models for metrologists

    International Nuclear Information System (INIS)

    Mari, Luca; Wilson, Mark

    2013-01-01

    The talk introduces the basics of Rasch models by systematically interpreting them in the conceptual and lexical framework of the International Vocabulary of Metrology, third edition (VIM3). An admittedly simple example of physical measurement highlights the analogies between physical transducers and tests, as both can be understood as the measuring instruments of Rasch models and of psychometrics in general. From the talk, natural scientists and engineers might learn something of Rasch models, as a specifically relevant case of social measurement, and social scientists might re-interpret something of their knowledge of measurement in the light of current physical measurement models.
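
    For reference, the dichotomous Rasch model gives the probability of a correct response as P = exp(theta - b) / (1 + exp(theta - b)), with person ability theta and item difficulty b playing roles loosely analogous to a measurand and an instrument calibration. A tiny sketch:

        import math

        def rasch_p(theta, b):
            """Probability of success for ability theta on an item of difficulty b."""
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        print(f"{rasch_p(1.0, 0.0):.2f}")  # able person, average item -> ~0.73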

  12. Longitudinal predictive ability of mapping models: examining post-intervention EQ-5D utilities derived from baseline MHAQ data in rheumatoid arthritis patients.

    Science.gov (United States)

    Kontodimopoulos, Nick; Bozios, Panagiotis; Yfantopoulos, John; Niakas, Dimitris

    2013-04-01

    The purpose of this methodological study was to provide insight into the under-addressed issue of the longitudinal predictive ability of mapping models. Post-intervention predicted and reported utilities were compared, and the effect of disease severity on the observed differences was examined. A cohort of 120 rheumatoid arthritis (RA) patients (60.0% female, mean age 59.0) embarking on therapy with biological agents completed the Modified Health Assessment Questionnaire (MHAQ) and the EQ-5D at baseline, and at 3, 6 and 12 months post-intervention. OLS regression produced a mapping equation to estimate post-intervention EQ-5D utilities from baseline MHAQ data. Predicted and reported utilities were compared with a t test, and the prediction error was modeled, using fixed effects, in terms of covariates such as age, gender, time, disease duration, treatment, RF, DAS28 score, and predicted and reported EQ-5D. The OLS model (RMSE = 0.207, R2 = 45.2%) consistently underestimated future utilities, with a mean prediction error of 6.5%. Mean absolute differences between reported and predicted EQ-5D utilities at 3, 6 and 12 months exceeded the typically reported MID of the EQ-5D (0.03). According to the fixed-effects model, time, lower predicted EQ-5D and higher DAS28 scores had a significant impact on prediction errors, which appeared increasingly negative for lower reported EQ-5D scores, i.e., predicted utilities tended to be lower than reported ones in more severe health states. This study builds upon existing research that has demonstrated the potential usefulness of mapping disease-specific instruments onto utility measures. The specific issue of longitudinal validity is addressed, as mapping models derived from baseline patients need to be validated on post-therapy samples. The underestimation of post-treatment utilities in the present study, at least in more severe patients, warrants further research before it is prudent to conduct cost-utility analyses in this context.
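
    The core of such a mapping exercise is an OLS fit of follow-up utilities on baseline disease-specific scores. A minimal sketch with synthetic data (the coefficients and noise level are invented):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        mhaq = rng.uniform(0, 3, 120)   # baseline MHAQ, 0 = no difficulty
        eq5d_12m = np.clip(0.9 - 0.18 * mhaq + rng.normal(0, 0.15, 120), -0.2, 1.0)

        X = sm.add_constant(mhaq)
        fit = sm.OLS(eq5d_12m, X).fit()
        resid = eq5d_12m - fit.predict(X)
        print(fit.params.round(3), f"RMSE = {np.sqrt(np.mean(resid**2)):.3f}")
        # Comparing mean predicted vs reported utilities within severity strata
        # would expose the systematic underestimation reported for severe states.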

  13. The headache under-response to treatment (HURT) questionnaire, an outcome measure to guide follow-up in primary care: development, psychometric evaluation and assessment of utility.

    Science.gov (United States)

    Steiner, T J; Buse, D C; Al Jumah, M; Westergaard, M L; Jensen, R H; Reed, M L; Prilipko, L; Mennini, F S; Láinez, M J A; Ravishankar, K; Sakai, F; Yu, S-Y; Fontebasso, M; Al Khathami, A; MacGregor, E A; Antonaci, F; Tassorelli, C; Lipton, R B

    2018-02-14

    Headache disorders are both common and burdensome but, given the many people affected, provision of health care to all is challenging. Structured headache services based in primary care are the most efficient, equitable and cost-effective solution but place responsibility for managing most patients on health-care providers with limited training in headache care. The development of practical management aids for primary care is therefore a purpose of the Global Campaign against Headache. This manuscript presents an outcome measure, the Headache Under-Response to Treatment (HURT) questionnaire, describing its purpose, development, psychometric evaluation and assessment for clinical utility. The objective was a simple-to-use instrument that would both assess outcome and provide guidance to improving outcome, having utility across the range of headache disorders, across clinical settings and across countries and cultures. After literature review, an expert consensus group drawn from all six world regions formulated HURT through item development and item reduction using item-response theory. Using the American Migraine Prevalence and Prevention Study's general-population respondent panel, two mailed surveys assessed the psychometric properties of HURT, comparing it with other instruments as external validators. Reliability was assessed in patients in two culturally-contrasting clinical settings: headache specialist centres in Europe (n = 159) and primary-care centres in Saudi Arabia (n = 40). Clinical utility was assessed in similar settings (Europe n = 201; Saudi Arabia n = 342). The final instrument, an 8-item self-administered questionnaire, addressed headache frequency, disability, medication use and effect, patients' perceptions of headache "control" and their understanding of their diagnoses. Psychometric evaluation revealed a two-factor model (headache frequency, disability and medication use; and medication efficacy and headache control), with

  14. Utilization and cost of a new model of care for managing acute knee injuries: the Calgary acute knee injury clinic

    Directory of Open Access Journals (Sweden)

    Lau Breda HF

    2012-12-01

    Background: Musculoskeletal disorders (MSDs) affect a large proportion of the Canadian population and present a huge problem that continues to strain primary healthcare resources. Currently, the Canadian healthcare system depicts a clinical care pathway for MSDs that is inefficient and ineffective. Therefore, a new inter-disciplinary, team-based model of care for managing acute knee injuries was developed in Calgary, Alberta, Canada: the Calgary Acute Knee Injury Clinic (C-AKIC). The goal of this paper is to evaluate and report on the appropriateness, efficiency, and effectiveness of the C-AKIC through healthcare utilization and costs associated with acute knee injuries. Methods: This quasi-experimental study measured and evaluated cost and utilization associated with specific healthcare services for patients presenting with acute knee injuries. The goal was to compare patients receiving care through two clinical care pathways: the existing pathway (i.e., comparison group) and a new model, the C-AKIC (i.e., experimental group). This was accomplished through the use of a Healthcare Access and Patient Satisfaction Questionnaire (HAPSQ). Results: Data from 138 questionnaires were analyzed in the experimental group and 136 in the comparison group. A post-hoc analysis determined that both groups were statistically similar in socio-demographic characteristics. With respect to utilization, patients receiving care through the C-AKIC used significantly fewer resources. Overall, patients receiving care through the C-AKIC incurred 37% of the cost of patients with knee injuries in the comparison group and incurred significantly lower costs. The total aggregate average cost for the C-AKIC group was $2,549.59, compared to $6,954.33 for the comparison group. Conclusions: The Calgary Acute Knee Injury Clinic was able to manage and treat knee-injured patients for less cost than the existing state of healthcare delivery. The

  15. Flavor release measurement from gum model system

    DEFF Research Database (Denmark)

    Ovejero-López, I.; Haahr, Anne-Mette; van den Berg, Frans W.J.

    2004-01-01

    composition can be measured by both instrumental and sensory techniques, providing comparable information. The peppermint oil level (0.5-2% w/w) in the gum influenced both the retronasal concentration and the perceived peppermint flavor. The sweeteners' (sorbitol or xylitol) effect is less apparent. Sensory...

  16. Coherent acceptability measures in multiperiod models

    NARCIS (Netherlands)

    Roorda, Berend; Schumacher, Hans; Engwerda, Jacob

    2005-01-01

    The framework of coherent risk measures has been introduced by Artzner et al. (1999; Math. Finance 9, 203–228) in a single-period setting. Here, we investigate a similar framework in a multiperiod context. We add an axiom of dynamic consistency to the standard coherence axioms, and obtain a

  17. Model-based cartilage thickness measurement in the submillimeter range

    International Nuclear Information System (INIS)

    Streekstra, G. J.; Strackee, S. D.; Maas, M.; Wee, R. ter; Venema, H. W.

    2007-01-01

    Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical
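
    The bias mechanism is easy to reproduce numerically: blur a rectangular sheet profile with a Gaussian PSF and read off the apparent edges from the second-derivative zero crossings (equivalently, the extrema of the first derivative). A sketch assuming a 0.6 mm FWHM PSF:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        dx = 0.005                                  # mm per sample
        x = np.arange(-3, 3, dx)
        true_thickness = 0.5                        # mm, submillimeter regime
        sheet = (np.abs(x) < true_thickness / 2).astype(float)

        fwhm = 0.6                                  # mm, high-resolution CT PSF
        blurred = gaussian_filter1d(sheet, sigma=fwhm / 2.355 / dx)

        # Apparent edges where the second derivative crosses zero, i.e. at the
        # extrema of the first derivative.
        d1 = np.gradient(blurred, dx)
        estimate = x[np.argmin(d1)] - x[np.argmax(d1)]
        print(f"true {true_thickness:.2f} mm, zero-crossing estimate {estimate:.2f} mm")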

  18. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A: 8.0, 2.0, 94.52%, 88.46%, 76, 108, 12, 12, 0.86, 0.91, 0.78, 0.94. Model B: 2.0, 2.0, 93.18%, 89.33%, 64, 95, 10, 9, 0.88, 0.90, 0.75, 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding the 32 negative datasets of MiRTif on our testing set (MiRecords) ...

  19. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  20. Utilizing Visual Effects Software for Efficient and Flexible Isostatic Adjustment Modelling

    Science.gov (United States)

    Meldgaard, A.; Nielsen, L.; Iaffaldano, G.

    2017-12-01

    The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model of 3D viscoelastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural workflow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time-consuming steps associated with authoring efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of dedicated commercial finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored to the study of local isostatic adjustment patterns in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models and may require the inclusion of local topographic features. We use the presented algorithm to study a near-field area where field observations are abundant, namely Disko Bay in West Greenland, with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local

  1. Protein (multi-)location prediction: utilizing interdependencies via a generative model

    Science.gov (United States)

    Shatkay, Hagit

    2015-01-01

    Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505

  2. Spot markets vs. long-term contracts - modelling tools for regional electricity generating utilities

    International Nuclear Information System (INIS)

    Grohnheit, P.E.

    1999-01-01

    A properly organised market for electricity requires that some information be available to all market participants. A range of generally available modelling tools is also necessary. This paper describes a set of simple models, based on published data, for analysing the long-term revenues of regional utilities with combined heat and power generation (CHP) that will operate in a competitive international electricity market and a local heat market. The future revenue from trade on the spot market is analysed using a load curve model, in which marginal costs are calculated on the basis of the short-term costs of the available units and chronological hourly variations in the demands for electricity and heat. Assumptions on prices, marginal costs and electricity generation by the different types of generating units are studied for selected types of local electricity generators. The long-term revenue requirements to be met by long-term contracts are analysed using a traditional techno-economic optimisation model focusing on technology choice and competition among technologies over 20-30 years. A possible conclusion from this discussion is that it is important for the economic and environmental efficiency of the electricity market that local or regional CHP generators, who are able to react to price signals, do not conclude long-term contracts that include fixed time-of-day tariffs for the sale of electricity. Optimisation results for a CHP region (represented by the structure of the Danish electricity and CHP market in 1995) also indicate that a market for CO2 tradable permits is unlikely to attract major non-fossil-fuel technologies for electricity generation, e.g., wind power. (au)
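
    The load-curve logic can be caricatured as merit-order dispatch: each hour's spot price is the marginal cost of the last unit needed to cover demand. The capacities and costs below are illustrative placeholders, not Danish market data:

        import numpy as np

        # (name, capacity MW, marginal cost EUR/MWh), sorted by marginal cost.
        units = [("wind", 300, 0.0), ("chp", 400, 18.0),
                 ("coal", 600, 25.0), ("gas_peak", 300, 45.0)]

        def spot_price(load_mw):
            served = 0.0
            for _, cap, mc in units:
                served += cap
                if served >= load_mw:
                    return mc              # price set by the marginal unit
            return float("nan")            # demand exceeds total capacity

        hourly_load = 900 + 350 * np.sin(np.linspace(0, 2 * np.pi, 24))
        prices = [spot_price(l) for l in hourly_load]
        print([f"{p:.0f}" for p in prices])

    A CHP generator earns the spread between such hourly prices and its own marginal cost, which is what the load-curve model aggregates over a year of operation.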

  3. Protein (multi-)location prediction: utilizing interdependencies via a generative model.

    Science.gov (United States)

    Simha, Ramanuja; Briesemeister, Sebastian; Kohlbacher, Oliver; Shatkay, Hagit

    2015-06-15

    Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein's function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. We introduce a probabilistic generative model for protein localization, and develop a system based on it-which we call MDLoc-that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. © The Author 2015. Published by Oxford University Press.

  4. Mars Colony in situ resource utilization: An integrated architecture and economics model

    Science.gov (United States)

    Shishko, Robert; Fradet, René; Do, Sydney; Saydam, Serkan; Tapia-Cortez, Carlos; Dempster, Andrew G.; Coulton, Jeff

    2017-09-01

    This paper reports on our effort to develop an ensemble of specialized models to explore the commercial potential of mining water/ice on Mars in support of a Mars Colony. This ensemble starts with a formal systems architecting framework to describe a Mars Colony and capture its artifacts' parameters and technical attributes. The resulting database is then linked to a variety of "downstream" analytic models. In particular, we integrated an extraction process (i.e., "mining") model, a simulation of the colony's environmental control and life support infrastructure known as HabNet, and a risk-based economics model. The mining model focuses on the technologies associated with in situ resource extraction, processing, storage and handling, and delivery. This model computes the production rate as a function of the systems' technical parameters and the local Mars environment. HabNet simulates the fundamental sustainability relationships associated with establishing and maintaining the colony's population. The economics model brings together market information, investment and operating costs, along with measures of market uncertainty and Monte Carlo techniques, with the objective of determining the profitability of commercial water/ice in situ mining operations. All told, over 50 market and technical parameters can be varied in order to address "what-if" questions, including colony location.
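    A minimal sketch of the risk-based economics step, assuming a simple NPV criterion: uncertain market and production inputs are sampled by Monte Carlo and the probability of profitability is read off the NPV distribution. All figures are invented; the actual model draws its parameters from the integrated architecture database.

```python
# Toy Monte Carlo profitability sketch in the spirit of the economics model.
import numpy as np

rng = np.random.default_rng(42)
n_draws, years, discount = 10_000, 20, 0.10

# Uncertain inputs: water price ($/kg), annual production (t/yr), opex ($M/yr).
price = rng.lognormal(mean=np.log(200), sigma=0.4, size=n_draws)
production = rng.normal(100, 20, size=n_draws).clip(min=0)
opex = rng.normal(30, 5, size=n_draws)
capex = 500.0  # $M, up-front investment

annual_cash = price * production * 1e3 / 1e6 - opex  # $M per year
pv_factor = sum(1 / (1 + discount) ** t for t in range(1, years + 1))
npv = annual_cash * pv_factor - capex

print(f"P(NPV > 0) = {(npv > 0).mean():.2f}, median NPV = {np.median(npv):.0f} $M")
```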

  5. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    ... their high frequency content while among TEM data sets with low frequency content, the averaging times for the FEM ellipticity were shorter than the TEM quality. Keywords: ellipticity, frequency domain, frequency electromagnetic method, model parameter, orientation error, time domain, transient electromagnetic method

  6. Modelling of power-reactivity coefficient measurement

    International Nuclear Information System (INIS)

    Strmensky, C.; Petenyi, V.; Jagrik, J.; Minarcin, M.; Hascik, R.; Toth, L.

    2005-01-01

    This report describes the results of modelling a power-reactivity coefficient measurement at power. We calculate the discrepancies that arise during the transient process; such discrepancies can result from the evaluation of the experiment and from disregarding 3D effects on the neutron distribution. The results are critically discussed (Authors)

  7. Practical utilization of modeling and simulation in laboratory process waste assessments

    International Nuclear Information System (INIS)

    Lyttle, T.W.; Smith, D.M.; Weinrach, J.B.; Burns, M.L.

    1993-01-01

    At Los Alamos National Laboratory (LANL), facility waste streams tend to be small but highly diverse. Initial characterization of such waste streams is difficult, in part due to a lack of tools to assist the waste generators in completing such assessments. A methodology has been developed at LANL to allow process-knowledgeable field personnel to develop baseline waste generation assessments and to evaluate potential waste minimization technology. This process waste assessment (PWA) system is an application constructed within the Process Modeling System (PMS): an object-oriented, mass-balance-based, discrete-event simulation built on the Common Lisp Object System (CLOS). Analytical capabilities supported within the PWA system include complete mass balance specifications, historical characterization of selected waste streams, and generation of facility profiles for materials consumption, resource utilization and worker exposure. Anticipated development activities include provisions for a best available technologies (BAT) database and integration with the LANL facilities management Geographic Information System (GIS). The environments used to develop these assessment tools are discussed, in addition to a review of initial implementation results.
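    A toy analogue of the mass-balance, discrete-event bookkeeping described above, in Python rather than CLOS (process names and waste fractions are invented): each event moves a batch of mass through a step, and the run ends with inputs reconciled against products plus waste streams.

```python
# Minimal discrete-event, mass-balance waste-stream sketch.
import heapq
import itertools

class Process:
    """One processing step with a fixed waste fraction (mass balance enforced)."""
    def __init__(self, name, waste_fraction):
        self.name, self.waste_fraction = name, waste_fraction
        self.product = self.waste = 0.0

    def run(self, mass_in):
        waste = mass_in * self.waste_fraction
        self.waste += waste
        out = mass_in - waste
        self.product += out
        return out

etch = Process("acid_etch", waste_fraction=0.15)
rinse = Process("rinse", waste_fraction=0.05)

events, tiebreak = [], itertools.count()  # (time, seq, process, mass) min-heap

# Schedule three batches arriving at the etch step.
for t, m in [(0.0, 10.0), (1.5, 12.0), (3.0, 8.0)]:
    heapq.heappush(events, (t, next(tiebreak), etch, m))

while events:
    t, _, proc, m = heapq.heappop(events)
    out = proc.run(m)
    if proc is etch:  # etched batches move on to rinsing 0.5 h later
        heapq.heappush(events, (t + 0.5, next(tiebreak), rinse, out))

total_in = 30.0
total_out = rinse.product + etch.waste + rinse.waste
print(f"mass balance: in={total_in:.1f}, out={total_out:.1f}")
print(f"waste profile: etch={etch.waste:.2f}, rinse={rinse.waste:.2f}")
```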

  8. Inferring the most probable maps of underground utilities using Bayesian mapping model

    Science.gov (United States)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables), by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time using automated data processing techniques and statutory records. The statutory records, though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, integrating information from multiple sensors (raw data) with these qualitative maps, and visualizing the result, is challenging and requires robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps is developed as a Bayesian mapping model that integrates the knowledge extracted from raw sensor data with the available statutory records. Statutory records are combined with sensor-derived hypotheses to form an initial estimate of what might be found underground and roughly where. The maps are (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and Bayesian classification techniques (segment recognition and an expectation-maximization (EM) algorithm), performed robustly on simulated as well as real sites in predicting linear/non-linear segments and constructing refined 2D/3D maps.
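    The core Bayesian step can be sketched in a few lines: a statutory record supplies a deliberately broad prior over where an asset crosses a survey line, and a noisy sensor detection supplies the likelihood. The grid, error models and all numbers below are assumptions for illustration.

```python
# Minimal sketch of the Bayesian fusion idea behind the mapping model.
import numpy as np

cells = np.arange(100)  # 1 m cells along a survey transect

# Prior from the statutory record: pipe recorded near cell 40, but records
# are known to be off by metres, so the belief is spread broadly.
prior = np.exp(-0.5 * ((cells - 40) / 8.0) ** 2)
prior /= prior.sum()

# Sensor likelihood: a detection at cell 47 with ~2 m localization error.
likelihood = np.exp(-0.5 * ((cells - 47) / 2.0) ** 2)

# Bayes' rule: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

print("prior mode:", prior.argmax(), "posterior mode:", posterior.argmax())
```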

  9. Optimal energy-utilization ratio for long-distance cruising of a model fish

    Science.gov (United States)

    Liu, Geng; Yu, Yong-Liang; Tong, Bing-Gang

    2012-07-01

    The efficiency of total energy utilization and its optimization for the long-distance migration of fish have attracted much attention in the past. This paper presents theoretical and computational research clarifying these classic questions. Here, we specify the energy-utilization ratio (fη) as a measure of cruising efficiency: the swimming speed divided by the sum of the standard metabolic rate and the energy consumption rate of muscle activity per unit mass. The function fη is formulated theoretically, and a basic dimensional analysis shows that the main dimensionless parameters for our simplified model are the Reynolds number (Re) and the dimensionless standard metabolic rate per unit mass (Rpm). The swimming speed and the hydrodynamic power output under various conditions can be computed by solving the coupled Navier-Stokes equations and the fish locomotion dynamic equations. The energy consumption rate of muscle activity can then be estimated by dividing the hydrodynamic power by the muscle efficiency reported by previous researchers. The present results show the following: (1) When fη attains a maximum, the dimensionless parameter Rpm remains almost constant for the same fish species across different sizes. (2) In these cases, the tail-beat period at optimal cruising is a power function of the fish body length; e.g., the optimal tail-beat period of sockeye salmon is approximately proportional to body length to the power of 0.78. Larger fish are thus better suited to long-distance cruising than smaller fish. (3) The optimal swimming speed we obtained is consistent with previous researchers' estimates.
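    Written out, the abstract's verbal definition amounts to the following (the notation is ours, not the paper's): U is the cruising speed, R_pm the standard metabolic rate per unit mass, e_m the muscle energy consumption rate per unit mass (estimated as hydrodynamic power P_h over muscle efficiency η_m and body mass m), and T_opt the optimal tail-beat period for body length L.

```latex
\[
  f_\eta \;=\; \frac{U}{R_{pm} + \dot{e}_m},
  \qquad
  \dot{e}_m \;\approx\; \frac{P_h}{\eta_m\, m},
  \qquad
  T_{\mathrm{opt}} \;\propto\; L^{0.78} \quad \text{(sockeye salmon)}
\]
```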

  10. Measurements of the Backstreaming Proton Ions in the Self-Magnetic Pinch (SMP) Diode Utilizing Copper Activation Technique

    Science.gov (United States)

    Mazarakis, Michael; Cuneo, Michael; Fournier, Sean; Johnston, Mark; Kiefer, Mark; Leckbee, Joshua; Simpson, Sean; Renk, Timothy; Webb, Timothy; Bennett, Nichelle

    2016-10-01

    The results presented here were obtained with an SMP diode mounted at the front high-voltage end of the 8-10 MV RITS self-magnetically insulated transmission line (MITL) voltage adder. Our experiments had two objectives: first, to measure the contribution of the backstreaming proton currents emitted from the anode target, and second, to evaluate the energy of those ions and hence the actual anode-cathode (A-K) gap voltage. The accelerating voltage quoted in the literature is estimated using parapotential flow theories, so an independent measurement of the A-K voltage is of interest. We have measured the backstreaming protons emitted from the anode and propagating through a hollow cathode tip for various diode configurations and different target-cleaning treatments, namely heating at very high temperatures with DC and pulsed current, RF plasma cleaning, and both plasma cleaning and heating. We have also evaluated the A-K gap voltage by energy filtering techniques. Sandia is operated by Sandia Corporation, a subsidiary of Lockheed Martin Company, for the US DOE NNSA under Contract No. DE-AC04-94AL85000.

  11. Measuring outcomes in psychiatry: an inpatient model.

    Science.gov (United States)

    Davis, D E; Fong, M L

    1996-02-01

    This article describes a system for measuring outcomes recently implemented in the department of psychiatry of Baptist Memorial Hospital, a 78-bed inpatient and day treatment unit that represents one service line of a large, urban teaching hospital in Memphis. In June 1993 Baptist Hospital began a 15-month pilot test of PsychSentinel, a measurement tool developed by researchers in the Department of Community Medicine at the University of Connecticut. The hospital identified the following four primary goals for this pilot project: provide data for internal hospital program evaluation, provide data for external marketing in a managed care environment, satisfy requirements of the Joint Commission on Accreditation of Healthcare Organizations, and generate studies that add to the literature in psychiatry and psychology. PsychSentinel is based on the standardized diagnostic criteria in the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV). The outcome measure assesses the change in the number of symptoms of psychopathology that occurs between admission and discharge from the hospital. Included in the nonproprietary system are risk adjustment factors, as well as access to a national reference database for comparative analysis purposes. Data collection can be done by trained ancillary staff members, with as much or as little direct physician involvement as desired. The system has proven to be both time-effective and cost-effective, and it provides important outcome information both at the program level and at the clinician level. After the pilot test, the staff at Baptist Memorial Hospital determined that the system met all initial objectives identified and recently adopted the system as an ongoing measure of quality patient care in the department of psychiatry.

  12. Measurement Models for Reasoned Action Theory

    OpenAIRE

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-01-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are ...

  13. Measuring growth in bilingual and monolingual children's English productive vocabulary development: the utility of combining parent and teacher report.

    Science.gov (United States)

    Vagh, Shaher Banu; Pan, Barbara Alexander; Mancilla-Martinez, Jeannette

    2009-01-01

    This longitudinal study examined growth in the English productive vocabularies of bilingual and monolingual children between ages 24 and 36 months and explored the utility and validity of supplementing parent reports with teacher reports to improve the estimation of children's vocabulary. Low-income, English-speaking and English/Spanish-speaking parents and Early Head Start and Head Start program teachers completed the MacArthur-Bates Communicative Development Inventory: Words and Sentences for 85 children. Results indicate faster growth rates for monolingual than for bilingual children and larger vocabularies for bilingual children who spoke mostly English than mostly Spanish at home. Parent-teacher composite reports, like parent reports, significantly related to children's directly assessed productive vocabulary at ages 30 and 36 months, but parent reports fit the model better. Implications for vocabulary assessment are discussed.

  14. Artificial intelligence model for sustainability measurement

    International Nuclear Information System (INIS)

    Navickiene, R.; Navickas, K.

    2012-01-01

    The article analyses the main dimensions of organizational sustainability and their possible integration into an artificial neural network. The authors analyse organizations' internal and external environments, their possible correlations with the four components of sustainability, and the principal models for determining the sustainability of organizations. Based on the general principles of sustainable development of organizations, an artificial intelligence model for the determination of organizational sustainability has been developed. The use of self-organizing neural networks allows the identification of organizational sustainability and the exploration of vital, social, anthropogenic and economic efficiency. The determination of forest enterprise sustainability is expected to help manage that sustainability better. (Authors)
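    The paper's network is not specified in detail, so the sketch below is only a generic self-organizing map (SOM) in NumPy: organizations scored on four sustainability dimensions (invented data) are clustered onto a small grid, the kind of mapping the abstract describes.

```python
# Minimal self-organizing map in NumPy; the four input features stand in for
# the vital, social, anthropogenic and economic dimensions (assumptions).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 organizations scored on 4 dimensions in [0, 1].
data = rng.random((200, 4))

rows, cols, dim = 5, 5, 4
weights = rng.random((rows, cols, dim))  # 5x5 map of weight vectors

def winner(x):
    """Grid coordinates of the best-matching unit for sample x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

n_iter, sigma0, lr0 = 2000, 2.0, 0.5
grid_r, grid_c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")

for t in range(n_iter):
    x = data[rng.integers(len(data))]
    wr, wc = winner(x)
    # Decay the neighbourhood radius and learning rate over time.
    sigma = sigma0 * np.exp(-t / n_iter)
    lr = lr0 * np.exp(-t / n_iter)
    # Gaussian neighbourhood centred on the winning unit.
    dist2 = (grid_r - wr) ** 2 + (grid_c - wc) ** 2
    h = np.exp(-dist2 / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

# Map a new organization's profile onto the trained grid.
print("cluster cell:", winner(np.array([0.8, 0.6, 0.4, 0.7])))
```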

  15. Measuring Quality Satisfaction with Servqual Model

    Directory of Open Access Journals (Sweden)

    Dan Păuna

    2012-05-01

    Full Text Available The orientation to customer satisfaction is not a recent phenomenon; many very successful businesspeople from the beginning of the 20th century, such as Sir Henry Royce, a name synonymous with Rolls-Royce vehicles, stated the first principle of customer satisfaction: "Our interest in the Rolls-Royce cars does not end at the moment when the owner pays for and takes delivery of the car. Our interest in the car never wanes. Our ambition is that every purchaser of the Rolls-Royce car shall continue to be more than satisfied" (Rolls-Royce). The following paper deals with the important qualities of the concept of measuring the gap between expected customer service satisfaction and perceived service, as a routine customer feedback process, by means of a relatively new model, the Servqual model.
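    The Servqual gap computation itself is simple: for each of the five standard service-quality dimensions, the gap is the perceived score minus the expected score, with negative values marking shortfalls. The survey numbers below are made up for illustration.

```python
# SERVQUAL gap scores on invented 1-7 Likert survey means.
expectations = {"tangibles": 6.2, "reliability": 6.8, "responsiveness": 6.5,
                "assurance": 6.4, "empathy": 6.0}
perceptions = {"tangibles": 5.9, "reliability": 5.6, "responsiveness": 6.1,
               "assurance": 6.3, "empathy": 5.2}

# Gap per dimension: perception minus expectation (negative = shortfall).
gaps = {d: perceptions[d] - expectations[d] for d in expectations}
for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{dim:>14}: {gap:+.1f}")
print(f"{'overall':>14}: {sum(gaps.values()) / len(gaps):+.2f}")
```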

  16. Validation of the measurement model concept for error structure identification

    International Nuclear Information System (INIS)

    Shukla, Pavan K.; Orazem, Mark E.; Crisalle, Oscar D.

    2004-01-01

    The development of different forms of measurement models for impedance has allowed examination of key assumptions on which the use of such models to assess error structure is based. The stochastic error structures obtained using the transfer-function and Voigt measurement models were identical, even when non-stationary phenomena caused some of the data to be inconsistent with the Kramers-Kronig relations. The suitability of the measurement model for assessing consistency with the Kramers-Kronig relations, however, was found to be more sensitive to the confidence interval for the parameter estimates than to the number of parameters in the model. A tighter confidence interval was obtained for the Voigt measurement model, which made it a more sensitive tool for the identification of inconsistencies with the Kramers-Kronig relations.
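    For reference, the Voigt measurement model discussed above is a series of RC elements fitted to the impedance spectrum, with elements added until they no longer improve the fit; the symbols below follow common impedance-literature usage rather than this paper's notation.

```latex
\[
  Z(\omega) \;=\; Z_0 \;+\; \sum_{k=1}^{K} \frac{R_k}{1 + j\omega R_k C_k}
\]
```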

  17. Are Clinical Trial Experiences Utilized?: A Differentiated Model of Medical Sites’ Information Transfer Ability

    DEFF Research Database (Denmark)

    Smed, Marie; Schultz, Carsten; Getz, Kenneth A.

    2015-01-01

    The collaboration with medical professionals in pharmaceutical clinical trials facilitates opportunities to gain valuable market information concerning product functionality issues, as well as issues related to market implementation and adoption. However, previous research on trial management lacks......’ information transfer ability, their methods of communicating, are included. The model is studied on a unique dataset of 395 medical site representatives by applying Rasch scale modeling to differentiate the stickiness of the heterogenic information issues. The results reveal that economic measures...... a differentiated perspective on the potential for information transfer from site to producer. An exploration of the variation in stickiness of information, and therefore the complexity of information transfer in clinical trials, is the main aim of this study. To further enrich the model of the dispersed sites...

  18. Multivariate linear models and repeated measurements revisited

    DEFF Research Database (Denmark)

    Dalgaard, Peter

    2009-01-01

    Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations leads...... to sphericity assumptions, use of F tests and the Greenhouse-Geisser and Huynh-Feldt adjustments to compensate for deviations from sphericity. During a recent implementation of such methods in the R language, the general structure of such transformations was reconsidered, leading to a flexible specification...

  19. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to identify space debris, but present photometric measurement methods impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, with the statistical information of star images used to eliminate measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to assess the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
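    A minimal version of the train/test procedure described above, assuming a simple linear model between catalog and instrumental magnitudes (all data synthetic): fit on training stars by least squares, then report the residual scatter on held-out testing stars.

```python
# Toy least-squares photometric calibration with a train/test star split.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic catalog magnitudes and measured instrumental fluxes (ADU).
catalog_mag = rng.uniform(8, 14, size=60)
true_zp = 25.0  # "true" zero point used to generate the data
flux = 10 ** (-0.4 * (catalog_mag - true_zp)) * rng.normal(1.0, 0.03, 60)
instr_mag = -2.5 * np.log10(flux)

train, test = np.arange(40), np.arange(40, 60)

# Fit instr_mag = a * catalog_mag + b on the training stars.
a, b = np.polyfit(catalog_mag[train], instr_mag[train], 1)

# Accuracy on testing stars: residual scatter of predicted catalog magnitudes.
pred = (instr_mag[test] - b) / a
rms = np.sqrt(np.mean((pred - catalog_mag[test]) ** 2))
print(f"slope {a:.3f}, zero point {-b:.2f}, test RMS {rms:.3f} mag")
```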

  20. On the utilization of hydrological modelling for road drainage design under climate and land use change.

    Science.gov (United States)

    Kalantari, Zahra; Briel, Annemarie; Lyon, Steve W; Olofsson, Bo; Folkeson, Lennart

    2014-03-15

    Road drainage structures are often designed using methods that do not consider process-based representations of a landscape's hydrological response. This may produce inadequately sized structures, as coupled land cover and climate changes can lead to an amplified hydrological response. This study aims to quantify potential increases in runoff in response to future extreme rain events in a 61 km² catchment (40% forested) in southwest Sweden using a physically-based hydrological modelling approach. We simulate peak discharge and water level (stage) at two types of pipe bridges and one culvert, all of which are commonly used at Swedish road/stream intersections, under combined forest clear-cutting and future climate scenarios for 2050 and 2100. The frequency of changes in peak flow and water level varies with time (seasonality) and storm size. These changes indicate that the magnitude of peak flow and the runoff response are highly correlated with season rather than storm size. In all scenarios considered, the dimensions of the current culvert are insufficient to handle the increase in water level estimated using a physically-based modelling approach. The water level at the pipe bridges also changes differently depending on the size and timing of the storm events. The findings of the present study and the approach put forward should be considered when planning investigations and maintenance for areas at risk of high water flows. In addition, the research highlights the utility of physically-based hydrological models for assessing the appropriateness of road drainage structure dimensioning. Copyright © 2014 Elsevier B.V. All rights reserved.
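    The study itself uses a physically-based model, but the dimensioning question can be illustrated with the far simpler rational method, Q = 0.278 · C · i · A (Q in m³/s, i in mm/h, A in km²). The runoff coefficients and rainfall intensities below are invented, only to show how clear-cutting and a wetter climate compound.

```python
# Back-of-envelope culvert check with the rational method -- a toy stand-in
# for the physically-based model used in the study.
def rational_peak_flow(c, i_mm_per_h, area_km2):
    """Q [m^3/s] = 0.278 * C * i * A, with i in mm/h and A in km^2."""
    return 0.278 * c * i_mm_per_h * area_km2

area = 61.0        # km^2, the study catchment
c_forested = 0.25  # runoff coefficient with 40% forest cover (assumed)
c_clearcut = 0.40  # higher coefficient after clear-cutting (assumed)

for label, c, i in [("current", c_forested, 8.0),
                    ("2100 + clear-cut", c_clearcut, 11.0)]:
    q = rational_peak_flow(c, i, area)
    print(f"{label:>18}: peak flow {q:.0f} m^3/s")
```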