WorldWideScience

Sample records for model utilized measures

  1. Utility of Monte Carlo Modelling for Holdup Measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),

    2005-01-01

    Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The inventory of {sup 235}U in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and to meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism in specifying total NDA uncertainty argues, in the interests of both safety and cost, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., one determined by multiple independent variables) on the portable NDA results for very large and bulk converters, and it contributes greatly to the total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with the gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well
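The abstract's core point, that where a deposit sits inside a converter biases the gamma response, can be illustrated with a toy point-kernel Monte Carlo sketch. This is not MCNP: the geometry, the 0.1 m standoff, and the neglect of attenuation and scattering are all simplifying assumptions made only for illustration.

```python
import random

def gamma_response(deposit_positions, detector_x, n_samples=20000, seed=1):
    """Toy point-kernel estimate of relative gamma-ray response: each
    sampled source point contributes ~1/r^2 (attenuation is ignored)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # jitter the sampled deposit position slightly along the axis
        x = rng.choice(deposit_positions) + rng.uniform(-0.05, 0.05)
        r = abs(detector_x - x) + 0.1  # 0.1 m standoff so r never reaches 0
        total += 1.0 / r ** 2
    return total / n_samples

# Same total mass, two hypothetical distributions along a 2 m converter axis:
uniform = [i * 0.02 for i in range(100)]  # deposit spread over 0-2 m
clumped = [0.1] * 100                     # deposit concentrated near one end
bias = gamma_response(clumped, 0.0) / gamma_response(uniform, 0.0)
print(f"response ratio (clumped/uniform): {bias:.2f}")  # well above 1
```

With the detector at one end, the clumped deposit over-reads by several times relative to a uniform calibration, which is the kind of distribution-driven systematic effect the paper evaluates with MCNP.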

  2. Modeling strategy to identify patients with primary immunodeficiency utilizing risk management and outcome measurement.

    Science.gov (United States)

    Modell, Vicki; Quinn, Jessica; Ginsberg, Grant; Gladue, Ron; Orange, Jordan; Modell, Fred

    2017-06-01

    This study seeks to generate analytic insights into risk management and the probability of an identifiable primary immunodeficiency defect. The Jeffrey Modell Centers Network database, the Jeffrey Modell Foundation's 10 Warning Signs, the 4 Stages of Testing Algorithm, physician-reported clinical outcomes, programs of physician education and public awareness, the SPIRIT® Analyzer, and newborn screening, taken together, generate P values of less than 0.05, indicating that the observed results are unlikely to have occurred by chance. The objectives are to improve patients' quality of life while generating a significant reduction of costs. The advances of the world's experts, aligned with these JMF programs, can generate analytic insights as to risk management and the probability of an identifiable primary immunodeficiency defect. This strategy reduces the uncertainties related to primary immunodeficiency risks, as we can screen, test, identify, and treat undiagnosed patients. We can also address regional differences and prevalence, age, gender, treatment modalities, and sites of care, as well as economic benefits. These tools support high net benefits, substantial financial savings, and a significant reduction of costs. All stakeholders, including patients, clinicians, pharmaceutical companies, third-party payers, and government healthcare agencies, must address the earliest possible precise diagnosis, appropriate intervention and treatment, as well as stringent control of healthcare costs through risk assessment and outcome measurement. An affected patient is entitled to nothing less, and stakeholders are responsible for utilizing the tools currently available. Implementation offers a significant challenge to the entire primary immunodeficiency community.

  3. Decision model incorporating utility theory and measurement of social values applied to nuclear waste management

    International Nuclear Information System (INIS)

    Litchfield, J.W.; Hansen, J.V.; Beck, L.C.

    1975-07-01

    A generalized computer-based decision analysis model was developed and tested. Several alternative concepts for ultimate disposal have already been developed; however, significant research is still required before any of these can be implemented. To make a choice based on technical estimates of the costs, short-term safety, long-term safety, and accident detection and recovery requires estimating the relative importance of each of these factors or attributes. These relative importance estimates primarily involve social values and therefore vary from one individual to the next. The approach used was to sample various public groups to determine the relative importance of each of the factors to the public. These estimates of importance weights were combined in a decision analysis model with estimates, furnished by technical experts, of the degree to which each alternative concept achieves each of the criteria. This model then integrates the two separate and unique sources of information and provides the decision maker with information as to the preferences and concerns of the public as well as the technical areas within each concept which need further research. The model can rank the alternatives using sampled public opinion and techno-economic data. This model provides a decision maker with a structured approach to subdividing complex alternatives into a set of more easily considered attributes, measuring the technical performance of each alternative relative to each attribute, estimating relevant social values, and assimilating quantitative information in a rational manner to estimate total value for each alternative. Because of the explicit nature of this decision analysis, the decision maker can select a specific alternative supported by clear documentation and justification for his assumptions and estimates. (U.S.)
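The additive structure described above, public importance weights combined with expert performance scores, might be sketched as follows. The attribute names, weights, and scores are illustrative placeholders, not the study's elicited data.

```python
# Additive multi-attribute value model: total value = sum of importance
# weights (elicited from public groups) times technical performance scores
# (furnished by experts). All numbers below are invented for illustration.
weights = {"cost": 0.20, "short_term_safety": 0.35,
           "long_term_safety": 0.30, "detection_recovery": 0.15}

scores = {  # expert scores on a common 0-1 scale, per disposal concept
    "geologic_repository": {"cost": 0.6, "short_term_safety": 0.9,
                            "long_term_safety": 0.8, "detection_recovery": 0.5},
    "seabed_disposal":     {"cost": 0.7, "short_term_safety": 0.6,
                            "long_term_safety": 0.7, "detection_recovery": 0.3},
}

def total_value(concept):
    """Weighted sum over attributes: the model's total value for a concept."""
    return sum(weights[a] * scores[concept][a] for a in weights)

ranking = sorted(scores, key=total_value, reverse=True)
print(ranking[0], round(total_value(ranking[0]), 3))
```

The separation mirrors the paper's design: the weights come from one source of information (sampled public opinion), the scores from another (techno-economic estimates), and the model combines them only at the final ranking step.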

  4. Utilizing patch and site level greenhouse-gas concentration measurements in tandem with the prognostic model, ecosys

    Science.gov (United States)

    Morin, T. H.; Rey Sanchez, C.; Bohrer, G.; Riley, W. J.; Angle, J.; Mekonnen, Z. A.; Stefanik, K. C.; Wrighton, K. C.

    2016-12-01

    Estimates of wetland greenhouse gas (GHG) budgets currently carry large uncertainties. While wetlands are the largest source of natural methane (CH4) emissions worldwide, they are also important carbon dioxide (CO2) sinks. Determining the GHG budget of a wetland is challenging, particularly because wetlands have intrinsically heterogeneous land-cover patterns in time and space, along with complex dynamics of CH4 production and emission. These issues pose challenges to both measuring and modeling wetland GHG budgets. To improve wetland GHG flux predictability, we utilized the ecosys model to predict CH4 fluxes from a natural temperate estuarine wetland in northern Ohio. Multiple patches of terrain (including Typha spp. and Nelumbo lutea) were represented as separate grid cells in the model. Cells were initialized with measured values but were allowed to evolve dynamically in response to meteorological, hydrological, and thermodynamic conditions. Trace-gas surface emissions were predicted as the end result of microbial activity, physical transport, and plant processes. Corresponding to each model grid cell, measurements of dissolved gas concentrations were conducted with pore-water dialysis samplers (peepers). The peeper measurements were taken via a series of tubes, providing an undisturbed observation of the pore-water concentrations of in situ dissolved gases along a vertical gradient. Non-steady-state chambers and a flux tower provided both patch-level and integrated site-level fluxes of CO2 and CH4. New Typha chambers were also developed to enclose entire plants and segregate the plant fluxes from soil/water fluxes. We expect ecosys to predict the seasonal and diurnal fluxes of CH4 within each land-cover type and to resolve where CH4 is generated within the soil column as well as its transmission mechanisms. We demonstrate the need for detailed information at both the patch and site level when using models to predict whole wetland ecosystem-scale GHG

  5. A psychometric evaluation of the Swedish version of the Research Utilization Questionnaire using a Rasch measurement model.

    Science.gov (United States)

    Lundberg, Veronica; Boström, Anne-Marie; Malinowsky, Camilla

    2017-07-30

    Evidence-based practice and research utilisation have become commonly used concepts in health care. The Research Utilization Questionnaire (RUQ) is a widely used instrument measuring the perception of research utilisation among nursing staff in clinical practice. However, few studies have analysed the psychometric properties of the RUQ. The aim of this study was to examine the psychometric properties of the Swedish version of the three subscales in the RUQ using a Rasch measurement model. This study has a cross-sectional design using a sample of 163 staff (response rate 81%) working in one nursing home in Sweden. Data were collected using the Swedish version of the RUQ in 2012. The three subscales Attitudes towards research, Availability of and support for research use, and Use of research findings in clinical practice were investigated. Data were analysed using a Rasch measurement model. The results indicate the presence of multidimensionality in all subscales. Moreover, internal scale validity and person response validity also show some less satisfactory results, especially for the subscale Use of research findings. Overall, there seems to be a problem with the negatively worded statements. The findings suggest that clarification and refinement of items, along with additional psychometric evaluation of the RUQ, are needed before using the instrument in clinical practice and research studies among staff in nursing homes. © 2017 Nordic College of Caring Science.
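As a reminder of what a Rasch analysis assumes, here is a minimal sketch of the dichotomous (one-parameter logistic) item response function underlying the model. This is the generic Rasch form, not the specific software or polytomous variant the study may have used.

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) model: probability that a person with ability/trait
    level theta endorses an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# The probability rises with the person-item gap (theta - b):
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(rasch_prob(theta, 0.0), 3))
```

When ability equals item difficulty the endorsement probability is exactly 0.5; misfit statistics and dimensionality checks (as reported in the abstract) test how far observed responses depart from this single-latent-trait pattern.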

  6. Evidence Evaluation: Measure "Z" Corresponds to Human Utility Judgments Better than Measure "L" and Optimal-Experimental-Design Models

    Science.gov (United States)

    Rusconi, Patrice; Marelli, Marco; D'Addario, Marco; Russo, Selena; Cherubini, Paolo

    2014-01-01

    Evidence evaluation is a crucial process in many human activities, spanning from medical diagnosis to impression formation. The present experiments investigated which, if any, normative model best conforms to people's intuition about the value of the obtained evidence. Psychologists, epistemologists, and philosophers of science have proposed…

  7. Quantifying annual internal effective 137Cesium dose utilizing direct body-burden measurement and ecological dose modeling.

    Science.gov (United States)

    Jelin, Benjamin A; Sun, Wenjie; Kravets, Alexandra; Naboka, Maryna; Stepanova, Eugenia I; Vdovenko, Vitaliy Y; Karmaus, Wilfried J; Lichosherstov, Alex; Svendsen, Erik R

    2016-11-01

    The Chernobyl Nuclear Power Plant (CNPP) accident represents one of the most significant civilian releases of 137Cesium (137Cs, radiocesium) in human history. In the Chernobyl-affected region, radiocesium is considered by radiobiologists and public health scientists to be the greatest ongoing environmental hazard to human health. The goal of this study was to characterize dosimetric patterns and predictive factors for whole-body count (WBC)-derived radiocesium internal dose estimations in a CNPP-affected children's cohort, and to cross-validate these estimations with a soil-based ecological dose estimation model. WBC data were used to estimate the internal effective dose using the International Commission on Radiological Protection (ICRP) 67 dose conversion coefficient for 137Cs and MONDAL version 3.01 software. Geometric mean dose estimates from each model were compared utilizing paired t-tests and intra-class correlation coefficients. Additionally, we developed predictive models for WBC-derived dose estimation in order to determine the appropriateness of EMARC to estimate dose for this population. Both WBC-derived dose predictive models identified 137Cs soil concentration as a significant predictor. The geometric mean internal effective dose estimate of the EMARC model (0.183 mSv/y) was the highest, followed by the ICRP 67 dose estimate (0.165 mSv/y) and the MONDAL model estimate (0.149 mSv/y). All three models yielded significantly different geometric mean doses. Ecological model dose estimations, in conjunction with findings from animal toxicological studies, should help elucidate possible deterministic radiogenic health effects associated with chronic low-dose internal exposure to 137Cs.
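At its core, the WBC-to-dose step multiplies a measured body burden by a dose coefficient. The sketch below uses placeholder numbers, not the ICRP 67 coefficient, chosen only to land in the same order of magnitude as the abstract's estimates, and adds the geometric mean used to compare the three models.

```python
import math

def annual_internal_dose_mSv(body_burden_bq, coeff_mSv_per_bq_year):
    """Annual internal effective dose from a measured whole-body burden.
    The coefficient here is a placeholder for illustration, NOT an ICRP value."""
    return body_burden_bq * coeff_mSv_per_bq_year

def geometric_mean(values):
    """Geometric mean, as used to summarize each model's dose estimates."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

burden = 2000.0   # Bq of 137Cs from whole-body counting (hypothetical)
coeff = 8.0e-5    # mSv per Bq-year (placeholder magnitude)
dose = annual_internal_dose_mSv(burden, coeff)
print(f"{dose:.3f} mSv/y")
print(round(geometric_mean([0.183, 0.165, 0.149]), 3))  # the abstract's three estimates
```

Comparing geometric rather than arithmetic means is the natural choice here because internal dose distributions in exposed populations are typically log-normal.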

  8. The utility target market model

    International Nuclear Information System (INIS)

    Leng, G.J.; Martin, J.

    1994-01-01

    A new model (the Utility Target Market Model) is used to evaluate the economic benefits of photovoltaic (PV) power systems located at the electrical utility customer's site. These distributed PV demand-side generation systems can be evaluated in a manner similar to other demand-side management technologies. The energy and capacity values of an actual PV system located in the service area of the New England Electric System (NEES) are the two utility benefits evaluated. The annual stream of energy and capacity benefits calculated for the utility is converted to the installed cost per watt that the utility should be willing to invest to receive this benefit stream. Different discount rates are used to show the sensitivity of the allowable installed cost of the PV systems to a utility's average cost of capital. Because these relatively environmentally friendly distributed generators provide both energy and capacity benefits, NEES should be willing to invest in this technology when the installed cost declines to ca. $2.40/W using NEES' rated cost of capital (8.78%). If a social discount rate of 3% is used, installation should be considered when the installed cost approaches $4.70/W. Since recent installations in the Sacramento Municipal Utility District have cost between $7 and $8/W, cost-effective utility applications of PV are close. 22 refs., 1 fig., 2 tabs
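The conversion from an annual benefit stream to an allowable installed cost per watt is a present-value (annuity) calculation. The sketch below assumes a level benefit stream and a 30-year life; the $/W/y figure is back-calculated to roughly reproduce the abstract's numbers and is not taken from the study.

```python
def allowable_cost_per_watt(annual_benefit_per_watt, discount_rate, years):
    """Present value of a level annual benefit stream ($/W/y): the most a
    utility should be willing to invest per installed watt."""
    r = discount_rate
    annuity_factor = (1 - (1 + r) ** -years) / r
    return annual_benefit_per_watt * annuity_factor

benefit = 0.24  # $/W/y of combined energy + capacity value (assumed)
at_cost_of_capital = allowable_cost_per_watt(benefit, 0.0878, 30)  # NEES rate
at_social_rate = allowable_cost_per_watt(benefit, 0.03, 30)        # social rate
print(f"${at_cost_of_capital:.2f}/W at 8.78%, ${at_social_rate:.2f}/W at 3%")
```

The spread between the two results shows why the choice of discount rate dominates the investment signal: the same physical benefit stream is worth nearly twice as much per watt at a 3% social rate as at the utility's 8.78% cost of capital.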

  9. Year rather than farming system influences protein utilization and energy value of vegetables when measured in a rat model.

    Science.gov (United States)

    Jørgensen, Henry; Brandt, Kirsten; Lauridsen, Charlotte

    2008-12-01

    The aim of the study was to measure the protein utilization and energy value of dried apple, carrot, kale, pea, and potato prepared for human consumption and grown in 2 consecutive years under 3 different farming systems: (1) low input of fertilizer without pesticides (LIminusP), (2) low input of fertilizers and high input of pesticides (LIplusP), and (3) high input of fertilizers and high input of pesticides (HIplusP). In addition, the study aimed to verify the nutritional values while taking the physiologic state of the animals into consideration. In experiment 1, the nutritive values, including the protein digestibility-corrected amino acid score, were determined for single ingredients in trials with young rats (3-4 weeks), as recommended by the Food and Agriculture Organization of the United Nations/World Health Organization for all age groups. A second experiment was carried out with adult rats to assess the usefulness of digestibility values for predicting the digestibility and nutritive value of mixed diets and to study the age aspect. Each plant material was included in the diet with protein-free basal mixtures or casein to contain 10% dietary protein. The results showed that variations in protein utilization and energy value determined on single ingredients between cultivation strategies were inconsistent and smaller than those between harvest years. Overall, dietary crude fiber was negatively correlated with energy digestibility. The energy values of apple, kale, and pea were lower than expected from literature values. A mixture of plant ingredients fed to adult rats showed lower protein digestibility and higher energy digestibility than predicted. Using protein digestibility data obtained with young rats in the calculation of the protein digestibility-corrected amino acid score overestimates protein digestibility and quality, and underestimates energy value, for mature rats. The present study provides new data on protein utilization and energy digestibility of some typical plant foods that may

  10. Use of Coral Microatolls and a Tide Model to Measure Coseismic Vertical Deformation: Potential Utility and Common Mistakes

    Science.gov (United States)

    Meltzner, A. J.

    2007-12-01

    In the past few years, several great (M > 8) subduction megathrust ruptures have occurred beneath tropical seas and their fringing coral reefs. As predicted by elastic-dislocation theory, coastlines above the rupture patch rose, and adjacent regions subsided. Several investigators have used emerged or submerged coastal features to document land-level changes associated with these events. Unfortunately, when referencing these measurements to their high- or low-tide datums, some have overlooked the fact that both high and low tide levels can vary by more than a meter in some regions. In locations where tides are semidiurnal, a measured local high-tide value may be either the lower or higher high tide of the day, and a measured low-tide value may be either the higher or lower low tide. Furthermore, a measured high or low tide may fall anywhere between the fortnightly spring and neap tides. Finally, even the elevations of spring and neap tides vary from month to month. One must know these variations to properly reference geological measurements. Some researchers have also made questionable assumptions about how the geological features they measured relate to tidal levels. As a result, published uplift or subsidence values in some studies may have errors of a meter or more, despite stated uncertainties of a few centimeters or less. A new approach, highlighted below, couples geological observations with a tide model to dramatically reduce uncertainties and produce more accurate estimates of uplift or subsidence. The upward growth of coral microatolls is controlled by low tide. Off the west coast of northern Sumatra, the highest level of survival (HLS) of Porites microatolls is typically ~5 cm above annual low tide, but this differs for other genera and may differ in other regions. 
A comparison of pre- and post-earthquake HLS on a microatoll is the best method for documenting coseismic uplift; however, in cases where an entire reef was killed (and post-earthquake HLS
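The HLS-based uplift calculation described above can be sketched in a few lines. The elevations are invented for illustration; the ~5 cm Porites offset comes from the abstract and is explicitly genus- and region-specific.

```python
# Coseismic uplift from coral microatoll highest level of survival (HLS),
# referenced to a modeled annual-low-tide datum rather than a single
# observed tide. All elevations in metres; the numbers are illustrative.
HLS_OFFSET = 0.05  # Porites HLS sits ~5 cm above annual low tide (per abstract)

def uplift_from_hls(pre_hls, post_hls):
    """Pre- minus post-earthquake HLS elevation: a positive value means the
    reef rose and die-down lowered the level at which coral can survive."""
    return pre_hls - post_hls

def hls_to_annual_low_tide(hls_elevation):
    """Reference an HLS elevation to the annual-low-tide datum."""
    return hls_elevation - HLS_OFFSET

print(uplift_from_hls(1.20, 0.75))  # 0.45 m of coseismic uplift
```

Because both HLS values are tied to the same tide-model datum, the differencing cancels most of the tidal ambiguity that plagues single high- or low-tide observations.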

  11. Electroencephalographic topography measures of experienced utility.

    Science.gov (United States)

    Pedroni, Andreas; Langer, Nicolas; Koenig, Thomas; Allemand, Michael; Jäncke, Lutz

    2011-07-20

    Economic theory distinguishes two concepts of utility: decision utility, objectively quantifiable by choices, and experienced utility, referring to the satisfaction derived from an obtained outcome. To date, experienced utility has typically been measured with subjective ratings. This study aimed to quantify experienced utility by global levels of neuronal activity. Neuronal activity was measured by means of electroencephalographic (EEG) responses to gain and omission of graded monetary rewards at the level of the EEG topography in human subjects. A novel analysis approach allowed us to approximate psychophysiological value functions for the experienced utility of monetary rewards. In addition, we identified the time windows of the event-related potentials (ERP), and the respective intracortical sources, in which variations in neuronal activity were significantly related to the value or valence of outcomes. The results indicate that value functions of experienced utility and regret increase disproportionally with monetary value, and thus contradict the compressive value functions of decision utility. The temporal pattern of outcome evaluation suggests an initial (∼250 ms) coarse evaluation of valence, concurrent with a finer-grained evaluation of the value of gained rewards, whereas the evaluation of the value of omitted rewards emerges later. We hypothesize that this temporal double dissociation is explained by reward prediction errors. Finally, a late, previously unreported, reward-sensitive ERP topography (∼500 ms) was identified. The sources of these topographical covariations were estimated in the ventromedial prefrontal cortex, the medial frontal gyrus, the anterior and posterior cingulate cortex, and the hippocampus/amygdala. The results provide important new evidence regarding "how," "when," and "where" the brain evaluates outcomes with different hedonic impact.

  12. Clinical utility of measures of breathlessness.

    Science.gov (United States)

    Cullen, Deborah L; Rodak, Bernadette

    2002-09-01

    The clinical utility of measures of dyspnea has been debated in the health care community. Although breathlessness can be evaluated with various instruments, the most effective dyspnea measurement tool for patients with chronic lung disease, or for measuring treatment effectiveness, remains uncertain. Understanding the evidence for the validity and reliability of these instruments may provide a basis for appropriate clinical application. The objective was to evaluate instruments designed to measure breathlessness, either as single-symptom or multidimensional instruments, on psychometric foundations such as validity, reliability, and discriminative and evaluative properties, and to recommend a clinical application for each dyspnea measurement instrument in terms of exercise, benchmarking patients, activities of daily living, patient outcomes, clinical trials, and responsiveness to treatment. Eleven dyspnea measurement instruments were selected. Each instrument was assessed as discriminative or evaluative and then analyzed as to its psychometric properties and purpose of design. Descriptive data from all studies were described according to their primary patient application (ie, chronic obstructive pulmonary disease, asthma, or other patient populations). The Borg Scale and the Visual Analogue Scale are applicable to exertion and thus can be applied to any cardiopulmonary patient to determine dyspnea. All other measures were determined appropriate for chronic obstructive pulmonary disease, whereas the Shortness of Breath Questionnaire can also be applied to cystic fibrosis and lung transplant patients. The most appropriate use for all instruments was measuring effects on activities of daily living and benchmarking patient progress. Instruments that quantify function and health-related quality of life have great utility for documenting outcomes but may be limited in documenting treatment responsiveness in terms of clinically important changes. The dyspnea

  13. Cyberspace Assurance Metrics: Utilizing Models of Networks, Complex Systems Theory, Multidimensional Wavelet Analysis, and Generalized Entropy Measures

    National Research Council Canada - National Science Library

    Johnson, Joseph E; Gudkov, Vladimir

    2005-01-01

    ... as continuous group theory and Markov processes. Based upon this research he has proposed that entropy metrics, and the associated cluster analysis of the network so measured by these metrics, can be useful indicators of aberrant processes and behavior. Other team members have obtained important connections using higher order Renyi entropy metrics, and complexity theory to both monitor real networks and to study networks by simulation.
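A minimal sketch of the Renyi entropy metric mentioned above, applied to a toy network's degree distribution. The edge list and the choice of distribution are illustrative, not the project's actual network metrics.

```python
import math
from collections import Counter

def renyi_entropy(probabilities, alpha):
    """Renyi entropy of order alpha (in bits); alpha -> 1 recovers the
    Shannon entropy as a limiting case."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(p * math.log2(p) for p in probabilities if p > 0)
    return math.log2(sum(p ** alpha for p in probabilities)) / (1 - alpha)

# Probability distribution from a toy network's node degrees:
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4)]
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
total = sum(degree.values())
probs = [d / total for d in degree.values()]
for a in (0.5, 1.0, 2.0):
    print(a, round(renyi_entropy(probs, a), 3))
```

Renyi entropy is non-increasing in the order alpha, so tracking several orders at once (as the higher-order metrics in the abstract do) captures how concentrated the distribution is at different sensitivities to rare versus common events.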

  14. Neutron flux measurement utilizing Campbell technique

    International Nuclear Information System (INIS)

    Kropik, M.

    2000-01-01

    The application of the Campbell technique to neutron flux measurement is described in this contribution. The technique utilizes the AC component (noise) of a neutron chamber signal rather than the usually used DC component. The Campbell theorem, originally formulated to describe the noise behaviour of valves, states that the mean square of the AC component of the chamber signal is proportional to the neutron flux (reactor power). The quadratic dependence of the reactor power on the root-mean-square value usually makes it possible to cover the whole power range of the neutron flux measurement with a single channel. A further advantage of the Campbell technique is that the large pulses produced by neutrons are favoured over the small pulses produced by gamma rays in the ratio of their mean-square charge transfer; the technique therefore provides excellent gamma-ray discrimination in the current operational range of a neutron chamber. A neutron flux measurement channel using state-of-the-art components was designed and put into operation. Its linearity, accuracy, dynamic range, time response, and gamma discrimination were tested on the VR-1 nuclear reactor in Prague, and its behaviour under high neutron flux (accident conditions) was tested on the TRIGA nuclear reactor in Vienna. (author)
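Campbell's theorem can be illustrated with a toy simulation: Poisson-like pulse trains from neutrons and gammas, with each species contributing to the signal variance in proportion to its rate times the square of its pulse charge. The charges, rates, and bin width are assumptions for illustration, not chamber data.

```python
import random

def ac_variance(rate_n, rate_g, q_n=10.0, q_g=1.0, dt=1e-5, n_bins=200000, seed=7):
    """Toy chamber signal: in each time bin a neutron pulse (charge q_n)
    arrives with probability rate_n*dt and a gamma pulse (charge q_g) with
    probability rate_g*dt.  Campbell's theorem predicts an AC variance of
    roughly (rate_n*q_n**2 + rate_g*q_g**2) * dt per bin."""
    rng = random.Random(seed)
    sig = [q_n * (rng.random() < rate_n * dt) + q_g * (rng.random() < rate_g * dt)
           for _ in range(n_bins)]
    mean = sum(sig) / n_bins
    return sum((s - mean) ** 2 for s in sig) / n_bins

v1 = ac_variance(rate_n=1000.0, rate_g=1000.0)  # equal event rates
v2 = ac_variance(rate_n=2000.0, rate_g=1000.0)  # neutron rate doubled
# Neutrons are weighted by (q_n/q_g)**2 = 100, so doubling the neutron rate
# nearly doubles the variance even though the gamma rate is unchanged.
print(round(v2 / v1, 2))
```

This weighting by the square of the charge is exactly the mean-square discrimination the abstract describes: the gamma contribution is suppressed by two orders of magnitude relative to a simple current (DC) measurement.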

  15. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in the study of developing high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The procedure of a PHYSIC calculation consists of three steps: preparation of the relevant files, creation and submission of JCL, and graphic output of the results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  16. Deriving minimal models for resource utilization

    NARCIS (Netherlands)

    te Brinke, Steven; Bockisch, Christoph; Bergmans, Lodewijk; Malakuti Khah Olun Abadi, Somayeh; Aksit, Mehmet; Katz, Shmuel

    2013-01-01

    We show how compact Resource Utilization Models (RUMs) can be extracted from concrete overly-detailed models of systems or sub-systems in order to model energy-aware software. Using the Counterexample-Guided Abstraction Refinement (CEGAR) approach, along with model-checking tools, abstract models

  17. Risk measurement with equivalent utility principles

    NARCIS (Netherlands)

    Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.

    2006-01-01

    Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and properties that risk measures should satisfy have recently received considerable attention in the financial mathematics

  18. Continuous utility factor in segregation models.

    Science.gov (United States)

    Roy, Parna; Sen, Parongama

    2016-02-01

    We consider the constrained Schelling model of social segregation in which the utility factor of agents strictly increases and nonlocal jumps of the agents are allowed. In the present study, the utility factor u is defined in a way such that it can take continuous values and depends on the tolerance threshold as well as the fraction of unlike neighbors. Two models are proposed: in model A the jump probability is determined by the sign of u only, which makes it equivalent to the discrete model. In model B the actual values of u are considered. Model A and model B are shown to differ drastically as far as segregation behavior and phase transitions are concerned. In model A, although segregation can be achieved, the cluster sizes are rather small. Also, a frozen state is obtained in which steady states comprise many unsatisfied agents. In model B, segregated states with much larger cluster sizes are obtained. The correlation function is calculated to show quantitatively that larger clusters occur in model B. Moreover for model B, no frozen states exist even for very low dilution and small tolerance parameter. This is in contrast to the unconstrained discrete model considered earlier where agents can move even when utility remains the same. In addition, we also consider a few other dynamical aspects which have not been studied in segregation models earlier.
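The distinction between the two jump rules can be sketched as follows. The functional forms are a plausible reading of the abstract, not the paper's exact definitions: model A reacts only to the sign of the utility, model B to its magnitude.

```python
def utility(frac_unlike, tolerance):
    """Continuous utility in the spirit of the abstract: positive when the
    fraction of unlike neighbours is below the tolerance threshold, and
    scaling with how far below (or above) it is.  Illustrative form."""
    return tolerance - frac_unlike

def jump_probability(u, model):
    """Model A: only the sign of u matters (discrete-model behaviour).
    Model B: the actual magnitude of u sets the jump probability."""
    if model == "A":
        return 1.0 if u < 0 else 0.0
    return max(0.0, min(1.0, -u))  # model B, clipped to [0, 1]

# An agent with 60% unlike neighbours and tolerance 0.5 is unsatisfied:
u = utility(0.6, 0.5)
print(jump_probability(u, "A"), round(jump_probability(u, "B"), 2))
```

Under model A every unsatisfied agent jumps with certainty, while under model B a mildly unsatisfied agent moves only occasionally, which is one plausible mechanism behind the larger clusters and absence of frozen states reported for model B.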

  19. The linear utility model for optimal selection

    NARCIS (Netherlands)

    Mellenbergh, Gideon J.; van der Linden, Willem J.

    A linear utility model is introduced for optimal selection when several subpopulations of applicants are to be distinguished. Using this model, procedures are described for obtaining optimal cutting scores in subpopulations in quota-free as well as quota-restricted selection situations. The cutting

  20. Power Measurement Errors on a Utility Aircraft

    Science.gov (United States)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
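The regression step described above, fitting the power-difference errors against an atmospheric variable and then examining the residual scatter, can be sketched with ordinary least squares. The data points are invented for illustration and are not flight-test values.

```python
def linear_fit(x, y):
    """Ordinary least squares for y ~ a + b*x with a single regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical power-difference errors (arbitrary units) vs temperature (C):
temp = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
err = [2.1, 4.0, 6.2, 7.9, 10.1, 11.8]
a, b = linear_fit(temp, err)
residuals = [yi - (a + b * xi) for xi, yi in zip(temp, err)]
# Removing the temperature trend shrinks the scatter, mirroring the report's
# finding that correcting for temperature reduces the error variance.
print(round(b, 3), round(max(abs(r) for r in residuals), 3))
```

In the report's setting, residuals that remain structured after such a correction (e.g. drift within a test run) point to further unquantified variables, which is what motivates the recommended recalibration against the measured thermal environment.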

  1. Utility measurement in healthcare: the things I never got to.

    Science.gov (United States)

    Torrance, George W

    2006-01-01

    The present article provides a brief historical background on the development of utility measurement and cost-utility analysis in healthcare. It then outlines a number of research ideas in this field that the author never got to. The first idea is extremely fundamental. Why is health economics the only application of economics that does not use the discipline of economics? And, more importantly, what discipline should it use? Research ideas are discussed to investigate precisely the underlying theory and axiom systems of both Paretian welfare economics and the decision-theoretical utility approach. Can the two approaches be integrated or modified in some appropriate way so that they better reflect the needs of the health field? The investigation is described both for the individual and societal levels. Constructing a 'Robinson Crusoe' society of only a few individuals with different health needs, preferences and willingness to pay is suggested as a method for gaining insight into the problem. The second idea concerns the interval property of utilities and, therefore, QALYs. It specifically concerns the important requirement that changes of equal magnitude anywhere on the utility scale, or alternatively on the QALY scale, should be equally desirable. Unfortunately, one of the original restrictions on utility theory states that such comparisons are not permitted by the theory. It is shown, in an important new finding, that while this restriction applies in a world of certainty, it does not in a world of uncertainty, such as healthcare. Further research is suggested to investigate this property under both certainty and uncertainty. 
Other research ideas that are described include: the development of a precise axiomatic basis for the time trade-off method; the investigation of chaining as a method of preference measurement with the standard gamble or time trade-off; the development and training of a representative panel of the general public to improve the completeness
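The interval property discussed above is what makes QALY arithmetic meaningful. As a minimal sketch of the underlying mechanics (the numbers and function names are hypothetical, not the author's), a time trade-off response can be converted to a utility weight and then to QALYs:

```python
# Illustrative sketch only (not from the article): how a time trade-off (TTO)
# response becomes a utility, and how utilities become QALYs. The numbers
# are hypothetical.

def tto_utility(years_healthy: float, years_in_state: float) -> float:
    """TTO utility: the respondent is indifferent between `years_in_state`
    in the health state and `years_healthy` in full health, so u = x / t."""
    if years_in_state <= 0:
        raise ValueError("years_in_state must be positive")
    return years_healthy / years_in_state

def qalys(utility: float, duration_years: float) -> float:
    """QALYs accrued: utility weight times time spent in the state."""
    return utility * duration_years

u = tto_utility(years_healthy=6.0, years_in_state=10.0)
print(u, qalys(u, duration_years=10.0))  # 0.6 6.0
```

The interval property then amounts to the claim that a move from 0.4 to 0.5 on this scale is as desirable as a move from 0.8 to 0.9, which is precisely what the abstract argues holds under uncertainty.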

  2. The Utility of Ada for Army Modeling

    Science.gov (United States)

    1990-04-10

    "Ada" was named for Ada Lovelace (1815-1852), a mathematician who worked with Charles Babbage on his difference and analytical engines. Later in 1979, the HOLWG... The Utility of Ada for Army Modeling, by Colonel Michael L. Yocom; Individual Study Project. Distribution Statement A: approved for public release; distribution is unlimited.

  3. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
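The decompose-then-model idea can be sketched in a few lines. The code below is a hedged illustration, not the authors' implementation: a seasonal-mean decomposition stands in for STL, an AR(1) fit stands in for ARIMA, and the "SNMP path utilization" series is synthetic.

```python
import numpy as np

# Hedged sketch of the forecasting idea (not the authors' code): decompose a
# seasonal utilization series into a seasonal component plus remainder (a
# simple seasonal-mean stand-in for STL), fit an AR(1) model to the remainder
# (a minimal stand-in for ARIMA), and forecast one step ahead.

def seasonal_means(y, period):
    """Average value at each phase of the cycle (crude STL stand-in)."""
    return np.array([y[p::period].mean() for p in range(period)])

def fit_ar1(resid):
    """Least-squares AR(1) coefficient: r[t] ~ phi * r[t-1]."""
    x, z = resid[:-1], resid[1:]
    return float(np.dot(x, z) / np.dot(x, x))

def forecast_next(y, period):
    """One-step-ahead forecast: seasonal level plus AR(1) remainder."""
    seas = seasonal_means(y, period)
    resid = y - np.tile(seas, len(y) // period)
    phi = fit_ar1(resid)
    return seas[len(y) % period] + phi * resid[-1]

# Ten days of hourly utilization (%) with a daily cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(240)
y = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, size=t.size)
f = forecast_next(y, 24)
print(round(f, 1))  # near the seasonal level for the next hour (~50)
```

In practice STL and ARIMA from a statistics package would replace both stand-ins; the structure (seasonal component plus a short-memory model on the remainder) is the point.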

  4. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  5. Characterizing QALYs under a General Rank Dependent Utility Model

    NARCIS (Netherlands)

    H. Bleichrodt (Han); J. Quiggin (John)

    1997-01-01

    textabstractThis paper provides a characterization of QALYs, the most important outcome measure in medical decision making, in the context of a general rank dependent utility model. We show that both for chronic and for nonchronic health states the characterization of QALYs depends on intuitive

  6. Development of a Neutron Spectroscopic System Utilizing Compressed Sensing Measurements

    Directory of Open Access Journals (Sweden)

    Vargas Danilo

    2016-01-01

    Full Text Available A new approach to neutron detection capable of gathering spectroscopic information has been demonstrated. The approach relies on an asymmetrical arrangement of materials, geometry, and an ability to change the orientation of the detector with respect to the neutron field. Measurements are used to unfold the energy characteristics of the neutron field using a new theoretical framework of compressed sensing. Recent theoretical results show that the number of multiplexed samples can be lower than the full number of traditional samples while providing the ability to have some super-resolution. Furthermore, the solution approach does not require a priori information or inclusion of physics models. Utilizing the MCNP code, a number of candidate detector geometries and materials were modeled. Simulations were carried out for a number of neutron energies and distributions with preselected orientations for the detector. The resulting matrix A consists of n rows associated with orientation and m columns associated with energy and distribution, where n < m. The library of known responses is used for new measurements Y (n × 1), and the solver is able to determine the system Y = Ax, where x is a sparse vector. Therefore, energy spectrum measurements are a combination of the energy distribution information of the identified elements of A. This approach allows for determination of neutron spectroscopic information using a single detector system with analog multiplexing. The analog multiplexing allows the use of a compressed sensing solution similar to approaches used in other areas of imaging. A single detector assembly provides improved flexibility and is expected to reduce uncertainty associated with current neutron spectroscopy measurements.
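A toy version of that unfolding step (recovering a sparse x from Y = Ax with fewer measurements than unknowns) can be sketched with orthogonal matching pursuit, a standard greedy solver for such systems. The tiny response matrix below is invented for illustration and is not an MCNP-derived library:

```python
import numpy as np

# Hedged sketch: orthogonal matching pursuit (OMP) recovers a sparse vector
# x from Y = A x with n < m. The matrix A here is a hand-made toy "response
# library", not the paper's MCNP-simulated one.

def omp(A, y, k):
    """Greedy OMP: select k columns of A that best explain y, re-fitting
    the coefficients by least squares after each selection."""
    residual, support, coef = y.astype(float).copy(), [], np.array([])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy library: 4 measurements (rows: detector orientations),
# 6 candidate energy/distribution signatures (columns), so n < m.
A = np.array([[1, 0, 0, 0, 0.5,  0.5],
              [0, 1, 0, 0, 0.5, -0.5],
              [0, 0, 1, 0, 0.5,  0.5],
              [0, 0, 0, 1, 0.5, -0.5]])
x_true = np.array([1.0, 0.7, 0, 0, 0, 0])   # 2-sparse "spectrum"
y = A @ x_true                               # measured counts
x_hat = omp(A, y, k=2)
print(np.allclose(x_hat, x_true))  # True
```

The sparsity level k plays the same role as the small number of contributing energy/distribution elements in the paper's formulation.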

  7. Knowledge and utilization of preventive measures in the control of ...

    African Journals Online (AJOL)

    Background: The burden of neonatal malaria remains a major public health problem in Nigeria, yet it receives little attention. Knowledge and awareness of preventive measures against neonatal malaria are still very low. This study aimed at assessing the knowledge and utilization of preventive measures in the control of neonatal ...

  8. Modeling utilization distributions in space and time

    Science.gov (United States)

    Keating, K.A.; Cherry, S.

    2009-01-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
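The wrapped Cauchy kernel mentioned in the abstract has a simple closed form. The sketch below (illustrative, not the authors' code) shows it used as the circular temporal factor of a kernel density estimate; the observation days are made up:

```python
import numpy as np

# Illustrative sketch (not the authors' code): a wrapped Cauchy kernel for
# circular temporal covariates such as day of year, usable as the temporal
# factor in a product-kernel UD estimate. rho in (0, 1) controls
# concentration (larger rho = narrower kernel).

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle, evaluated at angle theta."""
    return (1 - rho**2) / (
        2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - mu)))

def circular_kde(theta_grid, samples, rho=0.8):
    """Kernel density for circular data: average one kernel per observation."""
    return np.mean(
        [wrapped_cauchy(theta_grid, mu, rho) for mu in samples], axis=0)

# Hypothetical observation days of year, mapped to angles on the circle.
days = np.array([100.0, 105.0, 110.0])
angles = 2 * np.pi * days / 365.25
grid = np.linspace(0, 2 * np.pi, 366)
dens = circular_kde(grid, angles)

# A valid density: integrates to 1 over the circle (periodic Riemann sum).
integral = dens[:-1].sum() * (grid[1] - grid[0])
print(round(integral, 6))  # 1.0
```

In the full product-kernel model this temporal factor would be multiplied by ordinary spatial kernels in x and y.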

  9. Animal Models Utilized in HTLV-1 Research

    Directory of Open Access Journals (Sweden)

    Amanda R. Panfil

    2013-01-01

    Full Text Available Since the isolation and discovery of human T-cell leukemia virus type 1 (HTLV-1 over 30 years ago, researchers have utilized animal models to study HTLV-1 transmission, viral persistence, virus-elicited immune responses, and HTLV-1-associated disease development (ATL, HAM/TSP. Non-human primates, rabbits, rats, and mice have all been used to help understand HTLV-1 biology and disease progression. Non-human primates offer a model system that is phylogenetically similar to humans for examining viral persistence. Viral transmission, persistence, and immune responses have been widely studied using New Zealand White rabbits. The advent of molecular clones of HTLV-1 has offered the opportunity to assess the importance of various viral genes in rabbits, non-human primates, and mice. Additionally, over-expression of viral genes using transgenic mice has helped uncover the importance of Tax and Hbz in the induction of lymphoma and other lymphocyte-mediated diseases. HTLV-1 inoculation of certain strains of rats results in histopathological features and clinical symptoms similar to that of humans with HAM/TSP. Transplantation of certain types of ATL cell lines in immunocompromised mice results in lymphoma. Recently, “humanized” mice have been used to model ATL development for the first time. Not all HTLV-1 animal models develop disease and those that do vary in consistency depending on the type of monkey, strain of rat, or even type of ATL cell line used. However, the progress made using animal models cannot be understated as it has led to insights into the mechanisms regulating viral replication, viral persistence, disease development, and, most importantly, model systems to test disease treatments.

  10. Utility models as a business inspiration

    Directory of Open Access Journals (Sweden)

    Dlask, Petr

    2016-06-01

    Full Text Available Nowadays, there are many possibilities and conditions for individual business ideas. At first glance, it may seem that restrictions come from not getting the necessary funds. The fact is that funds are only a secondary issue. The primary one is the quality of the business idea. If the idea is good and sufficiently inventive, raising the necessary investment resources is not a problem. Utility models registered through the Industrial Property Office present a broad potential for invention. The authors offer an insight into the submitted innovative practices associated with traditional building materials, such as stone. Its use in construction has a long-standing tradition, and new manufacturing and processing methods offer new opportunities for unconventional business.

  11. Awareness and utilization of abattoir safety measures in Katsina ...

    African Journals Online (AJOL)

    The study assessed utilization of abattoir safety measures in Katsina South and Central senatorial districts, Nigeria. Information was obtained from a total of 80 abattoir workers in each district, while frequency counts, percentages and independent sample t-test were used to analyze data. The majority, in the respective ...

  12. A New Preference Reversal in Health Utility Measurement

    NARCIS (Netherlands)

    H. Bleichrodt (Han); J.L. Pinto (Jose Luis)

    2007-01-01

    textabstractA central assumption in health utility measurement is that preferences are invariant to the elicitation method that is used. This assumption is challenged by preference reversals. Previous studies have observed preference reversals between choice and matching tasks and between choice and

  13. Modeling regulated water utility investment incentives

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2014-12-01

    This work attempts to model the infrastructure investment choices of privatized water utilities subject to rate-of-return and price-cap regulation. The goal is to understand how regulation influences water companies' investment decisions, such as their desire to engage in transfers with neighbouring companies. We formulate a profit-maximization capacity expansion model that finds the schedule of new supply, demand management and transfer schemes that maintain the annual supply-demand balance and maximize a company's profit under the 2010-15 price control process in England. Regulatory incentives for cost savings are also represented in the model. These include the CIS scheme for capital expenditure (capex) and incentive allowance schemes for operating expenditure (opex). The profit-maximizing investment program (what to build, when and what size) is compared with the least-cost program (social optimum). We apply this formulation to several water companies in South East England to model performance and sensitivity to water network particulars. Results show that if companies are able to outperform the regulatory assumption on the cost of capital, a capital bias can be generated, due to the fact that capital expenditure, unlike opex, can be remunerated through the companies' regulatory capital value (RCV). The occurrence of the 'capital bias', and its magnitude, depends on the extent to which a company can finance its investments at a rate below the allowed cost of capital. The bias can be reduced by regulatory penalties for underperformance on capital expenditure (the CIS scheme); sensitivity analysis can be applied by varying the CIS penalty to see how, and to what extent, this impacts the capital bias effect. We show how regulatory changes could potentially be devised to partially remove the 'capital bias' effect. Solutions potentially include allowing for incentives on total expenditure rather than separately for capex and opex and allowing

  14. Utilization of Multispectral Images for Meat Color Measurements

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Lindbjerg; Carstensen, Jens Michael

    2013-01-01

    This short paper describes how the use of multispectral imaging for color measurement can be utilized in an efficient and descriptive way by meat scientists. The basis of the study is meat color measurements performed with a multispectral imaging system as well as with a standard colorimeter. It is described how different color spaces can enhance the purpose of the analysis, whether that is investigation of a single sample or a comparison between samples. Moreover, the study describes how a simple segmentation can be applied to the multispectral images in order to reach a more descriptive measure of color and color variance than what is obtained by the standard colorimeter.

  15. The Health Utilities Index (HUI®): concepts, measurement properties and applications

    Directory of Open Access Journals (Sweden)

    Horsman John

    2003-10-01

    Full Text Available Abstract This is a review of the Health Utilities Index (HUI®) multi-attribute health-status classification systems, and single- and multi-attribute utility scoring systems. HUI refers to both the HUI Mark 2 (HUI2) and HUI Mark 3 (HUI3) instruments. The classification systems provide compact but comprehensive frameworks within which to describe health status. The multi-attribute utility functions provide all the information required to calculate single-summary scores of health-related quality of life (HRQL) for each health state defined by the classification systems. The use of HUI in clinical studies for a wide variety of conditions in a large number of countries is illustrated. HUI provides comprehensive, reliable, responsive and valid measures of health status and HRQL for subjects in clinical studies. Utility scores of overall HRQL for patients are also used in cost-utility and cost-effectiveness analyses. Population norm data are available from numerous large general population surveys. The widespread use of HUI facilitates the interpretation of results and permits comparisons of disease and treatment outcomes, and comparisons of long-term sequelae at the local, national and international levels.
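To make the idea of a multi-attribute scoring function concrete, here is a generic additive sketch. All attribute names, level scores and weights below are invented for illustration; they are emphatically NOT the published HUI2/HUI3 scoring functions, which use multiplicative models with empirically estimated coefficients:

```python
# Hedged illustration only: a generic additive multi-attribute utility
# sketch. Every number below is hypothetical; the real HUI2/HUI3 scoring
# functions are multiplicative and use published, empirically derived
# coefficients.

ATTRIBUTE_SCORES = {          # single-attribute utilities per level (made up)
    "vision":     [1.00, 0.95, 0.80, 0.55],
    "ambulation": [1.00, 0.90, 0.70, 0.40],
    "pain":       [1.00, 0.92, 0.75, 0.50],
}
WEIGHTS = {"vision": 0.3, "ambulation": 0.4, "pain": 0.3}  # sum to 1

def additive_utility(levels: dict) -> float:
    """Weighted sum of single-attribute utilities for one health state."""
    return sum(WEIGHTS[a] * ATTRIBUTE_SCORES[a][lvl]
               for a, lvl in levels.items())

state = {"vision": 1, "ambulation": 2, "pain": 0}  # levels are 0-indexed
print(round(additive_utility(state), 3))  # 0.865
```

The actual HUI instruments map each described health state to such a single-summary score, which then feeds QALY and cost-utility calculations.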

  16. Modelling of biomass utilization for energy purpose

    Energy Technology Data Exchange (ETDEWEB)

    Grzybek, Anna (ed.)

    2010-07-01

    the overall farms structure, farms land distribution on several separate subfields for one farm, villages' overpopulation and very high employment in agriculture (about 27% of all employees in the national economy work in agriculture). Farmers have a low education level. In towns, 34% of the population has secondary education; in rural areas, only 15-16%. Less than 2% of inhabitants of rural areas have higher education. The structure of land use is as follows: arable land 11.5%, meadows and pastures 25.4%, forests 30.1%. Poland requires implementation of technical and technological progress to intensify agricultural production. The reason for competition for agricultural land is maintenance of the current consumption level alongside allocation of part of agricultural production for energy purposes. Agricultural land is going to be a key factor for biofuels production. In this publication, research results for the Project PL0073 'Modelling of energetical biomass utilization for energy purposes' are presented. The Project was financed from the Norwegian Financial Mechanism and the European Economic Area Financial Mechanism. The publication aims to bring the reader closer to, and explain, problems connected with the cultivation of energy plants, and to dispel myths concerning these problems. Replacing fossil fuels with biomass for heat and electric energy production could contribute significantly to reducing carbon dioxide emissions. Moreover, biomass crops and biomass utilization for energy purposes play an important role in diversifying agricultural production as rural areas are transformed. Widening agricultural production enables the creation of new jobs. Sustainable development is going to be the fundamental rule for the evolution of Polish agriculture in the long term. Biomass utilization for energy fits well within this evolution, especially at the local level. There are two facts. The first one is that increase of interest in energy crops in Poland

  17. Measuring Disclosure Risk and Data Utility for Flexible Table Generators

    Directory of Open Access Journals (Sweden)

    Shlomo Natalie

    2015-06-01

    Full Text Available Statistical agencies are making increased use of the internet to disseminate census tabular outputs through web-based flexible table-generating servers that allow users to define and generate their own tables. The key questions in the development of these servers are: (1) what data should be used to generate the tables, and (2) what statistical disclosure control (SDC) method should be applied. To generate flexible tables, the server has to be able to measure the disclosure risk in the final output table, apply the SDC method and then iteratively reassess the disclosure risk. SDC methods may be applied either to the underlying data used to generate the tables and/or to the final output table that is generated from the original data. Besides assessing disclosure risk, the server should provide a measure of data utility by comparing the perturbed table to the original table. In this article, we examine aspects of the design and development of a flexible table-generating server for census tables and demonstrate a disclosure risk-data utility analysis for comparing SDC methods. We propose measures for disclosure risk and data utility that are based on information theory.
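The risk-utility comparison can be sketched with simple stand-in measures. The two functions below (Hellinger distance as a utility-loss measure, share of small cells as a risk proxy) are common choices in this literature but are assumptions here, not the article's own proposed measures, and the toy table is invented:

```python
import numpy as np

# Hedged sketch: one information-theoretic utility-loss measure (Hellinger
# distance between original and perturbed cell distributions) paired with a
# simple disclosure-risk proxy (share of small nonzero cells). These are
# stand-ins chosen for illustration, not the article's proposed measures.

def hellinger(p_counts, q_counts):
    """Hellinger distance between two tables viewed as distributions."""
    p = p_counts / p_counts.sum()
    q = q_counts / q_counts.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def small_cell_risk(counts, threshold=2):
    """Fraction of nonzero cells at or below the threshold (risky cells)."""
    nonzero = counts[counts > 0]
    return float((nonzero <= threshold).sum()) / len(nonzero)

original  = np.array([10, 1, 25, 2, 60, 3], dtype=float)
perturbed = np.array([11, 0, 24, 3, 59, 3], dtype=float)  # e.g. after noise

print(round(small_cell_risk(original), 3))      # 0.333 (2 of 6 cells <= 2)
print(round(hellinger(original, perturbed), 3)) # utility loss, 0 = identical
```

An SDC method is then judged by how far it pushes risk down per unit of utility lost, which is the trade-off analysis the article demonstrates.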

  18. Mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

    Definition of the radio channel utilization indicator is given. Mathematical models for radio channels utilization assessment by real-time flows transfer in the wireless self-organized network are presented. Estimated experiments results according to the average radio channel utilization productivity with and without buffering of ...

  19. Measuring and modelling concurrency

    Science.gov (United States)

    Sawers, Larry

    2013-01-01

    This article explores three critical topics discussed in the recent debate over concurrency (overlapping sexual partnerships): measurement of the prevalence of concurrency, mathematical modelling of concurrency and HIV epidemic dynamics, and measuring the correlation between HIV and concurrency. The focus of the article is the concurrency hypothesis – the proposition that presumed high prevalence of concurrency explains sub-Saharan Africa's exceptionally high HIV prevalence. Recent surveys using improved questionnaire design show reported concurrency ranging from 0.8% to 7.6% in the region. Even after adjusting for plausible levels of reporting errors, appropriately parameterized sexual network models of HIV epidemics do not generate sustainable epidemic trajectories (avoid epidemic extinction) at levels of concurrency found in recent surveys in sub-Saharan Africa. Efforts to support the concurrency hypothesis with a statistical correlation between HIV incidence and concurrency prevalence are not yet successful. Two decades of efforts to find evidence in support of the concurrency hypothesis have failed to build a convincing case. PMID:23406964

  20. Development of the multi-attribute Adolescent Health Utility Measure (AHUM)

    Directory of Open Access Journals (Sweden)

    Beusterien Kathleen M

    2012-08-01

    Full Text Available Abstract Objective Obtain utilities (preferences) for a generalizable set of health states experienced by older children and adolescents who receive therapy for chronic health conditions. Methods A health state classification system, the Adolescent Health Utility Measure (AHUM), was developed based on generic health status measures and input from children with Hunter syndrome and their caregivers. The AHUM contains six dimensions with 4–7 severity levels: self-care, pain, mobility, strenuous activities, self-image, and health perceptions. Using the time trade-off (TTO) approach, a UK population sample provided utilities for 62 of the 16,800 AHUM states. A mixed effects model was used to estimate utilities for the AHUM states. The AHUM was applied to trial NCT00069641 of idursulfase for Hunter syndrome and its extension (NCT00630747). Results Observations (i.e., utilities) totaled 3,744 (12 × 312 participants), with between 43 and 60 for each health state except for the best and worst states, which had 312 observations. The mean utilities for the best and worst AHUM states were 0.99 and 0.41, respectively. The random effects model was statistically significant (p < …). Discussion The AHUM health state classification system may be used in future research to enable calculation of quality-adjusted life expectancy for applicable health conditions.

  1. Quality measurement affecting surgical practice: Utility versus utopia.

    Science.gov (United States)

    Henry, Leonard R; von Holzen, Urs W; Minarich, Michael J; Hardy, Ashley N; Beachy, Wilbur A; Franger, M Susan; Schwarz, Roderich E

    2018-03-01

    The Triple Aim (improving healthcare quality, cost and patient experience) has resulted in massive healthcare "quality" measurement. For many surgeons, the origins, intent and strengths of this measurement barrage seem nebulous, though its shortcomings are noticeable. This article reviews the major organizations and programs (namely the Centers for Medicare and Medicaid Services) driving the somewhat burdensome healthcare quality climate. The success of this top-down approach is mixed, and far from convincing. We contend that the current programs disproportionately reflect the definitions of quality from (and the interests of) the national payer perspective, rather than a more balanced representation of all stakeholders' interests, most importantly, patients' beneficence. The result is an environment more like performance management than one of valid quality assessment. Suggestions for a more meaningful construction of surgical quality measurement are offered, as well as a strategy to describe surgical quality from all of the stakeholders' perspectives. Our hope is to entice surgeons to engage in institution-level quality improvement initiatives that promise utility and are less utopian than what is currently present. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of trusted measurement supporting behavior measurement, based on the trusted connection architecture (TCA) with three entities and three levels, is proposed, and a framework to illustrate the model is given. The model synthesizes three trusted measurement dimensions (trusted identity, trusted status and trusted behavior), satisfies the essential requirements of trusted measurement, and unifies the TCA with three entities and three levels.

  3. A note on additive risk measures in rank-dependent utility

    NARCIS (Netherlands)

    Goovaerts, M.J.; Kaas, R.; Laeven, R.J.A.

    2010-01-01

    This note proves that risk measures obtained by applying the equivalent utility principle in rank-dependent utility are additive if and only if the utility function is linear or exponential and the probability weighting (distortion) function is the identity.
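The additivity claim can be checked numerically. The sketch below is an illustration of the exponential-utility case (with identity distortion, so plain expected utility), not code from the note; the discrete losses are made up:

```python
import math

# Hedged numerical check (not from the note): under exponential utility the
# equivalent-utility premium is H(X) = (1/a) * log E[exp(a*X)], and for
# independent risks X and Y it is additive: H(X + Y) = H(X) + H(Y).

def exponential_premium(outcomes, probs, a=0.5):
    """Equivalent-utility premium of a discrete risk under exp utility."""
    mgf = sum(p * math.exp(a * x) for x, p in zip(outcomes, probs))
    return math.log(mgf) / a

# Two independent discrete losses (hypothetical).
x_out, x_pr = [0.0, 1.0], [0.5, 0.5]
y_out, y_pr = [0.0, 2.0], [0.75, 0.25]

# Distribution of the independent sum X + Y.
s_out = [x + y for x in x_out for y in y_out]
s_pr  = [px * py for px in x_pr for py in y_pr]

hx = exponential_premium(x_out, x_pr)
hy = exponential_premium(y_out, y_pr)
hs = exponential_premium(s_out, s_pr)
print(abs(hx + hy - hs) < 1e-12)  # True: additive for independent risks
```

The note's result is the converse direction: within rank-dependent utility, additivity forces exactly this exponential (or linear) utility together with the identity weighting function.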

  4. Animal models of asthma: utility and limitations

    Directory of Open Access Journals (Sweden)

    Aun MV

    2017-11-01

    Full Text Available Marcelo Vivolo Aun,1,2 Rafael Bonamichi-Santos,1,2 Fernanda Magalhães Arantes-Costa,2 Jorge Kalil,1 Pedro Giavina-Bianchi1 1Clinical Immunology and Allergy Division, Department of Internal Medicine, University of São Paulo School of Medicine, São Paulo, Brazil; 2Laboratory of Experimental Therapeutics (LIM20), Department of Internal Medicine, University of São Paulo, São Paulo, Brazil. Abstract: Clinical studies in asthma are not able to clear up all aspects of disease pathophysiology. Animal models have been developed to better understand these mechanisms and to evaluate both the safety and the efficacy of therapies before clinical trials begin. Several species of animals have been used in experimental models of asthma, such as Drosophila, rats, guinea pigs, cats, dogs, pigs, primates and equines. However, the species most commonly studied in the last two decades is the mouse, particularly the BALB/c strain. Animal models of asthma try to mimic the pathophysiology of the human disease. They classically include two phases: sensitization and challenge. Sensitization is traditionally performed by the intraperitoneal and subcutaneous routes, but intranasal instillation of allergens has been increasingly used because human asthma is induced by inhalation of allergens. Challenges with allergens are performed through aerosol, intranasal or intratracheal instillation. However, few studies have compared different routes of sensitization and challenge. The causative allergen is another important issue in developing a good animal model. Despite being more traditional and leading to intense inflammation, ovalbumin has been replaced by aeroallergens, such as house dust mites, to use the allergens that cause human disease. Finally, researchers should define the outcomes to be evaluated, such as serum-specific antibodies, airway hyperresponsiveness, inflammation and remodeling. The present review analyzes the animal models of asthma, assessing differences between species, allergens and routes

  5. Orlistat for the treatment of obesity: cost utility model.

    Science.gov (United States)

    Foxcroft, D R

    2005-11-01

    This study aimed to assess the cost utility of orlistat treatment based on (i) criteria from recent guidance from the National Institute for Clinical Excellence (NICE) for England and Wales (treatment discontinued if weight loss < 5% at 3 months; and < 10% at 6 months); and (ii) alternative criteria from the European Agency for the Evaluation of Medicinal Products (EMEA) licence for orlistat prescription in the European Community (treatment discontinued if weight loss < 5% at 3 months). Subjects were 1398 obese individuals who participated in three large European Phase III trials of orlistat treatment for adults (BMI: 28-47 kg m(-2)). Measures were: response to treatment in orlistat and placebo treatment groups; health benefit expressed as quality adjusted life years (QALYs) gained associated with weight loss; costs associated with orlistat treatment. In the cost utility model with multiway sensitivity analysis, the cost/QALY gained using the NICE criteria was estimated to be 24,431 pounds (sensitivity analysis range: 10,856 to 77,197 pounds). The cost/QALY gained using the alternative EMEA criteria was estimated to be 19,005 pounds (range: 8,840 to 57,798 pounds). In conclusion, NICE guidance for the continued use of orlistat was supported in this updated cost utility model, comparing favourably with a previously published estimate of 45,881 pounds per QALY gained. Moreover, the value for money of orlistat treatment is improved further if EMEA treatment criteria for continued orlistat treatment are applied. The EMEA criteria should be considered in any future changes to the NICE guidance or in guidance issued by similar agencies.
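As a reminder of the arithmetic behind such figures, a cost-utility ratio is simply incremental cost divided by incremental QALYs gained versus the comparator. The numbers in this sketch are hypothetical, chosen only to land near the magnitude reported above; they are not the study's patient-level data:

```python
# Illustrative arithmetic only (hypothetical numbers, not the study's data):
# a cost-utility ratio is the incremental cost divided by the incremental
# QALYs gained versus the comparator.

def cost_per_qaly(extra_cost: float, qalys_gained: float) -> float:
    """Incremental cost-utility ratio (cost per QALY gained)."""
    if qalys_gained <= 0:
        raise ValueError("no QALY gain: the ratio is undefined")
    return extra_cost / qalys_gained

# e.g. a treatment adding 610 pounds in cost and 0.025 QALYs per patient:
print(round(cost_per_qaly(610.0, 0.025)))  # 24400 (pounds per QALY gained)
```

Sensitivity analysis of the kind reported above then amounts to recomputing this ratio over plausible ranges of both inputs.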

  6. Utility of Social Modeling for Proliferation Assessment - Preliminary Findings

    International Nuclear Information System (INIS)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-01-01

    Often the methodologies for assessing proliferation risk focus on the inherent vulnerability of nuclear energy systems and associated safeguards. For example, an accepted approach involves ways to measure the intrinsic and extrinsic barriers to potential proliferation. This paper describes a preliminary investigation into non-traditional use of social and cultural information to improve proliferation assessment and advance the approach to assessing nuclear material diversion. Proliferation resistance assessments, safeguards assessments and related studies typically create technical information about the vulnerability of a nuclear energy system to diversion of nuclear material. The purpose of this research project is to find ways to integrate social information with technical information by explicitly considering the role of culture, groups and/or individuals in factors that impact the possibility of proliferation. When complete, this work is expected to describe and demonstrate the utility of social science modeling in proliferation and proliferation risk assessments.

  7. Environmental Measurements and Modeling

    Science.gov (United States)

    Environmental measurement is any data collection activity involving the assessment of chemical, physical, or biological factors in the environment that affect human health. Learn more about these programs and tools that aid in environmental decisions.

  8. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and its utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application.
To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  9. A mangrove creek restoration plan utilizing hydraulic modeling.

    Science.gov (United States)

    Marois, Darryl E; Mitsch, William J

    2017-11-01

Despite the valuable ecosystem services provided by mangrove ecosystems, they remain threatened around the globe. Urban development has been a primary cause of mangrove destruction and deterioration in south Florida, USA, for the last several decades. As a result, the restoration of mangrove forests has become an important topic of research. Using field sampling and remote sensing, we assessed the past and present hydrologic conditions of a mangrove creek and its connected mangrove forest and brackish marsh systems located on the coast of Naples Bay in southwest Florida. We concluded that the hydrology of these connected systems had been significantly altered from its natural state due to urban development. We propose here a mangrove creek restoration plan that would extend the existing creek channel 1.1 km inland through the adjacent mangrove forest and up to an adjacent brackish marsh. We then tested the hydrologic implications using a hydraulic model of the mangrove creek calibrated with tidal data from Naples Bay and water levels measured within the creek. The calibrated model was then used to simulate the resulting hydrology of our proposed restoration plan. Simulation results showed that the proposed creek extension would restore a twice-daily flooding regime to a majority of the adjacent mangrove forest and that there would still be minimal tidal influence on the brackish marsh area, keeping its salinity at an acceptable level. This study demonstrates the utility of combining field data and hydraulic modeling to aid in the design of mangrove restoration plans.

  10. Kinetic models of cell growth, substrate utilization and bio ...

    African Journals Online (AJOL)

Bio-decolorization kinetic studies of distillery effluent in a batch culture were conducted using Aspergillus fumigatus. A simple model was proposed using the logistic equation for growth and Luedeking-Piret kinetics for bio-decolorization, and also for substrate utilization. The proposed models appeared to provide a suitable ...
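The model structure described in this record can be sketched numerically. The snippet below is an illustrative simulation (not the authors' code): logistic growth for biomass, Luedeking-Piret-type terms for the product (decolorization) and substrate; all parameter names and values are hypothetical.

```python
import numpy as np

def simulate_batch(x0, p0, s0, mu_max, x_max, alpha, beta, gamma, delta,
                   dt=0.01, t_end=48.0):
    """Euler integration of logistic biomass growth (X), Luedeking-Piret
    product formation (P), and substrate utilization (S) in a batch culture."""
    n = int(t_end / dt)
    x, p, s = x0, p0, s0
    xs, ps, ss = [x], [p], [s]
    for _ in range(n):
        dx = mu_max * x * (1.0 - x / x_max)   # logistic growth rate
        dp = alpha * dx + beta * x            # growth- and non-growth-associated terms
        ds = -(gamma * dx + delta * x)        # substrate consumed analogously
        x += dx * dt
        p += dp * dt
        s = max(s + ds * dt, 0.0)             # substrate cannot go negative
        xs.append(x); ps.append(p); ss.append(s)
    return np.array(xs), np.array(ps), np.array(ss)
```

In practice the growth-associated (alpha, gamma) and non-growth-associated (beta, delta) coefficients would be fitted to the batch-culture data rather than assumed.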

  11. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  12. Utility of Social Modeling for Proliferation Assessment - Preliminary Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-06-01

    This Preliminary Assessment draft report will present the results of a literature search and preliminary assessment of the body of research, analysis methods, models and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report will provide: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program. Specifically this draft report is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessment, proliferation resistance assessments, safeguards assessments and other related studies. These assessments often use and create technical information about the State’s posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate if and how social science modeling has utility in proliferation assessment.

  13. Estimation of utility values from visual analog scale measures of health in patients undergoing cardiac surgery

    Directory of Open Access Journals (Sweden)

    Oddershede L

    2014-01-01

Full Text Available Lars Oddershede,1,2 Jan Jesper Andreasen,1 Lars Ehlers2 1Department of Cardiothoracic Surgery, Center for Cardiovascular Research, Aalborg University Hospital, Aalborg, Denmark; 2Danish Center for Healthcare Improvements, Faculty of Social Sciences and Faculty of Health Sciences, Aalborg University, Aalborg East, Denmark Introduction: In health economic evaluations, mapping can be used to estimate utility values from other health outcomes in order to calculate quality adjusted life-years. Currently, no methods exist to map visual analog scale (VAS) scores to utility values. This study aimed to develop and propose a statistical algorithm for mapping five dimensions of health, measured on VASs, to utility scores in patients suffering from cardiovascular disease. Methods: Patients undergoing coronary artery bypass grafting at Aalborg University Hospital in Denmark were asked to score their health using the five VAS items (mobility, self-care, ability to perform usual activities, pain, and presence of anxiety or depression) and the EuroQol 5 Dimensions questionnaire. Regression analysis was used to estimate four mapping models from patients' age, sex, and the self-reported VAS scores. Prediction errors were compared between mapping models and on subsets of the observed utility scores. Agreement between predicted and observed values was assessed using Bland–Altman plots. Results: Random effects generalized least squares (GLS) regression yielded the best results when quadratic terms of VAS scores were included. Mapping models fitted using the Tobit model and censored least absolute deviation regression did not appear superior to GLS regression. The mapping models were able to explain approximately 63%–65% of the variation in the observed utility scores. The mean absolute error of predictions increased as the observed utility values decreased. Conclusion: We concluded that it was possible to predict utility scores from VAS scores of the five
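As a rough illustration of the mapping approach described above, the sketch below fits utility on age, sex, and quadratic VAS terms. Plain least squares stands in for the paper's random-effects GLS, and all variable names are hypothetical.

```python
import numpy as np

def _design(age, sex, vas):
    """Design matrix: intercept, age, sex, five VAS scores, and their squares."""
    age = np.asarray(age, float)
    sex = np.asarray(sex, float)
    vas = np.asarray(vas, float)          # shape (n, 5)
    return np.column_stack([np.ones(len(age)), age, sex, vas, vas ** 2])

def fit_mapping(age, sex, vas, utility):
    """Least-squares fit of observed utility scores on the predictors
    (OLS as a stand-in for random-effects GLS)."""
    coef, *_ = np.linalg.lstsq(_design(age, sex, vas),
                               np.asarray(utility, float), rcond=None)
    return coef

def predict_utility(coef, age, sex, vas):
    """Predicted utility values for new patients."""
    return _design(age, sex, vas) @ coef
```

A fitted model of this shape could then be assessed with prediction errors and Bland-Altman plots, as the paper does.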

  14. Smartphone photography utilized to measure wrist range of motion.

    Science.gov (United States)

    Wagner, Eric R; Conti Mica, Megan; Shin, Alexander Y

    2018-02-01

The purpose was to determine if smartphone photography is a reliable tool in measuring wrist movement. Smartphones were used to take digital photos of both wrists in 32 normal participants (64 wrists) at the extremes of wrist motion. The smartphone measurements were compared with clinical goniometry measurements. There was a very high correlation between the clinical goniometry and smartphone measurements, as the concordance coefficients were high for radial deviation, ulnar deviation, wrist extension, and wrist flexion. The Pearson coefficients also demonstrated the high precision of the smartphone measurements. The Bland-Altman plots demonstrated that 29-31 of 32 smartphone measurements were within the 95% confidence interval of the clinical measurements for all positions of the wrists. There was high reliability between the photography taken by the volunteer and the researcher, as well as high inter-observer reliability. Smartphone digital photography is a reliable and accurate tool for measuring wrist range of motion. Level of evidence: II.
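The concordance coefficients mentioned above are typically Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic offset between two measurement methods. A minimal sketch, not tied to the study's data:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement
    methods (e.g., goniometer angles vs. angles from smartphone photos).
    Returns 1.0 only for perfect agreement."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()        # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike the Pearson coefficient, a constant bias between the two methods lowers the CCC even when the points lie on a perfectly straight line.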

  15. Maximizing the model for Discounted Stream of Utility from ...

    African Journals Online (AJOL)

Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided up to the level where methods of solving ordinary differential equations would be applied, but they left off there as a result of the mathematical complexity ...

  16. Model plant key measurement points

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

The key measurement points for the model low enriched fuel fabrication plant are described as well as the measurement methods. These are the measurement points and methods that are used to complete the plant's formal material balance. The purpose of the session is to enable participants to: (1) understand the basis for each key measurement; and (2) understand the importance of each measurement to the overall plant material balance. The feed to the model low enriched uranium fuel fabrication plant is UF6 and the product is finished light water reactor fuel assemblies. The waste discards are solid and liquid wastes. The plant inventory consists of unopened UF6 cylinders, UF6 heels, fuel assemblies, fuel rods, fuel pellets, UO2 powder, U3O8 powder, and various scrap materials. At the key measurement points the total plant material balance (flow and inventory) is measured. The two types of key measurement points, flow and inventory, are described.

  17. Transition Models with Measurement Errors

    OpenAIRE

    Magnac, Thierry; Visser, Michael

    1999-01-01

    In this paper, we estimate a transition model that allows for measurement errors in the data. The measurement errors arise because the survey design is partly retrospective, so that individuals sometimes forget or misclassify their past labor market transitions. The observed data are adjusted for errors via a measurement-error mechanism. The parameters of the distribution of the true data, and those of the measurement-error mechanism are estimated by a two-stage method. The results, based on ...

  18. Nonlinear Growth Models as Measurement Models: A Second-Order Growth Curve Model for Measuring Potential.

    Science.gov (United States)

    McNeish, Daniel; Dumas, Denis

    2017-01-01

Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components (ability, capacity, and availability), where ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. Then, we provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set using a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate our proposed model as applied to measuring potential within an educational context. The advantage of this approach compared to currently utilized methods is discussed as are future directions and limitations.
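The Michaelis-Menten growth function referenced above has a simple closed form, which makes the three components easy to illustrate. In the sketch below (parameter names are ours, not the authors'), ability rises toward an asymptote interpreted as capacity, and availability is the gap between the two:

```python
import numpy as np

def ability(t, capacity, half_time):
    """Ability at time t under a Michaelis-Menten growth curve:
    approaches `capacity` asymptotically; `half_time` is the time at
    which half of capacity has been reached."""
    t = np.asarray(t, float)
    return capacity * t / (half_time + t)

def availability(t, capacity, half_time):
    """Availability = capacity minus current ability; shrinks toward 0."""
    return capacity - ability(t, capacity, half_time)
```

In the article's second-order model these parameters are latent quantities estimated per student from repeated measurements, not fixed constants as here.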

  19. Heat Transmission Coefficient Measurements in Buildings Utilizing a Heat Loss Measuring Device

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    2013-01-01

Global energy efficiency can be obtained in two ordinary ways. One way is to improve the energy production and supply side, and the other way is, in general, to reduce the consumption of energy in society. This paper has focus on the latter, and especially on the consumption of energy for heating and cooling our houses. There is a huge energy-saving potential in this area for reducing both the global climate problems as well as economy challenges. Heating of buildings in Denmark accounts for approximately 40% of the entire national energy consumption. For this reason, a reduction of heat losses from ... to optimize the energy performance. This paper presents a method for measuring the heat loss by utilizing a U-value meter. The U-value meter measures the heat transfer in the unit W/(m2·K) and has been used in several projects to upgrade the energy performance in temperate regions. The U-value meter was also ...
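The transmission heat loss that a U-value meter helps quantify follows the standard steady-state relation Q = U · A · ΔT. A minimal illustration (figures in the test are hypothetical):

```python
def heat_loss_watts(u_value, area_m2, t_inside, t_outside):
    """Steady-state transmission heat loss through a building element:
    Q = U * A * (T_in - T_out), with U in W/(m^2*K), area in m^2,
    temperatures in degrees C (or K), and Q in watts."""
    return u_value * area_m2 * (t_inside - t_outside)
```

Measuring U in situ, as the paper describes, lets this relation be applied to an existing wall rather than to its design specification.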

  20. Sustainable geothermal utilization - Case histories; definitions; research issues and modelling

    International Nuclear Information System (INIS)

    Axelsson, Gudni

    2010-01-01

    Sustainable development by definition meets the needs of the present without compromising the ability of future generations to meet their own needs. The Earth's enormous geothermal resources have the potential to contribute significantly to sustainable energy use worldwide as well as to help mitigate climate change. Experience from the use of numerous geothermal systems worldwide lasting several decades demonstrates that by maintaining production below a certain limit the systems reach a balance between net energy discharge and recharge that may be maintained for a long time (100-300 years). Modelling studies indicate that the effect of heavy utilization is often reversible on a time-scale comparable to the period of utilization. Thus, geothermal resources can be used in a sustainable manner either through (1) constant production below the sustainable limit, (2) step-wise increase in production, (3) intermittent excessive production with breaks, and (4) reduced production after a shorter period of heavy production. The long production histories that are available for low-temperature as well as high-temperature geothermal systems distributed throughout the world, provide the most valuable data available for studying sustainable management of geothermal resources, and reservoir modelling is the most powerful tool available for this purpose. The paper presents sustainability modelling studies for the Hamar and Nesjavellir geothermal systems in Iceland, the Beijing Urban system in China and the Olkaria system in Kenya as examples. Several relevant research issues have also been identified, such as the relevance of system boundary conditions during long-term utilization, how far reaching interference from utilization is, how effectively geothermal systems recover after heavy utilization and the reliability of long-term (more than 100 years) model predictions. (author)

  1. A Utility-Based Approach to Some Information Measures

    Directory of Open Access Journals (Sweden)

    Sven Sandow

    2007-01-01

Full Text Available We review a decision theoretic, i.e., utility-based, motivation for entropy and Kullback-Leibler relative entropy, the natural generalizations that follow, and various properties of these generalized quantities. We then consider these generalized quantities in an easily interpreted special case. We show that the resulting quantities share many of the properties of entropy and relative entropy, such as the data processing inequality and the second law of thermodynamics. We formulate an important statistical learning problem, probability estimation, in terms of a generalized relative entropy. The solution of this problem reflects general risk preferences via the utility function; moreover, the solution is optimal in a sense of robust absolute performance.
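The base quantities being generalized here, Shannon entropy and Kullback-Leibler relative entropy, can be computed directly for discrete distributions. A minimal sketch (in nats):

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i, in nats.
    Terms with p_i = 0 contribute 0 by convention."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler relative entropy D(p || q) = sum p_i log(p_i / q_i),
    in nats; assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The paper's utility-based generalizations replace the logarithm with functions derived from an investor's utility, recovering these formulas as a special case.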

  2. Diagnostic utility of attention measures in postconcussion syndrome.

    Science.gov (United States)

    Cicerone, Keith D; Azulay, Joanne

    2002-08-01

    Neuropsychological evaluation may be of particular relevance in the detection of subtle cognitive impairments after mild traumatic brain injury (MTBI), including the subgroup of MTBI patients with a persistent postconcussion syndrome (PCS). Attention measures may be the most sensitive indicators of dysfunction associated with MTBI; however, previous studies have typically relied on the analysis of overall group differences, which may not reflect the diagnostic accuracy of attention measures when applied to individuals with MTBI. In the present study, subjects with persistent symptoms at least 3 months following a mild traumatic brain injury were compared with a sample of community living, normal control subjects in order to evaluate the sensitivity, specificity, and diagnostic accuracy of attention measures. Patients with PCS, screened with conservative inclusion and exclusion criteria, and a matched normal control group were administered six clinical tests of attention: Digit Span, Trail Making Test, Part A and Part B, Stroop Color-Word Test, Continuous Performance Test of Attention (CPTA), Paced Auditory Serial Addition Test (PASAT), and Ruff 2 & 7 Selective Attention Test. Consistent with prior research, these measures exhibited a wide range of sensitivity and specificity to possible cognitive impairment among patients. Attention measures may be the most sensitive indicators of dysfunction associated with PCS. Measures with high specificity (e.g., Stroop Color, and 2 & 7 Processing Speed) were shown to have strong positive predictive value, while measures with high sensitivity (e.g., CPTA) demonstrated strong negative predictive value for diagnosing PCS. Examination of the Odds Ratios indicated that measures assessing processing speed had a reliable, positive association with PCS, while measures without a processing speed component did not. Implications for making informed clinical decisions are discussed.
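The diagnostic-accuracy statistics discussed above (sensitivity, specificity, predictive values, and the odds ratio) all derive from a 2x2 table of test result against diagnosis. A minimal sketch; the counts in the test below are hypothetical, not the study's data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard 2x2-table statistics: rows are test positive/negative,
    columns are condition present/absent (tp, fp, fn, tn counts)."""
    return {
        "sensitivity": tp / (tp + fn),     # true-positive rate
        "specificity": tn / (tn + fp),     # true-negative rate
        "ppv": tp / (tp + fp),             # positive predictive value
        "npv": tn / (tn + fn),             # negative predictive value
        "odds_ratio": (tp * tn) / (fp * fn),
    }
```

As the abstract notes, a measure with high specificity drives up the positive predictive value, while a highly sensitive measure drives up the negative predictive value.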

  3. Nondestructive measurement of esophageal biaxial mechanical properties utilizing sonometry

    Science.gov (United States)

    Aho, Johnathon M.; Qiang, Bo; Wigle, Dennis A.; Tschumperlin, Daniel J.; Urban, Matthew W.

    2016-07-01

Malignant esophageal pathology typically requires resection of the esophagus and reconstruction to restore foregut continuity. Reconstruction options are limited and morbid. The esophagus represents a useful target for tissue engineering strategies based on its relative simplicity in comparison to other organs. The ideal tissue-engineered conduit would have sufficient, and ideally matched, mechanical tolerances to native esophageal tissue. Current methods for mechanical testing of esophageal tissues both in vivo and ex vivo are typically destructive, alter tissue conformation, ignore anisotropy, or are not able to be performed in fluid media. The aim of this study was to investigate biomechanical properties of swine esophageal tissues through nondestructive testing utilizing sonometry ex vivo. This method allows for biomechanical determination of tissue properties, particularly longitudinal and circumferential moduli and strain energy functions. The relative contributions of mucosal-submucosal layers and muscular layers are compared to composite esophagi. Swine thoracic esophageal tissues (n = 15) were tested by pressure loading using a continuous pressure pump system to generate stress. Preconditioning of tissue was performed by pressure loading with the pump system and pre-straining the tissue to in vivo length before data were recorded. Sonometry using piezocrystals was utilized to determine longitudinal and circumferential strain on five composite esophagi. Similarly, five mucosa-submucosal and five muscular layers from thoracic esophagi were tested independently. This work on esophageal tissues is consistent with reported uniaxial and biaxial mechanical testing and with reported results using strain energy theory; it also provides high-resolution displacements, preserves native architectural structure, and allows assessment of biomechanical properties in fluid media. This method may be of use to characterize mechanical properties of tissue engineered esophageal

  4. Utilization of minicomputer in the radiocarbon analysis measurements

    International Nuclear Information System (INIS)

    Szarka, J.; Krnac, S.

    1984-01-01

Possibilities of minicomputer applications for radiocarbon analysis with multielement proportional counters are considered. Both off-line and on-line operation of the measuring system is possible. A TPA-70 minicomputer and CAMAC electronics are used in on-line operation. Block diagrams of data acquisition and data processing are given, as well as the block diagram of the data evaluation program, which not only increases the precision of the measurements but also reduces the measuring time by one third compared with conventional methods.

  5. CONTROVERSIES REGARDING THE UTILIZATION OF ALTMAN MODEL IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Mihaela ONOFREI

    2012-06-01

Full Text Available The Altman model was built for U.S. companies, based on the characteristics of that economy. Promising results were obtained in other countries such as Britain, Australia, Canada, Finland, Germany, Israel, Norway, India, and South Korea, with predictability above 80%. However, as can be seen, these countries have an Anglo-Saxon legal system and highly developed economic environments. While there is no reason why this model cannot be applied to companies throughout the world, we recognize that each economic environment has its own peculiarities; therefore, local forecasting models could be better than American models, at least in their testing phase. But is the Altman model suitable for the Romanian economy? Taking this into account, the purpose of this paper is to test the Altman model on the Romanian market.
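For reference, the original 1968 Altman Z-score for publicly traded manufacturing firms combines five financial ratios with fixed weights. A minimal sketch (the figures in the test are hypothetical, not Romanian data):

```python
def altman_z(working_capital, retained_earnings, ebit, market_equity,
             sales, total_assets, total_liabilities):
    """Original Altman (1968) Z-score:
    Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5,
    where the X_i are ratios of the inputs to total assets (or liabilities)."""
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # cumulative profitability
    x3 = ebit / total_assets                 # operating efficiency
    x4 = market_equity / total_liabilities   # leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5
```

In Altman's original framework, scores above roughly 2.99 indicate the "safe" zone and scores below roughly 1.81 the "distress" zone; the paper's question is whether these U.S.-calibrated weights and cutoffs transfer to Romanian firms.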

  6. Using Random Utility Models to Estimate the Recreational Value of Estuarine Resources

    OpenAIRE

    Yoshiaki Kaoru; V. Kerry Smith; Jin Long Liu

    1995-01-01

    In this paper we describe a model using a household production framework to link measures of nonpoint source pollution to fishing quality and a random utility model to describe how that quality influences sport fishing parties' decisions in North Carolina. The results provide clear support for using a model that evaluates the effects of pollution on the activities and decisions associated with the fishing activity once a trip is taken. Site selection decisions are then conditioned on the anti...

  7. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

Full Text Available Yang and Qiu proposed, and then recently improved, an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived a representation of risky choices consisting of an expected utility term plus a constant multiplied by the Shannon entropy, further demonstrating the reasonability of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in the portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficients are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both the expected utility and Shannon entropy together when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
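An EU-E-style value combining an expected utility term with Shannon entropy via a tradeoff coefficient can be sketched as follows. This is an illustrative simplification; Yang and Qiu's exact formulation normalizes the two terms differently.

```python
import math

def eu_entropy_value(probs, outcomes, utility, lam):
    """Illustrative expected utility-entropy (EU-E) style value of a risky
    prospect: lam weights expected utility against Shannon entropy
    (uncertainty). lam = 1 recovers pure expected utility."""
    eu = sum(p * utility(x) for p, x in zip(probs, outcomes))
    h = -sum(p * math.log(p) for p in probs if p > 0)   # entropy in nats
    return lam * eu - (1.0 - lam) * h
```

Ranking stocks by such a value with intermediate lam, as in the paper, penalizes prospects whose return distributions are both low in expected utility and highly uncertain.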

  8. Direct measurement of neutrino mass utilizing beta decay of tritium

    Energy Technology Data Exchange (ETDEWEB)

    Kawakami, Hirokane (Tokyo Univ., Tanashi (Japan). Inst. for Nuclear Study)

    1992-10-01

Among elementary particles, the neutrino is a peculiar one with asymmetric properties, and in spite of strenuous efforts its mass has not been determined. The mass expected for the electron neutrino is extremely small, on the order of several tens of eV, but its value may decide the future of the universe: whether this vast space continues to expand as it is or turns to contract. Accordingly, it has become a very important subject for space physics as well as elementary particle physics. The mass of the neutrino had been considered to be nearly zero, but in 1980 a USSR group reported a finite value of 14-46 eV for the first time. Since then, experiments to verify this result have been started in more than ten places around the world. The neutrino mass is measured by precisely measuring, with a beta-ray analyzer, the vicinity of the maximum value (endpoint) of the continuous energy spectrum of the electrons emitted simultaneously with neutrinos in the beta decay of tritium, and determining the mass from its shape. The π√2-type air-core beta-ray analyzer, the beta-ray source, the electron detector, a comparison of the contents of the published experiments, and the results of measurement are reported. (K.I.).

  9. Modeling utility-scale wind power plants, part 1: Economics

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M.

    2000-06-29

As the worldwide use of wind turbine generators continues to increase in utility-scale applications, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry in the United States appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This paper is the first of two that address modeling approaches and results obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This first paper addresses the basic economic issues associated with electricity production from several generators that include large-scale wind power plants. An important part of this discussion is the role of unit commitment and economic dispatch in production-cost models. This paper includes overviews and comparisons of the prevalent production-cost modeling methods, including several case studies applied to a variety of electric utilities. The second paper discusses various methods of assessing capacity credit and results from several reliability-based studies performed at NREL.

  10. Quality of life, health status, and health service utilization related to a new measure of health literacy: FLIGHT/VIDAS.

    Science.gov (United States)

    Ownby, Raymond L; Acevedo, Amarilis; Jacobs, Robin J; Caballero, Joshua; Waldrop-Valverde, Drenna

    2014-09-01

    Researchers have identified significant limitations in some currently used measures of health literacy. The purpose of this paper is to present data on the relation of health-related quality of life, health status, and health service utilization to performance on a new measure of health literacy in a nonpatient population. The new measure was administered to 475 English- and Spanish-speaking community-dwelling volunteers along with existing measures of health literacy and assessments of health-related quality of life, health status, and healthcare service utilization. Relations among measures were assessed via correlations and health status and utilization was tested across levels of health literacy using ANCOVA models. The new health literacy measure is significantly related to existing measures of health literacy as well as to participants' health-related quality of life. Persons with lower levels of health literacy reported more health conditions, more frequent physical symptoms, and greater healthcare service utilization. The new measure of health literacy is valid and shows relations to measures of conceptually related constructs such as quality of life and health behaviors. FLIGHT/VIDAS may be useful to researchers and clinicians interested in a computer administered and scored measure of health literacy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Utility of proverb interpretation measures with cardiac transplant candidates.

    Science.gov (United States)

    Dugbartey, A T

    1998-12-01

To assess metaphorical understanding and proverb interpretation in cardiac transplant candidates, the neuropsychological assessment records of 22 adults with end-stage cardiac disease under consideration for transplantation were analyzed. Neuropsychological tests consisted of the Controlled Oral Word Association Test, Halstead Category Test, Rey-Osterrieth Complex Figure Test (Copy), Trail Making Test, and summed scores for the proverb items of the WAIS-R Comprehension subtest. Analysis showed that the group tended to interpret proverbs literally. Proverb scores were significantly associated with scores on the Similarities and Picture Arrangement subtests of the WAIS-R. There was a moderate negative association between the number of reported heart attacks and proverb scores. The need for brief yet robust assessments, including measures of inferential thinking and conceptualization, in transplant candidates is highlighted.

  12. Models of Credit Risk Measurement

    OpenAIRE

    Hagiu Alina

    2011-01-01

Credit risk is defined as the risk of financial loss caused by failure of the counterparty. According to statistics, for financial institutions credit risk is much more important than market risk; reduced diversification of credit risk is the main cause of bank failures. Only recently did the banking industry begin to measure credit risk in the context of a portfolio, along with the development of risk management that started with value-at-risk (VaR) models. Once measured, credit risk can be diversif...

  13. Risk Decision Making Model for Reservoir Floodwater resources Utilization

    Science.gov (United States)

    Huang, X.

    2017-12-01

Floodwater resources utilization (FRU) can alleviate the shortage of water resources, but there are risks. In order to safely and efficiently utilize floodwater resources, it is necessary to study the risk of reservoir FRU. In this paper, the risk rate of exceeding the design flood water level and the risk rate of exceeding the safety discharge are estimated. Based on the principle of the minimum risk and the maximum benefit of FRU, a multi-objective risk decision making model for FRU is constructed. Probability theory and mathematical statistics methods are selected to calculate the risk rates; the C-D production function method and the emergy analysis method are selected to calculate the risk benefit; the risk loss is related to the flood inundation area and the unit-area loss; the multi-objective decision making problem of the model is solved by the constraint method. Taking the Shilianghe reservoir in Jiangsu Province as an example, the optimal equilibrium solution of FRU of the Shilianghe reservoir is found by using the risk decision making model, and the validity and applicability of the model are verified.

  14. Hedonic travel cost and random utility models of recreation

    Energy Technology Data Exchange (ETDEWEB)

    Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies

    1998-07-09

    Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.
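
    The random utility side of this comparison is typically estimated with a multinomial (conditional) logit over sites. A minimal sketch, with hypothetical quality and travel-cost coefficients rather than the authors' estimates:

```python
import math

def logit_probabilities(site_quality, travel_cost, beta_q=1.0, beta_c=-0.5):
    """Random utility model: the utility of each site is a linear function
    of quality and travel cost, and choice probabilities follow the
    multinomial logit form P_i = exp(V_i) / sum_j exp(V_j).
    Coefficients are hypothetical, not estimates from the study data."""
    v = [beta_q * q + beta_c * c for q, c in zip(site_quality, travel_cost)]
    m = max(v)                            # subtract max for numerical stability
    ev = [math.exp(x - m) for x in v]
    total = sum(ev)
    return [e / total for e in ev]

# Three hypothetical trails: (quality, travel cost)
probs = logit_probabilities([2.0, 3.0, 1.0], [1.0, 4.0, 0.5])
```

    In an estimated model the coefficients would be fit to observed trips, and the marginal value of site quality follows from the ratio of the quality and cost coefficients.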

  15. Division Quilts: A Measurement Model

    Science.gov (United States)

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…
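
    The quotative (measurement) model described here can be made concrete as repeated subtraction, which is the operation the quilt activity enacts. A minimal sketch:

```python
def quotative_division(total, group_size):
    """Measurement (quotative) model of division: how many groups of
    `group_size` can be measured out of `total`? Implemented as
    repeated subtraction, mirroring the quilt activity."""
    groups = 0
    while total >= group_size:
        total -= group_size
        groups += 1
    return groups, total   # quotient and remainder

# 13 quilt squares measured into groups of 4 -> 3 groups with 1 left over
```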

  16. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    Science.gov (United States)

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optics components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction, and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge is superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters (wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography) on the sensor outputs was investigated to obtain a novel analytical method for predicting linearity and sensitivity. From the model, all parameters except the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used in the experiment. The model indicates that increased measurement sensitivity can be obtained by using a shorter wavelength, a smaller sensor distance, and a higher edge quality. The model was experimentally validated, and the results showed good agreement with the theoretical estimates. This sensor is expected to be easily implemented in nanopositioning stage applications at low cost, and the mathematical model introduced here can be used as a tool for design and performance estimation of knife edge-based sensors.
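
    The interference behind a knife edge follows the standard Fresnel diffraction result, which gives a feel for the sensitivity analysis described above. The sketch below evaluates the classic knife-edge intensity profile using numerically computed Fresnel integrals; it is textbook optics, not the authors' full sensor model (which adds the second edge, detector geometry, and edge topography).

```python
import math

def fresnel(x, n=2000):
    """Fresnel integrals C(x), S(x) by Simpson's rule (pure Python).
    n must be even."""
    if x == 0:
        return 0.0, 0.0
    sign = 1.0 if x > 0 else -1.0
    x = abs(x)
    h = x / n
    c = s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        c += w * math.cos(math.pi * t * t / 2)
        s += w * math.sin(math.pi * t * t / 2)
    return sign * c * h / 3, sign * s * h / 3

def knife_edge_intensity(v):
    """Relative intensity behind a knife edge at Fresnel parameter v
    (v = 0 at the geometric shadow edge). Standard result: I(0) = 1/4,
    decaying into the shadow (v < 0) and oscillating toward 1 in the
    illuminated region (v > 0)."""
    C, S = fresnel(v)
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)
```

    The steep slope of this profile around v = 0 is what makes the edge position, and hence the stage displacement, measurable with high sensitivity.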

  17. Utility of Small Animal Models of Developmental Programming.

    Science.gov (United States)

    Reynolds, Clare M; Vickers, Mark H

    2018-01-01

    Any effective strategy to tackle the global obesity and rising noncommunicable disease epidemic requires an in-depth understanding of the mechanisms that underlie these conditions, which manifest as a consequence of complex gene-environment interactions. In this context, it is now well established that alterations in the early life environment, including suboptimal nutrition, can result in an increased risk for a range of metabolic, cardiovascular, and behavioral disorders in later life, a process preferentially termed developmental programming. To date, most of the mechanistic knowledge around the processes underpinning developmental programming has been derived from preclinical research performed mostly, but not exclusively, in laboratory mouse and rat strains. This review will cover the utility of small animal models in developmental programming, the limitations of such models, and potential future directions that are required to fully maximize information derived from preclinical models in order to effectively translate to clinical use.

  18. Animal models of myasthenia gravis: utility and limitations

    Science.gov (United States)

    Mantegazza, Renato; Cordiglieri, Chiara; Consonni, Alessandra; Baggi, Fulvio

    2016-01-01

    Myasthenia gravis (MG) is a chronic autoimmune disease caused by the immune attack of the neuromuscular junction. Antibodies directed against the acetylcholine receptor (AChR) induce receptor degradation, complement cascade activation, and postsynaptic membrane destruction, resulting in functional reduction in AChR availability. Besides anti-AChR antibodies, other autoantibodies are known to play pathogenic roles in MG. The experimental autoimmune MG (EAMG) models have been of great help over the years in understanding the pathophysiological role of specific autoantibodies and T helper lymphocytes and in suggesting new therapies for prevention and modulation of the ongoing disease. EAMG can be induced in mice and rats of susceptible strains that show clinical symptoms mimicking the human disease. EAMG models are helpful for studying both the muscle and the immune compartments to evaluate new treatment perspectives. In this review, we concentrate on recent findings on EAMG models, focusing on their utility and limitations. PMID:27019601

  19. Precise models deserve precise measures

    Directory of Open Access Journals (Sweden)

    Benjamin E. Hilbig

    2010-07-01

    Full Text Available The recognition heuristic (RH --- which predicts non-compensatory reliance on recognition in comparative judgments --- has attracted much research and some disagreement, at times. Most studies have dealt with whether or under which conditions the RH is truly used in paired-comparisons. However, even though the RH is a precise descriptive model, there has been less attention concerning the precision of the methods applied to measure RH-use. In the current work, I provide an overview of different measures of RH-use tailored to the paradigm of natural recognition which has emerged as a preferred way of studying the RH. The measures are compared with respect to different criteria --- with particular emphasis on how well they uncover true use of the RH. To this end, both simulations and a re-analysis of empirical data are presented. The results indicate that the adherence rate --- which has been pervasively applied to measure RH-use --- is a severely biased measure. As an alternative, a recently developed formal measurement model emerges as the recommended candidate for assessment of RH-use.
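
    The adherence-rate bias described above can be illustrated with a toy simulation: even when an agent truly uses the RH on only half of the trials, knowledge-based decisions that happen to favor the recognized option inflate the adherence rate well beyond true RH use. All parameter values are illustrative assumptions, not estimates from the reported data.

```python
import random

def simulate_adherence(true_use_rate=0.5, knowledge_agrees=0.8,
                       n_trials=100000, seed=1):
    """Toy simulation of the adherence-rate bias. On each paired
    comparison the agent either truly uses the RH (always choosing the
    recognized option) or decides from knowledge, which coincides with
    recognition in `knowledge_agrees` of cases. The adherence rate
    counts both, so it overestimates true RH use."""
    rng = random.Random(seed)
    adherent = 0
    for _ in range(n_trials):
        if rng.random() < true_use_rate:       # genuine RH use
            adherent += 1
        elif rng.random() < knowledge_agrees:  # coincidental adherence
            adherent += 1
    return adherent / n_trials

rate = simulate_adherence()
# expected adherence ~= 0.5 + 0.5 * 0.8 = 0.9, far above true use of 0.5
```

    A formal measurement model separates these two routes to adherence instead of conflating them, which is why it recovers RH-use more accurately.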

  20. Measuring the costs of photovoltaics in an electric utility planning framework

    International Nuclear Information System (INIS)

    Awerbuch, Shimon

    1993-01-01

    Utility planning models evaluate alternative generating options using the revenue requirements method, an engineering-oriented, discounted cash-flow (DCF) methodology that has been widely used for over three decades. Discounted cash-flow techniques were conceived in the context of active, expense-intensive technologies, such as conventional, fuel-intensive power generation. Photovoltaic (PV) technology, by contrast, is passive and capital intensive, attributes that are similar to those of other new process technologies, such as computer-integrated manufacturing. Discounted cash-flow techniques have a dismal record for correctly valuing new technologies with these attributes, in part because their benefits cannot be easily measured using traditional accounting concepts. This paper examines how these issues affect cost measurement for both conventional and PV-based electricity, and presents kWh-cost estimates for three technologies (coal, gas and PV) using risk-adjusted approaches, which suggest that PV costs are generally equivalent to gas/combined cycle and about twice the cost of base-load coal (environmental externalities are ignored). Finally, the paper evaluates independent power purchases for a typical US utility and finds that in such a setting the cost of PV-based power is comparable to the firm's published avoided costs. (author)
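
    The levelized kWh-cost comparison at the heart of this argument can be sketched as a simple DCF calculation: discounted lifetime costs divided by discounted lifetime energy. This is a bare-bones illustration with hypothetical numbers, omitting the taxes, depreciation schedules, and risk adjustments of the full revenue-requirements method.

```python
def levelized_cost_per_kwh(capital, annual_om, annual_fuel,
                           annual_kwh, life_years, discount_rate):
    """Levelized kWh cost: discounted lifetime costs divided by
    discounted lifetime energy. A capital-intensive technology (PV)
    loads its costs up front; a fuel-intensive one spreads them out,
    so the chosen discount rate drives the comparison."""
    pv_costs = capital
    pv_energy = 0.0
    for year in range(1, life_years + 1):
        df = (1 + discount_rate) ** -year
        pv_costs += (annual_om + annual_fuel) * df
        pv_energy += annual_kwh * df
    return pv_costs / pv_energy

# Hypothetical per-kW figures: PV (high capital, no fuel) vs gas (low
# capital, ongoing fuel), both discounted at the same rate
pv_cost = levelized_cost_per_kwh(1500.0, 10.0, 0.0, 1600.0, 25, 0.08)
gas_cost = levelized_cost_per_kwh(600.0, 15.0, 50.0, 1600.0, 25, 0.08)
```

    The paper's point is that applying a single, fuel-style discount rate to a low-risk, capital-intensive stream like PV biases exactly this comparison; risk-adjusting the rates narrows the gap.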

  1. Utility of the iPhone 4 Gyroscope Application in the Measurement of Wrist Motion.

    Science.gov (United States)

    Lendner, Nuphar; Wells, Erik; Lavi, Idit; Kwok, Yan Yan; Ho, Pak-Cheong; Wollstein, Ronit

    2017-09-01

    Measurement of wrist range of motion (ROM) is important to all aspects of treatment and rehabilitation of upper extremity conditions. Recently, gyroscopes have been used to measure ROM and may be more precise than manual evaluations. The purpose of this study was to evaluate the use of the iPhone gyroscope application and compare it with use of a goniometer, specifically evaluating its accuracy and ease of use. A cross-sectional study evaluated adult Caucasian participants, with no evidence of wrist pathology. Wrist ROM measurements in 306 wrists using the 2 methods were compared. Demographic information was collected including age, sex, and occupation. Analysis included mixed models and Bland-Altman plots. Wrist motion was similar between the 2 methods. Technical difficulties were encountered with gyroscope use. Age was an independent predictor of ROM. Correct measurement of ROM is critical to guide, compare, and evaluate treatment and rehabilitation of the upper extremity. Inaccurate measurements could mislead the surgeon and harm patient adherence with therapy or surgeon instruction. An application used by the patient could improve adherence but needs to be reliable and easy to use. Evaluation is necessary before utilization of such an application. This study supports revision of the application on the iPhone to improve ease of use.

  2. Proposal of the Measurement Method of the Transmission Line Constants by Automatic Oscillograph Utilization

    Science.gov (United States)

    Ooura, Yoshifumi

    The author devised a new high-precision method for measuring transmission line constants using an automatic oscillograph; this paper proposes that method. The author exploited the fact that the eigenvector matrices inherent to a transmission line are equal to the eigenvector matrices of its four-terminal constants. The four-terminal constants of the transmission line were calculated from the data (voltage-current records from the automatic oscillograph) of six transmission line system fault cases, and a method for measuring the transmission line constants from analysis of those four-terminal constants was then devised. Furthermore, the author verified the new method in fault simulations of an EMTP transmission line system model, which showed it to be a highly accurate measurement method. Going forward, the author will advance the measurement of transmission line constants from actual fault data of transmission lines and their periphery, in cooperation with power system companies.
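
    The four-terminal (ABCD) constants that the method recovers obey a reciprocity identity that can be sketched for a uniform line; the per-km series impedance and shunt admittance values below are illustrative, not measured constants from the study.

```python
import cmath

def abcd_constants(z_series, y_shunt, length_km):
    """Four-terminal (ABCD) constants of a uniform transmission line
    from per-km series impedance z and shunt admittance y. For any
    reciprocal line AD - BC = 1, a property the eigen-analysis of the
    four-terminal matrix relies on. Parameter values are illustrative."""
    gamma = cmath.sqrt(z_series * y_shunt)   # propagation constant per km
    zc = cmath.sqrt(z_series / y_shunt)      # characteristic impedance
    gl = gamma * length_km
    A = D = cmath.cosh(gl)
    B = zc * cmath.sinh(gl)
    C = cmath.sinh(gl) / zc
    return A, B, C, D

A, B, C, D = abcd_constants(0.03 + 0.4j, 3e-6j, 200.0)
# Reciprocity check: AD - BC equals 1 to numerical precision
```

    Inverting this relationship, going from fault-recorded terminal voltages and currents back to gamma and Zc, is the essence of the proposed measurement.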

  3. Animal models of GM2 gangliosidosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Lawson CA

    2016-07-01

    Full Text Available Cheryl A Lawson,1,2 Douglas R Martin2,3 1Department of Pathobiology, 2Scott-Ritchey Research Center, 3Department of Anatomy, Physiology and Pharmacology, Auburn University College of Veterinary Medicine, Auburn, AL, USA Abstract: GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. Keywords: GM2 gangliosidosis, Tay–Sachs disease, Sandhoff disease, lysosomal storage disorder, sphingolipidosis, brain disease

  4. Animal models of GM2 gangliosidosis: utility and limitations.

    Science.gov (United States)

    Lawson, Cheryl A; Martin, Douglas R

    2016-01-01

    GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay-Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay-Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described.

  5. ON THE UTILITY OF SORNETTE’S CRASH PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    IOAN ROXANA

    2015-10-01

    Full Text Available Stock market crashes have been a constant subject of interest among capital market researchers. Crashes' behavior has been largely studied, but the problem that remained unsolved until recently was that of a prediction algorithm. Stock market crashes are complex and global events, rarely taking place on a single national capital market. They usually occur simultaneously on several, if not most, capital markets, implying important losses for investors. Investments made within various stock markets have an extremely important role within the global economy, influencing people's lives in many ways. Presently, stock market crashes are being studied with great interest, not only because of the necessity of a deep understanding of the phenomenon, but also because these crashes belong to the so-called category of "extreme phenomena". Those are the main reasons that determined scientists to try to build mathematical models for crash prediction. Such a model was built by Professor Didier Sornette, inspired by and adapted from an earthquake detection model. Still, the model keeps many characteristics of its predecessor, not being fully adapted to economic realities and demands, or to the stock market's characteristics. This paper attempts to test the utility of the model in predicting the Bucharest Stock Exchange's price falls, as well as the possibility of it being successfully used by investors.
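
    The core of Sornette's model is the log-periodic power law (LPPL), which superimposes accelerating oscillations on a power-law trend toward a critical time tc. A sketch using one common LPPL parametrization with illustrative parameter values, not values fitted to Bucharest Stock Exchange data:

```python
import math

def lppl_log_price(t, tc, A=7.0, B=-0.5, C=0.05, m=0.5, omega=8.0, phi=0.0):
    """Log-periodic power law used in Sornette-style crash prediction:
    ln p(t) = A + B*(tc-t)^m * (1 + C*cos(omega*ln(tc-t) + phi)),
    defined for t < tc, the critical (crash) time. As t -> tc the trend
    accelerates toward A and the oscillations become faster, the
    signature a fitting procedure looks for."""
    dt = tc - t
    if dt <= 0:
        raise ValueError("model is defined only before the critical time tc")
    return A + B * dt ** m * (1.0 + C * math.cos(omega * math.log(dt) + phi))

# Log-price trajectory approaching a hypothetical critical time tc = 100
prices = [lppl_log_price(t, tc=100.0) for t in range(0, 100, 10)]
```

    In practice the seven parameters are fitted to an observed price series, and the estimated tc is read as the most probable crash window.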

  6. Generic Model to Send Secure Alerts for Utility Companies

    Directory of Open Access Journals (Sweden)

    Perez–Díaz J.A.

    2010-04-01

    Full Text Available In some industries, such as logistics services, bank services, and others, the use of automated systems that deliver critical business information anytime and anywhere plays an important role in the decision making process. This paper introduces a "Generic model to send secure alerts and notifications", which operates as a middleware between enterprise data sources and their mobile users. This model uses the Short Message Service (SMS) as its main mobile messaging technology, but is open to new types of messaging technologies. Our model is interoperable with existing information systems; it can store any kind of information about alerts or notifications at different levels of granularity, and it offers different types of notifications (as an alert when critical business problems occur, as a notification on a periodic basis, or as a two-way query). Notification rules can be customized by final users according to their preferences. The model provides a security framework for cases where information requires confidentiality, and it is extensible to existing and new messaging technologies (like e-mail, MMS, etc.). It is platform, mobile operator, and hardware independent. Currently, our solution is being used at the Comisión Federal de Electricidad (Mexico's utility company) to deliver secure alerts related to critical events registered in the main power generation plants of our country.

  7. Utilizing Chamber Data for Developing and Validating Climate Change Models

    Science.gov (United States)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g., growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers were found to overestimate responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g., sidelighting, edge effects, increased temperature and VPD, etc.), and this limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in such a manner as to account for differences in aerodynamic conductance, temperature, and VPD between the chamber and the field.

  8. A structured review of health utility measures and elicitation in advanced/metastatic breast cancer

    Directory of Open Access Journals (Sweden)

    Hao Y

    2016-06-01

    Full Text Available Yanni Hao,1 Verena Wolfram,2 Jennifer Cook2 1Novartis Pharmaceuticals, East Hanover, NJ, USA; 2Adelphi Values, Bollington, UK Background: Health utilities are increasingly incorporated in health economic evaluations. Different elicitation methods, direct and indirect, have been established in the past. This study examined the evidence on health utility elicitation previously reported in advanced/metastatic breast cancer and aimed to link these results to requirements of reimbursement bodies. Methods: Searches were conducted using a detailed search strategy across several electronic databases (MEDLINE, EMBASE, Cochrane Library, and EconLit), online sources (Cost-effectiveness Analysis Registry and the Health Economics Research Center), and web sites of health technology assessment (HTA) bodies. Publications were selected based on the search strategy and the overall study objectives. Results: A total of 768 publications were identified in the searches, and 26 publications, comprising 18 journal articles and eight submissions to HTA bodies, were included in the evidence review. Most journal articles derived utilities from the European Quality of Life Five-Dimensions questionnaire (EQ-5D). Other utility measures, such as the direct methods standard gamble (SG), time trade-off (TTO), and visual analog scale (VAS), were less frequently used. Several studies described mapping algorithms to generate utilities from disease-specific health-related quality of life (HRQOL) instruments such as the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire – Core 30 (EORTC QLQ-C30), European Organization for Research and Treatment of Cancer Quality of Life Questionnaire – Breast Cancer 23 (EORTC QLQ-BR23), Functional Assessment of Cancer Therapy – General questionnaire (FACT-G), and Utility-Based Questionnaire-Cancer (UBQ-C); most used EQ-5D as the reference. Sociodemographic factors that affect health utilities, such as age, sex

  9. The utility of novel outcome measures in a naturalistic evaluation of schizophrenia treatment

    Directory of Open Access Journals (Sweden)

    Tompsett T

    2018-03-01

    Full Text Available Tamara Tompsett,1 Kate Masters,1,2 Parastou Donyai1 1Department of Pharmacy, University of Reading, Reading, UK; 2Department of Pharmacy, Berkshire Healthcare NHS Foundation Trust, Reading, UK Background: A number of naturalistic studies have investigated paliperidone palmitate (PP) using proxy measures of effectiveness. An unexplored option is to examine the utility of the mental health clustering tool (MHCT), which is used in UK clinical practice to measure patient well-being and is linked to allocation of resources. This study evaluated the effectiveness of PP using the MHCT, the Health of the Nation Outcome Scales (HoNOS), and, for comparison, more conventional outcome measures. Methods: This was a naturalistic, 1-year evaluation of PP (n=50) in schizophrenia, as well as a comparator antipsychotic drugs group. Changes in the MHCT cluster-score cost ranking and four HoNOS-derived factors were analyzed using a mixed-model statistical analysis to explore the utility of these measures. Results: At 1 year, 30 patients (60%) continued PP treatment. The mean "cluster-score cost ranking" (–1.5) and Severe Disturbance factor scores (–1.1) were significantly lower (p-value [adjusted] = 0.0003 and p-value [adjusted] = 0.002, respectively) after 1 year of antipsychotic treatment, but no differences were found between PP and the comparator antipsychotic drugs group. Patients prescribed PP were 1.8 times (95% CI 1.1–3.1) more likely to be discharged from hospital than those in the comparator antipsychotic drugs group. Conclusion: PP's continuation rate after 1 year made the study similar to existing evaluations, and it was possible to prospectively evaluate antipsychotic effectiveness using the novel measures, although these did not discriminate between PP and the comparator group. The investigation illustrates that in principle these novel measures are meaningful in naturalistic study designs. Keywords: paliperidone palmitate, antipsychotics

  10. Economic analysis of open space box model utilization in spacecraft

    Science.gov (United States)

    Mohammad, Atif F.; Straub, Jeremy

    2015-05-01

    It is a known fact that the amount of stored data about space grows larger every day. The utilization of Big Data and related tools to perform ETL (Extract, Transform and Load) applications will soon be pervasive in the space sciences. We have entered a crucial time where using Big Data can be the difference (for terrestrial applications) between organizations underperforming and outperforming their peers. The same is true for NASA and other space agencies, as well as for individual missions and the highly competitive process of mission data analysis and publication. In most industries, established competitors and new entrants alike will use data-driven approaches to revolutionize and capture the value of Big Data archives. The Open Space Box Model is poised to take the proverbial "giant leap", as it provides autonomic data processing and communications for spacecraft. Economic value generated from such data processing can be found in terrestrial organizations in every sector, such as healthcare and retail. Retailers, for example, perform research on Big Data by utilizing sensor-driven embedded data in products within their stores and warehouses to determine how these products are actually used in the real world.

  11. Modeling a Packed Bed Reactor Utilizing the Sabatier Process

    Science.gov (United States)

    Shah, Malay G.; Meier, Anne J.; Hintze, Paul E.

    2017-01-01

    A numerical model is being developed using Python which characterizes the conversion and temperature profiles of a packed bed reactor (PBR) that utilizes the Sabatier process; the reaction produces methane and water from carbon dioxide and hydrogen. While the specific kinetics of the Sabatier reaction on the Ru/Al2O3 catalyst pellets are unknown, an empirical reaction rate equation [1] is used for the overall reaction. As this reaction is highly exothermic, proper thermal control is of the utmost importance to ensure maximum conversion and to avoid reactor runaway. It is therefore necessary to determine what wall temperature profile will ensure safe and efficient operation of the reactor. This wall temperature will be maintained by active thermal controls on the outer surface of the reactor. Two cylindrical PBRs are currently being tested experimentally and will be used for validation of the Python model. They are similar in design, except that one of them is larger and incorporates a preheat loop by feeding the reactant gas through a pipe along the center of the catalyst bed. The further complexity of adding a preheat pipe to the model to mimic the larger reactor is yet to be implemented and validated; preliminary validation is done using the smaller PBR with no reactant preheating. When mapping experimental wall temperature values from the smaller PBR into the Python model, a good approximation of the total conversion and temperature profile has been achieved. A separate CFD model incorporates more complex three-dimensional effects by including the solid catalyst pellets within the domain. The goal is to improve the Python model to the point where the results for other reactor geometries can be reasonably predicted relatively quickly compared to the much more computationally expensive CFD approach. Once a reactor size is narrowed down using the Python approach, CFD will be used to generate a more thorough prediction of the reactor's performance.
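
    The marching scheme such a Python model uses can be sketched as a one-dimensional plug-flow integration along the bed. Because the paper's empirical Sabatier rate equation is not reproduced here, the sketch substitutes a placeholder first-order Arrhenius rate and a crude exotherm/wall-cooling balance; every parameter value is illustrative, not the authors' data.

```python
import math

def pbr_conversion_profile(n_steps=200, bed_length=0.5, t_wall=600.0,
                           k0=5.0e6, ea=7.0e4, r_gas=8.314,
                           dh=-165.0e3, heat_loss=50.0):
    """One-dimensional plug-flow sketch of a Sabatier packed bed:
    march along the bed, advancing CO2 conversion with a placeholder
    first-order Arrhenius rate while the gas heats from the exotherm
    and relaxes toward the wall temperature. Illustrative parameters;
    the real model uses an empirical Sabatier rate equation."""
    dz = bed_length / n_steps
    x, t = 0.0, t_wall             # conversion and gas temperature (K)
    profile = [(0.0, x, t)]
    for i in range(1, n_steps + 1):
        rate = k0 * math.exp(-ea / (r_gas * t)) * (1.0 - x)
        dx = min(rate * dz, 1.0 - x)       # conversion cannot exceed 1
        x += dx
        # crude energy balance: exothermic heating vs cooling to the wall
        t += -dh * dx * 0.01 - heat_loss * (t - t_wall) * dz
        profile.append((i * dz, x, t))
    return profile

profile = pbr_conversion_profile()
final_x = profile[-1][1]
```

    Even this toy balance reproduces the qualitative behavior the abstract describes: the exotherm accelerates the rate until conversion saturates, after which the bed cools back toward the controlled wall temperature.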

  12. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

    An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate (W) is a linear function of the unobserved true covariate (X) plus other covariates (Z) in the regression model. In this paper, we consider models for W that include interactions between X and Z. We derive the conditional distribution of
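
    The basic bias these models correct can be demonstrated in a few lines: regressing on the error-prone W instead of the true X attenuates the slope by the reliability ratio var(X)/(var(X)+var(U)). This sketch covers only the classical no-interaction case, not the interaction models the paper develops.

```python
import random

def attenuation_demo(n=50000, true_beta=2.0, error_sd=1.0, seed=7):
    """Classical measurement error demo: regress Y on W = X + U instead
    of the true X. The naive slope is attenuated by the reliability
    ratio var(X)/(var(X)+var(U)); with var(X)=1 and error_sd=1 it
    should come out at about true_beta/2."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ws = [x + rng.gauss(0, error_sd) for x in xs]          # error-prone W
    ys = [true_beta * x + rng.gauss(0, 0.5) for x in xs]   # outcome on true X
    mw = sum(ws) / n
    my = sum(ys) / n
    cov = sum((w - mw) * (y - my) for w, y in zip(ws, ys)) / n
    var = sum((w - mw) ** 2 for w in ws) / n
    return cov / var       # naive least-squares slope, biased toward zero

naive_slope = attenuation_demo()
# naive_slope is roughly half of the true slope 2.0
```

    When W also depends on interactions between X and Z, the bias is no longer a simple scalar attenuation, which is the case the paper addresses.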

  13. Utilizing Photogrammetry and Strain Gage Measurement to Characterize Pressurization of an Inflatable Module

    Science.gov (United States)

    Valle, Gerard D.; Selig, Molly; Litteken, Doug; Oliveras, Ovidio

    2012-01-01

    This paper documents the integration of a large hatch penetration into an inflatable module and compares analytical load predictions with measured results obtained from strain measurement. Strain was measured photogrammetrically and with strain gages mounted to selected clevises that interface with the structural webbings. Bench testing showed good correlation between strain obtained from an extensometer and photogrammetric measurement, especially after the fabric has transitioned through the low-load/high-strain region of the curve. Test results for the full-scale torus showed mixed results in the lower-load, and thus lower-strain, regions. Overall, strain, and thus load, measured by strain gages and photogrammetry tracked fairly well with analytical predictions. Methods and areas of improvement are discussed.

  14. Awareness of Occupational Injuries and Utilization of Safety Measures among Welders in Coastal South India

    Directory of Open Access Journals (Sweden)

    S Ganesh Kumar

    2013-10-01

    Full Text Available Background: Awareness of occupational hazards and their safety precautions among welders is an important health issue, especially in developing countries. Objective: To assess the awareness of occupational hazards and utilization of safety measures among welders in coastal South India. Methods: A cross-sectional study was conducted among 209 welders in Puducherry, South India. Baseline characteristics, awareness of health hazards, safety measures, and their availability to and utilization by the participants were assessed using a pre-tested structured questionnaire. Results: The majority of the studied welders were aged between 20 and 40 years (n=160, 76.6%) and had 1-10 years of education (n=181, 86.6%). They were more aware of hazards (n=174, 83.3%) than of safety measures (n=134, 64.1%). The majority of the studied welders utilized at least one protective measure in the preceding week (n=200, 95.7%). Many of them had more than 5 years of experience (n=175, 83.7%); however, only 20% of them had institutional training (n=40, 19.1%). Age group, education level, and utilization of safety measures were significantly associated with awareness of hazards in univariate analysis (p<0.05). Conclusion: Awareness of occupational hazards and utilization of safety measures are low among welders in coastal South India, which highlights the importance of strengthening safety regulatory services for this group of workers.

  15. Occupational hazard perception and utilization of protective measures by welders in Kano City, Northern Nigeria

    Directory of Open Access Journals (Sweden)

    Z Iliyasu

    2010-01-01

    Conclusions: The level of awareness of occupational hazards was high, with low utilization of protective measures against the hazards. There is therefore a need for safety education and legislation on the use of protective measures to safeguard workers' health and increase productivity.

  16. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    Science.gov (United States)

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies economic and technical consequences of changing conditions in wastewater systems at the utility level. It was developed from data collected from stakeholders and ministries through a survey that determined resulting effects and adapted measures. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, describing the state development of sewer systems; ii) WWTP, covering process parameters of wastewater treatment plants (WWTP); and iii) Cost Accounting, calculating expenses in the cost categories and resulting charges. Validity and accuracy of the model were verified using historical data from an exemplary wastewater utility. Calculated process and economic parameters show high accuracy compared with measured parameters and actual expenses. Thus, the model is proposed to support strategic, process-oriented decision making at the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der Ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
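    The task-option-utility allocation described above can be illustrated compactly. The sketch below is a simple greedy utility-density heuristic for the multidimensional knapsack flavour of the problem, not the paper's actual allocation strategy; the task names, utilities, and resource demands are invented.

    ```python
    def allocate(tasks, capacity):
        """Greedily admit task options in order of utility per unit of total demand."""
        ranked = sorted(tasks,
                        key=lambda t: t["utility"] / sum(t["demand"].values()),
                        reverse=True)
        remaining = dict(capacity)
        chosen = []
        for t in ranked:
            # Admit the task only if every resource dimension still fits.
            if all(t["demand"][r] <= remaining[r] for r in t["demand"]):
                chosen.append(t["name"])
                for r in t["demand"]:
                    remaining[r] -= t["demand"][r]
        return chosen

    tasks = [
        {"name": "urgent-sim", "utility": 10.0, "demand": {"cpu": 4, "mem": 8}},
        {"name": "batch-fit",  "utility": 4.0,  "demand": {"cpu": 8, "mem": 4}},
        {"name": "low-prio",   "utility": 1.0,  "demand": {"cpu": 2, "mem": 2}},
    ]
    print(allocate(tasks, {"cpu": 10, "mem": 12}))  # → ['urgent-sim', 'low-prio']
    ```

    A user-defined credit-value metric of the kind the paper mentions would simply enter the `utility` field, letting urgent tasks outrank cheaper ones.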

  18. FY 2000 report on the results of the project for measures for rationalization of the international energy utilization - the model project for the heightening of efficiency of the international energy consumption. 1/2. Model project for facilities for effective utilization of by-producing exhaust gases from chemical plant, etc.; 2000 nendo kokusai energy shohi koritsuka tou moderu jigyo seika hokokusho. Kagaku kojo fukusei haigasu tou yuko riyo setsubi moderu jigyo (1/2)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    For the purpose of contributing to reduced energy consumption in China and a stable energy supply in Japan by improving the efficiency of energy utilization in the petrochemical industry, a highly energy-consuming industry in China, a model project for facilities for effective utilization of by-product exhaust gases from a chemical plant was carried out, and the FY 2000 results were reported. Concretely, a combustion incinerator and combustion exhaust gas recovery facilities for waste water and gas were to be installed at an acrylonitrile plant of a petrochemical complex in China to recover the combustion exhaust gas as process gas for effective utilization in the plant. The plant at the installation site has been in operation since 1995, with a production capacity of 50,000-60,000 tons. In this fiscal year, the detailed design and supply of electric instrumentation equipment and the manufacture of boiler facilities were carried out according to the basic design made in the previous fiscal year. Further, the equipment manufactured in the previous and current fiscal years was transported and inspected. The paper also reviewed drawings of the design of those facilities for which China takes responsibility. (NEDO)

  19. FY 2000 report on the results of the project for measures for rationalization of the international energy utilization - the model project for the heightening of efficiency of the international energy consumption. 2/2. Model project for facilities for effective utilization of by-producing exhaust gases from chemical plant, etc.; 2000 nendo kokusai energy shohi koritsuka tou moderu jigyo seika hokokusho. Kagaku kojo fukusei haigasu tou yuko riyo setsubi moderu jigyo (2/2)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    For the purpose of contributing to reduced energy consumption in China and a stable energy supply in Japan by improving the efficiency of energy utilization in the petrochemical industry, a highly energy-consuming industry in China, a model project for facilities for effective utilization of by-product exhaust gases from a chemical plant was carried out, and the FY 2000 results were reported. Concretely, a combustion incinerator and combustion exhaust gas recovery facilities for waste water and gas were to be installed at an acrylonitrile plant of a petrochemical complex in China to recover the combustion exhaust gas as process gas for effective utilization in the plant. In this fiscal year, the detailed design and supply of electric instrumentation equipment and the manufacture of boiler facilities were carried out according to the basic design made in the previous fiscal year. Further, the equipment manufactured in the previous and current fiscal years was transported and inspected. The paper also reviewed drawings of the design of those facilities for which China takes responsibility. The separate volume (2/2) includes drawings of valves, fire detectors, orifices, thermocouples, motor control equipment, etc. (NEDO)

  20. Fabry-Perot interferometer utilized for displacement measurement in a large measuring range

    International Nuclear Information System (INIS)

    Wang, Yung-Cheng; Shyu, Lih-Horng; Chang, Chung-Ping

    2010-01-01

    The optical configuration of a Fabry-Perot interferometer is uncomplicated and has already been applied in various measurement systems. For displacement measurement with a Fabry-Perot interferometer, the result is significantly influenced by the tilt angles of the measurement mirror in the interferometer; hence, the Fabry-Perot interferometer has been usable only over a rather small measuring range. The goal of this investigation is to enhance the measuring range of the Fabry-Perot interferometer by compensating the tilt angles. To verify the measuring characteristics of the self-developed Fabry-Perot interferometer, comparison measurements against a reference standard have been performed. The maximum deviation in these comparison experiments is less than 0.3 μm over a traveling range of 30 mm. The experimental results show that the Fabry-Perot interferometer is highly stable, insensitive to environmental effects, and can meet measuring requirements of the submicrometer order.
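    In the ideal tilt-free case, the displacement readout of such an interferometer reduces to fringe counting: each full interference fringe corresponds to half a wavelength of mirror travel. A minimal sketch, assuming a HeNe source (the abstract does not state the actual laser used):

    ```python
    WAVELENGTH_NM = 632.8  # HeNe laser line; an assumed, common choice

    def displacement_nm(fringe_count):
        """Mirror displacement for a given number of full interference fringes."""
        return fringe_count * WAVELENGTH_NM / 2.0

    # 1000 counted fringes correspond to roughly 0.316 mm of travel:
    print(displacement_nm(1000) / 1e6)  # displacement in millimetres
    ```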

  1. Validity of utility measures for women with pelvic organ prolapse.

    Science.gov (United States)

    Harvie, Heidi S; Lee, Daniel D; Andy, Uduak U; Shea, Judy A; Arya, Lily A

    2018-01-01

    Pelvic organ prolapse is a common condition that frequently coexists with urinary and fecal incontinence. The impact of prolapse on quality of life is typically measured through condition-specific quality-of-life instruments. Utility preference scores are a standardized generic health-related quality-of-life measure that summarizes morbidity on a scale from 0 (death) to 1 (optimum health). Utility preference scores quantify disease severity and burden and are widely used in cost-effectiveness research. The validity of utility preference instruments in women with pelvic organ prolapse has not been established. The objective of this study was to evaluate the construct validity of generic quality-of-life instruments for measuring utility scores in women with pelvic organ prolapse. Our hypothesis was that women with multiple pelvic floor disorders would have worse (lower) utility scores than women with pelvic organ prolapse only and that women with all 3 pelvic floor disorders would have the worst (lowest) utility scores. This was a prospective observational study of 286 women with pelvic floor disorders from a referral female pelvic medicine and reconstructive surgery practice. All women completed the following general health-related quality-of-life questionnaires: Health Utilities Index Mark 3, EuroQol, and Short Form 6D, as well as a visual analog scale. Pelvic floor symptom severity and condition-specific quality of life were measured using the Pelvic Floor Distress Inventory and Pelvic Floor Impact Questionnaire, respectively. We measured the relationship between utility scores and condition-specific quality-of-life scores and compared utility scores among 4 groups of women: (1) pelvic organ prolapse only, (2) pelvic organ prolapse and stress urinary incontinence, (3) pelvic organ prolapse and urgency urinary incontinence, and (4) pelvic organ prolapse, urinary incontinence, and fecal incontinence. 
Of 286 women enrolled, 191 (67%) had pelvic organ prolapse; mean

  2. Measuring Health Utilities in Children and Adolescents: A Systematic Review of the Literature.

    Directory of Open Access Journals (Sweden)

    Dominic Thorrington

    Full Text Available The objective of this review was to evaluate the use of all direct and indirect methods used to estimate health utilities in both children and adolescents. Utilities measured pre- and post-intervention are combined with the time over which health states are experienced to calculate quality-adjusted life years (QALYs). Cost-utility analyses (CUAs) estimate the cost-effectiveness of health technologies based on their costs and benefits using QALYs as a measure of benefit. The accurate measurement of QALYs is dependent on using appropriate methods to elicit health utilities. We sought studies that measured health utilities directly from patients or their proxies. We did not exclude those studies that also included adults in the analysis, but excluded those studies focused only on adults. We evaluated 90 studies from a total of 1,780 selected from the databases. 47 (52%) studies were CUAs incorporated into randomised clinical trials; 23 (26%) were health-state utility assessments; 8 (9%) validated methods and 12 (13%) compared existing or new methods. 22 unique direct or indirect calculation methods were used a total of 137 times. Direct calculation through standard gamble, time trade-off and visual analogue scale was used 32 times. The EuroQol EQ-5D was the most frequently-used single method, selected for 41 studies. 15 of the methods used were generic methods and the remaining 7 were disease-specific. 48 of the 90 studies (53%) used some form of proxy, with 26 (29%) using proxies exclusively to estimate health utilities. Several child- and adolescent-specific methods are still being developed and validated, leaving many studies using methods that have not been designed or validated for use in children or adolescents. Several studies failed to justify using proxy respondents rather than administering the methods directly to the patients. Only two studies examined missing responses to the methods administered with respect to the patients' ages.
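    The QALY arithmetic this review depends on is simple enough to sketch: utility-weighted time in each health state, summed, with the incremental benefit of an intervention being the difference between scenarios. The health-state utilities and durations below are invented for illustration.

    ```python
    def qalys(states):
        """Sum of (utility weight x years lived) over a sequence of health states."""
        return sum(u * years for u, years in states)

    # Five untreated years at utility 0.6 vs. an intervention giving two years
    # at 0.8 before returning to 0.6 for the remaining three years:
    pre  = qalys([(0.6, 5)])            # 3.0 QALYs
    post = qalys([(0.8, 2), (0.6, 3)])  # 3.4 QALYs
    print(round(post - pre, 2))         # → 0.4
    ```

    A CUA would then divide the incremental cost of the intervention by this incremental QALY gain.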

  3. Clinical Utility of the DSM-5 Alternative Model of Personality Disorders

    DEFF Research Database (Denmark)

    Bach, Bo; Markon, Kristian; Simonsen, Erik

    2015-01-01

    In Section III, Emerging Measures and Models, DSM-5 presents an Alternative Model of Personality Disorders, which is an empirically based model of personality pathology measured with the Level of Personality Functioning Scale (LPFS) and the Personality Inventory for DSM-5 (PID-5). These novel...... (involving a comparison of presenting problems, history, and diagnoses) and used to formulate treatment considerations. We also considered 6 specific personality disorder types that could be derived from the profiles as defined in the DSM-5 Section III criteria. Results. Using the LPFS and PID-5, we were...... evaluation generally supported the utility for clinical purposes of the Alternative Model for Personality Disorders in Section III of the DSM-5, although it also identified some areas for refinement....

  4. Survey review of models for use in market penetration analysis: utility sector focus

    Energy Technology Data Exchange (ETDEWEB)

    Groncki, P.J.; Kydes, A.S.; Lamontagne, J.; Marcuse, W.; Vinjamuri, G.

    1980-11-01

    The ultimate benefits of federal expenditures in research and development for new technologies are dependent upon the degree of acceptance of these technologies. Market penetration considerations are central to the problem of quantifying the potential benefits. These benefits are inputs to the selection process for projects competing for finite R and D funds. Market penetration is the gradual acceptance of a new commodity or technology. The Office of Coal Utilization is concerned with the specialized area of market penetration of new electric power generation technologies for both replacement and new capacity. The common measure of market penetration is the fraction of the market serviced by the challenging technology at each time point considered. The methodologies for estimating market penetration are divided into three generic classes: integrated energy/economy modeling systems, utility capacity expansion models, and technology substitution models. In general, the integrated energy/economy modeling systems have three advantages: they provide internally consistent macro energy-economy scenarios, they account for the effect of prices on demand by fuel form, and they explicitly capture the effects of population growth and the level and structure of economic activity on energy demand. A variety of deficiencies appear in most energy-economy systems models. All of the methodologies may be applied at some level to questions of market penetration of new technologies in the utility sector; the choice of methods for a particular analysis must be conditioned by the scope of the analysis, data availability, and the relative cost of alternative analyses.

  5. Measuring Collective Efficacy: A Multilevel Measurement Model for Nested Data

    Science.gov (United States)

    Matsueda, Ross L.; Drakulich, Kevin M.

    2016-01-01

    This article specifies a multilevel measurement model for survey response when data are nested. The model includes a test-retest model of reliability, a confirmatory factor model of inter-item reliability with item-specific bias effects, an individual-level model of the biasing effects due to respondent characteristics, and a neighborhood-level…

  6. A public utility model for managing public land recreation enterprises.

    Science.gov (United States)

    Tom. Quinn

    2002-01-01

    Through review of relevant economic principles and judicial precedent, a case is made that public-land recreation enterprises are analogous to traditionally recognized public utilities. Given the historical concern over the societal value of recreation and associated pricing issues, public-land management policies failing to acknowledge these utility-like...

  7. mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

    Programmer at the Telecommunications Office of the IT Department, Belgorod State University. Published online: 08 August 2017. ABSTRACT. The wireless self-organized network functioning efficiency is considered from the point of view of its radio channel utilization. In order to increase the radio channels utilization it is

  8. Assessing the Accuracy of Eyelid Measurements Utilizing the Volk Eye Check System and Clinical Measurements.

    Science.gov (United States)

    Sinha, Kunal R; Yeganeh, Amir; Goldberg, Robert A; Rootman, Daniel B

    2017-08-23

    The purpose of this study was to validate the accuracy of marginal reflex distance 1 (MRD1) measurements obtained by the Volk Eye Check system, a modified smartphone that measures MRD1 automatically, relative to clinical and digital measurements. In this prospective observational study of adults with normal eyelids and ptosis, MRD1 was measured clinically, digitally, and automatically with the Volk device. Eyes were divided into successful versus unsuccessful Volk trial groups; successful eyes were then subdivided into control and ptosis subgroups. The primary outcome measures were mean MRD1 obtained by the 3 modalities. Secondary outcome measures included the success rate of the device and the prevalence of ptosis within the successful and unsuccessful groups. In the overall sample of 88 eyes, clinical and digital MRD1 were not significantly different. Among eyes with successful Volk trials, MRD1 differed significantly by modality: in the successful group, Volk MRD1 (3.05 mm) was significantly higher than clinical MRD1 (2.68 mm); in the ptosis subgroup, Volk MRD1 (2.47 mm) was significantly higher than clinical (2.05 mm) and digital MRD1 (1.91 mm), with a further significant comparison showing a mean difference of 1.21 mm. The Volk device measures MRD1 well in normal patients but overestimates MRD1 in patients with ptosis. It may be most appropriate in assessing patients with normal or elevated eyelid position. Clinical and digital MRD1 measurements were not different than each other.

  9. Modeling Substrate Utilization, Metabolite Production, and Uranium Immobilization in Shewanella oneidensis Biofilms

    Directory of Open Access Journals (Sweden)

    Ryan S. Renslow

    2017-06-01

    Full Text Available In this study, we developed a two-dimensional mathematical model to predict substrate utilization and metabolite production rates in Shewanella oneidensis MR-1 biofilm in the presence and absence of uranium (U). In our model, lactate and fumarate are used as the electron donor and the electron acceptor, respectively. The model includes the production of extracellular polymeric substances (EPS). The EPS bound to the cell surface and distributed in the biofilm were considered bound EPS (bEPS) and loosely associated EPS (laEPS), respectively. COMSOL® Multiphysics finite element analysis software was used to solve the model numerically (model file provided in the Supplementary Material). The input variables of the model were the lactate, fumarate, cell, and EPS concentrations, half saturation constant for fumarate, and diffusion coefficients of the substrates and metabolites. To estimate unknown parameters and calibrate the model, we used a custom designed biofilm reactor placed inside a nuclear magnetic resonance (NMR) microimaging and spectroscopy system and measured substrate utilization and metabolite production rates. From these data we estimated the yield coefficients, maximum substrate utilization rate, half saturation constant for lactate, stoichiometric ratio of fumarate and acetate to lactate and stoichiometric ratio of succinate to fumarate. These parameters are critical to predicting the activity of biofilms and are not available in the literature. Lastly, the model was used to predict uranium immobilization in S. oneidensis MR-1 biofilms by considering reduction and adsorption processes in the cells and in the EPS. We found that the majority of immobilization was due to cells, and that EPS was less efficient at immobilizing U. Furthermore, most of the immobilization occurred within the top 10 μm of the biofilm. To the best of our knowledge, this research is one of the first biofilm immobilization mathematical models based on experimental
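    Monod-type substrate utilization of the kind parameterized here (a maximum utilization rate and a half-saturation constant) can be sketched with a simple explicit Euler integration. All rate constants below are invented for illustration; the full reaction-diffusion COMSOL model is of course far richer.

    ```python
    def simulate(S0, X, q_max, Ks, dt=0.01, t_end=10.0):
        """Euler integration of Monod kinetics: dS/dt = -q_max * S/(Ks + S) * X."""
        S, t = S0, 0.0
        while t < t_end:
            S += -q_max * S / (Ks + S) * X * dt  # substrate consumed by biomass X
            S = max(S, 0.0)                      # concentration cannot go negative
            t += dt
        return S

    # With biomass present the substrate is drawn down; with none it is untouched.
    final = simulate(S0=10.0, X=1.0, q_max=0.5, Ks=2.0)
    print(0.0 <= final < 10.0)                          # → True
    print(simulate(S0=10.0, X=0.0, q_max=0.5, Ks=2.0))  # → 10.0
    ```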

  10. Assessing the Utility of a Daily Log for Measuring Principal Leadership Practice

    Science.gov (United States)

    Camburn, Eric M.; Spillane, James P.; Sebastian, James

    2010-01-01

    Purpose: This study examines the feasibility and utility of a daily log for measuring principal leadership practice. Setting and Sample: The study was conducted in an urban district with approximately 50 principals. Approach: The log was assessed against two criteria: (a) Is it feasible to induce strong cooperation and high response rates among…

  11. The utility of curriculum-based measurement for evaluating the effects of methylphenidate on academic performance.

    OpenAIRE

    Stoner, G; Carey, S P; Ikeda, M J; Shinn, M R

    1994-01-01

    Two case studies were conducted to investigate the utility of curriculum-based measurement of math and reading for evaluating the effects of methylphenidate on the academic performance of 2 students diagnosed with attention deficit hyperactivity disorder. Following baseline measurement, double-blind placebo-controlled procedures were employed to evaluate each student's response to three levels (5 mg, 10 mg, and 15 mg) of the medication. Results of the first study suggest that the curriculum-b...

  12. Dynamic Modeling and Simulation Tools for Utility Systems

    National Research Council Canada - National Science Library

    Hock, Vincent

    2002-01-01

    Utility systems are enablers for the force projection process. They provide the electricity, water, transportation fuel, heating, cooling, compressed air, and communications required for the various steps of force projection...

  13. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
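    The machinery the paper builds its measurement layer on, value iteration for a Markov decision process, can be sketched briefly. The two-state "game" below is invented for illustration; the psychometric mapping from actions to latent traits is not reproduced.

    ```python
    def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
        """Return optimal state values V(s) given transitions P and rewards R."""
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                # Bellman optimality backup: best expected reward-to-go over actions.
                best = max(
                    sum(P[s][a][s2] * (R[s][a] + gamma * V[s2]) for s2 in states)
                    for a in actions
                )
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < tol:
                return V

    states, actions = ["start", "goal"], ["safe", "risky"]
    P = {
        "start": {"safe": {"start": 1.0, "goal": 0.0},
                  "risky": {"start": 0.5, "goal": 0.5}},
        "goal":  {"safe": {"start": 0.0, "goal": 1.0},
                  "risky": {"start": 0.0, "goal": 1.0}},
    }
    R = {"start": {"safe": 0.0, "risky": -1.0}, "goal": {"safe": 1.0, "risky": 1.0}}
    V = value_iteration(states, actions, P, R)
    print(V["goal"] > V["start"])  # → True: the goal state is more valuable
    ```

    In the paper's setting, a student's observed within-task actions are compared against such optimal (or noisily optimal) policies to infer latent traits.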

  14. Micrometric scale measurement of material structure moving utilizing μ-radiographic technique

    International Nuclear Information System (INIS)

    Vavrik, Daniel; Jakubek, Jan; Holy, Tomas

    2008-01-01

    Studies concerning the functionality and integrity of loaded materials require information about their mechanical behavior; consequently, the displacement field of the loaded material has to be studied. Although this is quite a common problem, new challenges arise when one is interested in the displacement field at the micrometric scale. One promising solution utilizes the μ-radiographic technique. Generally, measurement of the displacement field requires marks that can be followed as moving objects during loading. Radiographic studies of the mechanical behavior of alloys or composite materials can benefit from their natural microstructure, which can be utilized as natural mark fields

  15. A novel method of evaluating dynamic measurement uncertainty utilizing digital filters

    International Nuclear Information System (INIS)

    Hessling, J P

    2009-01-01

    For every measurement, a measurement uncertainty should be associated with the estimate of the measurand. In particular, this applies to all the common non-stationary transient dynamic measurements made with linear time-invariant measurement systems, despite the present standard treatment providing little guidance for this case. To complete recent studies on the estimation and correction of the systematic dynamic error, the remaining evaluation of measurement uncertainty is addressed here. A method based on digital filtering will be utilized to find the generically time-dependent dynamic contribution to the uncertainty. It will be shown that the widely accepted approach to calculate the stationary uncertainty can be extended to transient measurements, provided some complementary techniques are used and the sensitivity is generalized to be a time-dependent signal instead of a constant. For illustration, the method is applied to the same measurement system analysed in the related published studies
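    One ingredient of such an evaluation can be sketched: for a linear time-invariant (here FIR) filter driven by uncorrelated noise of standard uncertainty u_in, the stationary output uncertainty scales with the root of the sum of squared filter coefficients. The 3-tap moving average below is an invented example, not the filter analysed in the paper.

    ```python
    import math

    def propagated_uncertainty(h, u_in):
        """Stationary output uncertainty of an FIR filter fed uncorrelated noise."""
        return u_in * math.sqrt(sum(c * c for c in h))

    h = [1 / 3, 1 / 3, 1 / 3]  # illustrative 3-tap moving-average filter
    print(propagated_uncertainty(h, u_in=0.3))  # ≈ 0.173: averaging reduces noise
    ```

    The paper's point is that for transient signals this sensitivity becomes time-dependent rather than a single constant like the factor above.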

  16. User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2015-01-01

    Full Text Available Resource allocation is one of the most important research topics in server systems. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services are usually run on virtual machines of the cloud server. In addition, the cloud environment is commercialized, so economic factors should also be considered. To deal with the commercialization and virtualization of the cloud environment, we proposed a user-utility-oriented queuing model for task scheduling. Firstly, we modeled task scheduling in the cloud environment as an M/M/1 queuing system. Secondly, we classified utility into time utility and cost utility and built a linear programming model to maximize the total utility of both. Finally, we proposed a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of our proposed model.
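    The M/M/1 quantities underlying the time-utility side of such a model are standard and easy to sketch. The arrival and service rates below are illustrative, and the paper's linear-programming utility weighting is not reproduced.

    ```python
    def mm1_metrics(lam, mu):
        """Classic M/M/1 steady-state results (stable only when lam < mu)."""
        assert lam < mu, "queue is unstable"
        rho = lam / mu           # server utilization
        n_sys = rho / (1 - rho)  # mean number of tasks in the system
        w_sys = 1 / (mu - lam)   # mean time a task spends in the system
        return rho, n_sys, w_sys

    rho, n_sys, w_sys = mm1_metrics(lam=5.0, mu=10.0)
    print(rho, n_sys, w_sys)  # → 0.5 1.0 0.2
    ```

    A time-utility function would then map `w_sys` to a user-perceived value, trading it off against the cost of provisioning a faster server (larger mu).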

  17. Social Security Measures for Elderly Population in Delhi, India: Awareness, Utilization and Barriers.

    Science.gov (United States)

    Kohli, Charu; Gupta, Kalika; Banerjee, Bratati; Ingle, Gopal Krishna

    2017-05-01

    World population of elderly is increasing at a fast pace. The number of elderly in India has increased by 54.77% in the last 15 years. A number of social security measures have been taken by the Indian government. To assess awareness, utilization and barriers faced while utilizing social security schemes by the elderly in a secondary care hospital situated in a rural area in Delhi, India. A cross-sectional study was conducted among 360 individuals aged 60 years and above in a secondary care hospital situated in a rural area in Delhi. A pre-tested, semi-structured schedule prepared in the local language was used. Data was analysed using SPSS software (version 17.0). Chi-square test was used to observe any statistical association between categorical variables. The results were considered statistically significant if p-value was less than 0.05. A majority of study subjects were females (54.2%), Hindu (89.7%), married (60.3%) and were not engaged in any occupation (82.8%). Awareness about the Indira Gandhi National Old Age Pension Scheme (IGNOAPS) was present among 286 (79.4%) subjects and the Annapurna scheme among 193 (53.6%). Among 223 subjects who were below the poverty line, 179 (80.3%) were aware of IGNOAPS, while 112 (50.2%) were utilizing the scheme. There was no association of awareness with education status, occupation, religion, family type, marital status or caste (p>0.05). Corruption and tedious administrative formalities were the major barriers reported. Awareness generation, provision of information on how to approach the concerned authority for utilizing the scheme, and ease of administrative procedures should be an integral part of any social security scheme or measure. In the present study, about 79.4% of the elderly were aware and 45% of the eligible subjects were utilizing the pension scheme. Major barriers reported in utilization of schemes were corruption and tedious administrative procedures.

  18. Utilizing Photogrammetry and Strain Gage Measurement to Characterize Pressurization of Inflatable Modules

    Science.gov (United States)

    Mohammed, Anil

    2011-01-01

    This paper focuses on integrating a large hatch penetration into inflatable modules of various constructions. This paper also compares load predictions with test measurements. The strain was measured by utilizing photogrammetric methods and strain gages mounted to select clevises that interface with the structural webbings. Bench testing showed good correlation between strain data collected from an extensometer and photogrammetric measurements, even when the material transitioned from the low load to high load strain region of the curve. The full-scale torus design module showed mixed results as well in the lower load and high strain regions. After thorough analysis of photogrammetric measurements, strain gage measurements, and predicted load, the photogrammetric measurements seem to be off by a factor of two.

  19. Job Satisfaction and Personality: The Utility of the Five-Factor Model of Personality

    Science.gov (United States)

    1999-03-01

    Social and Enterprising interests. Costa and McCrae (1998) utilized Wiggins's (1979) circumplex model as a basis for developing their Style of... Dissertation by Gregg F. Tanoff.

  20. Improving Patient Flow Utilizing a Collaborative Learning Model.

    Science.gov (United States)

    Tibor, Laura C; Schultz, Stacy R; Cravath, Julie L; Rein, Russell R; Krecke, Karl N

    2016-01-01

    This initiative utilized a collaborative learning approach to increase knowledge and experience in process improvement and systems thinking while targeting improved patient flow in seven radiology modalities. Teams showed improvements in their project metrics and collectively streamlined the flow for 530 patients per day by improving patient lead time, wait time, and first case on-time start rates. In a post-project survey of 50 project team members, 82% stated they had more effective solutions as a result of the process improvement methodology, 84% stated they will be able to utilize the process improvement tools again in the future, and 98% would recommend participating in another project to a colleague.

  1. Expected Utility and Catastrophic Risk in a Stochastic Economy-Climate Model

    NARCIS (Netherlands)

    Ikefuji, M.; Laeven, R.J.A.; Magnus, J.R.; Muris, C.H.M.

    2010-01-01

    In the context of extreme climate change, we ask how to conduct expected utility analysis in the presence of catastrophic risks. Economists typically model decision making under risk and uncertainty by expected utility with constant relative risk aversion (power utility); statisticians typically

  2. The utility of Earth system Models of Intermediate Complexity

    NARCIS (Netherlands)

    Weber, S.L.

    2010-01-01

    Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by

  3. An Additive-Utility Model of Delay Discounting

    Science.gov (United States)

    Killeen, Peter R.

    2009-01-01

    Goods remote in temporal, spatial, or social distance, or in likelihood, exert less control over our behavior than those more proximate. The decay of influence with distance, of perennial interest to behavioral economists, has had a renaissance in the study of delay discounting. By developing discount functions from marginal utilities, this…
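    The delay-discounting literature this abstract refers to is dominated by the hyperbolic discount function, which is easy to sketch. The discount parameter k and the rewards below are illustrative; Killeen's additive-utility derivation itself is not reproduced here.

    ```python
    def hyperbolic_value(amount, delay, k=0.25):
        """Present value of a delayed reward under hyperbolic discounting."""
        return amount / (1 + k * delay)

    # Influence decays with distance in time: an immediate 100 keeps full value,
    # while the same reward 12 time units away retains only a quarter of it.
    print(hyperbolic_value(100, 0))   # → 100.0
    print(hyperbolic_value(100, 12))  # → 25.0
    ```

    Killeen's contribution is to derive such discount functions from marginal utilities rather than postulate them directly.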

  4. Mathematical model of a utility firm. Executive summary

    Energy Technology Data Exchange (ETDEWEB)

    1983-08-21

    The project was aimed at developing an understanding of the economic and behavioral processes that take place within a utility firm, and without it. This executive summary, one of five documents, gives the project goals and objectives, outlines the subject areas of investigation, discusses the findings and results, and finally considers applications within the electric power industry and future research directions. (DLC)

  5. Knowledge Management Models And Their Utility To The Effective ...

    African Journals Online (AJOL)

    Although indigenous knowledge is key to the development of sub Saharan Africa and the preservation of its societal memory, it is fast disappearing due to a variety of reasons. One of the strategies that may assist in the management and preservation of indigenous knowledge is the utilization of knowledge management ...

  6. Emergency Preparedness Education for Nurses: Core Competency Familiarity Measured Utilizing an Adapted Emergency Preparedness Information Questionnaire.

    Science.gov (United States)

    Georgino, Madeline M; Kress, Terri; Alexander, Sheila; Beach, Michael

    2015-01-01

    The purpose of this project was to measure trauma nurse improvement in familiarity with emergency preparedness and disaster response core competencies, as originally defined by the Emergency Preparedness Information Questionnaire, after a focused educational program. An adapted version of the Emergency Preparedness Information Questionnaire was utilized to measure nurses' familiarity with core competencies pertinent to first responder capabilities. This project utilized a pre- and postsurvey descriptive design and integrated education sessions into the preexisting, mandatory "Trauma Nurse Course" at a large, level I trauma center. A total of 63 nurses completed the intervention during May and September 2014 sessions. Overall, all 8 competencies demonstrated significant improvements in familiarity. In conclusion, this pilot quality improvement project demonstrated a unique approach to educating nurses to be more ready and comfortable when treating victims of a disaster.

  7. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    In this report, we will present a descriptive and organizational framework for incremental and fundamental changes to regulatory and utility business models in the context of clean energy public policy goals. We will also discuss the regulated utility's role in providing value-added services that relate to distributed energy resources, identify the "openness" of customer information and utility networks necessary to facilitate change, and discuss the relative risks, and the shifting of risks, for utilities and customers.

  8. Rolling Resistance Measurement and Model Development

    DEFF Research Database (Denmark)

    Andersen, Lasse Grinderslev; Larsen, Jesper; Fraser, Elsje Sophia

    2015-01-01

    There is an increased focus worldwide on understanding and modeling rolling resistance because reducing the rolling resistance by just a few percent will lead to substantial energy savings. This paper reviews the state of the art of rolling resistance research, focusing on measuring techniques, surface and texture modeling, contact models, tire models, and macro-modeling of rolling resistance…

  9. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    OpenAIRE

    Turi Nagy, M.; Rozinaj, G.

    2004-01-01

    An SN (sinusoids plus noise) model is a spectral model, in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases. The remaining non-periodic components are represented by a filtered noise. The sinusoidal model utilizes physical properties of musical instruments and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in a compressio...
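    As an illustration of the SN idea described above (the partial frequencies, amplitudes, and noise gain are invented here, and plain white noise stands in for the model's filtered noise):

```python
import math
import random

def sn_synthesize(n_samples, sample_rate, partials, noise_gain=0.05, seed=0):
    """Toy sinusoids-plus-noise synthesis: a sum of sinusoids with fixed
    frequencies and amplitudes, plus a small stochastic residual."""
    rng = random.Random(seed)
    out = []
    for i in range(n_samples):
        t = i / sample_rate
        tonal = sum(amp * math.sin(2 * math.pi * freq * t)
                    for freq, amp in partials)
        out.append(tonal + noise_gain * (2 * rng.random() - 1))
    return out

# Two partials (fundamental + octave) plus a weak noise floor.
signal = sn_synthesize(1000, 8000, [(440.0, 1.0), (880.0, 0.5)])
```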

  10. Modeling energy flexibility of low energy buildings utilizing thermal mass

    DEFF Research Database (Denmark)

    Foteinaki, Kyriaki; Heller, Alfred; Rode, Carsten

    2016-01-01

    In the future energy system a considerable increase in the penetration of renewable energy is expected, challenging the stability of the system, as both production and consumption will have fluctuating patterns. Hence, the concept of energy flexibility will be necessary in order for the consumption to match the production patterns, shifting demand from on-peak hours to off-peak hours. Buildings could act as flexibility suppliers to the energy system, through load shifting potential, provided that the large thermal mass of the building stock could be utilized for energy storage. In the present study the load shifting potential of an apartment of a low energy building in Copenhagen is assessed, utilizing the heat storage capacity of the thermal mass when the heating system is switched off for relieving the energy system. It is shown that when using a 4-hour preheating period before switching off…

  11. SEMPATH Ontology: modeling multidisciplinary treatment schemes utilizing semantics.

    Science.gov (United States)

    Alexandrou, Dimitrios Al; Pardalis, Konstantinos V; Bouras, Thanassis D; Karakitsos, Petros; Mentzas, Gregoris N

    2012-03-01

    Demand for treatment quality has increased dramatically over recent decades. The main challenge in improving treatment quality is the personalization of treatment, since each patient constitutes a unique case. Healthcare provision is a complex environment, since healthcare organizations are highly multidisciplinary. In this paper, we present the conceptualization of the domain of clinical pathways (CP). The SEMPATH (SEMantic PATHways) Ontology comprises three main parts: 1) the CP part; 2) the business and finance part; and 3) the quality assurance part. Our implementation achieves the conceptualization of the multidisciplinary domain of healthcare provision, to be further utilized for the implementation of a Semantic Web Rule Language (SWRL) rules repository. Finally, the SEMPATH Ontology is utilized for the definition of a set of SWRL rules for the human papillomavirus (HPV) disease and its treatment scheme. © 2012 IEEE

  12. Utility of ketone measurement in the prevention, diagnosis and management of diabetic ketoacidosis.

    Science.gov (United States)

    Misra, S; Oliver, N S

    2015-01-01

    Ketone measurement is advocated for the diagnosis of diabetic ketoacidosis and assessment of its severity. Assessing the evidence base for ketone measurement in clinical practice is challenging because multiple methods are available but there is a lack of consensus about which is preferable. Evaluating the utility of ketone measurement is additionally problematic because of variability in the biochemical definition of ketoacidosis internationally and in the proposed thresholds for ketone measures. This has led to conflicting guidance from expert bodies on how ketone measurement should be used in the management of ketoacidosis. The development of point-of-care devices that can reliably measure the capillary blood ketone β-hydroxybutyrate (BOHB) has widened the spectrum of applications of ketone measurement, but whether the evidence base supporting these applications is robust enough to warrant their incorporation into routine clinical practice remains unclear. The imprecision of capillary blood ketone measures at higher values, the lack of availability of routine laboratory-based assays for BOHB and the continued cost-effectiveness of urine ketone assessment prompt further discussion on the role of capillary blood ketone assessment in ketoacidosis. In the present article, we review the various existing methods of ketone measurement, the precision of capillary blood ketone as compared with other measures, its diagnostic accuracy in predicting ketoacidosis and other clinical applications including prevention, assessment of severity and resolution of ketoacidosis. © 2014 The Authors. Diabetic Medicine © 2014 Diabetes UK.

  13. Modeling and Optimizing Energy Utilization of Steel Production Process: A Hybrid Petri Net Approach

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2013-01-01

    The steel industry is responsible for nearly 9% of anthropogenic energy utilization in the world. It is urgent to reduce the total energy utilization of the steel industry under the huge pressures of reducing energy consumption and CO2 emissions. Meanwhile, steel manufacturing is a typical continuous-discrete process with multiple coupled procedures, objects, constraints, and machines, which makes energy management rather difficult. In order to study the energy flow within the real steel production process, this paper presents a new modeling and optimization method for the process based on Hybrid Petri Nets (HPN). Firstly, we introduce a detailed description of HPN. Then the real steel production process from one typical integrated steel plant is transformed into a Hybrid Petri Net model as a case study, from which we obtain a series of constraints for our optimization model. In consideration of the real process situation, we select steel production, energy efficiency, and self-made gas surplus as the main optimization objectives. Afterwards, a fuzzy linear programming method is applied to obtain the multiobjective optimization results. Finally, some measures are suggested to improve this low-efficiency, high-cost process structure.

  14. Aerosol behaviour modeling and measurements

    International Nuclear Information System (INIS)

    Gieseke, J.A.; Reed, L.D.

    1977-01-01

    Aerosol behavior within Liquid Metal Fast Breeder Reactor (LMFBR) containments is of critical importance since most of the radioactive species are expected to be associated with particulate forms and the mass of radiologically significant material leaked to the ambient atmosphere is directly related to the aerosol concentration airborne within the containment. Mathematical models describing the behavior of aerosols in closed environments, besides providing a direct means of assessing the importance of specific assumptions regarding accident sequences, will also serve as the basic tool with which to predict the consequences of various postulated accident situations. Consequently, considerable efforts have been recently directed toward the development of accurate and physically realistic theoretical aerosol behavior models. These models have accounted for various mechanisms affecting agglomeration rates of airborne particulate matter as well as particle removal rates from closed systems. In all cases, spatial variations within containments have been neglected and a well-mixed control volume has been assumed. Examples of existing computer codes formulated from the mathematical aerosol behavior models are the Brookhaven National Laboratory TRAP code, the PARDISEKO-II and PARDISEKO-III codes developed at Karlsruhe Nuclear Research Center, and the HAA-2, HAA-3, and HAA-3B codes developed by Atomics International. Because of their attractive short computation times, the HAA-3 and HAA-3B codes have been used extensively for safety analyses and are attractive candidates with which to demonstrate order of magnitude estimates of the effects of various physical assumptions. Therefore, the HAA-3B code was used as the nucleus upon which changes have been made to account for various physical mechanisms which are expected to be present in postulated accident situations and the latest of the resulting codes has been termed the HAARM-2 code. It is the primary purpose of the HAARM

  15. Utility of noninvasive transcutaneous measurement of postoperative hemoglobin in total joint arthroplasty patients.

    Science.gov (United States)

    Stoesz, Michael; Wood, Kristin; Clark, Wesley; Kwon, Young-Min; Freiberg, Andrew A

    2014-11-01

    This study prospectively evaluated the clinical utility of a noninvasive transcutaneous device for postoperative hemoglobin measurement in 100 total hip and knee arthroplasty patients. A protocol to measure hemoglobin noninvasively, prior to venipuncture, successfully avoided venipuncture in 73% of patients. In the remaining 27 patients, there were a total of 48 venipunctures performed during the postoperative hospitalization period due to reasons including transcutaneous hemoglobin measurement less than or equal to 9 g/dL (19), inability to obtain a transcutaneous hemoglobin measurement (8), clinical signs of anemia (3), and noncompliance with the study protocol (18). Such screening protocols may provide a convenient and cost-effective alternative to routine venipuncture for identifying patients at risk for blood transfusion after elective joint arthroplasty. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Examining the Utility of the New Raney Vocabulary Measure Alongside the WAIS-III.

    Science.gov (United States)

    Ferguson, Ryan J; Roy-Charland, Annie; Dickinson, Joël

    2018-01-29

    Psychometric tests related to vocabulary assessments are, for the most part, restricted in their use by trained professionals and/or are costly. These restrictions limit their use, especially for research purposes. To circumvent these limitations, the Raney Vocabulary Measure was created for assessing vocabulary proficiency, specifically for research purposes. The measure consists of 30 questions where participants were instructed to choose the best definition of each word. The purpose of the study was to examine the utility of the new measure using the highly standardized but protected Wechsler Adult Intelligence Scale. Results from the linear combination of the subscales revealed the significant prediction of the Raney Vocabulary Measure, with the Vocabulary subtest contributing most to the unique variance. These results support that the test examines vocabulary ability. The current results are promising as the test would allow for greater accessibility for researchers who do not have access to restricted psychometric tests.

  17. Laser shaft alignment measurement model

    Science.gov (United States)

    Mo, Chang-tao; Chen, Changzheng; Hou, Xiang-lin; Zhang, Guoyu

    2007-12-01

    When the driving shaft and the driven shaft rotate with the same angular velocity and in the same direction, the track of the laser beam on the photosensitive surface of the receiver is a closed curve. The coordinates of any point on this curve are determined by the relative position of the two shafts. Based on this observation, a mathematical model of laser alignment is established. Using a data acquisition system, a data processing model for a laser alignment meter with a single laser beam and a detector, and the installation parameters stored in the computer, the state parameters between the two shafts can be obtained through calculation and correction. The correction data for moving the four mounting feet of the machine being aligned in the horizontal and vertical planes can then be calculated, which indicates how to move the machine to align the shafts.

  18. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regression…
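    The impact on naive analyses mentioned above can be demonstrated with the classical attenuation effect in simple linear regression: adding independent noise to the predictor shrinks the naive slope estimate toward zero by the reliability ratio. A self-contained simulation (all numbers illustrative):

```python
import random

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(42)
true_x = [rng.gauss(0, 1) for _ in range(20000)]
y = [2.0 * x + rng.gauss(0, 0.5) for x in true_x]   # true slope = 2
noisy_x = [x + rng.gauss(0, 1) for x in true_x]     # error variance = 1

# Reliability ratio = var(x) / (var(x) + var(error)) = 1/2,
# so the naive slope is attenuated from 2 toward about 1.
naive_slope = ols_slope(noisy_x, y)
```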

  19. Utility of Arden Syntax for Representation of Fuzzy Logic in Clinical Quality Measures.

    Science.gov (United States)

    Jenders, Robert A

    2015-01-01

    Prior work has established that fuzzy logic is prevalent in clinical practice guidelines and that Arden Syntax is suitable for representing clinical quality measures (CQMs). Approved since then, Arden Syntax v2.9 (2012) has formal constructs for fuzzy logic, even as new formalisms are proposed to represent quality logic. We determined the prevalence of fuzzy logic in CQMs and assessed the utility of a contemporary version of Arden Syntax for representing them. Linguistic variables were tabulated in the 329 Assessing Care of the Vulnerable Elderly (ACOVE-3) CQMs, and these logic statements were encoded in Arden Syntax. In a total of 392 CQMs, linguistic variables occurred in 30.6%, and Arden Syntax could be used to represent these formally. Fuzzy logic occurs commonly in CQMs, and Arden Syntax offers particular utility for representing these constructs.

  20. Validation of the SF-6D Health State Utilities Measure in Lower Extremity Sarcoma

    Directory of Open Access Journals (Sweden)

    Kenneth R. Gundle

    2014-01-01

    Aim. Health state utilities measures are preference-weighted patient-reported outcome (PRO) instruments that facilitate comparative effectiveness research. One such measure, the SF-6D, is generated from the Short Form 36 (SF-36). This report describes a psychometric evaluation of the SF-6D in a cross-sectional population of lower extremity sarcoma patients. Methods. Patients with lower extremity sarcoma from a prospective database who had completed the SF-36 and Toronto Extremity Salvage Score (TESS) were eligible for inclusion. Computed SF-6D health states were given preference weights based on a prior valuation. The primary outcome was correlation between the SF-6D and TESS. Results. In 63 pairs of surveys in a lower extremity sarcoma population, the mean preference-weighted SF-6D score was 0.59 (95% CI 0.4–0.81). The distribution of SF-6D scores approximated a normal curve (skewness = 0.11). There was a positive correlation between the SF-6D and TESS (r=0.75, P<0.01). Respondents who reported walking aid use had lower SF-6D scores (0.53 versus 0.61, P=0.03). Five respondents underwent amputation, with lower SF-6D scores that approached significance (0.48 versus 0.6, P=0.06). Conclusions. The SF-6D health state utilities measure demonstrated convergent validity without evidence of ceiling or floor effects. The SF-6D is a health state utilities measure suitable for further research in sarcoma patients.

  1. Context analysis for a new regulatory model for electric utilities in Brazil

    International Nuclear Information System (INIS)

    El Hage, Fabio S.; Rufín, Carlos

    2016-01-01

    This article examines what would have to change in the Brazilian regulatory framework in order to make utilities profit from energy efficiency and the integration of resources, instead of doing so from traditional consumption growth, as it happens at present. We argue that the Brazilian integrated electric sector resembles a common-pool resources problem, and as such it should incorporate, in addition to the centralized operation for power dispatch already in place, demand side management, behavioral strategies, and smart grids, attained through a new business and regulatory model for utilities. The paper proposes several measures to attain a more sustainable and productive electricity distribution industry: decoupling revenues from volumetric sales through a fixed maximum load fee, which would completely offset current disincentives for energy efficiency; the creation of a market for negawatts (saved megawatts) using the current Brazilian mechanism of public auctions for the acquisition of wholesale energy; and the integration of technologies, especially through the growth of unregulated products and services. Through these measures, we believe that Brazil could improve both energy security and overall sustainability of its power sector in the long run. - Highlights: • Necessary changes in the Brazilian regulatory framework towards energy efficiency. • How to incorporate demand side management, behavioral strategies, and smart grids. • Proposition of a market for negawatts at public auctions. • Measures to attain a more sustainable electricity distribution industry in Brazil.

  2. Symmetry evaluation for an interferometric fiber optic gyro coil utilizing a bidirectional distributed polarization measurement system.

    Science.gov (United States)

    Peng, Feng; Li, Chuang; Yang, Jun; Hou, Chengcheng; Zhang, Haoliang; Yu, Zhangjun; Yuan, Yonggui; Li, Hanyang; Yuan, Libo

    2017-07-10

    We propose a dual-channel measurement system for evaluating the optical path symmetry of an interferometric fiber optic gyro (IFOG) coil. Utilizing a bidirectional distributed polarization measurement system, the forward and backward transmission performances of an IFOG coil are characterized simultaneously by just a one-time measurement. The simple but practical configuration is composed of a bidirectional Mach-Zehnder interferometer and multichannel transmission devices connected to the IFOG coil under test. The static and dynamic temperature results of the IFOG coil reveal that its polarization-related symmetric properties can be effectively obtained with high accuracy. The optical path symmetry investigation is highly beneficial in monitoring and improving the winding technology of an IFOG coil and reducing the nonreciprocal effect of an IFOG.

  3. Shape Measurement of Ellipsoidal Particles in a Cross-Slot Microchannel Utilizing Viscoelastic Particle Focusing.

    Science.gov (United States)

    Kim, Junghee; Kim, Jun Young; Kim, Younghun; Lee, Seong Jae; Kim, Ju Min

    2017-09-05

    Shape measurement of nonspherical microparticles by conventional methods such as optical microscopy is challenging owing to particle aggregation or uncertainty regarding the out-of-plane arrangement of particles. In this work, we propose a facile microfluidic method to align particles in-plane utilizing the extensional flow field generated in a cross-slot microchannel. Viscoelastic particle focusing is also harnessed to move particles toward the stagnation point of the cross-slot microchannel. We demonstrate that the shapes of ellipsoidal particles with various aspect ratios can be successfully measured using our novel microfluidic method. This method is expected to be useful in a wide range of applications such as shape measurement of nonspherical cells.

  4. Expected Utility and Sequential Elimination Models of Career Decision Making.

    Science.gov (United States)

    Shaffer, Michal; Lichtenberg, James W.

    Decision-making strategies have traditionally been classified as either prescriptive/normative or descriptive/behavioral in nature. Proponents of prescriptive/normative decision-making models attempt to develop procedures for making optimal decisions while proponents of the descriptive/behavioral models look for a choice that meets a minimal set…

  5. On the Utility of Island Models in Dynamic Optimization

    DEFF Research Database (Denmark)

    Lissovoi, Andrei; Witt, Carsten

    2015-01-01

    A simple island model with λ islands and migration occurring after every τ iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (1+λ) EA if τ=1, i.e., migration occurs during every iteration. It is proved that even for an increased offspring population size up...
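    The (1+λ) EA referenced above (the τ=1 special case) can be sketched in a few lines; the dynamic Maze function from the abstract is replaced here by static OneMax purely to keep the example self-contained, and the population and budget parameters are illustrative:

```python
import random

def one_plus_lambda_ea(n=30, lam=4, budget=2000, seed=3):
    """(1+lambda) EA on OneMax: each generation creates lam offspring by
    flipping each bit with probability 1/n, keeping the best offspring if
    it is at least as fit as the parent."""
    rng = random.Random(seed)
    parent = [rng.randrange(2) for _ in range(n)]
    best = sum(parent)
    evals = 0
    while evals < budget and best < n:
        scored = []
        for _ in range(lam):
            child = [1 - b if rng.random() < 1.0 / n else b for b in parent]
            scored.append((sum(child), child))
            evals += 1
        fit, child = max(scored, key=lambda t: t[0])
        if fit >= best:
            best, parent = fit, child
    return best

result = one_plus_lambda_ea()
```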

  6. Creating a Linear Model to Optimize Satellite Communication Bandwidth Utilization

    National Research Council Canada - National Science Library

    Stone, David A

    2006-01-01

    .... The paper then presents an example of a linear model that could be expanded for implementation and used for actual problem analysis. The final section of the paper describes areas that require further study and additional steps that must be taken to convert the concept presented in this paper to an actual model suitable for use.

  7. Electric power bidding model for practical utility system

    Directory of Open Access Journals (Sweden)

    M. Prabavathi

    2018-03-01

    A competitive open market environment has been created by the restructuring of the electricity market. In the new competitive market, a centrally operated pool with a power exchange has mostly been introduced to match the offers of competing suppliers with the bids of customers. In such an open access environment, the formation of a bidding strategy is one of the most challenging and important tasks for electricity participants seeking to maximize their profit. To build bidding strategies for power suppliers and consumers in the restructured electricity market, a new mathematical framework is proposed in this paper. It is assumed that each participant submits several blocks of real power quantities along with their bidding prices. The effectiveness of the proposed method is tested on the Indian Utility-62 bus system and the IEEE-118 bus system. Keywords: Bidding strategy, Day ahead electricity market, Market clearing price, Market clearing volume, Block bid, Intermediate value theorem
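    The intermediate value theorem in the keyword list suggests a root-finding view of market clearing: excess supply as a function of price changes sign, so the clearing price can be bracketed and bisected. A hypothetical sketch (not the paper's actual algorithm) with block bids as (price, quantity) pairs:

```python
def clearing_price(supply_bids, demand_bids, lo=0.0, hi=1000.0, tol=1e-6):
    """Bisect for the price where cumulative supply meets cumulative
    demand; excess supply is nondecreasing in price, so the intermediate
    value theorem guarantees a crossing inside [lo, hi]."""
    def supplied(p):   # blocks offered at or below price p
        return sum(q for price, q in supply_bids if price <= p)
    def demanded(p):   # blocks bid at or above price p
        return sum(q for price, q in demand_bids if price >= p)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if supplied(mid) - demanded(mid) >= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

supply = [(10.0, 50.0), (20.0, 50.0), (30.0, 50.0)]
demand = [(35.0, 60.0), (25.0, 60.0), (15.0, 60.0)]
price = clearing_price(supply, demand)   # excess supply crosses zero near 25
```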

  8. Computational model for microstructure and effective thermal conductivity of ash deposits in utility boilers

    Science.gov (United States)

    Kweon, Soon-Cheol

    The ash deposits formed in pulverized-coal-fired power plants reduce the heat transfer rate to furnace walls, superheater tubes, and other heat transfer surfaces. The thermal properties that strongly influence this heat transfer depend mainly on the microstructure of the ash deposit. This dissertation examines three issues associated with ash deposits in utility boilers: (1) a three-dimensional model for characterizing ash deposit microstructures from sample ash deposits, (2) a computational model for the effective thermal conductivity of sintered packed beds with low-conductivity stagnant fluids, and (3) the application of a thermal resistor network model to the effective thermal conductivity of ash deposits in utility boilers. SEM image analysis was conducted on two sample ash deposits to characterize the three-dimensional microstructure of the ash deposit with several structural parameters using stereology. A ballistic deposition model was adopted to simulate the deposit structure defined by these structural parameters. The inputs for the deposition model were chosen from predicted and measured physical parameters, such as the size distribution, the probability of particle rolling, and the degree of particle sintering. The difference between the microstructure of the sample deposits and the simulated deposits was investigated and compared quantitatively based on the structural parameters defined; the sample and simulated deposits agree in terms of these parameters. The computational model for predicting the effective thermal conductivity of sintered packed beds with low-conductivity stagnant fluid was built, in which heat conduction through the contact area among sintered particles is the dominant mode of heat transfer. A thermal resistor network is used to model the heat conduction among the sintered particles, and the thermal resistance among contacting particles is estimated from both the contact area and the contact…
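    The ballistic deposition model mentioned above can be illustrated with a minimal one-dimensional version; the lattice width, particle count, and nearest-neighbor sticking rule here are illustrative simplifications of the dissertation's three-dimensional model:

```python
import random

def ballistic_deposit(width, n_particles, seed=1):
    """1-D ballistic deposition: each particle falls down a random column
    and sticks at the top of that column or at the height of a taller
    neighbor, creating the overhangs and porosity typical of deposits."""
    heights = [0] * width
    rng = random.Random(seed)
    for _ in range(n_particles):
        c = rng.randrange(width)
        left = heights[c - 1] if c > 0 else 0
        right = heights[c + 1] if c < width - 1 else 0
        heights[c] = max(left, heights[c] + 1, right)
    return heights

profile = ballistic_deposit(50, 500)
```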

  9. Measurement control program at model facility

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    A measurement control program for the model plant is described. The discussion includes the technical basis for such a program, the application of measurement control principles to each measurement, and the use of special experiments to estimate measurement error parameters for difficult-to-measure materials. The discussion also describes the statistical aspects of the program, and the documentation procedures used to record, maintain, and process the basic data

  10. Utilization of AERONET polarimetric measurements for improving retrieval of aerosol microphysics: GSFC, Beijing and Dakar data analysis

    Science.gov (United States)

    Fedarenka, Anton; Dubovik, Oleg; Goloub, Philippe; Li, Zhengqiang; Lapyonok, Tatyana; Litvinov, Pavel; Barel, Luc; Gonzalez, Louis; Podvin, Thierry; Crozel, Didier

    2016-08-01

    The study presents efforts to include polarimetric data in the routine inversion of radiometric ground-based measurements for characterization of atmospheric aerosols, and analyzes the advantages obtained in the retrieval results. First, a data preparation tool was developed to operationally process the large volume of polarimetric data. The AERONET inversion code, adapted for inversion of both intensity and polarization measurements, was used for processing. Second, in order to estimate the effect of utilizing polarimetric information on aerosol retrieval results, both synthetic data and real measurements were processed using the developed routine and analyzed. A sensitivity study was carried out using simulated data based on three main aerosol models: desert dust, urban industrial, and urban clean aerosols. The tests investigated the effects of utilizing polarization data in the presence of random noise, bias in measurements of optical thickness, and angular pointing shift. The results demonstrate the advantage of utilizing polarization data for aerosols with a pronounced concentration of fine particles. Further, an extended set of AERONET observations was processed for three sites: GSFC, USA (clean urban aerosol dominated by fine particles), Beijing, China (polluted industrial aerosol characterized by a pronounced mixture of both fine and coarse modes), and Dakar, Senegal (desert dust dominated by coarse particles). The results revealed a considerable advantage of applying polarimetric data for characterizing fine-mode-dominated aerosols, including industrial pollution (Beijing). The use of polarization corrects the particle size distribution by decreasing an overestimated fine mode and increasing the coarse mode. It also increases an underestimated real part of the refractive index and improves the retrieval of the fraction of spherical particles, owing to the high sensitivity of polarization to particle shape.

  11. Utility and limitations of measures of health inequities: a theoretical perspective

    Directory of Open Access Journals (Sweden)

    Olakunle Alonge

    2015-09-01

    What is already known on this subject? Various measures have been used in quantifying health inequities among populations in recent times; most of these measures were derived to capture socioeconomic inequalities in health. These different measures do not always lend themselves to common interpretation by policy makers and health managers, because they each reflect limited aspects of the concept of health inequities. What does this study add? To inform a more appropriate application of the different measures currently used in quantifying health inequities, this article explicates common theories underlying the definition of health inequities and uses this understanding to show the utility and limitations of these different measures. It also suggests some key features of an ideal indicator based on this conceptual understanding, with the hope of influencing future efforts to develop more robust measures of health inequities. The article also provides a conceptual 'product label' for the common measures of health inequities to guide users and 'consumers' in making more robust inferences and conclusions. This paper examines common approaches for quantifying health inequities and assesses the extent to which they incorporate key theories necessary for explicating the definition of health inequity. The first theoretical analysis examined the distinction between inter-individual and inter-group health inequalities as measures of health inequities. The second analysis considered the notion of fairness in health inequalities from different philosophical perspectives. To understand the extent to which different measures of health inequities incorporate these theoretical explanations, four criteria were used to assess each measure: (1) Does the indicator demonstrate inter-group or inter-individual health inequalities, or both? (2) Does it reflect health inequalities in relation to socioeconomic position? (3) Is it sensitive to the absolute transfer of…

  12. Model measurements in the cryogenic National Transonic Facility - An overview

    Science.gov (United States)

    Holmes, H. K.

    1985-01-01

    In the operation of the National Transonic Facility (NTF), higher Reynolds numbers are obtained by utilizing low operational temperatures and high pressures. Liquid nitrogen is used as the cryogenic medium, and temperatures in the range from -320 F to 160 F can be employed. A maximum pressure of 130 psi is specified, while the NTF design parameter for the Reynolds number is 120,000,000. In view of the new requirements regarding the measurement systems, major developments had to be undertaken in virtually all wind tunnel measurement areas and, in addition, some new measurement systems were needed. Attention is given to force measurement, pressure measurement, model attitude, model deformation, and the data system.

  13. Measuring the benefits and costs of utility conservation and load-management programs

    Energy Technology Data Exchange (ETDEWEB)

    Hirst, E.

    1984-06-01

    Much less has been learned from utility operation of conservation and load-management programs than could have been. This general ignorance concerning the performance (i.e., benefits and costs) of these programs exists because sufficient resources have not been devoted to careful evaluations of past and present programs. Evaluation needs to be built into the program planning and implementation process, actual electricity and gas bills should be used to measure program performance, and, finally, well-designed experiments should be conducted to learn what does and does not work.

  14. Predictive Modeling of Defibrillation utilizing Hexahedral and Tetrahedral Finite Element Models: Recent Advances

    Science.gov (United States)

    Triedman, John K.; Jolley, Matthew; Stinstra, Jeroen; Brooks, Dana H.; MacLeod, Rob

    2008-01-01

    ICD implants may be complicated by body size and anatomy. One approach to this problem has been the adoption of creative, extracardiac implant strategies using standard ICD components. Because data on the safety or efficacy of such ad hoc implant strategies are lacking, we have developed image-based finite element models (FEMs) to compare electric fields and expected defibrillation thresholds (DFTs) using standard and novel electrode locations. In this paper, we review recently published studies by our group using such models, and progress in meshing strategies to improve efficiency and visualization. Our preliminary observations predict that there may be large changes in DFTs with clinically relevant variations of electrode placement. Extracardiac ICDs of various lead configurations are predicted to be effective in both children and adults. This approach may aid both ICD development and patient-specific optimization of electrode placement, but the simplified nature of current models dictates further development and validation prior to clinical or industrial utilization. PMID:18817926

  15. Tritium Specific Adsorption Simulation Utilizing the OSPREY Model

    Energy Technology Data Exchange (ETDEWEB)

    Veronica Rutledge; Lawrence Tavlarides; Ronghong Lin; Austin Ladshaw

    2013-09-01

    During the processing of used nuclear fuel, volatile radionuclides will be discharged to the atmosphere if no recovery processes are in place to limit their release. The volatile radionuclides of concern are 3H, 14C, 85Kr, and 129I. Methods are being developed, via adsorption and absorption unit operations, to capture these radionuclides. It is necessary to model these unit operations to aid in the evaluation of technologies and in the future development of an advanced used nuclear fuel processing plant. A collaboration between Fuel Cycle Research and Development Offgas Sigma Team member INL and a NEUP grant including ORNL, Syracuse University, and Georgia Institute of Technology has been formed to develop off gas models and support off gas research. This report discusses the development of a tritium-specific adsorption model, which was developed by integrating the OSPREY model with a fundamental-level isotherm model developed under the NEUP grant and with experimental data provided by the grant.

  16. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Full Text Available Various Standard Model measurements have been performed in proton-proton collisions at a centre-of-mass energy of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  17. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  18. Evaluating the performance and utility of regional climate models

    DEFF Research Database (Denmark)

    Christensen, Jens H.; Carter, Timothy R.; Rummukainen, Markku

    2007-01-01

    This special issue of Climatic Change contains a series of research articles documenting co-ordinated work carried out within a 3-year European Union project 'Prediction of Regional scenarios and Uncertainties for Defining European Climate change risks and Effects' (PRUDENCE). The main objective...... weather events and (7) implications of the results for policy. A paper summarising the related MICE (Modelling the Impact of Climate Extremes) project is also included. The second part of the issue contains 12 articles that focus in more detail on some of the themes summarised in the overarching papers....... The PRUDENCE results represent the first comprehensive, continental-scale intercomparison and evaluation of high resolution climate models and their applications, bringing together climate modelling, impact research and social sciences expertise on climate change....

  19. Recursive inter-generational utility in global climate risk modeling

    Energy Technology Data Exchange (ETDEWEB)

    Minh, Ha-Duong [Centre International de Recherche sur l' Environnement et le Developpement (CIRED-CNRS), 75 - Paris (France); Treich, N. [Institut National de Recherches Agronomiques (INRA-LEERNA), 31 - Toulouse (France)

    2003-07-01

    This paper distinguishes relative risk aversion and resistance to inter-temporal substitution in climate risk modeling. Stochastic recursive preferences are introduced in a stylized numeric climate-economy model using preliminary IPCC 1998 scenarios. It shows that higher risk aversion increases the optimal carbon tax. Higher resistance to inter-temporal substitution alone has the same effect as increasing the discount rate, provided that the risk is not too large. We discuss implications of these findings for the debate upon discounting and sustainability under uncertainty. (author)
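Recursive preferences of this kind are commonly written in the Epstein-Zin form, which separates the two attitudes the abstract distinguishes. The notation below is a standard sketch of such preferences, not necessarily the paper's own parameterization:

```latex
U_t = \left[(1-\beta)\,c_t^{1-\rho}
      + \beta\left(\mathbb{E}_t\!\left[U_{t+1}^{1-\gamma}\right]\right)^{\frac{1-\rho}{1-\gamma}}
      \right]^{\frac{1}{1-\rho}}
```

Here \gamma is the coefficient of relative risk aversion and 1/\rho the elasticity of inter-temporal substitution; when \gamma = \rho the recursion collapses to standard discounted expected utility, which is why the two effects cannot be separated in the classical model.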

  20. Utilization of an ultrasound beam steering angle for measurements of tissue displacement vector and lateral displacement

    Directory of Open Access Journals (Sweden)

    Chikayoshi Sumi

    2010-09-01

    of a lateral displacement. However, for displacement vector measurements to describe complex tissue motions (e.g., cardiac motion), if the axial coordinate corresponds to the depth direction in the target tissue, an ideal steering angle will be 45°. A two-dimensional echo simulation shows that for the block-matching methods, LM yields more accurate displacement vector measurements than ASTA, whereas with MAM and MDM using a moving average and a mirror setting and 1D methods, ASTA yields more accurate lateral displacement measurements than LM. The block-matching method requires fewer calculations than the moving average method; however, a lower degree of accuracy is obtained. As with LM, multidimensional measurement methods yield more accurate measurements with ASTA than the corresponding 1D measurement methods. Summarizing, for displacement vector measurements or lateral displacement measurements using the multidimensional measurement methods, the ranking of the degree of measurement accuracy and stability is ASTA with a mirror setting > LM with a moving average > LM with block matching > ASTA with block matching. Because every tissue has its own motion (heart, liver, etc.) and occasionally obstacles, such as bones, interfere with the measurements, the target tissue will determine the selection of the proper beamforming method with a choice between LM and ASTA. As for use with LM previously clarified, an appropriate displacement measurement method should also be selected for use with ASTA according to the echo signal-to-noise ratio, a required spatial resolution and a required calculation speed. ASTA, together with LM, can potentially enable the utilization of new aspects of displacement measurements. Keywords: a steering angle, lateral modulation, displacement vector measurement, lateral displacement measurement

  1. Method and Apparatus for Measuring Gravitational Acceleration Utilizing a High Temperature Superconducting Bearing

    Energy Technology Data Exchange (ETDEWEB)

    Hull, John R.

    1998-11-06

    Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77K, whereby cooling may be accomplished with liquid nitrogen.

  2. Method and apparatus for measuring gravitational acceleration utilizing a high temperature superconducting bearing

    Science.gov (United States)

    Hull, John R.

    2000-01-01

    Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77K, whereby cooling may be accomplished with liquid nitrogen.

  3. Utilization of Software Tools for Uncertainty Calculation in Measurement Science Education

    International Nuclear Information System (INIS)

    Zangl, Hubert; Zine-Zine, Mariam; Hoermaier, Klaus

    2015-01-01

    Despite its importance, uncertainty is often neglected by practitioners in the design of systems, even in safety-critical applications. Thus, problems arising from uncertainty may only be identified late in the design process and thus lead to additional costs. Although numerous tools exist to support uncertainty calculation, reasons for their limited usage in early design phases may be low awareness of the existence of the tools and insufficient training in their practical application. We present a teaching philosophy that addresses uncertainty from the very beginning of teaching measurement science, in particular with respect to the utilization of software tools. The developed teaching material is based on the GUM method and makes use of uncertainty toolboxes in the simulation environment. Based on examples in measurement science education, we discuss advantages and disadvantages of the proposed teaching philosophy and include feedback from students.

  4. Utilization of Software Tools for Uncertainty Calculation in Measurement Science Education

    Science.gov (United States)

    Zangl, Hubert; Zine-Zine, Mariam; Hoermaier, Klaus

    2015-02-01

    Despite its importance, uncertainty is often neglected by practitioners in the design of systems, even in safety-critical applications. Thus, problems arising from uncertainty may only be identified late in the design process and thus lead to additional costs. Although numerous tools exist to support uncertainty calculation, reasons for their limited usage in early design phases may be low awareness of the existence of the tools and insufficient training in their practical application. We present a teaching philosophy that addresses uncertainty from the very beginning of teaching measurement science, in particular with respect to the utilization of software tools. The developed teaching material is based on the GUM method and makes use of uncertainty toolboxes in the simulation environment. Based on examples in measurement science education, we discuss advantages and disadvantages of the proposed teaching philosophy and include feedback from students.
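The GUM method that the teaching material builds on combines uncorrelated input uncertainties through first-order sensitivity coefficients. A minimal sketch of that calculation follows; the measurement model (a resistance from voltage and current) and all numbers are illustrative, not taken from the course material:

```python
import math

# First-order (GUM-style) propagation for an uncorrelated model y = f(x1..xn):
# u_c(y)^2 = sum_i (df/dx_i)^2 * u_i^2, with sensitivities taken numerically.

def combined_std_uncertainty(f, x, u, h=1e-6):
    """Combine standard uncertainties u[i] via central-difference sensitivities."""
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        c_i = (f(xp) - f(xm)) / (2 * h)  # sensitivity coefficient df/dx_i
        total += (c_i * u[i]) ** 2
    return math.sqrt(total)

# Example: R = V / I with V = 10.0 V (u = 0.1 V) and I = 2.0 A (u = 0.02 A).
resistance = lambda x: x[0] / x[1]
u_R = combined_std_uncertainty(resistance, [10.0, 2.0], [0.1, 0.02])
print(round(u_R, 4))  # 0.0707
```

The two contributions here happen to be equal (0.05 ohm each from voltage and current), so the combined value is sqrt(2) times either one.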

  5. [Thermal energy utilization analysis and energy conservation measures of fluidized bed dryer].

    Science.gov (United States)

    Xing, Liming; Zhao, Zhengsheng

    2012-07-01

    To propose measures for enhancing thermal energy utilization by analyzing the drying process and operating principle of fluidized bed dryers, in order to guide the optimization and upgrading of fluidized bed drying equipment. Through a systematic analysis of the drying process and operating principle of fluidized beds, the energy conservation law was adopted to calculate the thermal energy of dryers. The thermal energy of fluidized bed dryers is mainly used to make up for the thermal consumption of water evaporation (Qw), hot air leaving the equipment outlet (Qe), thermal consumption for heating and drying wet materials (Qm) and heat dissipation to the surroundings through hot air pipelines and cyclone separators. Effective measures and major approaches to enhance the thermal energy utilization of fluidized bed dryers were to reduce the heat loss Qe carried out by exhaust gas, recover the heat of the dryer outlet air, insulate drying towers, hot air pipes and cyclone separators, dehumidify clean inlet air, and reasonably control drying time and air temperature. Technical parameters such as air supply rate, inlet air temperature and humidity, material temperature, and outlet temperature and humidity are set and controlled to effectively save energy during the drying process and reduce the production cost.
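The heat terms enumerated above form a closed balance, which makes a back-of-the-envelope efficiency check straightforward. A minimal sketch with hypothetical duties (none taken from the paper):

```python
# Illustrative energy balance for a fluidized bed dryer, following the heat
# terms named in the abstract: Qw (evaporation), Qe (outlet hot air),
# Qm (heating/drying wet material) and losses via pipelines and separators.

def dryer_heat_balance(q_evaporation, q_exhaust, q_material, q_losses):
    """Return (total heat input, fraction usefully spent on evaporation)."""
    q_total = q_evaporation + q_exhaust + q_material + q_losses
    efficiency = q_evaporation / q_total
    return q_total, efficiency

# Hypothetical duties in kJ per batch.
q_total, eta = dryer_heat_balance(5000.0, 2500.0, 1500.0, 1000.0)
print(q_total, round(eta, 2))  # 10000.0 0.5
```

Reducing Qe (exhaust loss) or the pipeline losses, as the abstract recommends, raises the evaporation fraction directly.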

  6. Medical Specialty Decision Model: Utilizing Social Cognitive Career Theory

    Science.gov (United States)

    Gibson, Denise D.; Borges, Nicole J.

    2004-01-01

    Objectives: The purpose of this study was to develop a working model to explain medical specialty decision-making. Using Social Cognitive Career Theory, we examined personality, medical specialty preferences, job satisfaction, and expectations about specialty choice to create a conceptual framework to guide specialty choice decision-making.…

  7. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  8. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  9. Lunar-Forming Giant Impact Model Utilizing Modern Graphics ...

    Indian Academy of Sciences (India)

    2016-01-27

    Recent giant impact models focus on producing a circumplanetary disk of the proper composition around the Earth and defer to earlier works for the accretion of this disk into the Moon. The discontinuity between creating the circumplanetary disk and accretion of the Moon is unnatural and lacks simplicity.

  10. Asset transformation and the challenges to servitize a utility business model

    International Nuclear Information System (INIS)

    Helms, Thorsten

    2016-01-01

    The traditional energy utility business model is under pressure, and energy services are expected to play an important role in the energy transition. Experts and scholars argue that utilities need to innovate their business models and transform from commodity suppliers to service providers. The transition from a product-oriented, capital-intensive business model based on tangible assets towards a service-oriented, expense-intensive business model based on intangible assets may present great managerial and organizational challenges. Little research exists about such transitions for capital-intensive commodity providers, and particularly energy utilities, where the challenges to servitize are expected to be greatest. This qualitative paper explores the barriers to servitization within selected Swiss and German utility companies through a series of interviews with utility managers. One of them is ‘asset transformation’, the shift from tangible to intangible assets as the major input factor for the value proposition, which is proposed as a driver for the complexity of business model transitions. Managers need to carefully manage those challenges and find ways to operate new service business models alongside established utility business models. Policy makers can support the transition of utilities through more favorable regulatory frameworks for energy services and by supporting the exchange of knowledge in the industry.
    Highlights:
    • The paper analyses the expected transformation of utilities into service providers.
    • Service and utility business models possess very different attributes.
    • The former is based on intangible, the latter on tangible assets.
    • The transformation into a service provider is associated with great challenges.
    • Asset transformation is proposed as a barrier to business model innovation.

  11. Radio propagation measurement and channel modelling

    CERN Document Server

    Salous, Sana

    2013-01-01

    While there are numerous books describing modern wireless communication systems that contain overviews of radio propagation and radio channel modelling, there are none that contain detailed information on the design, implementation and calibration of radio channel measurement equipment, the planning of experiments and the in-depth analysis of measured data. The book begins with an explanation of the fundamentals of radio wave propagation and progresses through a series of topics, including the measurement of radio channel characteristics, radio channel sounders, measurement strategies

  12. Indoor MIMO Channel Measurement and Modeling

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Andersen, Jørgen Bach

    2005-01-01

    Forming accurate models of the multiple input multiple output (MIMO) channel is essential both for simulation as well as understanding of the basic properties of the channel. This paper investigates different known models using measurements obtained with a 16x32 MIMO channel sounder for the 5.8GHz...... band. The measurements were carried out in various indoor scenarios including both temporal and spatial aspects of channel changes. The models considered include the so-called Kronecker model, a model proposed by Weichselberger et. al., and a model involving the full covariance matrix, the most...... accurate model for Gaussian channels. For each of the environments different sizes of both the transmitter and receiver antenna arrays are investigated, 2x2 up to 16x32. Generally it was found that in terms of capacity cumulative distribution functions (CDFs) all models fit well for small array sizes...
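The so-called Kronecker model mentioned above assumes the channel correlation factors into separate transmit- and receive-side parts, so a correlated channel realization can be drawn as H = Rr^(1/2) G Rt^(1/2) with G i.i.d. complex Gaussian. A minimal sketch, assuming NumPy; the 2x2 size and correlation coefficients are illustrative, not from the measurements:

```python
import numpy as np

def matrix_sqrt(R):
    """Hermitian square root via eigendecomposition."""
    w, v = np.linalg.eigh(R)
    return v @ np.diag(np.sqrt(w)) @ v.conj().T

rng = np.random.default_rng(0)
nr, nt = 2, 2
Rr = np.array([[1.0, 0.7], [0.7, 1.0]], dtype=complex)  # receive correlation
Rt = np.array([[1.0, 0.5], [0.5, 1.0]], dtype=complex)  # transmit correlation

# i.i.d. CN(0, 1) entries, then impose the separable correlation structure.
G = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
H = matrix_sqrt(Rr) @ G @ matrix_sqrt(Rt)

# One capacity sample (bits/s/Hz) at linear SNR = 10, equal power per antenna;
# repeating over many G draws would build capacity CDFs like those compared here.
snr = 10.0
C = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * (H @ H.conj().T)).real)
print(H.shape, C >= 0.0)
```

The full-covariance model in the paper drops the separability assumption and instead draws vec(H) from the complete measured covariance matrix.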

  13. Decision modelling tools for utilities in the deregulated energy market

    Energy Technology Data Exchange (ETDEWEB)

    Makkonen, S. [Process Vision Oy, Helsinki (Finland)

    2005-07-01

    This thesis examines the impact of the deregulation of the energy market on decision making and optimisation in utilities and demonstrates how decision support applications can solve specific tasks encountered in this context. The themes of the thesis are presented in different frameworks in order to clarify the complex decision making and optimisation environment where new sources of uncertainties arise due to the convergence of energy markets, globalisation of energy business and increasing competition. This thesis reflects the changes in the decision making and planning environment of European energy companies during the period from 1995 to 2004. It also follows the development of computational performance and evolution of energy information systems during the same period. Specifically, this thesis consists of studies at several levels of the decision making hierarchy, ranging from top-level strategic decision problems to specific optimisation algorithms. On the other hand, the studies also follow the progress of the liberalised energy market from the monopolistic era to the fully competitive market with new trading instruments and issues like emissions trading. This thesis suggests that there is an increasing need for optimisation and multiple criteria decision making methods, and that new approaches based on the use of operations research are welcome as the deregulation proceeds and uncertainties increase. Technically, the optimisation applications presented are based on Lagrangian relaxation techniques and the dedicated Power Simplex algorithm supplemented with stochastic scenario analysis for decision support, a heuristic method to allocate common benefits and potential losses of coalitions of power companies, and an advanced Branch-and-Bound algorithm to solve nonconvex optimisation problems efficiently. The optimisation problems are part of the operational and tactical decision making process that has become very complex in recent years. Similarly

  14. Decision modelling tools for utilities in the deregulated energy market

    International Nuclear Information System (INIS)

    Makkonen, S.

    2005-01-01

    This thesis examines the impact of the deregulation of the energy market on decision making and optimisation in utilities and demonstrates how decision support applications can solve specific tasks encountered in this context. The themes of the thesis are presented in different frameworks in order to clarify the complex decision making and optimisation environment where new sources of uncertainties arise due to the convergence of energy markets, globalisation of energy business and increasing competition. This thesis reflects the changes in the decision making and planning environment of European energy companies during the period from 1995 to 2004. It also follows the development of computational performance and evolution of energy information systems during the same period. Specifically, this thesis consists of studies at several levels of the decision making hierarchy, ranging from top-level strategic decision problems to specific optimisation algorithms. On the other hand, the studies also follow the progress of the liberalised energy market from the monopolistic era to the fully competitive market with new trading instruments and issues like emissions trading. This thesis suggests that there is an increasing need for optimisation and multiple criteria decision making methods, and that new approaches based on the use of operations research are welcome as the deregulation proceeds and uncertainties increase. Technically, the optimisation applications presented are based on Lagrangian relaxation techniques and the dedicated Power Simplex algorithm supplemented with stochastic scenario analysis for decision support, a heuristic method to allocate common benefits and potential losses of coalitions of power companies, and an advanced Branch-and-Bound algorithm to solve nonconvex optimisation problems efficiently. The optimisation problems are part of the operational and tactical decision making process that has become very complex in recent years. Similarly

  15. Utility of a three-dimensional wound measurement device in pressure ulcers

    Directory of Open Access Journals (Sweden)

    Goto T

    2017-10-01

    Full Text Available Taichi Goto, Gojiro Nakagami, Ayano Nakai, Shuhei Noyori, Sanae Sasaki, Chieko Hayashi, Tomomitsu Miyagaki, Kaname Akamata, Hiromi Sanada (Graduate School of Medicine, The University of Tokyo, and The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan) Introduction: Depth assessment is important for severe pressure ulcers (PUs); however, a device for the metric measurement of wounds, including depth, is lacking in clinical settings. Recent technological advancements have enabled the evaluation of the depth of wounds, and three-dimensional measurements are now available. The aim of this study was to test the utility of a newly developed three-dimensional wound measurement device in the clinical setting. Methods: We recruited three patients, each with a PU, who were being treated by a PU team at a university hospital. We measured the length, width, area, and maximal depth of the ulcers by using the device and with the conventional method. The ulcer volume was measured only with the device. The difference in measurement results of the device before and after debridement was compared in the first patient. The difference in measurement results between the conventional method and the device was compared in the second patient. Correlation coefficients between the conventional method and the device obtained from longitudinal data were calculated in the third patient. Results: The changes in measurements between before and after debridement were easily detected by the device in the first patient. Although the maximal depth was

  16. The determination of chromium-50 in human blood and its utilization for blood volume measurements

    International Nuclear Information System (INIS)

    Zeisler, R.; Young, I.

    1986-01-01

    Possible relationships between insufficient blood volume increases during pregnancy and infant mortality could be established with an adequate measurement procedure. An accurate and precise technique for blood volume measurements has been found in the isotope dilution technique using chromium-51 as a label for red blood cells. However, in a study involving pregnant women, only stable isotopes can be used for labeling. Stable chromium-50 can be determined in total blood samples before and after dilution experiments by neutron activation analysis (NAA) or mass spectrometry. However, both techniques may be affected by insufficient sensitivity and contamination problems at the inherently low natural chromium concentrations to be measured in the blood. NAA procedures involving irradiations with highly thermalized neutrons at a fluence rate of 2×10^13 n/(cm^2·s) and low-background gamma spectrometry are applied to the analysis of total blood. Natural levels of chromium-50 in human and animal blood have been found to be <0.1 ng/mL; i.e., total chromium levels of <3 ng/mL. Based on the NAA procedure, a new approach to the blood volume measurement via chromium-50 isotope dilution has been developed which utilizes the ratio of the induced activities of chromium-51 to iron-59 in three blood samples taken from each individual, namely blank, labeled and diluted labeled blood. (author)
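The dilution principle underlying this measurement is simple: a known amount of label mixes into the unknown blood volume, so the volume follows from how much the label's concentration rises above baseline. A minimal sketch with hypothetical numbers (not from the study):

```python
# Isotope-dilution volume estimate: V = dose / (concentration rise after mixing).
# All values below are illustrative; the study's actual approach uses the ratio
# of induced 51Cr to 59Fe activities in blank, labeled and diluted samples.

def blood_volume_ml(injected_label_ng, post_dilution_conc_ng_per_ml,
                    baseline_conc_ng_per_ml=0.0):
    """Volume into which the injected label was diluted, in mL."""
    rise = post_dilution_conc_ng_per_ml - baseline_conc_ng_per_ml
    return injected_label_ng / rise

# E.g. 500 ng of 50Cr label; blank blood 0.1 ng/mL, labeled-diluted 0.2 ng/mL.
print(round(blood_volume_ml(500.0, 0.2, 0.1)))  # 5000
```

Normalizing to a co-activated element such as iron, as in the abstract, cancels sample-size and flux variations between the three samples.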

  17. Biomimetic peptide-based models of [FeFe]-hydrogenases: utilization of phosphine-containing peptides

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Souvik; Nguyen, Thuy-Ai D.; Gan, Lu; Jones, Anne K. [Department of Chemistry and Biochemistry, Arizona State University, Tempe, USA]

    2015-01-01

    Peptide-based models for [FeFe]-hydrogenase were synthesized utilizing unnatural phosphine-amino acids, and their electrocatalytic properties were investigated in mixed aqueous-organic solvents.

  18. Computer model for estimating electric utility environmental noise

    International Nuclear Information System (INIS)

    Teplitzky, A.M.; Hahn, K.J.

    1991-01-01

    This paper reports on an algorithm-based computer code for estimating environmental noise emissions from the operation and construction of electric power plants. The computer code (Model) is used to predict octave band sound power levels for power plant operation and construction activities on the basis of the equipment operating characteristics, and it calculates off-site sound levels for each noise source and for an entire plant. Estimated noise levels are presented either as A-weighted sound level contours around the power plant or as octave band levels at user-defined receptor locations. Calculated sound levels can be compared with user-designated noise criteria, and the program can assist the user in analyzing alternative noise control strategies
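A model of this kind must combine the per-source levels into a total plant level, and sound levels in decibels add on an energy basis rather than arithmetically. A minimal sketch of that combination step (the source levels are hypothetical, not from the paper):

```python
import math

# Energy summation of sound levels: L_total = 10*log10(sum_i 10^(L_i/10)).
# Works for sound power or sound pressure levels alike.

def combine_levels_db(levels_db):
    """Total level in dB from individual source levels in dB."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

# Two equal 80 dB sources combine to ~83 dB (a 3 dB increase, not 160 dB);
# adding a third source 10 dB quieter changes the total only slightly.
print(round(combine_levels_db([80.0, 80.0]), 1))        # 83.0
print(round(combine_levels_db([80.0, 80.0, 70.0]), 1))
```

The same energy sum applies per octave band before A-weighting is applied to form the contour levels the abstract describes.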

  19. Utilization of FEM model for steel microstructure determination

    Science.gov (United States)

    Kešner, A.; Chotěborský, R.; Linda, M.; Hromasová, M.

    2018-02-01

    Agricultural tools used in soil processing are worn by an abrasive wear mechanism caused by hard mineral particles in the soil. The wear rate is influenced by the mechanical properties of the tool material and by the mineral particle content of the soil. The mechanical properties of steel can be modified by heat treatment, which leads to different microstructures. Experimental determination of the resulting microstructure is very expensive; thanks to numerical methods such as FEM, the microstructure can be predicted at low cost, but each numerical model must be verified. The aim of this work is to show a procedure for predicting the microstructure of steel for agricultural tools. Material characterizations of 51CrV4 grade steel, such as the TTT diagram, heat capacity, heat conduction and other physical properties, were used for the numerical simulation. The relationship between the microstructure predicted by FEM and the real microstructure after heat treatment shows good correlation.

  20. A customer satisfaction model for a utility service industry

    Science.gov (United States)

    Jamil, Jastini Mohd; Nawawi, Mohd Kamal Mohd; Ramli, Razamin

    2016-08-01

    This paper explores the effect of Image, Customer Expectation, Perceived Quality and Perceived Value on Customer Satisfaction, and investigates the effect of Image and Customer Satisfaction on Customer Loyalty, for a mobile phone provider in Malaysia. The results of this research are based on data gathered online from international students at one of the public universities in Malaysia. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the data collected from the international students' perceptions. The results found that Image and Perceived Quality have a significant impact on Customer Satisfaction. Image and Customer Satisfaction were also found to be significantly related to Customer Loyalty. However, no significant impact was found between Customer Expectation and Customer Satisfaction, Perceived Value and Customer Satisfaction, or Customer Expectation and Perceived Value. We hope that the findings may assist the mobile phone provider in the production and promotion of their services.

  1. Model SH intelligent instrument for thickness measuring

    International Nuclear Information System (INIS)

    Liu Juntao; Jia Weizhuang; Zhao Yunlong

    1995-01-01

    The authors introduce the Model SH intelligent instrument for thickness measurement, based on the principle of beta back-scattering, and describe its application range, features, principle of operation, system design, calibration and specifications

  2. Factors Impacting Student Service Utilization at Ontario Colleges: Key Performance Indicators as a Measure of Success: A Niagara College View

    Science.gov (United States)

    Veres, David

    2015-01-01

    Student success at Ontario colleges is significantly influenced by the utilization of student services. At Niagara College there has been a significant investment in student services as a strategy to support student success. Utilizing existing KPI data, this quantitative research project is aimed at measuring factors that influence both the use of…

  3. Double-label autoradiographic deoxyglucose method for sequential measurement of regional cerebral glucose utilization

    Energy Technology Data Exchange (ETDEWEB)

    Redies, C.; Diksic, M.; Evans, A.C.; Gjedde, A.; Yamamoto, Y.L.

    1987-08-01

    A new double-label autoradiographic glucose analog method for the sequential measurement of altered regional cerebral metabolic rates for glucose in the same animal is presented. This method is based on the sequential injection of two boluses of glucose tracer labeled with two different isotopes (short-lived fluorine-18 and long-lived tritium, respectively). An operational equation is derived which allows the determination of glucose utilization for the time period before the injection of the second tracer; this equation corrects for accumulation and loss of the first tracer from the metabolic pool occurring after the injection of the second tracer. An error analysis of this operational equation is performed. The double-label deoxyglucose method is validated in the primary somatosensory ("barrel") cortex of the anesthetized rat. Two different rows of whiskers were stimulated sequentially in each rat; the two periods of stimulation were each preceded by an injection of glucose tracer. After decapitation, dried brain slices were first exposed, in direct contact, to standard X-ray film and then to uncoated, "tritium-sensitive" film. Results show that the double-label deoxyglucose method proposed in this paper allows the quantification and complete separation of glucose utilization patterns elicited by two different stimulations sequentially applied in the same animal.

  4. Utility of salivary enzyme immunoassays for measuring estradiol and testosterone in adolescents: a pilot study.

    Science.gov (United States)

    Amatoury, Mazen; Lee, Jennifer W; Maguire, Ann M; Ambler, Geoffrey R; Steinbeck, Katharine S

    2016-04-09

    We investigated the utility of enzyme immunoassay kits for measuring low levels of salivary estradiol and testosterone in adolescents and objectively assessed the prevalence of blood contamination. Endocrine patients provided plasma and saliva for estradiol (females) or testosterone (males) assay. Saliva samples were also tested with a blood contamination kit. Picomolar levels of salivary estradiol in females failed to show any significant correlation with plasma values (r=0.20, p=0.37). The nanomolar levels of salivary testosterone in males showed a strong correlation (r=0.78, p<0.001). A significant number of saliva samples had blood contamination. After exclusion, correlations remained non-significant for estradiol, but strengthened for testosterone (r=0.88, p<0.001). The salivary estradiol enzyme immunoassay is not clinically informative at low levels. Users should interpret clinical saliva results with caution due to potential blood contamination. Our data support the utility of the salivary testosterone enzyme immunoassay for monitoring adolescent boys on hormone developmental therapy.

  5. Smart Kinesthetic Measurement Model in Dance Composision

    OpenAIRE

    Triana, Dinny Devi

    2017-01-01

    This research aimed to discover a model of assessment that could measure kinesthetic intelligence in arranging a dance from several related variables, both direct and indirect. The research method used was a qualitative method using path analysis to determine the direct and indirect variables; therefore, the dominant variable that supported the measurement model of kinesthetic intelligence in arranging dance could be discovered. The population used was the students of the art ...

  6. A generalized measurement model to quantify health: the multi-attribute preference response model.

    Science.gov (United States)

    Krabbe, Paul F M

    2013-01-01

    After 40 years of deriving metric values for health status or health-related quality of life, the effective quantification of subjective health outcomes is still a challenge. Here, two of the best measurement tools, the discrete choice model and the Rasch model, are combined to create a new model for deriving health values. First, existing techniques to value health states are briefly discussed, followed by a reflection on the recent revival of interest in patients' experience with regard to its possible role in health measurement. Subsequently, three basic principles for valid health measurement are reviewed, namely unidimensionality, interval level, and invariance. In the main section, the basic operation of measurement is then discussed in the framework of probabilistic discrete choice analysis (random utility model) and the psychometric Rasch model. It is then shown how combining the main features of these two models yields an integrated measurement model, called the multi-attribute preference response (MAPR) model, which is introduced here. This new model transforms subjective individual rank data into a metric scale using responses from patients who have experienced certain health states. Its measurement mechanism largely prevents biases such as adaptation and coping. Several extensions of the MAPR model are presented. The MAPR model can be applied to a wide range of research problems. If extended with the self-selection of relevant health domains for the individual patient, this model will be more valid than existing valuation techniques.
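
    The two building blocks the MAPR model combines can be sketched numerically. This is only an illustration of the standard forms (not the paper's implementation): both the random-utility choice probability and the Rasch item response reduce to a logistic function of a difference on a latent scale.

    ```python
    import math

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    def p_prefer(v_a, v_b):
        """Random utility model: probability health state A is chosen over
        state B, given latent values v_a and v_b (logit form)."""
        return logistic(v_a - v_b)

    def p_endorse(theta, b):
        """Rasch model: probability a respondent at latent position theta
        endorses an item with difficulty b."""
        return logistic(theta - b)

    print(p_prefer(1.2, -0.3))   # > 0.5: the better state is preferred more often
    print(p_endorse(0.0, 0.0))   # exactly 0.5 when theta equals b
    ```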

  7. Modelling of limestone injection for SO2 capture in a coal fired utility boiler

    International Nuclear Information System (INIS)

    Kovacik, G.J.; Reid, K.; McDonald, M.M.; Knill, K.

    1997-01-01

    A computer model was developed for simulating furnace sorbent injection for SO2 capture in a full-scale utility boiler using TASCFlow™ computational fluid dynamics (CFD) software. The model makes use of a computational grid of the superheater section of a tangentially fired utility boiler. The computer simulations are three-dimensional, so that the temperature and residence time distribution in the boiler could be realistically represented. Results of calculations of simulated sulphur capture performance of limestone injection in a typical utility boiler operation were presented

  8. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability
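
    A minimal sketch of the core idea (not the ICPP software): estimating a measurement system's bias and precision from repeated analyses of a control standard with a known reference value. The readings below are invented for illustration.

    ```python
    import statistics

    def bias_and_precision(measurements, known_value):
        """Bias = mean deviation from the known value;
        precision = sample standard deviation of the measurements."""
        bias = statistics.mean(measurements) - known_value
        precision = statistics.stdev(measurements)
        return bias, precision

    # hypothetical control-standard results (known value 10.00)
    readings = [10.12, 9.98, 10.05, 10.21, 9.95]
    b, s = bias_and_precision(readings, 10.00)
    print(f"bias={b:.3f}, precision={s:.3f}")
    ```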

  9. Utilization of AERONET polarimetric measurements for improving retrieval of aerosol microphysics: GSFC, Beijing and Dakar data analysis

    International Nuclear Information System (INIS)

    Fedarenka, Anton; Dubovik, Oleg; Goloub, Philippe; Li, Zhengqiang; Lapyonok, Tatyana; Litvinov, Pavel; Barel, Luc; Gonzalez, Louis; Podvin, Thierry; Crozel, Didier

    2016-01-01

    The study presents efforts to include polarimetric data in the routine inversion of radiometric ground-based measurements for characterization of atmospheric aerosols, and an analysis of the advantages obtained in the retrieval results. First, to operationally process the large amount of polarimetric data, a data preparation tool was developed. The AERONET inversion code, adapted for inversion of both intensity and polarization measurements, was used for processing. Second, in order to estimate the effect of utilizing polarimetric information on aerosol retrieval results, both synthetic data and real measurements were processed using the developed routine and analyzed. The sensitivity study was carried out using simulated data based on three main aerosol models: desert dust, urban industrial and urban clean aerosols. The test investigated the effects of utilization of polarization data in the presence of random noise, bias in measurements of optical thickness, and angular pointing shift. The results demonstrate the advantage of polarization data utilization in the cases of aerosols with a pronounced concentration of fine particles. Further, an extended set of AERONET observations was processed. Data for three sites were used: GSFC, USA (clean urban aerosol dominated by fine particles), Beijing, China (polluted industrial aerosol characterized by a pronounced mixture of both fine and coarse modes) and Dakar, Senegal (desert dust dominated by coarse particles). The results revealed a considerable advantage of applying polarimetric data for characterizing fine mode dominated aerosols including industrial pollution (Beijing). The use of polarization corrects the particle size distribution by decreasing the overestimated fine mode and increasing the coarse mode. It also increases the underestimated real part of the refractive index and improves the retrieval of the fraction of spherical particles due to the high sensitivity of polarization to particle shape

  10. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    Many regulators, utilities, customer groups, and other stakeholders are reevaluating existing regulatory models and the roles and financial implications for electric utilities in the context of today’s environment of increasing distributed energy resource (DER) penetrations, forecasts of significant T&D investment, and relatively flat or negative utility sales growth. When this is coupled with predictions about fewer grid-connected customers (i.e., customer defection), there is growing concern about the potential for serious negative impacts on the regulated utility business model. Among states engaged in these issues, the range of topics under consideration is broad. Most of these states are considering whether approaches that have been applied historically to mitigate the impacts of previous “disruptions” to the regulated utility business model (e.g., energy efficiency) as well as to align utility financial interests with increased adoption of such “disruptive technologies” (e.g., shareholder incentive mechanisms, lost revenue mechanisms) are appropriate and effective in the present context. A handful of states are presently considering more fundamental changes to regulatory models and the role of regulated utilities in the ownership, management, and operation of electric delivery systems (e.g., New York “Reforming the Energy Vision” proceeding).

  11. Measuring the Capacity Utilization of Public District Hospitals in Tunisia: Using Dual Data Envelopment Analysis Approach

    Directory of Open Access Journals (Sweden)

    Chokri Arfa

    2017-01-01

    Background: Public district hospitals (PDHs) in Tunisia are not operating at full capacity and underutilize their operating budgets. Methods: Individual PDH capacity utilization (CU) is measured for 2000 and 2010 using a dual data envelopment analysis (DEA) approach with shadow price input and output restrictions. CU is estimated for 101 of 105 PDHs in 2000 and 94 of 105 PDHs in 2010. Results: On average, unused capacity is estimated at 18% in 2010 vs. 13% in 2000. 26% of PDHs underutilized their operating budget in 2010 vs. 21% in 2000. Conclusion: Inadequate supply, health care quality and the lack of operating budget should be tackled to reduce unmet users' needs and the bypassing of the PDHs, and thus to increase their CU. Social health insurance should be turned into a direct purchaser of curative and preventive care for the PDHs.
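
    For readers unfamiliar with DEA, the sketch below solves the simpler textbook input-oriented CCR envelopment model with `scipy.optimize.linprog`; the paper itself uses a dual formulation with shadow-price restrictions, which this does not reproduce. The hospital data are invented (4 units, inputs = beds and budget, output = admissions).

    ```python
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[20.0, 300.0], [30.0, 500.0],
                  [40.0, 400.0], [50.0, 600.0]])   # inputs per DMU
    Y = np.array([[1000.0], [1400.0], [1500.0], [1600.0]])  # outputs per DMU

    def ccr_efficiency(j0):
        """Input-oriented CCR: min theta s.t. sum_j lam_j x_j <= theta x_j0,
        sum_j lam_j y_j >= y_j0, lam >= 0. Variables: [theta, lam_1..lam_n]."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.zeros(1 + n)
        c[0] = 1.0                                  # minimize theta
        A_ub, b_ub = [], []
        for i in range(m):                          # input constraints
            A_ub.append(np.concatenate(([-X[j0, i]], X[:, i])))
            b_ub.append(0.0)
        for r in range(s):                          # output constraints
            A_ub.append(np.concatenate(([0.0], -Y[:, r])))
            b_ub.append(-Y[j0, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (1 + n))
        return res.x[0]                             # efficiency score in (0, 1]

    scores = [ccr_efficiency(j) for j in range(len(X))]
    print([round(t, 3) for t in scores])
    ```

    Unused capacity for a unit can then be read off as one minus its efficiency score.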

  12. Co-firing straw and coal in a 150-MWe utility boiler: in situ measurements

    DEFF Research Database (Denmark)

    Hansen, P. F.B.; Andersen, Karin Hedebo; Wieck-Hansen, K.

    1998-01-01

    A 2-year demonstration program is carried out by the Danish utility I/S Midtkraft at a 150-MWe PF-boiler unit reconstructed for co-firing straw and coal. As a part of the demonstration program, a comprehensive in situ measurement campaign was conducted during the spring of 1996 in collaboration with the Technical University of Denmark. Six sample positions have been established between the upper part of the furnace and the economizer. The campaign included in situ sampling of deposits on water/air-cooled probes, sampling of fly ash, flue gas and gas phase alkali metal compounds, and aerosols, as well … deposition propensities and high temperature corrosion during co-combustion of straw and coal in PF-boilers. Danish full scale results from co-firing straw and coal, the test facility and test program, and the potential theoretical support from the Technical University of Denmark are presented in this paper…

  13. On how access to an insurance market affects investments in safety measures, based on the expected utility theory

    International Nuclear Information System (INIS)

    Bjorheim Abrahamsen, Eirik; Asche, Frank

    2011-01-01

    This paper focuses on how access to an insurance market should influence investments in safety measures in accordance with the ruling paradigm for decision-making under uncertainty: the expected utility theory. We show that access to an insurance market will, in most situations, influence investments in safety measures. For an expected utility maximizer, an overinvestment in safety measures is likely if access to an insurance market is ignored, while an underinvestment in safety measures is likely if insurance is purchased without paying attention to the possibility of reducing the probability and/or consequences of an accidental event by safety measures.
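
    The qualitative point can be reproduced with a toy numerical example (all utility, wealth, and probability figures below are assumed, not from the paper): a risk-averse expected-utility maximizer picks a safety investment s that lowers the accident probability, and the optimum shifts once actuarially fair full insurance is available.

    ```python
    import math

    W = 100.0          # initial wealth
    LOSS = 60.0        # loss if the accident occurs
    u = math.log       # risk-averse utility (assumed)

    def p_accident(s):
        """Accident probability falls with safety investment s (assumed form)."""
        return 0.2 * math.exp(-0.5 * s)

    def eu_no_insurance(s):
        p = p_accident(s)
        return p * u(W - s - LOSS) + (1 - p) * u(W - s)

    def eu_full_insurance(s):
        # fair premium equals expected loss, so final wealth is certain
        return u(W - s - p_accident(s) * LOSS)

    grid = [i / 100 for i in range(0, 1001)]
    s_no = max(grid, key=eu_no_insurance)    # optimal s ignoring insurance
    s_ins = max(grid, key=eu_full_insurance) # optimal s with fair insurance
    print(s_no, s_ins)
    ```

    In this example the agent who ignores the insurance market invests more in safety (s_no > s_ins), matching the paper's overinvestment result.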

  14. Measuring Model Rocket Engine Thrust Curves

    Science.gov (United States)

    Penn, Kim; Slaton, William V.

    2010-01-01

    This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…

  15. Models Used for Measuring Customer Engagement

    Directory of Open Access Journals (Sweden)

    Mihai TICHINDELEAN

    2013-12-01

    The purpose of the paper is to define and measure customer engagement as a forming element of relationship marketing theory. In the first part of the paper, the authors review the marketing literature regarding the concept of customer engagement and summarize the main models for measuring it. One probability model (the Pareto/NBD model) and one parametric model (the RFM model), specific to the customer acquisition phase, are theoretically detailed. The second part of the paper is an application of the RFM model; the authors demonstrate that there is no statistically significant variation within the clusters formed on two different data sets (training and test set) if the cluster centroids of the training set are used as initial cluster centroids for the second test set.
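
    The centroid-reuse idea can be sketched with invented data: cluster customers on (Recency, Frequency, Monetary) in a training set, then start the test-set clustering from the fitted training centroids. This is a plain Lloyd's-algorithm illustration, not the paper's analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def kmeans(data, centroids, iters=50):
        """Lloyd's algorithm starting from the given centroids."""
        for _ in range(iters):
            # assign each point to its nearest centroid
            d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # move each centroid to the mean of its assigned points
            centroids = np.array([data[labels == k].mean(axis=0)
                                  for k in range(len(centroids))])
        return centroids, labels

    # two invented RFM segments (low-value and high-value customers)
    def sample(n, center):
        return np.asarray(center) + rng.normal(0, 0.5, size=(n, 3))

    train = np.vstack([sample(100, [8, 1, 1]), sample(100, [1, 9, 9])])
    test = np.vstack([sample(80, [8, 1, 1]), sample(80, [1, 9, 9])])

    init = train[[0, -1]]                       # one seed from each segment
    train_centroids, _ = kmeans(train, init)
    # reuse the training centroids as starting centroids for the test set
    test_centroids, _ = kmeans(test, train_centroids)
    print(np.round(train_centroids, 1))
    print(np.round(test_centroids, 1))
    ```

    With well-separated segments, both runs recover nearly identical centroids, which is the stability the paper reports.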

  16. Promoting target models by potential measures

    OpenAIRE

    Dubiel, Joerg

    2010-01-01

    Direct marketers use target models in order to minimize the spreading loss of sales efforts. The application of target models has become more widespread with the increasing range of sales efforts. Target models are relevant for offline marketers sending printed mailings as well as for online marketers who have to avoid excessive contact intensity. However, business practice has retained its evaluation approach since the late 1960s: marketing decision-makers still prefer managerial performance measures of the economic benefit of a t...

  17. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are based only on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs is compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs are examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model is discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless
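
    A toy version of the sheltering-height calculation, with invented grids: hs in a shoreline buffer is the surface elevation (terrain plus canopy, as in a GDEM2-style surface model) minus the lake level, averaged over buffer cells. The raster values and buffer mask below are hypothetical.

    ```python
    import numpy as np

    elevation = np.array([[301.0, 305.0, 310.0],
                          [300.0, 302.0, 308.0],
                          [300.0, 300.0, 303.0]])   # surface model incl. canopy (m)
    lake_level = 300.0
    buffer_mask = np.array([[0, 1, 1],
                            [0, 0, 1],
                            [0, 0, 1]], dtype=bool)  # cells in the 100 m buffer

    # mean sheltering height above the lake surface within the buffer
    hs = (elevation - lake_level)[buffer_mask].mean()
    print(hs)
    ```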

  18. Markowitz portfolio optimization model employing fuzzy measure

    Science.gov (United States)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine risk and return. We apply the original mean-variance model as a benchmark and compare it against fuzzy mean-variance models in which the returns are modeled by specific types of fuzzy numbers. The model with the fuzzy approach gives better performance compared to the mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
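
    For reference, the crisp baseline the paper extends can be computed in closed form; the fuzzy variant replaces these sample moments with fuzzy-number returns. The covariance matrix below is invented. The minimum-variance weights are w = Σ⁻¹1 / (1ᵀΣ⁻¹1).

    ```python
    import numpy as np

    # assumed covariance matrix of three assets (illustrative only)
    cov = np.array([[0.04, 0.006, 0.01],
                    [0.006, 0.09, 0.012],
                    [0.01, 0.012, 0.16]])

    ones = np.ones(3)
    w = np.linalg.solve(cov, ones)
    w = w / w.sum()            # minimum-variance portfolio weights
    var = float(w @ cov @ w)   # resulting portfolio variance
    print(np.round(w, 3), round(var, 4))
    ```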

  19. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
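
    The contrast the paper draws can be illustrated with a toy simulation (all effect sizes and noise levels invented). A two-sample t-test at the last visit uses one value per subject; testing per-subject fitted slopes is a crude stand-in for the mixed-effects analysis, since it uses every visit.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n, visits = 30, np.arange(5.0)

    def simulate(slope):
        # subject-level random intercepts plus per-visit noise
        base = rng.normal(0, 1.0, size=(n, 1))
        return base + slope * visits + rng.normal(0, 1.0, size=(n, len(visits)))

    control = simulate(0.0)
    treated = simulate(0.3)   # treatment effect grows over time

    # (a) last-visit-only analysis
    _, p_last = stats.ttest_ind(treated[:, -1], control[:, -1])

    # (b) all-visits analysis via per-subject OLS slopes
    def slopes(y):
        return np.array([np.polyfit(visits, row, 1)[0] for row in y])

    _, p_slope = stats.ttest_ind(slopes(treated), slopes(control))
    print(f"p(last visit)={p_last:.4f}  p(slopes)={p_slope:.4f}")
    ```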

  20. Multiple indicators, multiple causes measurement error models.

    Science.gov (United States)

    Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J

    2014-11-10

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. Copyright © 2014 John Wiley & Sons, Ltd.
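
    The MIMIC ME setting is much richer than this, but the basic consequence of classical measurement error it addresses is easy to demonstrate with simulated data (all numbers invented): regressing an outcome on an error-contaminated cause attenuates the slope by λ = var(x) / (var(x) + var(u)).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    x = rng.normal(0, 1.0, n)           # true (unobserved) cause
    u = rng.normal(0, 0.5, n)           # classical measurement error
    w = x + u                           # observed, contaminated cause
    y = 2.0 * x + rng.normal(0, 0.1, n) # outcome with true slope 2

    slope_true = np.polyfit(x, y, 1)[0]   # ~2.0
    slope_naive = np.polyfit(w, y, 1)[0]  # attenuated toward zero
    lam = 1.0 / (1.0 + 0.5**2)            # theoretical attenuation factor
    print(round(slope_true, 2), round(slope_naive, 2), lam)
    ```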

  1. The Utility of Using a Near-Infrared (NIR) Camera to Measure Beach Surface Moisture

    Science.gov (United States)

    Nelson, S.; Schmutz, P. P.

    2017-12-01

    Surface moisture content is an important factor that must be considered when studying aeolian sediment transport in a beach environment. A few different instruments and procedures are available for measuring surface moisture content (i.e., moisture probes, LiDAR, and gravimetric moisture data from surface scrapings); however, these methods can be inaccurate, costly, and impractical, particularly in the field. Near-infrared (NIR) spectral band imagery is another technique used to obtain moisture data. NIR imagery has predominantly been used in remote sensing and has yet to be used for ground-based measurements. Dry sand reflects infrared radiation given off by the sun, whereas wet sand absorbs IR radiation. With this in mind, this study assesses the utility of measuring the surface moisture content of beach sand with a modified NIR camera. A traditional point-and-shoot digital camera was internally modified with the placement of a visible light-blocking filter. Images were taken of three different types of beach sand at controlled moisture content values, with sunlight as the source of infrared radiation. A technique was established through trial and error by comparing resultant histogram values in Adobe Photoshop across the various moisture conditions. The resultant IR absorption histogram values were calibrated to actual gravimetric moisture content from surface scrapings of the samples. Overall, the results illustrate that the NIR-modified camera does not adequately measure beach surface moisture content. However, there were noted differences in IR absorption histogram values among the different sediment types. Sediment with darker quartz mineralogy provided larger variations in histogram values, but the technique is not sensitive enough to accurately represent low moisture percentages, which are of most importance when studying aeolian sediment transport.
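
    The calibration step described here amounts to a simple regression; the sketch below fits mean IR-absorption histogram values against gravimetric moisture and inverts the fit for a new image. All paired observations are invented, not the study's data.

    ```python
    import numpy as np

    # hypothetical paired observations: (histogram mean, moisture % by weight)
    hist_vals = np.array([40.0, 70.0, 105.0, 130.0, 165.0, 200.0])
    moisture = np.array([0.5, 2.0, 4.1, 6.0, 8.2, 10.5])

    # linear calibration: moisture = slope * histogram_mean + intercept
    slope, intercept = np.polyfit(hist_vals, moisture, 1)

    def predict_moisture(histogram_mean):
        return slope * histogram_mean + intercept

    print(round(predict_moisture(120.0), 2))   # predicted moisture % for a new image
    ```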

  2. Utilization of MRI for Cerebral White Matter Injury in a Hypobaric Swine Model-Validation of Technique

    Science.gov (United States)

    2017-05-23

    disruption of axonal integrity as measured by DTI [11,12]. The traditional neuropathological model posits gaseous embolic damage to the brain as the main … embolic cerebral damage and change in permeability of the blood brain barrier (BBB) [26,27]. DSC utilizes an exogenous contrast imaging agent that … spatial (1.6 × 1.6 × 2.0 mm) resolution, acquiring 23 slices for full brain coverage with no gaps. Imaging slices were prescribed axially. Image

  3. Utility values associated with advanced or metastatic non-small cell lung cancer: data needs for economic modeling.

    Science.gov (United States)

    Brown, Jacqueline; Cook, Keziah; Adamski, Kelly; Lau, Jocelyn; Bargo, Danielle; Breen, Sarah; Chawla, Anita

    2017-04-01

    Cost-effectiveness analyses often inform healthcare reimbursement decisions. The preferred measure of effectiveness is the quality adjusted life year (QALY) gained, where the quality of life adjustment is measured in terms of utility. Areas covered: We assessed the availability and variation of utility values for health states associated with advanced or metastatic non-small cell lung cancer (NSCLC) to identify values appropriate for cost-effectiveness models assessing alternative treatments. Our systematic search of six electronic databases (January 2000 to August 2015) found the current literature to be sparse in terms of utility values associated with NSCLC, identifying 27 studies. Utility values were most frequently reported over time and by treatment type, and less frequently by disease response, stage of disease, adverse events or disease comorbidities. Expert commentary: In response to rising healthcare costs, payers increasingly consider the cost-effectiveness of novel treatments in reimbursement decisions, especially in oncology. As the number of therapies available to treat NSCLC increases, cost-effectiveness analyses will play a key role in reimbursement decisions in this area. Quantifying the relationship between health and quality of life for NSCLC patients via utility values is an important component of assessing the cost effectiveness of novel treatments.

  4. Utilizing Multidimensional Measures of Race in Education Research: The Case of Teacher Perceptions.

    Science.gov (United States)

    Irizarry, Yasmiyn

    2015-10-01

    Education scholarship on race using quantitative data analysis consists largely of studies on the black-white dichotomy and, more recently, on the experiences of students within conventional racial/ethnic categories (white, Hispanic/Latina/o, Asian, black). Despite substantial shifts in the racial and ethnic composition of American children, studies continue to overlook the diverse racialized experiences of students of Asian and Latina/o descent, the racialization of immigration status, and the educational experiences of Native American students. This study provides one possible strategy for developing multidimensional measures of race using large-scale datasets and demonstrates the utility of multidimensional measures for examining educational inequality, using teacher perceptions of student behavior as a case in point. With data from the first grade wave of the Early Childhood Longitudinal Study, Kindergarten Cohort of 1998-1999, I examine differences in teacher ratings of Externalizing Problem Behaviors and Approaches to Learning across fourteen racialized subgroups at the intersections of race, ethnicity, and immigrant status. Results show substantial subgroup variation in teacher perceptions of problem and learning behaviors, while also highlighting key points of divergence and convergence within conventional racial/ethnic categories.

  5. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2004-12-01

    Full Text Available An SN (sinusoids plus noise) model is a spectral model in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases. The remaining non-periodic components are represented by filtered noise. The sinusoidal model utilizes physical properties of musical instruments, and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modelling can be applied in compression, transformation, separation of sounds, etc. The designed system is based on methods used in SN modelling. We have proposed a model that achieves good results in audio perception. Although many systems do not save the phases of the sinusoids, these are important for better modelling of transients, for the computation of the residual and, last but not least, for stereo signals. One of the fundamental properties of the proposed system is the ability to reconstruct the signal not only in amplitude but in phase as well.
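The deterministic-plus-stochastic decomposition described above can be sketched in a few lines. This is a generic illustration of SN synthesis with invented parameter values, not the authors' system:

```python
import numpy as np

# Sketch of SN (sinusoids plus noise) synthesis: the deterministic part is a
# sum of sinusoids with time-varying frequency/amplitude/phase; the
# stochastic part is filtered white noise. All parameters are illustrative.
fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs

def sinusoid(amp, f_start, f_end, phi0):
    """One partial with a linear frequency glide; phase = integral of freq."""
    freq = np.linspace(f_start, f_end, t.size)
    phase = phi0 + 2 * np.pi * np.cumsum(freq) / fs
    return amp * np.cos(phase)

# Deterministic part: two partials.
sines = sinusoid(0.6, 440, 442, 0.0) + sinusoid(0.3, 880, 884, 1.0)

# Stochastic part: white noise shaped by a simple moving-average filter.
rng = np.random.default_rng(0)
noise = np.convolve(rng.standard_normal(t.size), np.ones(8) / 8, mode="same")

signal = sines + 0.05 * noise   # resynthesized SN signal
```

Keeping the explicit phase track (rather than amplitude only) is what allows the phase-accurate reconstruction the abstract emphasizes.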

  6. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  7. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Full Text Available Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst’s work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as an input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be mathematically described, and that it will therefore be possible to create a tool that helps process analysts design proper models. A systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that this area had already been the subject of research in the past: thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed scientific publications and existing quality measures do not reflect all important attributes of business process model clarity, simplicity and completeness. Therefore it would be appropriate to add new measures of quality.
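Two of the simplest structural measures that appear in this literature can be computed directly from a model's graph: size (node count) and the coefficient of network connectivity (CNC = arcs / nodes). The tiny example model below is invented for illustration:

```python
# Sketch of two simple structural quality measures for a process model
# represented as a graph. The example model is invented.
def size(nodes):
    """Size measure: number of nodes (tasks, events, gateways)."""
    return len(nodes)

def cnc(nodes, arcs):
    """Coefficient of network connectivity: arcs per node."""
    return len(arcs) / len(nodes)

nodes = ["start", "check order", "ship", "end"]
arcs = [("start", "check order"), ("check order", "ship"), ("ship", "end")]
```

Higher CNC generally indicates a denser, harder-to-read diagram, which is one way such measures operationalize "simplicity".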

  8. A stochastic model for quantum measurement

    International Nuclear Information System (INIS)

    Budiyono, Agung

    2013-01-01

    We develop a statistical model of microscopic stochastic deviation from classical mechanics based on a stochastic process with a transition probability that is assumed to be given by an exponential distribution of infinitesimal stationary action. We apply the statistical model to stochastically modify a classical mechanical model for the measurement of physical quantities, reproducing the prediction of quantum mechanics. The system+apparatus has a definite configuration at all times, as in classical mechanics, fluctuating randomly along a continuous trajectory. On the other hand, the wavefunction and the quantum mechanical Hermitian operator corresponding to the physical quantity arise formally as artificial mathematical constructs. During a single measurement, the wavefunction of the whole system+apparatus evolves according to a Schrödinger equation and the configuration of the apparatus acts as the pointer of the measurement, so that there is no wavefunction collapse. We also show that while the outcome of each single measurement event does not reveal the actual value of the physical quantity prior to measurement, its average in an ensemble of identical measurements is equal to the average of the actual value of the physical quantity prior to measurement over the distribution of the configuration of the system. (paper)

  9. Utilization of a mental health collaborative care model among patients who require interpreter services.

    Science.gov (United States)

    Njeru, Jane W; DeJesus, Ramona S; St Sauver, Jennifer; Rutten, Lila J; Jacobson, Debra J; Wilson, Patrick; Wieland, Mark L

    2016-01-01

    Immigrants and refugees to the United States have a higher prevalence of depression compared to the general population and are less likely to receive adequate mental health services and treatment. Those with limited English proficiency (LEP) are at an even higher risk of inadequate mental health care. Collaborative care management (CCM) models for depression are effective in achieving treatment goals among a wide range of patient populations, including patients with LEP. The purpose of this study was to assess the utilization of a statewide initiative that uses CCM for depression management among patients with LEP in a large primary care practice. This was a retrospective cohort study of patients with depression in a large primary care practice in Minnesota. Patients met criteria for enrollment into CCM with a provider-generated diagnosis of depression or dysthymia in the electronic medical record and a Patient Health Questionnaire-9 (PHQ-9) score ≥10. Patient-identified need for interpreter services was used as a proxy for LEP. Rates of enrollment into the DIAMOND (Depression Improvement Across Minnesota, Offering A New Direction) program, a statewide initiative that uses CCM for depression management, were measured. These rates were compared between eligible patients who required interpreter services and those who did not. Of the 7561 patients who met criteria for enrollment into the DIAMOND program during the study interval, 3511 were enrolled. Only 18.2% of the eligible patients with LEP were enrolled into DIAMOND, compared with 47.2% of the eligible English-proficient patients. This finding persisted after adjustment for differences in age, gender and depression severity scores (adjusted OR [95% confidence interval] = 0.43 [0.23, 0.81]). Within primary care practices, tailored interventions are needed, including those that address cultural competence and language navigation, to improve the utilization of this effective model among
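As a sanity check on the reported effect (not the paper's adjusted model), the crude odds ratio implied by the two enrollment rates can be computed directly; it differs from the adjusted OR of 0.43 because the latter controls for age, gender and depression severity:

```python
# Crude odds ratio from the reported enrollment proportions (18.2% for LEP
# patients vs. 47.2% for English-proficient patients). This is a simple
# check, not the covariate-adjusted estimate reported in the abstract.
def odds_ratio(p1, p2):
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

crude_or = odds_ratio(0.182, 0.472)   # LEP vs. English-proficient
```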

  10. Clinical utility of the Five-Factor Model of personality disorder.

    Science.gov (United States)

    Mullins-Sweatt, Stephanie N; Lengel, Gregory J

    2012-12-01

    There exists a great deal of research regarding the validity of the Five-Factor Model (FFM) of personality disorder. One of the most common objections to this model is concern regarding clinical utility. This article discusses clinical utility in terms of three fundamental components (i.e., ease of usage, communication, and treatment). In addition, a considerable number of recent empirical studies have examined whether the FFM compares well to personality disorder diagnostic categories with respect to all three components of clinical utility. The purpose of the current article is to provide a description of the implications of each component of clinical utility as it relates to the FFM and to acknowledge and address the empirical findings. © 2012 The Authors. Journal of Personality © 2012, Wiley Periodicals, Inc.

  11. Awareness of occupational hazards and utilization of safety measures among welders in Kaduna metropolis, northern Nigeria.

    Science.gov (United States)

    Sabitu, K; Iliyasu, Z; Dauda, M M

    2009-01-01

    Welders are exposed to a variety of occupational hazards with untoward health effects. However, little is known of welders' awareness of health hazards and their adherence to safety precautions in developing countries. This study assessed the awareness of occupational hazards and adherence to safety measures among welders in Kaduna metropolis in northern Nigeria. A structured questionnaire was administered to a cross-section of 330 welders in the metropolis. Information was sought on their socio-demographic characteristics, their awareness of occupational hazards and their adherence to safety measures. All welders were males, with a mean age of 35.7 +/- 8.4 years. The illiteracy rate was 7.6%. Overall, 257 (77.9%) of the welders were aware of one or more workplace hazards; awareness was positively influenced by educational attainment, age, nature of training and work experience. Of the 330 respondents, 282 (85.3%) had experienced one or more work-related accidents in the preceding year. The most common injuries sustained were cuts/injuries to the hands and fingers (38.0%), back/waist pain (19%), arc eye injuries/foreign bodies (17.0%), burns (14.0%), hearing impairment (7.0%), fractures (4.0%) and amputation (1.0%). Only 113 (34.2%) welders used one or more types of protective device, with eye goggles (60.9%), hand gloves (50.3%) and boots (34.5%) being the most frequently used. Regular use of safety devices, shorter working hours and increasing experience were protective against occupational accidents. The level of awareness of occupational hazards was high, with suboptimal utilization of protective measures against the hazards. There is therefore a need for health and safety education of these workers to protect their health and increase productivity.

  12. Cross-bridge blocker BTS permits direct measurement of SR Ca2+ pump ATP utilization in toadfish swimbladder muscle fibers.

    Science.gov (United States)

    Young, Iain S; Harwood, Claire L; Rome, Lawrence C

    2003-10-01

    Because the major processes involved in muscle contraction require rapid utilization of ATP, measurement of ATP utilization can provide important insights into the mechanisms of contraction. It is necessary, however, to differentiate between the contribution made by cross-bridges and that of the sarcoplasmic reticulum (SR) Ca2+ pumps. Specific and potent SR Ca2+ pump blockers have been used in skinned fibers to permit direct measurement of cross-bridge ATP utilization. Until now, there was no analogous cross-bridge blocker. Recently, N-benzyl-p-toluene sulfonamide (BTS) was found to suppress force generation at micromolar concentrations. We tested whether BTS could be used to block cross-bridge ATP utilization, thereby permitting direct measurement of SR Ca2+ pump ATP utilization in saponin-skinned fibers. At 25 microM, BTS virtually eliminates force and cross-bridge ATP utilization while having no effect on SR pump ATP utilization. Hence, we used BTS to make some of the first direct measurements of ATP utilization of intact SR over a physiological range of [Ca2+] at 15 degrees C. Curve fits to SR Ca2+ pump ATP utilization vs. pCa indicate a much lower Hill coefficient (1.49) than that describing cross-bridge force generation vs. pCa (approximately 5). Furthermore, we found that BTS also effectively eliminates force generation in bundles of intact swimbladder muscle, suggesting that it will be an important tool for studying integrated SR function during normal motor behavior.
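The Hill coefficients quoted above describe the steepness of activation curves. A sketch of the Hill function makes the contrast concrete; the n values (1.49 for pump ATP utilization, ~5 for force) are from the abstract, while Vmax and K are illustrative placeholders:

```python
import numpy as np

# Hill function for activity vs. [Ca2+]; n = 1.49 (SR pump) and n = 5
# (force) are the abstract's values, vmax and k are invented.
def hill(ca, vmax, k, n):
    return vmax * ca**n / (k**n + ca**n)

ca = np.logspace(-8, -5, 50)             # [Ca2+] in molar
pump_rate = hill(ca, 1.0, 1e-6, 1.49)    # shallow activation curve
force = hill(ca, 1.0, 1e-6, 5.0)         # much steeper activation curve
```

At twice the half-activation concentration the steep (force) curve is already near saturation while the shallow (pump) curve is not, which is the functional meaning of the different Hill coefficients.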

  13. Assessment of the biophysical impacts of utility-scale photovoltaics through observations and modelling

    Science.gov (United States)

    Broadbent, A. M.; Georgescu, M.; Krayenhoff, E. S.; Sailor, D.

    2017-12-01

    Utility-scale solar power plants are a rapidly growing component of the solar energy sector. Utility-scale photovoltaic (PV) solar power generation in the United States has increased by 867% since 2012 (EIA, 2016). This expansion is likely to continue as the cost of PV technologies decreases. While most agree that solar power can decrease greenhouse gas emissions, the biophysical effects of PV systems on surface energy balance (SEB), and the implications for surface climate, are not well understood. To our knowledge, there has never been a detailed observational study of SEB at a utility-scale solar array. This study presents data from an eddy covariance observational tower, temporarily placed above a utility-scale PV array in southern Arizona. Comparison of PV SEB with a reference (unmodified) site shows that solar panels can alter the SEB and near-surface climate. SEB observations are used to develop and validate a new and more complete SEB PV model. In addition, the PV model is compared to simpler PV modelling methods. The simpler PV models produce results that differ from those of our newly developed model and cannot capture the more complex processes that influence PV SEB. Finally, hypothetical scenarios of PV expansion across the continental United States (CONUS) were developed using various spatial mapping criteria. CONUS simulations of PV expansion reveal regional variability in the biophysical effects of PV expansion. The study presents the first rigorous and validated simulations of the biophysical effects of utility-scale PV arrays.

  14. THE BUSINESS MODEL AND FINANCIAL ASSETS MEASUREMENT

    OpenAIRE

    NICULA Ileana

    2012-01-01

    The paper analyses some aspects of the implementation of IFRS 9, namely the relationship between the business model approach and asset classification and measurement. It does not discuss the cash flow characteristics, another important aspect of asset classification, or reclassifications. The business model is related to some characteristics of banks (opaqueness, leverage ratio, compliance with capital and sound liquidity requirements, and risk management) and to Special Purpose...

  15. Magnetic measurement of creep damage: modeling and measurement

    Science.gov (United States)

    Sablik, Martin J.; Jiles, David C.

    1996-11-01

    Results of inspection of creep damage by magnetic hysteresis measurements on Cr-Mo steel are presented. It is shown that structure-sensitive parameters such as coercivity, remanence and hysteresis loss are sensitive to creep damage. Previous metallurgical studies have shown that creep changes the microstructure of the material by introducing voids, dislocations, and grain boundary cavities. As cavities develop, dislocations and voids move out to grain boundaries; therefore, the total pinning sources for domain wall motion are reduced. This, together with the introduction of a demagnetizing field due to the cavities, results in a decrease of coercivity and remanence and hence, concomitantly, of hysteresis loss. Incorporating these structural effects into a magnetomechanical hysteresis model developed previously by us produces numerical variations of coercivity, remanence and hysteresis loss consistent with what is measured. The magnetic model has therefore been used to obtain appropriately modified magnetization curves for each element of creep-damaged material in a finite element (FE) calculation. The FE calculation has been used to simulate magnetic detection of non-uniform creep damage around a seam weld in a 2.25Cr-1Mo steam pipe. In particular, in the simulation, a magnetic C-core with primary and secondary coils was placed with its pole pieces flush against the specimen in the vicinity of the weld. The secondary emf was shown to be reduced when creep damage was present inside the pipe wall at the cusp of the weld and in its vicinity. The calculation showed that the C-core detected creep damage best if it spanned the weld seam width and if the current in the primary was such that the C-core was not magnetically saturated. Experimental measurements also exhibited the predicted dip in emf, but the measurements are not yet conclusive because the effects of magnetic property changes of weld materials, heat-affected material, and base material have

  16. Measurement of Laser Weld Temperatures for 3D Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  17. Measuring the Capacity Utilization of Public District Hospitals in Tunisia: Using Dual Data Envelopment Analysis Approach.

    Science.gov (United States)

    Arfa, Chokri; Leleu, Hervé; Goaïed, Mohamed; van Mosseveld, Cornelis

    2016-06-06

    Public district hospitals (PDHs) in Tunisia are not operating at full plant capacity and underutilize their operating budget. Individual PDHs' capacity utilization (CU) is measured for 2000 and 2010 using a dual data envelopment analysis (DEA) approach with shadow-price input and output restrictions. The CU is estimated for 101 of 105 PDHs in 2000 and 94 of 105 PDHs in 2010. On average, unused capacity is estimated at 18% in 2010 vs. 13% in 2000. Some 26% of PDHs underutilized their operating budget in 2010 vs. 21% in 2000. Inadequate supply, poor care quality and the lack of operating budget should be tackled to reduce unmet users' needs and the bypassing of PDHs, and thus to increase their CU. Social health insurance should be turned into a direct purchaser of curative and preventive care for the PDHs. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
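A minimal sketch of how DEA-based capacity utilization can be computed. This is a generic output-oriented DEA model, not the authors' dual formulation with shadow-price restrictions, and the two-hospital data set is invented:

```python
import numpy as np
from scipy.optimize import linprog  # assumes scipy is available

# Output-oriented DEA: for hospital o, find the largest factor phi by which
# its output could expand using a convex combination of peers' technology;
# capacity utilization can then be read as 1/phi. Data are invented.
X = np.array([[1.0], [1.0]])   # inputs (e.g. operating budget), one per row
Y = np.array([[1.0], [2.0]])   # outputs (e.g. admissions), one per row

def output_efficiency(o):
    n = X.shape[0]
    c = np.r_[-1.0, np.zeros(n)]                  # maximize phi
    A_in = np.c_[np.zeros((X.shape[1], 1)), X.T]  # sum_j lam_j x_j <= x_o
    A_out = np.c_[Y[o][:, None], -Y.T]            # phi*y_o <= sum_j lam_j y_j
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[X[o], np.zeros(Y.shape[1])])
    return res.x[0]

cu = 1.0 / output_efficiency(0)   # hospital 0's capacity utilization
```

Here hospital 0 produces half the output of its equally-resourced peer, so its estimated capacity utilization is 0.5.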

  18. Calculating realistic voltages across the US power grid utilizing measured impedances and magnetic fields

    Science.gov (United States)

    Lucas, G.; Love, J. J.; Kelbert, A.; Bedrosian, P.; Rigler, E. J.

    2017-12-01

    Space weather induces significant geoelectric fields within Earth's subsurface that can adversely affect electric power grids. The complex interaction between space weather and the solid Earth has traditionally been approached with the use of simple 1-D impedance functions relating the inducing magnetic field to the induced geoelectric field. Ongoing data collection through the NSF EarthScope program has produced measured impedance data across much of the continental US. In this work, impedance data are convolved with magnetic field variations, obtained from USGS magnetic observatories, during a geomagnetic storm. This convolution produces geoelectric fields within the earth. These geoelectric fields are then integrated across power transmission lines to determine the voltage generated within each power line as a function of time during a geomagnetic storm. The voltages generated within the electric power grid will be shown for several historic geomagnetic storms. The estimated voltages calculated from 1-D and 3-D impedances differ by more than 100 V across some transmission lines. In combination with grounding resistance data and network topology, these voltage estimates can be utilized by power companies to estimate geomagnetically-induced currents throughout the network. These voltage estimates can provide information on which power lines are most vulnerable to geomagnetic storms, and assist power grid companies investigating where to install additional protections within their grid.
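The frequency-domain convolution and line integration described above can be sketched compactly. The flat scalar impedance and synthetic magnetic record below are placeholders for illustration only; real EarthScope impedances are frequency-dependent tensors:

```python
import numpy as np

# Sketch: geoelectric field E(w) = Z(w) * B(w) in the frequency domain,
# then voltage = integral of E along a transmission line. A constant scalar
# impedance and synthetic magnetic data stand in for real measurements.
rng = np.random.default_rng(1)
b = rng.standard_normal(3600)        # magnetic field variation (nT), 1 Hz samples

Z = 2.0e-3                           # mV/km per nT, frequency-independent here
E = np.fft.irfft(Z * np.fft.rfft(b), n=b.size)   # geoelectric field (mV/km)

# Voltage on a straight 100 km line aligned with E, at the storm peak.
line_km = 100.0
v_peak = np.max(np.abs(E)) * line_km             # mV
```

With a frequency-dependent tensor impedance, `Z` would multiply the field spectrum component-wise at each frequency, and the line integral would sum E·dl over the actual line geometry.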

  19. A simple method for measuring glucose utilization of insulin-sensitive tissues by using the brain as a reference

    International Nuclear Information System (INIS)

    Namba, Hiroki; Nakagawa, Keiichi; Iyo, Masaomi; Fukushi, Kiyoshi; Irie, Toshiaki

    1994-01-01

    A simple method, without measurement of the plasma input function, to obtain semiquantitative values of glucose utilization in tissues other than the brain with radioactive deoxyglucose is reported. The brain, in which glucose utilization is essentially insensitive to plasma glucose and insulin concentrations, was used as an internal reference. The effects of graded doses of oral glucose loading (0.5, 1 and 2 mg/g body weight) on insulin-sensitive tissues (heart, muscle and fat tissue) were studied in the rat. By using the brain-reference method, dose-dependent increases in glucose utilization were clearly shown in all the insulin-sensitive tissues examined. The method seems to be of value for measurement of glucose utilization using radioactive deoxyglucose and positron emission tomography in the heart or other insulin-sensitive tissues, especially during glucose loading. (orig.)
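The brain-reference idea reduces to a ratio: if brain glucose utilization is treated as constant, the tissue-to-brain uptake ratio indexes tissue glucose utilization without a plasma input function. A sketch with invented count values:

```python
# Sketch of the brain-reference method: tissue uptake normalized by brain
# uptake. The count values below are invented for illustration.
def relative_utilization(tissue_counts, brain_counts):
    return tissue_counts / brain_counts

baseline = relative_utilization(1200.0, 4000.0)   # e.g. muscle, fasting
loaded = relative_utilization(2100.0, 4000.0)     # after oral glucose load
increase = loaded / baseline                      # fold change in utilization
```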

  20. The elastic body model: a pedagogical approach integrating real time measurements and modelling activities

    International Nuclear Information System (INIS)

    Fazio, C; Guastella, I; Tarantino, G

    2007-01-01

    In this paper, we describe a pedagogical approach to elastic body movement based on measurements of the contact times between a metallic rod and small bodies colliding with it, and on modelling of the experimental results using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities were built in the context of the laboratory on mechanical wave propagation of the two-year graduate teacher education programme at the University of Palermo. Some considerations about observed changes in trainee teachers' attitudes towards utilizing experiments and modelling are discussed.

  1. Semi-automatic learning of simple diagnostic scores utilizing complexity measures.

    Science.gov (United States)

    Atzmueller, Martin; Baumeister, Joachim; Puppe, Frank

    2006-05-01

    Knowledge acquisition and maintenance in medical domains with a large application domain ontology is a difficult task. To reduce knowledge elicitation costs, semi-automatic learning methods can be used to support the domain specialists. They are usually not only interested in the accuracy of the learned knowledge: the understandability and interpretability of the learned models is of prime importance as well, and simple models are often preferable to complex ones. We propose diagnostic scores as a promising approach for the representation of simple diagnostic knowledge and present a method for inductive learning of diagnostic scores, which can be incrementally refined by including background knowledge. We present complexity measures for determining the complexity of the learned scores. We give an evaluation of the presented approach using a case base from the fielded system SonoConsult. We further discuss how the user can easily balance between accuracy and complexity of the learned knowledge by applying the presented measures. We argue that semi-automatic learning methods can support the domain specialist efficiently when building (diagnostic) knowledge systems from scratch. The presented complexity measures allow for an intuitive assessment of the learned patterns.
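A diagnostic score of the kind described above is just a weighted sum of findings compared to a threshold. The findings, weights and threshold below are invented for illustration; one obvious complexity measure is the number of scored features:

```python
# Sketch of a simple diagnostic score: each observed finding contributes a
# small integer weight; the summed score is compared to a threshold.
# Findings, weights and threshold are invented.
weights = {"fever": 2, "cough": 1, "dyspnea": 3}
threshold = 4

def score(findings):
    return sum(weights[f] for f in findings)

def diagnose(findings):
    return score(findings) >= threshold

complexity = len(weights)   # one crude complexity measure: feature count
```

Balancing accuracy against complexity then amounts to trading off predictive performance against the number (and granularity) of scored features.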

  2. Adolescent idiopathic scoliosis screening for school, community, and clinical health promotion practice utilizing the PRECEDE-PROCEED model

    Directory of Open Access Journals (Sweden)

    Wyatt Lawrence A

    2005-11-01

    Full Text Available Abstract Background Screening for adolescent idiopathic scoliosis (AIS) is a commonly performed procedure for school children during the high-risk years. The PRECEDE-PROCEED (PP) model is a health promotion planning model that has not been utilized for the clinical diagnosis of AIS. The purpose of this research is to study AIS in the school-age population using the PP model and its relevance for community, school, and clinical health promotion. Methods MEDLINE was utilized to locate AIS data. Studies were screened for relevance and applicability under the auspices of the PP model. Where data were unavailable, expert opinion based on consensus was utilized. Results The social assessment of quality of life is limited, with few studies approaching the long-term effects of AIS. Epidemiologically, AIS is the most common form of scoliosis and the leading orthopedic problem in children. Behavioral/environmental studies focus on discovering etiologic relationships, yet these data are confounded because AIS is not a behavioral condition; illness and parenting health behaviors can nevertheless be appreciated. The educational diagnosis is likewise confounded because AIS is an orthopedic disorder, not a behavioral one. The administrative/policy diagnosis is hindered in that scoliosis screening programs are not considered cost-effective, although policies exist in some schools because 26 states mandate school scoliosis screening. There is also potential for error with the Adam's test. The Health Belief Model, the measure most widely used with the PP model, has not been utilized in any AIS research. Conclusion The PP model is a useful tool for a comprehensive study of a particular health concern. This research showed where gaps in AIS research exist, suggesting that there may be problems with the implementation of school screening. Until these research disparities are filled, implementation of AIS screening by school, community, and clinical health promotion will be compromised.
Lack of data and perceived importance by

  3. Modeling the Dynamic Interrelations between Mobility, Utility, and Land Asking Price

    Science.gov (United States)

    Hidayat, E.; Rudiarto, I.; Siegert, F.; Vries, W. D.

    2018-02-01

    Limited and insufficient information about the dynamic interrelations among mobility, utility, and land price is the main reason to conduct this research. Several studies, with several approaches and several variables, have been conducted so far in order to model land price. However, most of these models appear to generate primarily static land prices. Thus, research is required to compare, design, and validate different models which calculate and/or compare the interrelated changes of mobility, utility, and land price. The applied method is a combination of literature review, expert interviews, and statistical analysis. The result is a newly improved mathematical model that has been validated and is suitable for the case-study location. This improved model consists of 12 appropriate variables. It can be implemented in the city of Salatiga, the case-study location, in order to support better land-use planning and mitigate uncontrolled urban growth.

  4. Causal Measurement Models: Can Criticism Stimulate Clarification?

    Science.gov (United States)

    Markus, Keith A.

    2016-01-01

    In their 2016 work, Aguirre-Urreta et al. provided a contribution to the literature on causal measurement models that enhances clarity and stimulates further thinking. Aguirre-Urreta et al. presented a form of statistical identity involving mapping onto the portion of the parameter space involving the nomological net, relationships between the…

  5. Experimental Measurement, Analysis and Modelling of Dependency ...

    African Journals Online (AJOL)

    We propose a direct method for measuring the total emissivity of opaque samples over a range of temperatures around ambient. The method rests on modulating the temperature of the sample and processing the infrared signal emitted from its surface; we model the total emissivity obtained ...

  6. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out for the past two years, concerning some different ways for achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  7. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    The effect of receiver coil alignment errors δ on the response of electromagnetic measurements in a layered earth model is studied. The statistics of the generalized least-squares inverse were employed to analyse the errors in three different geophysical applications. The following results were obtained: (i) The FEM ellipticity is ...

  8. Multivariate linear models and repeated measurements revisited

    DEFF Research Database (Denmark)

    Dalgaard, Peter

    2009-01-01

    Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations lead to s...... method involving differences between orthogonal projections onto subspaces generated by within-subject models....

  9. Identifying auditory cortex encoding abnormalities in schizophrenia: The utility of low-frequency versus 40 Hz steady-state measures.

    Science.gov (United States)

    Edgar, J C; Fisk, Charles L; Chen, Yu-Han; Stone-Howell, Breannan; Liu, Song; Hunter, Michael A; Huang, Mingxiong; Bustillo, Juan; Cañive, José M; Miller, Gregory A

    2018-03-23

    Magnetoencephalography (MEG) and EEG have identified poststimulus low frequency and 40 Hz steady-state auditory encoding abnormalities in schizophrenia (SZ). Negative findings have also appeared. To identify factors contributing to these inconsistencies, healthy control (HC) and SZ group differences were examined in MEG and EEG source space and EEG sensor space, with better group differentiation hypothesized for source than sensor measures given greater predictive utility for source measures. Fifty-five HC and 41 chronic SZ were presented 500 Hz sinusoidal stimuli modulated at 40 Hz during simultaneous whole-head MEG and EEG. MEG and EEG source models using left and right superior temporal gyrus (STG) dipoles estimated trial-to-trial phase similarity and percent change from prestimulus baseline. Group differences in poststimulus low-frequency activity and 40 Hz steady-state response were evaluated. Several EEG sensor analysis strategies were also examined. Poststimulus low-frequency group differences were observed across all methods. Given an age-related decrease in left STG 40 Hz steady-state activity in HC (HC > SZ), 40 Hz steady-state group differences were evident only in younger participants' source measures. Findings thus indicated that optimal data collection and analysis methods depend on the auditory encoding measure of interest. In addition, whereas results indicated that HC and SZ auditory encoding low-frequency group differences are generally comparable across modality and analysis strategy (and thus not dependent on obtaining construct-valid measures of left and right auditory cortex activity), 40 Hz steady-state group-difference findings are much more dependent on analysis strategy, with 40 Hz steady-state source-space findings providing the best group differentiation. © 2018 Society for Psychophysiological Research.
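The "trial-to-trial phase similarity" estimated from the source models is commonly computed as inter-trial coherence: the magnitude of the mean unit phase vector across trials (0 = random phase, 1 = perfectly phase-locked). A generic sketch with synthetic phases, not the authors' pipeline:

```python
import numpy as np

# Inter-trial coherence (ITC): magnitude of the mean unit phase vector
# across trials at a given time-frequency point. Synthetic phases only.
def itc(phases):
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(2)
locked = rng.normal(0.0, 0.1, 200)            # tightly clustered phases
unlocked = rng.uniform(-np.pi, np.pi, 200)    # uniformly random phases
```

A reduced 40 Hz steady-state ITC in patients, as reported above, corresponds to trial phases that are less tightly clustered around a common value.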

  10. Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2016-12-01

    Full Text Available We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand-movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand-movement transitions occur consistently earlier in AHMM models that include gaze observations than in those that do not.
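
    The core inference step the abstract describes, updating a belief over hidden task states as gaze observations arrive, can be sketched with a plain HMM forward filter. The states, transition probabilities, and gaze-emission probabilities below are invented for illustration; the paper's AHMM adds abstraction layers on top of this basic recursion.

```python
# Forward filtering over hidden task states given discrete gaze observations.
# All numbers are illustrative, not taken from the paper.
states = ["reach", "grasp", "transport"]
T = {  # P(next state | current state)
    "reach":     {"reach": 0.8, "grasp": 0.2, "transport": 0.0},
    "grasp":     {"reach": 0.0, "grasp": 0.7, "transport": 0.3},
    "transport": {"reach": 0.1, "grasp": 0.0, "transport": 0.9},
}
E = {  # P(gaze region | state): gaze tends to lead the hand to the next target
    "reach":     {"object": 0.7, "hand": 0.2, "goal": 0.1},
    "grasp":     {"object": 0.5, "hand": 0.4, "goal": 0.1},
    "transport": {"object": 0.1, "hand": 0.2, "goal": 0.7},
}

def forward_filter(gaze_seq, prior=None):
    belief = prior or {s: 1.0 / len(states) for s in states}
    for g in gaze_seq:
        # predict: push the belief through the transition model
        pred = {s2: sum(belief[s1] * T[s1][s2] for s1 in states) for s2 in states}
        # update: weight by the gaze-emission likelihood, then normalise
        post = {s: pred[s] * E[s][g] for s in states}
        z = sum(post.values())
        belief = {s: p / z for s, p in post.items()}
    return belief

belief = forward_filter(["object", "object", "goal", "goal"])
print(max(belief, key=belief.get))  # gaze settling on the goal region signals "transport"
```

    Because gaze shifts to the goal region before the hand does, the filtered belief flips to the next task state ahead of the hand movement itself, which is the effect the paper exploits.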

  11. Modeling Displacement Measurement using Vibration Transducers

    Directory of Open Access Journals (Sweden)

    AGOSTON Katalin

    2014-05-01

    Full Text Available This paper presents some aspects of small-displacement measurement using vibration transducers. Mechanical faults, wear, and looseness cause noises and vibrations that differ in amplitude and frequency from the normal sound and movement of the equipment. Vibration transducers, accelerometers, and microphones are used for noise and/or sound and vibration detection for fault-detection purposes. The output signal of a vibration transducer or accelerometer is an acceleration signal and can be converted to either velocity or displacement, depending on the preferred measurement parameter. Displacement characteristics are used to indicate when the machine condition has changed. There are many problems in using accelerometers to measure position or displacement, and it is important to determine displacement over time: recovering movement from acceleration requires a double integration. A transfer function and a Simulink model were determined for accelerometers with a capacitive sensing element. Using these models, the displacement was reproduced from a low-frequency input.
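
    The double integration mentioned above can be sketched numerically. This is a minimal pure-Python illustration with a synthetic sinusoidal vibration, not the paper's Simulink model; the mean is removed after each integration pass to curb the low-frequency drift that double integration amplifies.

```python
import math

def integrate(signal, dt):
    """Cumulative trapezoidal integration; the mean is removed to curb drift."""
    out = [0.0]
    for i in range(1, len(signal)):
        out.append(out[-1] + 0.5 * (signal[i - 1] + signal[i]) * dt)
    mean = sum(out) / len(out)
    return [v - mean for v in out]

# Synthetic vibration: x(t) = A*sin(w*t)  =>  a(t) = -A*w**2*sin(w*t)
A, f, dt = 1e-3, 50.0, 1e-5          # 1 mm displacement amplitude at 50 Hz
w = 2 * math.pi * f
n = round(5 / (f * dt))              # samples covering five full periods
t = [i * dt for i in range(n)]
accel = [-A * w**2 * math.sin(w * ti) for ti in t]

velocity = integrate(accel, dt)         # first pass:  a -> v
displacement = integrate(velocity, dt)  # second pass: v -> x

amplitude = max(abs(v) for v in displacement)
print(f"recovered amplitude: {amplitude * 1e3:.3f} mm (true: {A * 1e3:.3f} mm)")
```

    In practice the mean removal would be replaced by high-pass filtering or detrending tuned to the frequency band of interest, but the structure of the computation is the same.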

  12. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    The motivation behind both parts of the Ph.D. project is the problem of scale formation found in many industrial processes, and especially in oilfield and geothermal operations. We want to contribute to the study of this problem by releasing a simple and accurate thermodynamic model capable of calculating the behaviour of scaling minerals over a wide range of temperature and pressure. Reliable experimental solubility measurements under conditions similar to those found in reality will help the development of strong and consistent models. Chapter 1 is a short introduction to the problem of scale formation, the model chosen to study it, and the experiments performed. Chapters 8 and 9 focus on the experimental part of this dissertation, analyzing different experimental procedures to determine salt solubility at high temperature and pressure, and developing a setup to perform those measurements.

  13. Measurements of temperature on LHC thermal models

    CERN Document Server

    Darve, C

    2001-01-01

    Full-scale thermal models for the Large Hadron Collider (LHC) accelerator cryogenic system have been studied at CERN and at Fermilab. Thermal measurements based on two different models permitted us to evaluate the performance of the LHC dipole cryostats as well as to validate the LHC Interaction Region (IR) inner triplet cooling scheme. The experimental procedures made use of temperature sensors supplied by industry and assembled on specially designed supports. The described thermal models took advantage of advances in cryogenic thermometry that will be implemented in the future LHC accelerator to meet its strict requirements for precision, accuracy, reliability, and ease of use. The sensors used in the temperature measurement of the superfluid (He II) systems are the primary focus of this paper, although some aspects of the LHC control system and signal conditioning are also reviewed. (15 refs).

  14. An explanatory model of the dental care utilization of low-income children.

    Science.gov (United States)

    Milgrom, P; Mancl, L; King, B; Weinstein, P; Wells, N; Jeffcott, E

    1998-04-01

    Factors related to the utilization of dental care by 5- to 11-year-old children from low-income households were investigated using a comprehensive multivariate model that assessed the contribution of structure, history, cognition, and expectations. The influence of dentist-patient interactions and of psychosocial and health beliefs, particularly fear of the dentist, on utilization was investigated. Children were chosen randomly from public schools, and 895 mothers were surveyed and their children were interviewed in the home. Utilization was studied during the 1991-1992 school year, including a 6-month follow-up period after the interview. The overall utilization rate was 63.2%, and the rate for nonemergent (preventive) visits was 59.9%. Utilization was unrelated to actual oral health status. Race and years the guardian lived in the United States were predictive of an episode of care. Preventive medical visits and perceived need were strong predictors of a visit to the dentist, as were beliefs in the efficacy of dental care. Mothers who were satisfied with their own care and oral health and whose children were covered by insurance were more likely to utilize children's dental care. In contrast, child dental fear and absences from school for family problems were associated with lower rates of utilization. Mutable factors that govern the use of care in this population were identified. These findings have implications for the design of dental care delivery systems for children and their families.

  15. Utilizing The Synergy of Airborne Backscatter Lidar and In-Situ Measurements for Evaluating CALIPSO

    Directory of Open Access Journals (Sweden)

    Tsekeri Alexandra

    2016-01-01

    Full Text Available Airborne campaigns dedicated to satellite validation are crucial for effective global aerosol monitoring. CALIPSO is currently the only active remote sensing satellite mission acquiring the vertical profiles of the aerosol backscatter and extinction coefficients. Here we present a method for CALIPSO evaluation that combines lidar and in-situ airborne measurements. The limitations of the method stem mainly from the in-situ instrumentation capabilities and the hydration modelling. We also discuss the future implementation of our method in the ICE-D campaign (Cape Verde, August 2015).

  16. Model project to promote cultivation and utilization of renewable resources. Modellvorhaben zur Foerderung des Anbaus und der Verwertung nachwachsender Rohstoffe

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    This revised report on the model projects presents individual projects and measures that complement each other and, in their totality, document an advanced state of development. Moreover it shows that the basic challenge of a model project, especially in the field of the energetic use of biomass, can be met by marrying agriculture to power utilities. Thus, projects are under way in which the cultivation of China reed and its utilization in power-and-heat cogeneration plants will complement each other in the future. Further questions that are not represented in the research programme of Lower Saxony are dealt with at the federal level, so that the field of renewable resources may currently be considered as comprehensively covered. (orig./EF).

  17. DIAMOND: A model of incremental decision making for resource acquisition by electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Gettings, M.; Hirst, E.; Yourstone, E.

    1991-02-01

    Uncertainty is a major issue facing electric utilities in planning and decision making. Substantial uncertainties exist concerning future load growth; the lifetimes and performances of existing power plants; the construction times, costs, and performances of new resources being brought online; and the regulatory and economic environment in which utilities operate. This report describes a utility planning model that focuses on frequent and incremental decisions. The key features of this model are its explicit treatment of uncertainty, frequent user interaction with the model, and the ability to change prior decisions. The primary strength of this model is its representation of the planning and decision-making environment that utility planners and executives face. Users interact with the model after every year or two of simulation, which provides an opportunity to modify past decisions as well as to make new decisions. For example, construction of a power plant can be started one year, and if circumstances change, the plant can be accelerated, mothballed, canceled, or continued as originally planned. Similarly, the marketing and financial incentives for demand-side management programs can be changed from year to year, reflecting the short lead time and small unit size of these resources. This frequent user interaction with the model, an operational game, should build greater understanding and insights among utility planners about the risks associated with different types of resources. The model is called DIAMOND, Decision Impact Assessment Model. It consists of four submodels: FUTURES, FORECAST, SIMULATION, and DECISION. It runs on any IBM-compatible PC and requires no special software or hardware. 19 refs., 13 figs., 15 tabs.
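
    The revisable-decision mechanic described above, where a plant under construction can be accelerated, mothballed, canceled, or continued as circumstances change, can be sketched as a toy simulation loop. The class, state names, and build times are invented; DIAMOND's actual submodels (FUTURES, FORECAST, SIMULATION, DECISION) are far richer.

```python
# A toy version of the incremental-decision loop the report describes:
# apply the planner's revision for the year, then advance the simulation.
PLANT_ACTIONS = {"continue", "accelerate", "mothball", "cancel"}

class Project:
    def __init__(self, name, years_to_build):
        self.name = name
        self.remaining = years_to_build
        self.status = "under_construction"

    def apply(self, action):
        """Revise a prior decision before the next simulated year."""
        assert action in PLANT_ACTIONS
        if action == "cancel":
            self.status = "cancelled"
        elif action == "mothball":
            self.status = "mothballed"
        elif action == "accelerate" and self.status == "under_construction":
            self.remaining = max(0, self.remaining - 2)  # shave two years
        # "continue" leaves the plan unchanged

    def advance_year(self):
        if self.status == "under_construction":
            self.remaining -= 1
            if self.remaining <= 0:
                self.status = "online"

plant = Project("coal_unit_1", years_to_build=6)
for year, action in enumerate(["continue", "continue", "accelerate", "continue"]):
    plant.apply(action)     # planner revises the decision...
    plant.advance_year()    # ...then the simulation advances one year

print(plant.status, plant.remaining)
```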

  18. Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates

    Science.gov (United States)

    Battauz, Michela; Bellio, Ruggero

    2011-01-01

    This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…

  19. Utility of local health registers in measuring perinatal mortality: a case study in rural Indonesia.

    Science.gov (United States)

    Burke, Leona; Suswardany, Dwi Linna; Michener, Keryl; Mazurki, Setiawaty; Adair, Timothy; Elmiyati, Catur; Rao, Chalapati

    2011-03-17

    Perinatal mortality is an important indicator of obstetric and newborn care services. Although the vast majority of global perinatal mortality is estimated to occur in developing countries, there is a critical paucity of reliable data at the local level to inform health policy, plan health care services, and monitor their impact. This paper explores the utility of information from village health registers for measuring perinatal mortality at the sub-district level in a rural area of Indonesia. A retrospective pregnancy cohort for 2007 was constructed by triangulating data from antenatal care, birth, and newborn care registers in a sample of villages in three rural sub-districts in Central Java, Indonesia. For each pregnancy, the birth outcome and first-week survival were traced and recorded from the different registers, as available. Additional local death records were consulted to verify perinatal mortality or to identify deaths not recorded in the health registers. Analyses were performed to assess the quality of register data and to measure perinatal mortality rates. Qualitative research was conducted to explore the knowledge and practices of village midwives in register maintenance and reporting of perinatal mortality. Field activities were conducted in 23 villages, covering a total of 1759 deliveries that occurred in 2007. Perinatal mortality outcomes were 23 stillbirths and 15 early neonatal deaths, resulting in a perinatal mortality rate of 21.6 per 1000 live births in 2007. Stillbirth rates for the study population were about four times the rates reported in the routine Maternal and Child Health program information system. Inadequate awareness and supervision, together with competing workloads, were cited by local midwives as factors leading to inconsistent data reporting. Local maternal and child health registers are a useful source of information on perinatal mortality in rural Indonesia. Suitable training, supervision, and quality control, in conjunction with computerisation to…
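
    The reported rate can be checked directly from the counts given in the abstract: perinatal deaths (stillbirths plus early neonatal deaths) per 1,000, with the 1759 traced deliveries as the denominator.

```python
# Counts reported for the 23 study villages (Central Java, 2007)
deliveries = 1759              # births traced through the registers
stillbirths = 23
early_neonatal_deaths = 15

perinatal_deaths = stillbirths + early_neonatal_deaths
pmr = perinatal_deaths / deliveries * 1000   # rate per 1,000 births

print(f"perinatal mortality rate: {pmr:.1f} per 1,000")  # 21.6, matching the abstract
```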

  20. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models, such as the choice of travel mode that households face. In this case an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. Since measurement error is likely to give misleading estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, to use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data that contain very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (the correct measure). We find that the classical…

  1. Smart kinesthetic measurement model in dance composition

    Directory of Open Access Journals (Sweden)

    Dinny Devi Triana

    2017-06-01

    Full Text Available This research aimed to discover an assessment model that could measure kinesthetic intelligence in arranging a dance from several related variables, both direct and indirect. The research method was a qualitative one employing path analysis to determine the direct and indirect variables, so that the dominant variable supporting the measurement model of kinesthetic intelligence in arranging a dance could be discovered. The population was students of the art of dance department, chosen by purposive sampling so that kinesthetic intelligence could be well measured. The results were as follows: the correlation between the ability to perceive movement and the ability to convey movement was 3.8048; between the ability to perceive movement and kinesthetic intelligence, 0.3137; between the ability to perceive movement and arranging a dance, -0.3751; between conveying movement and kinesthetic intelligence, 0.1333; between conveying movement and arranging a dance, -0.2399; and between kinesthetic intelligence and arranging a dance, 0.8529. These results indicate that kinesthetic intelligence has a significant influence on the ability to arrange a dance. It can be concluded that a smart assessment model of kinesthetic intelligence in arranging a dance should measure kinesthetic intelligence first, while the ability to perceive and convey movement serves as a supporting element that strengthens kinesthetic intelligence in arranging a dance.

  2. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors, in which the coefficient of GFR is expressed as a function of age to allow its effect to be age-dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

  3. Consumer preferences for alternative fuel vehicles: Comparing a utility maximization and a regret minimization model

    International Nuclear Information System (INIS)

    Chorus, Caspar G.; Koetse, Mark J.; Hoen, Anco

    2013-01-01

    This paper presents a utility-based and a regret-based model of consumer preferences for alternative fuel vehicles, based on a large-scale stated choice-experiment held among company car leasers in The Netherlands. Estimation and application of random utility maximization and random regret minimization discrete choice models shows that while the two models achieve almost identical fit with the data and differ only marginally in terms of predictive ability, they generate rather different choice-probability simulations and policy implications. The most eye-catching difference between the two models is that the random regret minimization model accommodates a compromise-effect, as it assigns relatively high choice probabilities to alternative fuel vehicles that perform reasonably well on each dimension instead of having a strong performance on some dimensions and a poor performance on others. - Highlights: • Utility- and regret-based models of preferences for alternative fuel vehicles. • Estimation based on stated choice-experiment among Dutch company car leasers. • Models generate rather different choice probabilities and policy implications. • Regret-based model accommodates a compromise-effect
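
    The compromise effect highlighted above falls out of the regret formulation itself: an alternative's regret sums pairwise attribute comparisons of the form ln(1 + exp(beta*(x_j - x_i))), so uniformly middling performance accrues less regret than a mix of strong and weak attributes. A sketch with invented attributes and coefficients (not the paper's estimates):

```python
import math

def rum_probs(X, beta):
    """Random utility maximization: multinomial logit over linear utilities."""
    U = [sum(b * x for b, x in zip(beta, xi)) for xi in X]
    expU = [math.exp(u) for u in U]
    return [e / sum(expU) for e in expU]

def rrm_probs(X, beta):
    """Random regret minimization: logit over negative pairwise regrets."""
    R = []
    for i, xi in enumerate(X):
        regret = 0.0
        for j, xj in enumerate(X):
            if i != j:
                regret += sum(math.log(1 + math.exp(b * (xj[k] - xi[k])))
                              for k, b in enumerate(beta))
        R.append(regret)
    expNR = [math.exp(-r) for r in R]
    return [e / sum(expNR) for e in expNR]

# Three vehicles with two attributes (say, range and cost scores, higher = better).
# The third alternative matches the extremes' average on every attribute.
X = [(1.0, -1.0),   # strong on range, weak on cost
     (-1.0, 1.0),   # weak on range, strong on cost
     (0.0, 0.0)]    # the balanced compromise
beta = (1.0, 1.0)

p_rum = rum_probs(X, beta)
p_rrm = rrm_probs(X, beta)
print(p_rum)   # equal utilities: one third each
print(p_rrm)   # the compromise draws the largest share
```

    Under RUM the three alternatives have identical utility and split the market evenly, while under RRM the balanced alternative accrues the least regret, which is exactly the compromise effect described in the abstract.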

  4. Mathematical model of radon activity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Paschuk, Sergei A.; Correa, Janine N.; Kappke, Jaqueline; Zambianchi, Pedro, E-mail: sergei@utfpr.edu.br, E-mail: janine_nicolosi@hotmail.com [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil); Denyak, Valeriy, E-mail: denyak@gmail.com [Instituto de Pesquisa Pele Pequeno Principe, Curitiba, PR (Brazil)

    2015-07-01

    The present work describes a mathematical model that quantifies the time-dependent amount of {sup 222}Rn and {sup 220}Rn together, and their activities, within an ionization chamber such as the AlphaGUARD, which is used to measure the activity concentration of Rn in soil gas. The differential equations take into account three main processes, namely: the injection of Rn into the cavity of the detector by the air pump, including the effect of the travel time Rn takes to reach the chamber; the release of Rn by the air exiting the chamber; and the radioactive decay of Rn within the chamber. The developed code quantifies the activities of the {sup 222}Rn and {sup 220}Rn isotopes separately. Following the standard methodology for measuring Rn activity in soil gas, the air pump is usually turned off over a period of time in order to avoid the influx of Rn into the chamber. Since {sup 220}Rn has a short half-life of approximately 56 s, the model shows that after 7 minutes the activity concentration of this isotope is negligible. Consequently, the measured activity refers to {sup 222}Rn only. Furthermore, the model also addresses the activity of {sup 220}Rn and {sup 222}Rn progeny, which, being metals, represent a potential risk of ionization chamber contamination that could increase the background of subsequent measurements. A preliminary comparison of experimental data and theoretical calculations is presented. The obtained transient and steady-state solutions could be used for planning Rn-in-soil-gas measurements as well as for assessing the accuracy of the results obtained, together with evaluating the efficiency of the chosen measurement procedure. (author)
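
    The claim that the {sup 220}Rn contribution vanishes within about 7 minutes follows from simple exponential decay once the pump is off. The 56 s half-life is the value quoted in the abstract; progeny ingrowth and the chamber-exchange terms of the full model are neglected in this sketch.

```python
import math

def activity_fraction(t_seconds, half_life_seconds):
    """Fraction of activity remaining after pure radioactive decay."""
    return math.exp(-math.log(2) * t_seconds / half_life_seconds)

T_RN220 = 56.0              # s, thoron half-life as quoted in the abstract
T_RN222 = 3.8235 * 86400    # s, the roughly 3.82-day radon half-life

t_off = 7 * 60              # pump switched off for 7 minutes
print(f"Rn-220 remaining: {activity_fraction(t_off, T_RN220):.2%}")  # under 1%
print(f"Rn-222 remaining: {activity_fraction(t_off, T_RN222):.2%}")  # about 99.9%
```

    Seven minutes is 7.5 thoron half-lives, so less than 1% of the {sup 220}Rn activity survives while the {sup 222}Rn activity is essentially untouched, which is why the residual signal can be attributed to {sup 222}Rn alone.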

  5. Flavor release measurement from gum model system

    DEFF Research Database (Denmark)

    Ovejero-López, I.; Haahr, Anne-Mette; van den Berg, Frans W.J.

    2004-01-01

    Flavor release from a mint-flavored chewing gum model system was measured by atmospheric pressure chemical ionization mass spectrometry (APCI-MS) and sensory time-intensity (TI). A data analysis method for handling the individual curves from both methods is presented. The APCI-MS data are ratio… Flavor composition can be measured by both instrumental and sensory techniques, providing comparable information. The peppermint oil level (0.5-2% w/w) in the gum influenced both the retronasal concentration and the perceived peppermint flavor. The effect of the sweeteners (sorbitol or xylitol) is less apparent. Sensory…

  6. Modeling Water Utility Investments and Improving Regulatory Policies using Economic Optimisation in England and Wales

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2012-12-01

    Water utilities in England and Wales are regulated natural monopolies called 'water companies'. Water companies must obtain periodic regulatory approval for all investments (new supply infrastructure or demand management measures). Both water companies and their regulators use results from least-economic-cost capacity expansion optimisation models to develop or assess water supply investment plans. This presentation first describes the formulation of a flexible supply-demand planning capacity expansion model for water system planning. The model uses a mixed integer linear programming (MILP) formulation to choose the least-cost schedule of future supply schemes (reservoirs, desalination plants, etc.), demand management (DM) measures (leakage reduction, water efficiency and metering options) and bulk transfers. Decisions include which schemes to implement, when to do so, how to size schemes, and how much to use each scheme during each year of an n-year planning horizon (typically 30 years). In addition to capital and operating (fixed and variable) costs, the estimated social and environmental costs of schemes are considered. Each proposed scheme is costed discretely at one or more capacities following regulatory guidelines. The model uses a node-link network structure: water demand nodes are connected to supply and demand management (DM) options (represented as nodes) or to other demand nodes (transfers). Yields from existing and proposed schemes are estimated separately using detailed water resource system simulation models evaluated over the historical period. The model simultaneously considers multiple demand scenarios to ensure demands are met at required reliability levels; use levels of each scheme are evaluated for each demand scenario and weighted by scenario likelihood so that operating costs are accurately evaluated. Multiple interdependency relationships between schemes (pre-requisites, mutual exclusivity, start dates, etc.) can be accounted for by…
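
    The core decision problem, choosing which schemes to build and when so that supply meets demand at least discounted cost, can be illustrated with a toy exhaustive search. Real planning models solve this as a MILP over 30-year horizons with far more detail; every scheme, cost, and demand figure below is invented.

```python
from itertools import product

YEARS = range(5)
DEMAND = [100, 104, 108, 112, 116]   # Ml/d required each year
BASE_SUPPLY = 105                    # existing yield, Ml/d

# (name, yield Ml/d, capital cost, annual fixed O&M) -- illustrative only
SCHEMES = [
    ("leakage_reduction", 5, 20.0, 0.5),
    ("desalination_plant", 15, 60.0, 3.0),
    ("reservoir_expansion", 10, 45.0, 1.0),
]
DISCOUNT = 0.05

def plan_cost(build_years):
    """Discounted cost of a plan; None if demand is ever unmet.
    build_years[i] is the year scheme i comes online (None = never built)."""
    cost = 0.0
    for y in YEARS:
        supply = BASE_SUPPLY + sum(s[1] for s, by in zip(SCHEMES, build_years)
                                   if by is not None and by <= y)
        if supply < DEMAND[y]:
            return None
        df = 1 / (1 + DISCOUNT) ** y
        for s, by in zip(SCHEMES, build_years):
            if by == y:
                cost += df * s[2]          # capital in the build year
            if by is not None and by <= y:
                cost += df * s[3]          # fixed O&M once online
    return cost

options = [None] + list(YEARS)
best = min((p for p in product(options, repeat=len(SCHEMES))
            if plan_cost(p) is not None), key=plan_cost)
print("build years (leakage, desal, reservoir):", best)
print(f"discounted cost: {plan_cost(best):.1f}")
```

    Even this tiny instance shows the characteristic behaviour: building is deferred as late as demand allows (discounting rewards delay), and a staged pair of small schemes can beat one large scheme.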

  7. Estimating health state utility values from discrete choice experiments--a QALY space model approach.

    Science.gov (United States)

    Gu, Yuanyuan; Norman, Richard; Viney, Rosalie

    2014-09-01

    Using discrete choice experiments (DCEs) to estimate health state utility values has become an important alternative to the conventional methods of Time Trade-Off and Standard Gamble. Studies using DCEs have typically used the conditional logit to estimate the underlying utility function. The conditional logit has several known limitations. In this paper, we propose two types of models based on the mixed logit: one using preference space and the other using quality-adjusted life year (QALY) space, a concept adapted from the willingness-to-pay literature. These methods are applied to a dataset collected using the EQ-5D. The results showcase the advantages of using QALY space and demonstrate that the preferred QALY space model provides lower estimates of the utility values than the conditional logit, with the divergence increasing with worsening health states. Copyright © 2014 John Wiley & Sons, Ltd.

  8. Implications of Model Structure and Detail for Utility Planning: Scenario Case Studies Using the Resource Planning Model

    Energy Technology Data Exchange (ETDEWEB)

    Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hale, Elaine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-04-01

    In this report, we analyze the impacts of model configuration and detail in capacity expansion models, computational tools used by utility planners looking to find the least cost option for planning the system and by researchers or policy makers attempting to understand the effects of various policy implementations. The present analysis focuses on the importance of model configurations — particularly those related to capacity credit, dispatch modeling, and transmission modeling — to the construction of scenario futures. Our analysis is primarily directed toward advanced tools used for utility planning and is focused on those impacts that are most relevant to decisions with respect to future renewable capacity deployment. To serve this purpose, we develop and employ the NREL Resource Planning Model to conduct a case study analysis that explores 12 separate capacity expansion scenarios of the Western Interconnection through 2030.

  9. Radiation budget measurement/model interface

    Science.gov (United States)

    Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.

    1983-01-01

    This final report includes research results from the period February 1981 through November 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set at Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.

  10. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro’s sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
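
    Folding normal difference into the point distance can be sketched as a weighted metric over surface samples that carry unit normals, with a symmetric maximum playing the role of a Hausdorff-style distance. The weights and the combination rule here are illustrative, not the thesis's exact formulation.

```python
import math

def weighted_dist(p, q, w_pos=1.0, w_norm=0.5):
    """Distance between two surface samples (point, unit normal):
    Euclidean distance plus a penalty for normal deviation (1 - cosine)."""
    (pp, pn), (qp, qn) = p, q
    d_pos = math.dist(pp, qp)
    d_norm = 1.0 - sum(a * b for a, b in zip(pn, qn))  # 0 when normals agree
    return w_pos * d_pos + w_norm * d_norm

def one_sided_max(A, B):
    """Max over samples in A of the distance to the closest sample in B."""
    return max(min(weighted_dist(a, b) for b in B) for a in A)

def visual_closeness(A, B):
    """Symmetric (Hausdorff-style) maximum deviation between two models."""
    return max(one_sided_max(A, B), one_sided_max(B, A))

# Two tiny "models": identical vertex positions, but one normal flipped --
# a purely positional Hausdorff distance would call them identical.
A = [((0, 0, 0), (0, 0, 1)), ((1, 0, 0), (0, 0, 1))]
B = [((0, 0, 0), (0, 0, 1)), ((1, 0, 0), (0, 0, -1))]

print(visual_closeness(A, A))  # 0.0
print(visual_closeness(A, B))  # positive: the flipped normal is visible to the metric
```

    Because shading depends on normals, the flipped-normal pair would look different to a human observer even though the geometry coincides, which is the motivation the abstract gives for the 6-D (position plus normal) correspondence search.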

  11. Work-Life Benefits and Organizational Attachment: Self-Interest Utility and Signaling Theory Models

    Science.gov (United States)

    Casper, Wendy J.; Harris, Christopher M.

    2008-01-01

    This study examines two competing theoretical explanations for why work-life policies such as dependent care assistance and flexible schedules influence organizational attachment. The self-interest utility model posits that work-life policies influence organizational attachment because employee use of these policies facilitates attachment. The…

  12. IAPCS: A COMPUTER MODEL THAT EVALUATES POLLUTION CONTROL SYSTEMS FOR UTILITY BOILERS

    Science.gov (United States)

    The IAPCS model, developed by U.S. EPA's Air and Energy Engineering Research Laboratory and made available to the public through the National Technical Information Service, can be used by utility companies, architectural and engineering companies, and regulatory agencies at all l...

  13. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    Science.gov (United States)

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  14. The Dynamics of Mobile Learning Utilization in Vocational Education: Frame Model Perspective Review

    Science.gov (United States)

    Mahande, Ridwan Daud; Susanto, Adhi; Surjono, Herman Dwi

    2017-01-01

    This study aimed to describe the dynamics of content aspects, user aspects and social aspects of mobile learning utilization (m-learning) in vocational education from the FRAME Model perspective review. This study was quantitative descriptive research. The population in this study was teachers and students of state vocational school and private…

  15. Flux measurements and maintenance energy for carbon dioxide utilization by Methanococcus maripaludis.

    Science.gov (United States)

    Goyal, Nishu; Padhiary, Mrutyunjay; Karimi, Iftekhar A; Zhou, Zhi

    2015-09-16

    The rapidly growing mesophilic methanogen Methanococcus maripaludis S2 has a unique ability to consume both CO2 and N2, the main components of a flue gas, and produce methane with H2 as the electron donor. The existing literature lacks experimental measurements of CO2 and H2 uptake rates and CH4 production rates for M. maripaludis. Furthermore, it lacks estimates of maintenance energies for use with genome-scale models. In this paper, we performed batch culture experiments on M. maripaludis S2 using CO2 as the sole carbon substrate to quantify three key extracellular fluxes (CO2, H2, and CH4) along with specific growth rates. For precise computation of these fluxes from experimental measurements, we developed a systematic process simulation approach. Then, using an existing genome-scale model, we proposed an optimization procedure to estimate maintenance energy parameters: growth associated maintenance (GAM) and non-growth associated maintenance (NGAM). The measured extracellular fluxes for M. maripaludis showed excellent agreement with in silico predictions from a validated genome-scale model (iMM518) for NGAM = 7.836 mmol/gDCW/h and GAM = 27.14 mmol/gDCW. M. maripaludis achieved a CO2 to CH4 conversion yield of 70-95% and a growth yield of 3.549 ± 0.149 g DCW/mol CH4 during the exponential phase. The ATP gain of 0.35 mol ATP/mol CH4 for M. maripaludis, computed using NGAM, is in the acceptable range of 0.3-0.7 mol ATP/mol CH4 reported for methanogens. Interestingly, the uptake distribution of amino acids, quantified using iMM518, confirmed alanine to be the most preferred amino acid for growth and methanogenesis. This is the first study to report experimental gas consumption and production rates for the growth of M. maripaludis on CO2 and H2 in minimal media. A systematic process simulation and optimization procedure was successfully developed to precisely quantify extracellular fluxes along with cell growth and maintenance energy parameters. Our growth yields…

  16. Smoking Cessation Benefit Utilization: Comparing Methodologies for Measurement using New York State's Medicaid Data.

    Science.gov (United States)

    Malloy, Kevin; Proj, Anisa; Battles, Haven; Juster, Theresa; Ortega-Peluso, Christina; Wu, Meng; Juster, Harlan

    2017-11-09

    Pharmacotherapy and counseling for tobacco cessation are evidence-based methods that increase successful smoking cessation attempts. Medicaid programs are required to provide coverage for smoking cessation services. Monitoring utilization is desirable for program evaluation and quality improvement. Various methodologies have been used to study utilization. Many factors can influence results, perhaps none more than how smokers are identified. This study evaluated utilization of smoking cessation services using various methods to estimate the number of smokers within New York State's (NYS's) Medicaid program in 2015. Estimates of utilization were generated based on Medicaid claims and encounters and four sources of smoking prevalence: two population surveys, one Medicaid enrollee survey, and diagnosis codes. We compared the percentage of (estimated) smokers utilizing cessation services, and the average number of services used, across fee-for-service and managed care populations, and by cessation service category. Statewide, smoking prevalence estimates ranged from 10.9% to 31.5%. Diagnosis codes identified fewer than 45% of the smokers estimated by surveys. A similar number of cessation counseling (199,106) and pharmacotherapy services (197,728) were used, yet more members utilized counseling (126,839) than pharmacotherapy (91,433). The estimated percentage of smokers who used smoking cessation services ranged from 15.1% to 43.4%, and the estimated average number of cessation services used ranged from 0.31 to 0.90 per smoker. Smoking prevalence estimates obtained through surveys greatly exceed the prevalence observed in diagnosis codes in NYS's Medicaid data. Use of diagnosis codes in analysis of smoking cessation benefit utilization may result in overestimates. Selection of a smoking prevalence data source for similar analyses should ultimately be based on completeness of the data and applicability to the population of interest. Evaluation of smoking cessation benefit

  17. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Modeling measurement error in tumor characterization studies

    Directory of Open Access Journals (Sweden)

    Marjoram Paul

    2011-07-01

    Background: Etiologic studies of cancer increasingly use molecular features such as gene expression, DNA methylation and sequence mutation to subclassify the cancer type. In large population-based studies, the tumor tissues available for study are archival specimens that provide variable amounts of amplifiable DNA for molecular analysis. As molecular features measured from small amounts of tumor DNA are inherently noisy, we propose a novel approach to improve statistical efficiency when comparing groups of samples. We illustrate the phenomenon using the MethyLight technology, applying our proposed analysis to compare MLH1 DNA methylation levels in males and females studied in the Colon Cancer Family Registry. Results: We introduce two methods for computing empirical weights to model heteroscedasticity that is caused by sampling variable quantities of DNA for molecular analysis. In a simulation study, we show that using these weights in a linear regression model is more powerful for identifying differentially methylated loci than standard regression analysis. The increase in power depends on the underlying relationship between variation in outcome measure and input DNA quantity in the study samples. Conclusions: Tumor characteristics measured from small amounts of tumor DNA are inherently noisy. We propose a statistical analysis that accounts for the measurement error due to sampling variation of the molecular feature and show how it can improve the power to detect differential characteristics between patient groups.
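    The weighting idea above can be sketched in a few lines: weights proportional to input DNA quantity down-weight the noisier low-input samples in a weighted least-squares fit. The weighting scheme and simulated data below are illustrative assumptions, not the paper's two empirical-weight methods.

```python
# Hedged sketch of heteroscedasticity-aware regression: measurement noise
# shrinks with input DNA quantity, so weight each sample by that quantity.
import numpy as np

rng = np.random.default_rng(0)
n = 200
dna_ng = rng.uniform(1.0, 50.0, n)        # input DNA per sample (ng), hypothetical
group = rng.integers(0, 2, n)             # 0 = male, 1 = female
noise_sd = 1.0 / np.sqrt(dna_ng)          # less DNA -> noisier methylation value
y = 0.5 + 0.3 * group + rng.normal(0.0, noise_sd)

# Weighted least squares: beta = (X' W X)^-1 X' W y, with W ~ DNA quantity.
X = np.column_stack([np.ones(n), group])
W = np.diag(dna_ng)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("estimated group effect:", beta[1])  # true simulated effect is 0.3
```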

  19. Validation of Storm Water Management Model Storm Control Measures Modules

    Science.gov (United States)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs; these improvements were implemented and are reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.

  20. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    Science.gov (United States)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms, SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After we assigned utility values for each category of findings (true negative, false positive, false negative, and true positive), we pursued optimal operating points on the ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel framework to evaluate prediction models in the realm of radiogenomics.
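    The utility-analysis step can be sketched as follows: score each ROC operating point by its expected utility and keep the maximizer. The ROC points loosely echo the sensitivities and specificities quoted above, but the prevalence, utility values, and point set are illustrative assumptions, not the study's.

```python
# Hedged sketch: choosing the maximum-expected-utility (MEU) operating point
# on an ROC curve. Utilities, prevalence, and ROC points are illustrative.

def expected_utility(tpr, fpr, prev, u_tp, u_fn, u_fp, u_tn):
    # EU = P(D)[TPR*U_TP + FNR*U_FN] + P(~D)[FPR*U_FP + TNR*U_TN]
    return (prev * (tpr * u_tp + (1 - tpr) * u_fn)
            + (1 - prev) * (fpr * u_fp + (1 - fpr) * u_tn))

# (FPR, TPR) pairs; the middle two echo the abstract's reported operating points.
roc = [(0.0, 0.0), (0.145, 0.276), (0.128, 0.457), (1.0, 1.0)]
best = max(roc, key=lambda p: expected_utility(p[1], p[0], prev=0.01,
                                               u_tp=0.8, u_fn=-1.0,
                                               u_fp=0.95, u_tn=1.0))
print("MEU operating point (FPR, TPR):", best)  # prints (0.128, 0.457)
```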

  1. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  2. Thermal effects in shales: measurements and modeling

    International Nuclear Information System (INIS)

    McKinstry, H.A.

    1977-01-01

    Research is reported concerning thermal and physical measurements and theoretical modeling relevant to the storage of radioactive wastes in shale. Reference thermal conductivity measurements are made at atmospheric pressure in a commercial apparatus; equipment for permeability measurements has been developed and is being extended to wider measurement ranges. Thermal properties of shales are being determined as a function of temperature and pressure. Apparatus was developed to measure shales in two different experimental configurations. In the first, a 15 mm diameter disk of the material is measured by a steady-state technique using a reference material to measure the heat flow within the system. The sample is sandwiched between two disks of a reference material (single-crystal quartz is being used initially). The heat flow is determined twice to confirm that steady-state conditions prevail, and the temperature drop over the two references is measured; when these indicate an equal heat flow, the thermal conductivity of the sample can be calculated from the temperature difference of its two faces. The second technique determines the effect of temperature on a water-saturated shale at a larger scale. The cylindrical shale (or siltstone) specimens being studied (large for laboratory samples) are heated electrically at the center and contained in a pressure vessel that maintains a fixed water pressure around them. The temperature is monitored at many points within the shale sample, whose dimensions are 25 cm in diameter and 20 cm in length. A microcomputer system has been constructed to monitor 16 thermocouples and record the variation of the temperature distribution with time

  3. Utility of Social Modeling in Assessment of a State’s Propensity for Nuclear Proliferation

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.

    2011-06-01

    This report is the third and final report in a set of three documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development NA-22 Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems, and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.

  4. Testing alternative kinetic models for utilization of crystalline cellulose (Avicel) by batch cultures of Clostridium thermocellum.

    Science.gov (United States)

    Holwerda, Evert K; Lynd, Lee R

    2013-09-01

    Descriptive kinetics of batch cellulose (Avicel) and cellobiose fermentation by Clostridium thermocellum were examined with residual substrate and biosynthate concentrations inferred based on elemental analysis. Biosynthate was formed in constant proportion to substrate consumption until substrate was exhausted for cellobiose fermentation, and until near the point of substrate exhaustion for cellulose fermentation. Cell yields (g pellet biosynthate carbon/g substrate carbon) of 0.214 and 0.200 were obtained for cellulose and cellobiose, respectively. For cellulose fermentation a sigmoidal curve fit was applied to substrate and biosynthate concentrations over time, which was then differentiated to calculate instantaneous rates of growth and substrate consumption. Three models were tested to describe the kinetics of Avicel utilization by C. thermocellum: (A) first order in cells, (B) first order in substrate, and (C) first order in both cells and substrate, and second order overall. Models (A) and (B) have been proposed in the literature to describe cultures of cellulolytic microorganisms, whereas model (C) has not. Of the three models tested, model (C) provided by far the best fit to batch culture data. A second-order rate constant equal to 0.735 L (g C)^-1 h^-1 was found for utilization of Avicel by C. thermocellum. Adding an endogenous metabolism term improved the descriptive quality of the model as substrate exhaustion was approached. Such rate constants may in the future find utility for describing and comparing cellulose fermentation involving other microbes and other substrates. Copyright © 2013 Wiley Periodicals, Inc.
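    Model (C)'s rate law can be simulated directly: substrate consumption dS/dt = -k·X·S, with growth coupled through the cell yield, dX/dt = -Y·dS/dt. The rate constant and yield below are the abstract's values; the initial concentrations, carbon-based units, and Euler time step are illustrative assumptions.

```python
# Hedged sketch of model (C): second-order batch kinetics, first order in
# both cells (X) and substrate (S). Initial conditions are hypothetical.

k = 0.735   # L (g C)^-1 h^-1, second-order rate constant (from the abstract)
Y = 0.214   # g biosynthate C per g substrate C, cellulose (from the abstract)

def simulate(S0, X0, t_end=60.0, dt=0.01):
    """Forward-Euler integration of dS/dt = -k*X*S, dX/dt = Y*k*X*S."""
    S, X = S0, X0
    for _ in range(int(t_end / dt)):
        r = k * X * S               # consumption rate, g C L^-1 h^-1
        S -= r * dt
        X += Y * r * dt
    return S, X

S_end, X_end = simulate(S0=2.0, X0=0.05)  # g C/L, hypothetical
print(f"residual substrate: {S_end:.4f} g C/L, biomass: {X_end:.4f} g C/L")
```

Because growth is tied to consumption by a constant yield, the final biomass approaches X0 + Y·S0 as the substrate is exhausted, mirroring the constant-proportion behavior the abstract reports.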

  5. Measurements and numerical simulations for optimization of the combustion process in a utility boiler

    Energy Technology Data Exchange (ETDEWEB)

    Vikhansky, A.; Bar-Ziv, E. [Ben-Gurion Univ. of the Negev, Dept. of Biotechnology and Environmental Engineering, Beer-Sheva (Israel); Chudnovsky, B.; Talanker, A. [Israel Electric Corp. (IEC),, Mechanical Systems Div., Haifa (Israel); Eddings, E.; Sarofim, A. [Reaction Engineering International, Salt Lake City, UT (United States); Utah Univ., Dept. of Chemical and Fuel Engineering, Salt Lake City, UT (United States)

    2004-07-01

    A three-dimensional computational fluid dynamics code was used to analyse the performance of pulverized coal combustion in a 550 MW opposed wall-fired boiler (of IEC) at different operation modes. The main objective of this study was to prove that connecting plant measurements with three-dimensional furnace modelling is a cost-effective method for design, optimization and problem solving in power plant operation. Heat flux results from calculations were compared with measurements in the boiler and showed good agreement. Consequently, the code was used to study hydrodynamic aspects of air-flue gases mixing in the upper part of the boiler. It was demonstrated that effective mixing between flue gases and overfire air is of essential importance for CO reburning. From our complementary experimental-numerical effort, IEC considers a possibility to improve the boiler performance by replacing the existing OFA nozzles by those with higher penetration depth of the air jets, with the aim to ensure proper mixing to achieve better CO reburning. (Author)

  6. Measurements and numerical simulations for optimization of the combustion process in a utility boiler

    Energy Technology Data Exchange (ETDEWEB)

    A. Vikhansky; E. Bar-Ziv; B. Chudnovsky; A. Talanker; E. Eddings; A. Sarofim [Ben-Gurion University of the Negev, Beer-Sheva (Israel). Department of Biotechnology and Environmental Engineering

    2004-04-01

    A three-dimensional computational fluid dynamics code was used to analyse the performance of pulverized coal combustion in a 550 MW opposed wall-fired boiler (of the Israel Electric Corporation (IEC)) at different operation modes. The main objective of this study was to prove that connecting plant measurements with three-dimensional furnace modelling is a cost-effective method for design, optimization and problem solving in power plant operation. Heat flux results from calculations were compared with measurements in the boiler and showed good agreement. Consequently, the code was used to study hydrodynamic aspects of air-flue gases mixing in the upper part of the boiler. It was demonstrated that effective mixing between flue gases and overfire air is of essential importance for CO reburning. From the complementary experimental-numerical effort, IEC considers a possibility to improve the boiler performance by replacing the existing OFA nozzles by those with higher penetration depth of the air jets, with the aim to ensure proper mixing to achieve better CO reburning. 7 refs., 7 figs., 1 tab.

  7. The waiting time distribution as a graphical approach to epidemiologic measures of drug utilization

    DEFF Research Database (Denmark)

    Hallas, J; Gaist, D; Bjerrum, L

    1997-01-01

    The emergence of large, computerized pharmacoepidemiologic databases has enabled us to study drug utilization with the individual user as the statistical unit. A recurrent problem in such analyses, however, is the overwhelming volume and complexity of data. We here describe a graphical approach...... that effectively conveys some essential utilization parameters for a drug. The waiting time distribution for a group of drug users is a charting of their first prescription presentations within a specified time window. For a drug used for chronic treatment, most current users will be captured at the beginning...... information about the period prevalence, point prevalence, incidence, duration of use, seasonality, and rate of prescription renewal or relapse for specific drugs. Each of these parameters has a visual correlate. The waiting time distributions may be an informative supplement to conventional drug utilization...
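    The charting idea described above can be sketched in a few lines: for each drug user, record the date of the first prescription presentation inside the observation window, then tabulate those first occurrences. The record format and window below are hypothetical, not the database's schema.

```python
# Hedged sketch of a waiting time distribution: tabulate each user's FIRST
# prescription month within a specified time window. Data are hypothetical.
from collections import Counter

def waiting_time_distribution(records, window=(0, 12)):
    """records: iterable of (user_id, month). Returns Counter of first months."""
    first = {}
    for user, month in records:
        if window[0] <= month < window[1]:
            if user not in first or month < first[user]:
                first[user] = month
    return Counter(first.values())

records = [("a", 0), ("a", 3), ("a", 6),   # chronic user: captured in month 0
           ("b", 1), ("b", 4),
           ("c", 7)]                        # later first presentation
dist = waiting_time_distribution(records)
print(dict(dist))                           # prints {0: 1, 1: 1, 7: 1}
```

Plotted as a histogram, prevalent (chronic) users pile up at the start of the window while incident users appear throughout, which is what gives the distribution its visual correlates for prevalence, incidence, and duration of use.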

  8. SEE Action Guide for States: Evaluation, Measurement, and Verification Frameworks - Guidance for Energy Efficiency Portfolios Funded by Utility Customers

    Energy Technology Data Exchange (ETDEWEB)

    Li, Michael [Dept. of Energy (DOE), Washington DC (United States); Dietsch, Niko [US Environmental Protection Agency (EPA), Cincinnati, OH (United States)

    2018-01-01

    This guide describes frameworks for evaluation, measurement, and verification (EM&V) of utility customer–funded energy efficiency programs. The authors reviewed multiple frameworks across the United States and gathered input from experts to prepare this guide. This guide provides the reader with both the contents of an EM&V framework and the processes used to develop and update these frameworks.

  9. Hydrological Modeling in the Bull Run Watershed in Support of a Piloting Utility Modeling Applications (PUMA) Project

    Science.gov (United States)

    Nijssen, B.; Chiao, T. H.; Lettenmaier, D. P.; Vano, J. A.

    2016-12-01

    Hydrologic models with varying complexities and structures are commonly used to evaluate the impact of climate change on future hydrology. While the uncertainties in future climate projections are well documented, uncertainties in streamflow projections associated with hydrologic model structure and parameter estimation have received less attention. In this study, we implemented and calibrated three hydrologic models (the Distributed Hydrology Soil Vegetation Model (DHSVM), the Precipitation-Runoff Modeling System (PRMS), and the Variable Infiltration Capacity model (VIC)) for the Bull Run watershed in northern Oregon using consistent data sources and best practice calibration protocols. The project was part of a Piloting Utility Modeling Applications (PUMA) project with the Portland Water Bureau (PWB) under the umbrella of the Water Utility Climate Alliance (WUCA). Ultimately, PWB would use the model evaluation to select a model for in-house climate change analysis of the Bull Run watershed. This presentation focuses on the experimental design of the comparison project, the project findings, and the collaboration between the teams at the University of Washington and PWB. After calibration, the three models showed similar capability to reproduce seasonal and inter-annual variations in streamflow, but differed in their ability to capture extreme events. Furthermore, the annual and seasonal hydrologic sensitivities to changes in climate forcings differed among models, potentially attributable to different model representations of snow and vegetation processes.

  10. Evaluating Longitudinal Mathematics Achievement Growth: Modeling and Measurement Considerations for Assessing Academic Progress

    Science.gov (United States)

    Shanley, Lina

    2016-01-01

    Accurately measuring and modeling academic achievement growth is critical to support educational policy and practice. Using a nationally representative longitudinal data set, this study compared various models of mathematics achievement growth on the basis of both practical utility and optimal statistical fit and explored relationships within and…

  11. The Utilization of University Students as an Effective Measure for Reducing STIs among Teens

    Science.gov (United States)

    Spain, Adam

    2017-01-01

    Nearly 50% of all new sexually transmitted infections were found in teen and young adult populations in 2015, with the number of new infections expected to keep rising. This study evaluated the knowledge and opinions of university students to determine if changes should be made to the current sexual health education curricula utilized in high…

  12. Academic Self-Concept: Modeling and Measuring for Science

    Science.gov (United States)

    Hardy, Graham

    2014-08-01

    In this study, the author developed a model to describe academic self-concept (ASC) in science and validated an instrument for its measurement. Unlike previous models of science ASC, which envisage science as a homogenous single global construct, this model took a multidimensional view by conceiving science self-concept as possessing distinctive facets including conceptual and procedural elements. In the first part of the study, data were collected from 1,483 students attending eight secondary schools in England, through the use of a newly devised Secondary Self-Concept Science Instrument, and structural equation modeling was employed to test and validate a model. In the second part of the study, the data were analysed within the new self-concept framework to examine learners' ASC profiles across the domains of science, with particular attention paid to age- and gender-related differences. The study found that the proposed science self-concept model exhibited robust measures of fit and construct validity, which were shown to be invariant across gender and age subgroups. The self-concept profiles were heterogeneous in nature, with the component relating to self-concept in physics being surprisingly positive in comparison to other aspects of science. This outcome is in stark contrast to data reported elsewhere and raises important issues about the nature of young learners' self-conceptions about science. The paper concludes with an analysis of the potential utility of the self-concept measurement instrument as a pedagogical device for science educators and learners of science.

  13. Electrostatic sensor modeling for torque measurements

    Science.gov (United States)

    Mika, Michał; Dannert, Mirjam; Mett, Felix; Weber, Harry; Mathis, Wolfgang; Nackenhorst, Udo

    2017-09-01

    Torque load measurements play an important part in various engineering applications, such as the automotive industry, in which the drive torque of a motor has to be determined. A widely used measuring method is the strain gauge: a thin flexible foil, which supports a metallic pattern, is glued to the surface of the object the torque is applied to. In case of a deformation due to the torque load, the change in electrical resistance is measured, and in combination with constitutive equations the applied torque load is determined from that change. The creep of the glue and the foil material, together with their temperature and humidity dependence, may become an obstacle for some applications (Kapralov and Fesenko, 1984). Thus, optical and magnetic, as well as capacitive, sensors have been introduced. This paper discusses the general idea behind an electrostatic capacitive sensor based on a simple draft of an exemplary measurement setup. For better understanding, a dedicated electrostatic, geometric, and mechanical model of this setup has been developed.

  14. Electrostatic sensor modeling for torque measurements

    Directory of Open Access Journals (Sweden)

    M. Mika

    2017-09-01

    Torque load measurements play an important part in various engineering applications, such as the automotive industry, in which the drive torque of a motor has to be determined. A widely used measuring method is the strain gauge: a thin flexible foil, which supports a metallic pattern, is glued to the surface of the object the torque is applied to. In case of a deformation due to the torque load, the change in electrical resistance is measured, and in combination with constitutive equations the applied torque load is determined from that change. The creep of the glue and the foil material, together with their temperature and humidity dependence, may become an obstacle for some applications (Kapralov and Fesenko, 1984). Thus, optical and magnetic, as well as capacitive, sensors have been introduced. This paper discusses the general idea behind an electrostatic capacitive sensor based on a simple draft of an exemplary measurement setup. For better understanding, a dedicated electrostatic, geometric, and mechanical model of this setup has been developed.

  15. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.

  16. Quality of Life: Meaning, Measurement, and Models

    Science.gov (United States)

    1992-05-01

    Quality of Life: Meaning, Measurement, and Models. Elyse W. Kerce, Navy Personnel Research and Development Center, San Diego, California 92152-6800. Report NPRDC-TN-92-15 (AD-A250 813), May 1992. Approved for public release; distribution is unlimited. The fragmentary abstract mentions demographic variables such as occupation of head of household, education, religion, and sex, and references the Rosen and Moghadam (1988) study of the quality of life of Army wives.

  17. Hydrogen recycle modeling and measurements in tokamaks

    International Nuclear Information System (INIS)

    Howe, H.C.

    1980-01-01

    A model for hydrogen recycling developed for use in a tokamak transport code is described and compared with measurements on ISX-B and DITE. The model includes kinetic reflection of charge-exchange neutrals from the wall and deposition, thermal diffusion, and desorption processes in the wall. In a tokamak with a limiter, the inferred recycle coefficient of 0.9-1.0 is due primarily to reflection (0.8-0.9), with the remainder (0.1-0.2) being due to desorption. Laboratory experiments supply much of the data for the model, and several areas are discussed where additional data are needed, such as reflection from hydrogen-loaded walls at low (approximately 100 eV) energy. Simulation of ISX-B shows that the recently observed density decrease with neutral beam injection may be partially due to a decrease in recycling caused by hardening of the charge-exchange flux incident on the wall from the plasma. Modeling of isotopic exchange in DITE indicates the need for an ion-induced desorption process which responds on a timescale shorter than the wall thermal diffusion time. (orig.)

  18. On the Path to SunShot - Utility Regulatory Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-05-01

    Net-energy metering (NEM) with volumetric retail electricity pricing has enabled rapid proliferation of distributed photovoltaics (DPV) in the United States. However, this transformation is raising concerns about the potential for higher electricity rates and cost-shifting to non-solar customers, reduced utility shareholder profitability, reduced utility earnings opportunities, and inefficient resource allocation. Although DPV deployment in most utility territories remains too low to produce significant impacts, these concerns have motivated real and proposed reforms to utility regulatory and business models, with profound implications for future DPV deployment. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy’s SunShot Initiative. As such, the report focuses on a subset of a broader range of reforms underway in the electric utility sector. Drawing on original analysis and existing literature, we analyze the significance of DPV’s financial impacts on utilities and non-solar ratepayers under current NEM rules and rate designs, the projected effects of proposed NEM and rate reforms on DPV deployment, and alternative reforms that could address utility and ratepayer concerns while supporting continued DPV growth. We categorize reforms into one or more of four conceptual strategies. Understanding how specific reforms map onto these general strategies can help decision makers identify and prioritize options for addressing specific DPV concerns that balance stakeholder interests.

  19. Standing Height and its Estimation Utilizing Foot Length Measurements in Adolescents from Western Region in Kosovo

    Directory of Open Access Journals (Sweden)

    Stevo Popović

    2017-10-01

Full Text Available The purpose of this research is to examine standing height in both Kosovan genders in the Western Region, as well as its association with foot length as an alternative for estimating standing height. A total of 664 individuals (338 male and 326 female) participated in this research. The anthropometric measurements were taken according to the protocol of ISAK. The relationships between body height and foot length were determined using simple correlation coefficients at a ninety-five percent confidence interval. A comparison of means of standing height and foot length between genders was performed using a t-test. After that, a linear regression analysis was carried out to examine the extent to which foot length can reliably predict standing height. Results displayed that Western Kosovan males are 179.71±6.00 cm tall and have a foot length of 26.73±1.20 cm, while Western Kosovan females are 166.26±5.23 cm tall and have a foot length of 23.66±1.06 cm. The results have shown that both genders make Western Kosovans a tall group, slightly taller than the general Kosovan population. Moreover, foot length reliably predicts standing height in both genders, though not as reliably as arm span. This study also confirms the necessity of developing separate height models for each region in Kosovo, as the results for Western Kosovans do not correspond to the general values.
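The height-from-foot-length approach described in this record reduces to a simple linear regression. The sketch below is purely illustrative, not the study's code, and the sample (foot length, height) pairs are invented for the demonstration:

```python
# Illustrative sketch: ordinary least squares fit of standing height on
# foot length, as used in studies of this kind. Data are invented.
def fit_simple_regression(x, y):
    """OLS for y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical (foot length cm, standing height cm) pairs:
foot = [25.9, 26.4, 27.1, 26.8, 27.5]
height = [175.2, 178.0, 181.4, 180.1, 182.9]
a, b = fit_simple_regression(foot, height)
# Predict height at the male mean foot length reported in the abstract:
predicted = a + b * 26.73
```

The regression coefficient's confidence interval and the correlation coefficient would then indicate how reliably foot length predicts height.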

  20. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

Full Text Available Abstract Background Computational models of protein structures were proved to be useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results For our dataset of 615 search models, using the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. In contrast, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a “clustering MQAP”, was used. Conclusions Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict whether an MR search with a template-based model for a given template is likely to find the correct solution.

  1. Federal and State Structures to Support Financing Utility-Scale Solar Projects and the Business Models Designed to Utilize Them

    Energy Technology Data Exchange (ETDEWEB)

    Mendelsohn, M.; Kreycik, C.

    2012-04-01

    Utility-scale solar projects have grown rapidly in number and size over the last few years, driven in part by strong renewable portfolio standards (RPS) and federal incentives designed to stimulate investment in renewable energy technologies. This report provides an overview of such policies, as well as the project financial structures they enable, based on industry literature, publicly available data, and questionnaires conducted by the National Renewable Energy Laboratory (NREL).

  2. The waiting time distribution as a graphical approach to epidemiologic measures of drug utilization

    DEFF Research Database (Denmark)

    Hallas, J; Gaist, D; Bjerrum, L

    1997-01-01

    The emergence of large, computerized pharmacoepidemiologic databases has enabled us to study drug utilization with the individual user as the statistical unit. A recurrent problem in such analyses, however, is the overwhelming volume and complexity of data. We here describe a graphical approach...... that effectively conveys some essential utilization parameters for a drug. The waiting time distribution for a group of drug users is a charting of their first prescription presentations within a specified time window. For a drug used for chronic treatment, most current users will be captured at the beginning...... of the window. After a few months, the graph will be dominated by new, incident users. As examples, we present waiting time distributions for insulin, ulcer drugs, systemic corticosteroids, antidepressants, and disulfiram. Appropriately analyzed and interpreted, the waiting time distributions can provide...
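The waiting-time-distribution chart described above can be sketched in a few lines: for each user, take the first prescription date inside the observation window and tally those dates. Prevalent (chronic) users cluster at the start of the window; later months capture mostly incident users. The data and window below are invented for illustration:

```python
# Minimal sketch of a waiting time distribution for drug utilization.
from datetime import date
from collections import Counter

def waiting_time_distribution(prescriptions, window_start, window_end):
    """prescriptions: iterable of (user_id, date) pairs. Returns a Counter
    keyed by (year, month) of each user's FIRST prescription in the window."""
    first_seen = {}
    for user, d in sorted(prescriptions, key=lambda p: p[1]):
        if window_start <= d <= window_end and user not in first_seen:
            first_seen[user] = d
    return Counter((d.year, d.month) for d in first_seen.values())

rx = [
    ("A", date(1995, 1, 10)), ("A", date(1995, 4, 2)),   # chronic user: first captured in January
    ("B", date(1995, 1, 20)),                            # chronic user
    ("C", date(1995, 8, 5)),                             # incident user, appears mid-window
]
dist = waiting_time_distribution(rx, date(1995, 1, 1), date(1995, 12, 31))
```

Plotting `dist` as a histogram over months gives the waiting time distribution; its early spike versus flat tail separates prevalent from incident users.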

  3. A Steam Utility Network Model for the Evaluation of Heat Integration Retrofits – A Case Study of an Oil Refinery

    Directory of Open Access Journals (Sweden)

    Sofie Marton

    2017-12-01

    Full Text Available This paper presents a real industrial example in which the steam utility network of a refinery is modelled in order to evaluate potential Heat Integration retrofits proposed for the site. A refinery, typically, has flexibility to optimize the operating strategy for the steam system depending on the operation of the main processes. This paper presents a few examples of Heat Integration retrofit measures from a case study of a large oil refinery. In order to evaluate expected changes in fuel and electricity imports to the refinery after implementation of the proposed retrofits, a steam system model has been developed. The steam system model has been tested and validated with steady state data from three different operating scenarios and can be used to evaluate how changes to steam balances at different pressure levels would affect overall steam balances, generation of shaft power in turbines, and the consumption of fuel gas.

  4. Changes in fibrinogen availability and utilization in an animal model of traumatic coagulopathy

    DEFF Research Database (Denmark)

    Hagemo, Jostein S; Jørgensen, Jørgen; Ostrowski, Sisse R

    2013-01-01

    Impaired haemostasis following shock and tissue trauma is frequently detected in the trauma setting. These changes occur early, and are associated with increased mortality. The mechanism behind trauma-induced coagulopathy (TIC) is not clear. Several studies highlight the crucial role of fibrinogen...... in posttraumatic haemorrhage. This study explores the coagulation changes in a swine model of early TIC, with emphasis on fibrinogen levels and utilization of fibrinogen....

  5. Utility Function for modeling Group Multicriteria Decision Making problems as games

    OpenAIRE

    Alexandre Bevilacqua Leoneti

    2016-01-01

    To assist in the decision making process, several multicriteria methods have been proposed. However, the existing methods assume a single decision-maker and do not consider decision under risk, which is better addressed by Game Theory. Hence, the aim of this research is to propose a Utility Function that makes it possible to model Group Multicriteria Decision Making problems as games. The advantage of using Game Theory for solving Group Multicriteria Decision Making problems is to evaluate th...

  6. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, both with and without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
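The series-configuration availability idea can be illustrated with a much-reduced Monte Carlo sketch: two units, each alternating between up and down with exponential failure and repair times, where the system is up only when both units are up. The rates are invented, and the paper's three-state deterioration model and opportunistic maintenance are deliberately omitted:

```python
# Toy Monte Carlo estimate of availability for two units in series.
import random

def mc_series_availability(fail_rates, repair_rates, horizon, seed=1):
    rng = random.Random(seed)
    up_time = 0.0
    t = 0.0
    state = [True, True]                        # both units start working
    # next state-change time per unit (failure if up, repair end if down)
    nxt = [rng.expovariate(fail_rates[i]) for i in range(2)]
    while t < horizon:
        i = 0 if nxt[0] <= nxt[1] else 1        # unit with the next event
        t_next = min(nxt[i], horizon)
        if all(state):                          # system up only if both up
            up_time += t_next - t
        t = t_next
        if t >= horizon:
            break
        state[i] = not state[i]                 # unit fails or is repaired
        rate = repair_rates[i] if not state[i] else fail_rates[i]
        nxt[i] = t + rng.expovariate(rate)
    return up_time / horizon

avail = mc_series_availability(fail_rates=[0.01, 0.02],
                               repair_rates=[0.5, 0.5], horizon=1e5)
```

For these rates the analytical steady-state availability is the product of the per-unit values mu/(lambda+mu), about 0.94, which the simulation should approach as the horizon grows; this mirrors how the paper validates its Markov results with MC simulation.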

  7. Business model innovation for Local Energy Management: a perspective from Swiss utilities

    Directory of Open Access Journals (Sweden)

    Emanuele Facchinetti

    2016-08-01

    Full Text Available The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing to potential Local Energy Management stakeholders and policy makers a conceptual framework guiding the Local Energy Management business model innovation. The main determinants characterizing Local Energy Management concepts and impacting its business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the Local Energy Management business model solution space is analyzed based on semi-structured interviews with managers of Swiss utilities companies. The collected managers’ preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to Local Energy Management.

  8. Business Model Innovation for Local Energy Management: A Perspective from Swiss Utilities

    International Nuclear Information System (INIS)

    Facchinetti, Emanuele; Eid, Cherrelle; Bollinger, Andrew; Sulzer, Sabine

    2016-01-01

The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing potential local energy management (LEM) stakeholders and policy makers with a conceptual framework guiding LEM business model innovation. The main determinants characterizing LEM concepts and impacting their business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the LEM business model solution space is analyzed based on semi-structured interviews with managers of Swiss utility companies. The collected managers’ preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to LEM.

  9. Modelling and multi objective optimization of laser peening process using Taguchi utility concept

    Science.gov (United States)

    Ranjith Kumar, G.; Rajyalakshmi, G.

    2017-11-01

Laser peening is considered one of the innovative surface treatment techniques. This work focuses on determining the optimal peening parameters for optimal responses such as residual stresses and deformation. The modelling was done using ANSYS, and the responses were optimised using the Taguchi utility concept for simultaneous optimization of responses. Three parameters, viz. overlap, pulse duration and pulse density, are considered as process parameters for modelling and optimization. The multi-objective optimization shows that overlap has the greatest influence on stress and deformation, followed by power density and pulse duration.

  10. Inservice inspection a preventative measure for a utility to improve availability

    International Nuclear Information System (INIS)

    Hausermann, R.

    1985-01-01

The wish for everlastingly good performance of a machine is as old as the dream of inventing the perpetuum mobile. The real technical world is different, in that the behaviour of the material in a structure or machine depends greatly on the fabrication process, the environment, and the loads experienced over the period of use of the equipment. This paper discusses how, by applying a well-balanced maintenance strategy coupled with NDT (non-destructive testing), both the utility's goal of high plant availability and the authority's goal of high safety readiness can be achieved. The NDT aspects are discussed in more detail.

  11. Measurement of the thyroid's iodine absorption utilizing minimal /sup 131/I dose

    Energy Technology Data Exchange (ETDEWEB)

    Paz A, B.; Villegas A, J.; Delgado B, C. (Universidad Nacional San Agustin de Arequipa (Peru). Departamento de Bioquimica)

    1981-03-01

We utilized a minimal dose of /sup 131/I, thus limiting the contact of the thyroid tissue with the isotopic material, to determine the absorption of /sup 131/I by the thyroid from 6 to 24 hours in 90 pupils in the locality of Arequipa. The average absorption rates at 6 and 24 hours were 24.15% and 35.42% respectively, with standard deviations of 6.93% and 9.61%. No significant differences were found between our results and those of adults in any of the tests undertaken.

  12. The episodic random utility model unifies time trade-off and discrete choice approaches in health state valuation

    NARCIS (Netherlands)

    B.M. Craig (Benjamin); J.J. van Busschbach (Jan)

    2009-01-01

ABSTRACT: BACKGROUND: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. METHODS: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common

  13. Measurement and modeling of indoor radon concentrations in residential buildings.

    Science.gov (United States)

    Park, Ji Hyun; Whang, Sungim; Lee, Hyun Young; Lee, Cheol-Min; Kang, Dae Ryong

    2018-01-08

Radon, the primary constituent of natural radiation, is the second leading environmental cause of lung cancer after smoking. To confirm a relationship between indoor radon exposure and lung cancer, estimating cumulative levels of exposure to indoor radon for an individual or population is necessary. This study sought to develop a model for estimating indoor radon concentrations in Korea. In particular, our model and method may have wider application to other residences, rather than to a specific site, and can be used in situations where actual measurements of input variables are lacking. In order to develop the model, indoor radon concentrations were measured at 196 ground-floor residences using passive alpha-track detectors between January and April 2016. The arithmetic (AM) and geometric (GM) means of indoor radon concentrations were 117.86±72.03 and 95.13±2.02 Bq m-3, respectively. Questionnaires were administered to assess the characteristics of each residence, the environment around the measuring equipment, and the lifestyles of the residents. Also, national data on indoor radon concentrations at 7643 detached houses for 2011-2014 were reviewed to determine radon concentrations in the soil, and meteorological data on temperature and wind speed were utilized to approximate ventilation rates. The estimated ventilation rates and radon exhalation rates from the soil varied from 0.18 to 0.98 h-1 (AM=0.59±0.17 h-1) and 326.33 to 1392.77 Bq m-2 h-1 (AM=777.45±257.39 and GM=735.67±1.40 Bq m-2 h-1), respectively. With these results, the developed model was applied to estimate indoor radon concentrations for 157 residences (80% of all 196 residences), which were randomly sampled. The results were in better agreement for Gyeonggi and Seoul than for other regions of Korea. Overall, the actual and estimated radon concentrations were in good agreement, except for a few low-concentration residences.

  14. The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.

    Science.gov (United States)

    Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot

    2006-05-01

Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model, the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU that emerged from an extensive multidisciplinary review of the literature include leadership, emotional intelligence, person, message, empowered workplace and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development and suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.

  15. A Review of Acculturation Measures and Their Utility in Studies Promoting Latino Health

    OpenAIRE

    Wallace, Phyllis M.; Pomery, Elizabeth A.; Latimer, Amy E.; Martinez, Josefa L.; Salovey, Peter

    2010-01-01

    The authors reviewed the acculturation literature with the goal of identifying measures used to assess acculturation in Hispanic populations in the context of studies of health knowledge, attitudes, and behavior change. Twenty-six acculturation measures were identified and summarized. As the Hispanic population continues to grow in the United States, there is a need to develop rigorous acculturation measures that include health indicators. Findings suggest that multidimensional acculturation ...

  16. Capacitor Voltages Measurement and Balancing in Flying Capacitor Multilevel Converters Utilizing a Single Voltage Sensor

    DEFF Research Database (Denmark)

    Farivar, Glen; Ghias, Amer M. Y. M.; Hredzak, Branislav

    2017-01-01

    This paper proposes a new method for measuring capacitor voltages in multilevel flying capacitor (FC) converters that requires only one voltage sensor per phase leg. Multiple dc voltage sensors traditionally used to measure the capacitor voltages are replaced with a single voltage sensor at the ac...... side of the phase leg. The proposed method is subsequently used to balance the capacitor voltages using only the measured ac voltage. The operation of the proposed measurement and balancing method is independent of the number of the converter levels. Experimental results presented for a five-level FC...

  17. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kukkonen, I.; Suppala, I. [Geological Survey of Finland, Espoo (Finland)

    1999-01-01

    In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into `active` drill hole methods, and `passive` indirect methods utilizing other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite perfectly conducting cylindrical probe in a homogeneous medium, and the solution for a line source of heat in a medium. Using both forward and inverse modellings, a theoretical measurement system was analysed with an aim at finding the basic parameters for construction of a practical measurement system. The results indicate that thermal conductivity can be relatively well estimated with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, the three-dimensional conduction effects were investigated to find out the magnitude of axial `leak` of heat in long-duration experiments. The radius of influence of a drill hole measurement is mainly dependent on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole, when the experiment lasts about 24 hours. 
We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: the probe length 1.5-2 m, heating power 5-20 Wm{sup -1}, temperature recording with 5-7 sensors placed along the probe, and
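The line-source interpretation mentioned in the abstract admits a compact sketch. At late times (t much larger than r²/4α) the temperature rise of an infinite line source of strength q (W/m) follows the standard approximation dT ≈ (q/4πλ)·ln(t) + C, so conductivity λ can be recovered from the slope of dT versus ln(t). The data below are synthetic, generated with an assumed λ = 3.0 W/(m·K); this is an illustration of the principle, not the probe interpretation software itself:

```python
# Estimate thermal conductivity from the late-time slope of a heating curve.
import math

def conductivity_from_slope(times, temps, q):
    """Least-squares slope of temps vs ln(times); returns lambda = q/(4*pi*slope)."""
    xs = [math.log(t) for t in times]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(temps) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, temps))
             / sum((x - mx) ** 2 for x in xs))
    return q / (4 * math.pi * slope)

q, lam_true = 10.0, 3.0                        # W/m and W/(m K), assumed values
times = [3600 * h for h in (6, 9, 12, 18, 24)]  # measurement times, seconds
temps = [q / (4 * math.pi * lam_true) * math.log(t) + 0.5 for t in times]
lam = conductivity_from_slope(times, temps, q)  # recovers lam_true here
```

This also illustrates why conductivity is the robust quantity: it comes from the slope alone, whereas diffusivity enters through the intercept and is therefore more sensitive to contact resistance and probe parameters, as the survey concludes.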

  18. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements

    International Nuclear Information System (INIS)

    Kukkonen, I.; Suppala, I.

    1999-01-01

    In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into 'active' drill hole methods, and 'passive' indirect methods utilizing other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite perfectly conducting cylindrical probe in a homogeneous medium, and the solution for a line source of heat in a medium. Using both forward and inverse modellings, a theoretical measurement system was analysed with an aim at finding the basic parameters for construction of a practical measurement system. The results indicate that thermal conductivity can be relatively well estimated with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, the three-dimensional conduction effects were investigated to find out the magnitude of axial 'leak' of heat in long-duration experiments. The radius of influence of a drill hole measurement is mainly dependent on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole, when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: the probe length 1.5-2 m, heating power 5-20 Wm -1 , temperature recording with 5-7 sensors placed along the probe, and

  19. Body Height and Its Estimation Utilizing Arm Span Measurements in Bosnian and Herzegovinian Adults

    Directory of Open Access Journals (Sweden)

    Stevo Popovic

    2015-03-01

Full Text Available Anthropologists recognized the tallness of nations in the Dinaric Alps a long time ago. As modern Bosnians and Herzegovinians fall largely into the Dinaric racial classification, the purpose of this study was to examine body height in Bosnian and Herzegovinian adults, as well as the relationship between arm span, as an alternative for estimating body height, and body height itself, which varies in different ethnic and racial groups. This study analyzed 212 students (178 men, aged 22.42±2.79, and 34 women, aged 21.56±2.06) from the University of Banjaluka. The anthropometric measurements were taken according to the protocol of the ISAK. Means and standard deviations were obtained. A comparison of means of body heights and arm spans within each gender group and between genders was carried out using a t-test. The relationships between body height and arm span were determined using simple correlation coefficients and their 95% confidence intervals. Then a linear regression analysis was performed to examine the extent to which arm span can reliably predict body height. The results have shown that male Bosnians and Herzegovinians are 183.87±7.11 cm tall and have an arm span of 184.50±8.28 cm, while female Bosnians and Herzegovinians are 171.82±6.56 cm tall and have an arm span of 169.85±8.01 cm. Compared to other studies, these results show that both genders make the Bosnian and Herzegovinian population one of the tallest nations on earth, perhaps the tallest one. Moreover, arm span reliably predicts body height in both genders. However, the estimation equations obtained for Bosnians and Herzegovinians differ substantially from those in other populations, since arm span was close to body height: in men 0.73±1.17 cm more than body height and in women 1.97±1.45 cm less than body height. This confirms the necessity for developing separate height models for each

  20. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tom, Nathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-03

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
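The autoregressive forecasting component mentioned above can be sketched in miniature. The following is an illustrative stand-in only: an AR model fit by least squares to a synthetic sinusoidal "excitation force" signal, then iterated forward over a short horizon. The real work pairs this with a Kalman filter, a WEC dynamics model, and an MPC solver, none of which are shown:

```python
# Toy AR(p) fit and multi-step forecast on a synthetic force signal.
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
    rows = [x[t - p:t][::-1] for t in range(p, len(x))]  # [x[t-1], ..., x[t-p]]
    coeffs, *_ = np.linalg.lstsq(np.array(rows), x[p:], rcond=None)
    return coeffs

def ar_forecast(x, coeffs, steps):
    """Iterate the fitted AR model forward over the prediction horizon."""
    hist = list(x)
    preds = []
    for _ in range(steps):
        nxt = sum(c * hist[-1 - k] for k, c in enumerate(coeffs))
        preds.append(nxt)
        hist.append(nxt)
    return preds

# Synthetic stand-in for the wave excitation force: a pure sinusoid,
# which an AR(2) model reproduces exactly.
x = np.sin(0.2 * np.arange(200))
coeffs = fit_ar(x, p=2)
pred = ar_forecast(x, coeffs, steps=5)
```

In an MPC setting, each control interval would refit or update the forecast from the latest state estimate and feed `pred` into the finite-horizon objective that trades off absorbed power against structural loads.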

  1. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar; Tom, Nathan

    2017-09-01

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  2. A Review of Acculturation Measures and Their Utility in Studies Promoting Latino Health

    Science.gov (United States)

    Wallace, Phyllis M.; Pomery, Elizabeth A.; Latimer, Amy E.; Martinez, Josefa L.; Salovey, Peter

    2010-01-01

    The authors reviewed the acculturation literature with the goal of identifying measures used to assess acculturation in Hispanic populations in the context of studies of health knowledge, attitudes, and behavior change. Twenty-six acculturation measures were identified and summarized. As the Hispanic population continues to grow in the United…

  3. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Science.gov (United States)

    2012-01-01

Background About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002–2006. Dental and medical records from WDS and GH were linked for enrolees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all) and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36). Conclusions Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes. PMID:22776352

  4. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Directory of Open Access Journals (Sweden)

    Chaudhari Monica

    2012-07-01

    Full Text Available Abstract Background About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002–2006. Dental and medical records from WDS and GH were linked for enrolees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all) and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36). Conclusions Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes.
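The two-stage structure described in the Methods (a model for whether any dental visit occurs, then a model for utilization among visitors) can be sketched as a zero-truncated Poisson hurdle. This is a minimal illustration on simulated data, not the study's actual specification, which used claims covariates and propensity matching:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Hurdle sketch: a Bernoulli "hurdle" for any visit at all, plus a
# zero-truncated Poisson for the number of visits among those who visit.

def hurdle_negloglik(params, y):
    logit_p, log_lam = params
    p = 1.0 / (1.0 + np.exp(-logit_p))   # P(at least one visit)
    lam = np.exp(log_lam)                # truncated-Poisson rate
    n_zero = np.sum(y == 0)
    yk = y[y > 0]
    # zero-truncated Poisson: P(Y=k | Y>=1) = e^-lam lam^k / (k! (1-e^-lam))
    ll = n_zero * np.log(1.0 - p)
    ll += np.sum(np.log(p) + yk * np.log(lam) - lam
                 - gammaln(yk + 1) - np.log(1.0 - np.exp(-lam)))
    return -ll

rng = np.random.default_rng(0)
n = 5000
# simulate: 40% never visit; the rest visit roughly Poisson(3), at least once
visits = np.where(rng.random(n) < 0.4, 0, rng.poisson(3.0, n).clip(min=1))
fit = minimize(hurdle_negloglik, x0=[0.0, 1.0], args=(visits,),
               method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print(p_hat)  # close to the simulated 0.6 probability of any visit
```

Because the two likelihood parts separate, the fitted hurdle probability essentially equals the observed fraction of people with at least one visit.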

  5. Application of GPS Measurements for Ionospheric and Tropospheric Modelling

    Science.gov (United States)

    Rajendra Prasad, P.; Abdu, M. A.; Furlan, Benedito. M. P.; Koiti Kuga, Hélio

    military navigation. The DOD's primary purposes were to use the system for precision weapon delivery and to provide a capability that would help reverse the proliferation of navigation systems in the military. It was quickly realized, however, that civil use and scientific utility would far outstrip military use. A variety of scientific applications are uniquely suited to precise positioning capabilities, and the relatively high precision, low cost, mobility and convenience of GPS receivers make positioning attractive. Other applications include precise time measurement, surveying and geodesy, and orbit and attitude determination, along with many user services. The system operates by transmitting radio waves from satellites to receivers on the ground, on aircraft, or on other satellites; these signals are used to calculate location very accurately. The Standard Positioning Service (SPS) restricts access to the Coarse/Acquisition (C/A) code and carrier signals on the L1 frequency only, and the accuracy it provides falls short of most users' requirements. The upper atmosphere is ionized by ultraviolet radiation from the sun, and significant positioning errors can result when the signals are refracted and slowed by ionospheric conditions. The parameter of the ionosphere that produces most of the effect on GPS signals is the total number of electrons in the propagation path. This integrated electron count, called the Total Electron Content (TEC), varies not only with time of day, season and the solar flux cycle, but also with geomagnetic latitude and longitude. Being a plasma, the ionosphere affects the radio waves propagating through it. The effects of scintillation on GPS navigation systems operating at the L1 (1.5754 GHz) and L2 (1.2276 GHz) frequencies have not been estimated accurately, and it is generally recognized that GPS navigation systems are vulnerable in the polar and especially in the equatorial region during the
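The first-order, frequency-dependent effect of TEC on a GPS signal can be illustrated with the standard textbook approximation for ionospheric group delay; the TEC value below is illustrative, not taken from the abstract:

```python
# First-order ionospheric group delay (standard approximation):
# delay [m] = 40.3 * TEC / f^2, with TEC in electrons/m^2 and f in Hz.

F_L1 = 1575.4e6   # L1 carrier frequency, Hz (1.5754 GHz)
F_L2 = 1227.6e6   # L2 carrier frequency, Hz (1.2276 GHz)

def iono_delay_m(tec: float, freq_hz: float) -> float:
    return 40.3 * tec / freq_hz ** 2

tec = 50e16  # 50 TECU, a plausible daytime low-latitude value
d1 = iono_delay_m(tec, F_L1)
d2 = iono_delay_m(tec, F_L2)
print(d1, d2)  # roughly 8 m at L1, and more at the lower L2 frequency
```

The inverse-square frequency dependence is why dual-frequency (L1/L2) receivers can estimate and remove this delay.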

  6. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

    This research attempts to evaluate emission measurement of on-road vehicles. To this end, it develops a failure-probability model of the vehicle emission test for passenger cars utilizing a binomial logit model. The model treats failure of the CO and HC emission tests for the gasoline car category and of the opacity emission test for the diesel-fuel car category as dependent variables, with vehicle age, engine size, and brand and type of the cars as independent variables. In order to imp...
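A binomial logit failure-probability model of the kind described takes the form P(fail) = 1/(1 + e^(-z)), with z a linear combination of the covariates. A minimal sketch with made-up coefficients, since the abstract does not report the paper's estimates:

```python
import math

# Sketch of a binomial logit failure model; coefficients are invented
# for illustration only.

def p_fail(age_years: float, engine_cc: float,
           b0: float = -3.0, b_age: float = 0.15,
           b_cc: float = 0.0002) -> float:
    z = b0 + b_age * age_years + b_cc * engine_cc
    return 1.0 / (1.0 + math.exp(-z))

# under these hypothetical coefficients, older cars fail more often
print(p_fail(15, 1500) > p_fail(3, 1500))  # True
```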

  7. Brain metabolism in autism. Resting cerebral glucose utilization rates as measured with positron emission tomography

    International Nuclear Information System (INIS)

    Rumsey, J.M.; Duara, R.; Grady, C.; Rapoport, J.L.; Margolin, R.A.; Rapoport, S.I.; Cutler, N.R.

    1985-01-01

    The cerebral metabolic rate for glucose was studied in ten men (mean age = 26 years) with well-documented histories of infantile autism and in 15 age-matched normal male controls using positron emission tomography and (F-18) 2-fluoro-2-deoxy-D-glucose. Positron emission tomography was completed during rest, with reduced visual and auditory stimulation. While the autistic group as a whole showed significantly elevated glucose utilization in widespread regions of the brain, there was considerable overlap between the two groups. No brain region showed a reduced metabolic rate in the autistic group. Significantly more autistic, as compared with control, subjects showed extreme relative metabolic rates (ratios of regional metabolic rates to whole brain rates and asymmetries) in one or more brain regions

  8. Brain metabolism in autism. Resting cerebral glucose utilization rates as measured with positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Rumsey, J.M.; Duara, R.; Grady, C.; Rapoport, J.L.; Margolin, R.A.; Rapoport, S.I.; Cutler, N.R.

    1985-05-01

    The cerebral metabolic rate for glucose was studied in ten men (mean age = 26 years) with well-documented histories of infantile autism and in 15 age-matched normal male controls using positron emission tomography and (F-18) 2-fluoro-2-deoxy-D-glucose. Positron emission tomography was completed during rest, with reduced visual and auditory stimulation. While the autistic group as a whole showed significantly elevated glucose utilization in widespread regions of the brain, there was considerable overlap between the two groups. No brain region showed a reduced metabolic rate in the autistic group. Significantly more autistic, as compared with control, subjects showed extreme relative metabolic rates (ratios of regional metabolic rates to whole brain rates and asymmetries) in one or more brain regions.

  9. Energy Renovation of Buildings Utilizing the U-value Meter, a New Heat Loss Measuring Device

    Directory of Open Access Journals (Sweden)

    Lars Schiøtt Sørensen

    2010-01-01

    Full Text Available A new device with the ability to measure heat loss from building facades is proposed. Yet to be commercially developed, the U-value Meter can be used as a stand-alone apparatus or in combination with thermographic equipment. The U-value Meter complements thermographs, which reproduce only the surface temperature and not the heat loss distribution; there is a need for a device that measures the heat loss in a quantitative manner. Convective as well as radiative heat losses are captured and measured with a five-layer thermal system, with heat losses measured in the SI unit W/m²K. The aim is to achieve more cost-effective building renovation and to provide a means to check the fulfillment of Building Regulation requirements with respect to stated U-values (heat transmission coefficients). In this way it should be possible to greatly reduce the energy consumption of buildings.
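The quantity the device reports can be illustrated directly: the U-value is the steady-state heat flux through the facade divided by the indoor/outdoor temperature difference. A minimal sketch with illustrative numbers:

```python
# U = q / (T_in - T_out): heat flux per unit area over the temperature
# difference across the facade, in W/m²K. Numbers are illustrative.

def u_value(q_w_per_m2: float, t_in_c: float, t_out_c: float) -> float:
    return q_w_per_m2 / (t_in_c - t_out_c)

print(u_value(30.0, 20.0, 0.0))  # 1.5 W/m²K
```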

  10. Spring constant measurement using a MEMS force and displacement sensor utilizing paralleled piezoresistive cantilevers

    Science.gov (United States)

    Kohyama, Sumihiro; Takahashi, Hidetoshi; Yoshida, Satoru; Onoe, Hiroaki; Hirayama-Shoji, Kayoko; Tsukagoshi, Takuya; Takahata, Tomoyuki; Shimoyama, Isao

    2018-04-01

    This paper reports on a method to measure a spring constant on site using a micro electro mechanical systems (MEMS) force and displacement sensor. The proposed sensor consists of a force-sensing cantilever and a displacement-sensing cantilever. Each cantilever is composed of two beams with a piezoresistor on the sidewall for measuring the in-plane lateral directional force and displacement. The force resolution and displacement resolution of the fabricated sensor were less than 0.8 µN and 0.1 µm, respectively. We measured the spring constants of two types of hydrogel microparticles to demonstrate the effectiveness of the proposed sensor, with values of approximately 4.3 N m⁻¹ and 15.1 N m⁻¹ obtained. The results indicated that the proposed sensor is effective for on-site spring constant measurement.
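The on-site spring constant follows from Hooke's law applied to the paired readouts: k = F/x, with F from the force-sensing cantilever and x from the displacement-sensing cantilever. A minimal sketch with illustrative readings near the reported ~4.3 N m⁻¹ particle:

```python
# Hooke's law on the paired cantilever readouts: k = F / x.
# The force and displacement values below are illustrative.

def spring_constant(force_n: float, displacement_m: float) -> float:
    return force_n / displacement_m

k = spring_constant(4.3e-6, 1.0e-6)  # 4.3 uN load, 1.0 um indentation
print(k)  # approximately 4.3 N/m
```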

  11. Utilization of an electronic portal imaging device for measurement of dynamic wedge data

    International Nuclear Information System (INIS)

    Elder, Eric S.; Miner, Marc S.; Butker, Elizabeth K.; Sutton, Danny S.; Davis, Lawrence W.

    1996-01-01

    Purpose/Objective: Due to the motion of the collimator during dynamic wedge treatments, the conventional method of collecting comprehensive wedge data with a water tank and a scanning ionization chamber is obsolete. It is the objective of this work to demonstrate the use of an electronic portal imaging device (EPID) and software to accomplish this task. Materials and Methods: A Varian Clinac® 2300 C/D, equipped with a PortalVision™ EPID and Dosimetry Research Mode experimental software, was used to produce the radiation field. The Dosimetry Research Mode experimental software allows a band of 10 of the 256 high-voltage electrodes to be continuously read and averaged by the 256 electrometer electrodes. The file that is produced contains data relating to the integrated ionization at each of the 256 points, essentially the cross-plane beam profile. Software was developed using Microsoft C++ to reformat the data for import into a Microsoft Excel spreadsheet, allowing easy mathematical manipulation and graphical display. Beam profiles were measured by the EPID at a 100 cm TSD for various field sizes. Each field size was measured open, steel wedged, and dynamically wedged. Scanning ionization chamber measurements performed in a water tank were compared to the open and steel-wedged fields. Ionization chamber measurements taken in a water tank were compared with the dynamically wedged measurements. For the EPID measurements the depth was varied using Gammex RMI Solid Water™ placed directly above the EPID sensitive volume. Bolus material was placed between the Solid Water™ and the EPID to avoid an air gap. Results: Comparison of EPID measurements with those from an ion chamber in a water tank showed a discrepancy of ∼5%. Scans were successfully obtained for open, steel wedged and dynamically wedged beams. Software has been developed to allow for easy graphical display of beam profiles. Conclusions: Measurement of dynamic wedge data proves to be easily

  12. A laboratory-calibrated model of coho salmon growth with utility for ecological analyses

    Science.gov (United States)

    Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.

    2018-01-01

    We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
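The temperature response at the core of such models is the Ratkowsky form, in which the square root of the growth rate rises linearly above the lower thermal limit and collapses near the upper one. A sketch with hypothetical parameters, not the paper's fitted values:

```python
import math

# Ratkowsky temperature response:
# sqrt(rate) = b * (T - Tmin) * (1 - exp(c * (T - Tmax))),
# zero at the thermal limits. Parameters are hypothetical.

def ratkowsky_rate(t_c: float, b: float = 0.02, c: float = 0.3,
                   t_min: float = 1.0, t_max: float = 25.0) -> float:
    if t_c <= t_min or t_c >= t_max:
        return 0.0
    s = b * (t_c - t_min) * (1.0 - math.exp(c * (t_c - t_max)))
    return s * s  # specific growth rate

print(ratkowsky_rate(15.0) > ratkowsky_rate(5.0))  # True: faster mid-range
```

In the paper's ration-varying extension, several of these parameters are additionally scaled by food availability.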

  13. Modeling Interprovincial Cooperative Energy Saving in China: An Electricity Utilization Perspective

    Directory of Open Access Journals (Sweden)

    Lijun Zeng

    2018-01-01

    Full Text Available As the world faces great challenges from climate change and environmental pollution, China urgently requires energy saving, emission reduction, and carbon reduction programmes. However, the non-cooperative energy saving model (NCESM), the simple regulation mode that is China’s main model for energy saving, is not beneficial for optimization of energy and resource distribution, and cannot effectively motivate energy saving at the provincial level. Therefore, we propose an interprovincial cooperative energy saving model (CESM) from the perspective of electricity utilization, with the objective of maximizing the benefits from electricity utilization of the cooperation union while achieving the energy saving goals of the union as a whole. The CESM consists of two parts: (1) an optimization model that calculates the optimal quantities of electricity consumption for each participating province to meet the joint energy saving goal; and (2) a model that distributes the economic benefits of the cooperation among the provinces in the cooperation based on the Shapley value method. We applied the CESM to the case of an interprovincial union of Shanghai, Sichuan, Shanxi, and Gansu in China. The results, based on data from 2001–2014, show that cooperation can significantly increase the benefits of electricity utilization for each province in the union. The total benefits of the union from utilization of electricity increased 38.38%, or 353.98 billion CNY, while the benefits to Shanghai, Sichuan, Shanxi, and Gansu were 200.28, 58.37, 57.11, and 38.22 billion CNY respectively greater under the CESM than the NCESM. The implementation of the CESM provides the provincial governments not only a flexible and incentivizing way to achieve short-term goals, but also a feasible and effective path to realize long-term energy saving strategies. To test the impact of different parameter values on the results of the CESM, a sensitivity analysis was conducted. Some policy
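The benefit-distribution step of the CESM uses the Shapley value: each province receives its average marginal contribution over all orders in which the coalition could form. A toy three-player sketch with made-up coalition values (the paper's union has four provinces and benefits in billions of CNY):

```python
import math
from itertools import permutations

# Shapley value: average marginal contribution over all join orders.
# The coalition values v are invented for illustration.

def shapley(players, v):
    shares = dict.fromkeys(players, 0.0)
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            grown = coalition | {p}
            shares[p] += v[grown] - v[coalition]
            coalition = grown
    fact = math.factorial(len(players))
    return {p: s / fact for p, s in shares.items()}

v = {frozenset(): 0, frozenset({"A"}): 10, frozenset({"B"}): 20,
     frozenset({"C"}): 30, frozenset({"A", "B"}): 40,
     frozenset({"A", "C"}): 50, frozenset({"B", "C"}): 60,
     frozenset({"A", "B", "C"}): 90}
phi = shapley(["A", "B", "C"], v)
print(phi)  # shares are efficient: they sum to v(grand coalition) = 90
```

Efficiency (the shares exhaust the grand-coalition value) is what makes the split usable for distributing the union's total benefit.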

  14. Utility of the Canadian Occupational Performance Measure as an admission and outcome measure in interdisciplinary community-based geriatric rehabilitation

    DEFF Research Database (Denmark)

    Larsen, Anette Enemark; Carlsson, Gunilla

    2012-01-01

    a questionnaire evaluating their experiences, showing that they found development in knowledge and community between the professions to benefit both therapists and citizens, and gained a better insight into their clients’ everyday lives through the COPM. In conclusion, the COPM may be useful as an admission...... and outcome measurement for the rehabilitation of elderly citizens; however, aspects of education and administration must be considered before the instrument can be successfully administered in an interdisciplinary geriatric rehabilitation context.assessment, client-centred practice, COPM, general practice...

  15. Exposure to electromagnetic fields from smart utility meters in GB; part I) laboratory measurements.

    Science.gov (United States)

    Peyman, Azadeh; Addison, Darren; Mee, Terry; Goiceanu, Cristian; Maslanyj, Myron; Mann, Simon

    2017-05-01

    Laboratory measurements of electric fields have been carried out around examples of smart meter devices used in Great Britain. The aim was to quantify exposure of people to radiofrequency signals emitted from smart meter devices operating at 2.4 GHz, and then to compare this with international (ICNIRP) health-related guidelines and with exposures from other telecommunication sources such as mobile phones and Wi-Fi devices. The angular distribution of the electric fields from a sample of 39 smart meter devices was measured in a controlled laboratory environment. The angular direction where the power density was greatest was identified and the equivalent isotropically radiated power was determined in the same direction. Finally, measurements were carried out as a function of distance at the angles where maximum field strengths were recorded around each device. The maximum equivalent power density measured during transmission around smart meter devices at 0.5 m and beyond was 15 mW m⁻², with an estimated maximum duty factor of only 1%. One outlier device had a maximum power density of 91 mW m⁻². All power density measurements reported in this study were well below the 10 W m⁻² ICNIRP reference level for the general public. Bioelectromagnetics. 2017;38:280-294. © 2017 Crown copyright. BIOELECTROMAGNETICS © 2017 Wiley Periodicals, Inc.

  16. Radiation damage studies on natural and synthetic rock salt utilizing measurements made during electron irradiation

    International Nuclear Information System (INIS)

    Swyler, K.J.; Levy, P.W.

    1977-01-01

    The numerous radiation damage effects which will occur in the rock salt surrounding radioactive waste disposal canisters are being investigated with unique apparatus for making optical and other measurements during 1 to 3 MeV electron irradiation. This equipment consists of a computer-controlled double-beam spectrophotometer which simultaneously records 256-point absorption and radioluminescence spectra, in either the 200 to 400 or 400 to 800 nm region, every 40 seconds. Most often the measurements commence as the irradiation is started and continue after it is terminated. This procedure provides information on the kinetics and other details of the damage formation process and, when the irradiation is terminated, on both the transient and stable damage components. The exposure rates may be varied between 10² or 10³ to more than 10⁸ rad per hour and the sample temperature maintained between 25 and 800 or 900 °C. Although this project was started recently, measurements have been made on synthetic NaCl and on natural rock salt from two disposal sites and two mines. Both unstrained and purposely strained samples have been used. Most recently, measurements at temperatures between 25 and 200 °C have been started. The few measurements completed to date indicate that the damage formation kinetics in natural rock salt are quite different from those observed in synthetic NaCl

  17. Analytic model utilizing the complex ABCD method for range dependency of a monostatic coherent lidar

    DEFF Research Database (Denmark)

    Olesen, Anders Sig; Pedersen, Anders Tegtmeier; Hanson, Steen Grüner

    2014-01-01

    In this work, we present an analytic model for analyzing the range and frequency dependency of a monostatic coherent lidar measuring velocities of a diffuse target. The model of the signal power spectrum includes both the contribution from the optical system as well as the contribution from the t...

  18. Bayesian statistical models to estimate EQ-5D utility scores from EORTC QLQ data in myeloma.

    Science.gov (United States)

    Kharroubi, Samer A; Edlin, Richard; Meads, David; McCabe, Christopher

    2018-02-20

    It is well documented that the modelling of health-related quality of life data is difficult, as the distribution of such data is often strongly right/left skewed and includes a significant percentage of observations at one. The objective of this study is to develop a series of two-part models (TPMs) that deal with these issues. Data from the UK Medical Research Council Myeloma IX trial were used to examine the relationship between the European Organization for Research and Treatment of Cancer (EORTC) QLQ-C30/QLQ-MY20 scores and the European QoL-5 Dimensions (EQ-5D) utility score. Four different TPMs were developed. The models fitted included TPM with normal regression, TPM with normal regression with variance a function of participant characteristics, TPM with log-transformed data, and TPM with gamma regression and a log link. The cohort of 1839 patients was divided into a 75% derivation sample, used to fit the different models, and a 25% validation sample, used to assess the predictive ability of these models by comparing predicted and observed mean EQ-5D scores in the validation set, unadjusted R², and root mean square error. Predictive performance in the derivation dataset depended on the criterion used, with R²/adjusted R² favouring the TPM with normal regression and mean predicted error favouring the TPM with gamma regression. The TPM with gamma regression performs best within the validation dataset under all criteria. TPM regression models provide flexible approaches to estimate mean EQ-5D utility weights from the EORTC QLQ-C30/QLQ-MY20 for use in economic evaluation. Copyright © 2018 John Wiley & Sons, Ltd.
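A two-part model of this kind combines a probability model for the ceiling observations at one with a regression for the remaining utilities; the expected utility is the probability-weighted mixture. A schematic sketch with invented coefficients and a single summary score q standing in for the EORTC items:

```python
import math

# Schematic two-part mapping from a 0-100 quality-of-life score q to an
# expected EQ-5D utility: a logistic part for P(utility == 1) and a gamma
# regression (log link) for the decrement 1 - utility among the rest.
# All coefficients are invented for illustration.

def p_full_health(q: float) -> float:
    return 1.0 / (1.0 + math.exp(-(-4.0 + 0.05 * q)))

def expected_decrement(q: float) -> float:
    return math.exp(-0.5 - 0.02 * q)  # gamma mean through the log link

def expected_eq5d(q: float) -> float:
    p = p_full_health(q)
    return p * 1.0 + (1.0 - p) * (1.0 - expected_decrement(q))

print(expected_eq5d(80) > expected_eq5d(30))  # better QLQ score, higher utility
```

Splitting the likelihood this way is what lets the model respect both the mass at one and the skewness of the remaining utilities.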

  19. Model-based cartilage thickness measurement in the submillimeter range

    NARCIS (Netherlands)

    Streekstra, G. J.; Strackee, S. D.; Maas, M.; ter Wee, R.; Venema, H. W.

    2007-01-01

    Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose

  20. Model Predictive Control for Integrating Traffic Control Measures

    NARCIS (Netherlands)

    Hegyi, A.

    2004-01-01

    Dynamic traffic control measures, such as ramp metering and dynamic speed limits, can be used to better utilize the available road capacity. Due to the increasing traffic volumes and the increasing number of traffic jams the interaction between the control measures has increased such that local

  1. Utilization of a pressure sensor guidewire to measure bileaflet mechanical valve gradients: hemodynamic and echocardiographic sequelae.

    Science.gov (United States)

    Doorey, Andrew J; Gakhal, Mandip; Pasquale, Michael J

    2006-04-01

    Suspected prosthetic valve dysfunction is a difficult clinical problem, because of the high risk of repeat valvular surgery. Echocardiographic measurements of prosthetic valvular dysfunction can be misleading, especially with bileaflet valves. Direct measurement of trans-valvular gradients is problematic because of potentially serious catheter entrapment issues. We report a case in which a high-fidelity pressure sensor angioplasty guidewire was used to cross prosthetic mitral and aortic valves in a patient, with hemodynamic and echocardiographic assessment. This technique was safe and effective, refuting the inaccurate non-invasive tests that over-estimated the aortic valvular gradient.

  2. Prioritization of EGFR/IGF-IR/VEGFR2 combination targeted therapies utilizing cancer models.

    Science.gov (United States)

    Tonra, James R; Corcoran, Erik; Deevi, Dhanvanthri S; Steiner, Philipp; Kearney, Jessica; Li, Huiling; Ludwig, Dale L; Zhu, Zhenping; Witte, Larry; Surguladze, David; Hicklin, Daniel J

    2009-06-01

    Rational strategies utilizing anticancer efficacy and biological principles are needed for the prioritization of specific combination targeted therapy approaches for clinical development, from among the many with experimental support. Antibodies targeting epidermal growth factor receptor (EGFR) (cetuximab), insulin-like growth factor-1 receptor (IGF-IR) (IMC-A12) or vascular endothelial growth factor receptor 2 (VEGFR2) (DC101), were dosed alone or in combination, in 11 human tumor xenograft models established in mice. Efficacy readouts included the tumor burden and incidence of metastasis, as well as tumor active hypoxia inducible factor-1 (HIF-1), human VEGF and blood vessel density. Cetuximab and DC101 contributed potent and non-overlapping benefits to the combination approach. Moreover, DC101 prevented escape from IMC-A12 + cetuximab in a colorectal cancer model and cetuximab prevented escape from DC101 therapy in a pancreatic cancer model. Targeting VEGFR2 + EGFR was prioritized over other treatment strategies utilizing EGFR, IGF-IR and VEGFR2 antibodies. The criteria that proved to be valuable were a non-overlapping spectrum of anticancer activity and the prevention of resistance to another therapy in the combination.

  3. Stability of Teacher Value-Added Rankings across Measurement Model and Scaling Conditions

    Science.gov (United States)

    Hawley, Leslie R.; Bovaird, James A.; Wu, ChaoRong

    2017-01-01

    Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One issue includes the sensitivity of model results across different outcome measures. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the…

  4. Toward Intelligent Assessment: An Integration of Constructed Response Testing, Artificial Intelligence, and Model-Based Measurement.

    Science.gov (United States)

    Bennett, Randy Elliot

    A new assessment conception is described that integrates constructed-response testing, artificial intelligence, and model-based measurement. The conception incorporates complex constructed-response items for their potential to increase the validity, instructional utility, and credibility of standardized tests. Artificial intelligence methods are…

  5. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    Science.gov (United States)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short (global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context along with assessment in greater detail once an educated set of parameter choices is selected. Limitations on using short-term simulations for tuning climate models are also discussed.

  6. On the use of prior information in modelling metabolic utilization of energy in growing pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Jørgensen, Henry; Fernández, José Adalberto

    2011-01-01

    Construction of models that provide a realistic representation of metabolic utilization of energy in growing animals tend to be over-parameterized because data generated from individual metabolic studies are often sparse. In the Bayesian framework prior information can enter the data analysis......) curves, resulting from a metabolism study on growing pigs of high genetic potential. A total of 17 crossbred pigs of three genders (barrows, boars and gilts) were used. Pigs were fed four diets based on barley, wheat and soybean meal supplemented with crystalline amino acids to meet Danish nutrient...

  7. High magnetic field measurement utilizing Faraday rotation in SF11 glass in simplified diagnostics.

    Science.gov (United States)

    Dey, Premananda; Shukla, Rohit; Venkateswarlu, D

    2017-04-01

    With the commercialization of powerful solid-state lasers as pointer lasers, the launch and free-space reception of polarized light for polarimetric applications is becoming simpler. Additionally, because of the high power of such laser diodes, aligning the received light on the small sensor area of a photodiode with a high-bandwidth response is also greatly simplified. A plastic sheet polarizer taken from commercially available 3D-television glasses is simply employed as an analyzer in front of the photoreceiver. SF11 glass is used as the magneto-optic modulating medium for the measurement of the magnetic field. A magnetic field of magnitude more than 8 Tesla, generated by a solenoid, has been measured using this simple assembly. A Verdet constant of 12.46 rad/T-m is obtained at a wavelength of 672 nm for the SF11 glass. The complete measurement system is a cost-effective solution.
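Recovering the field from the measured polarization rotation uses the Faraday relation θ = V·B·L. A sketch using the reported Verdet constant; the path length and rotation angle below are illustrative assumptions, not values from the abstract:

```python
# Faraday rotation: theta = V * B * L, so B = theta / (V * L).
# V is the value reported for SF11 at 672 nm; length and angle are assumed.

VERDET_SF11 = 12.46  # rad T^-1 m^-1 at 672 nm
LENGTH_M = 10e-3     # assumed magneto-optic path length

def field_from_rotation(theta_rad: float) -> float:
    return theta_rad / (VERDET_SF11 * LENGTH_M)

b_tesla = field_from_rotation(1.0)  # 1 rad, about 57 degrees of rotation
print(b_tesla)  # of order 8 T, comparable to the solenoid field measured
```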

  8. A composite score for a measuring instrument utilizing re-scaled ...

    African Journals Online (AJOL)

    A methodology is proposed to develop a measuring instrument (metric) for evaluating subjects from a population that cannot provide data to facilitate the development of such a metric (e.g. preterm infants in the neonatal intensive care unit). Central to this methodology is the employment of an expert group that decides on ...

  9. In situ laser measurement of oxygen concentration and flue gas temperature utilizing chemical reaction kinetics.

    Science.gov (United States)

    Viljanen, J; Sorvajärvi, T; Toivonen, J

    2017-12-01

    Combustion research requires detailed localized information on the dynamic combustion conditions to improve the accuracy of the simulations and, hence, improve the performance of the combustion processes. We have applied chemical reaction kinetics of potassium to measure the local temperature and O₂ concentration in flue gas. An excess of free atomic potassium is created in the measurement volume by a photofragmenting precursor molecule such as potassium chloride or KOH, which are widely released from solid fuels. The decay of the induced potassium concentration is followed with an absorption measurement using a narrow-linewidth diode laser. The temperature and O₂ concentration are solved from features of the decay curve using equations obtained from calibration measurements in a temperature range of 800 °C-1000 °C and at O₂ concentrations of 0.1%-21%. The local flue gas temperature and O₂ concentration were recorded in real time during the devolatilization, char burning, and ash cooking phases of combustion in a single-particle reactor at a 5 Hz repetition rate. The method can be further extended to other target species and applications where the chemical dynamics can be disturbed with photofragmentation.

  10. UTILIZING THE PAKS METHOD FOR MEASURING ACROLEIN AND OTHER ALDEHYDES IN DEARS

    Science.gov (United States)

    Acrolein is a hazardous air pollutant of high priority due to its high irritation potency and other potential adverse health effects. However, a reliable method is currently unavailable for measuring airborne acrolein at typical environmental levels. In the Detroit Exposure and A...

  11. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  12. Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies

    Directory of Open Access Journals (Sweden)

    José Pinto Casquilho

    2017-02-01

    Full Text Available The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning. It can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered a potential safeguard to the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies that respectively incorporate rarity factors associated with the Gini-Simpson and Shannon measures. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions relative to the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using Shannon weighted entropy and a square root utility function, corresponding to a risk-averse behavior associated with the precautionary principle of safeguarding landscape diversity, anchoring the provision of ecosystem services and other externalities. Further developments are suggested, mainly relative to the hypothesis that the decision models outlined here could be used to revisit the stability-complexity debate in the field of ecological studies.

  13. Measurement and modelling in anthropo-radiometry

    International Nuclear Information System (INIS)

    Carlan, Loic de

    2011-01-01

    In this HDR (Accreditation to supervise researches) report, the author gives an overview of his research activities, gives a summary of his research thesis (feasibility study of an actinide measurement system in the case of lungs), and proposes a research report on the different aspects of anthropo-radiometric measurement: context (principles, significance, sampling phantoms), development of digital phantoms (software presentation and validation), interface development and validation, application to actinide measurement in lung, taking biokinetic data into account for anthropo-radiometric measurement

  14. Utility of telomere length measurements for age determination of humpback whales

    Directory of Open Access Journals (Sweden)

    Morten Tange Olsen

    2014-12-01

    Full Text Available This study examines the applicability of telomere length measurements by quantitative PCR as a tool for minimally invasive age determination of free-ranging cetaceans. We analysed telomere length in skin samples from 28 North Atlantic humpback whales (Megaptera novaeangliae, ranging from 0 to 26 years of age. The results suggested a significant correlation between telomere length and age in humpback whales. However, telomere length was highly variable among individuals of similar age, suggesting that telomere length measured by quantitative PCR is an imprecise determinant of age in humpback whales. The observed variation in individual telomere length was found to be a function of both experimental and biological variability, with the latter perhaps reflecting patterns of inheritance, resource allocation trade-offs, and stochasticity of the marine environment.
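Telomere length measured by quantitative PCR is conventionally expressed as a relative T/S ratio via the delta-delta-Ct method. A generic sketch with invented Ct values; this assumes 100% amplification efficiency and is not the authors' exact qPCR pipeline:

```python
def relative_ts_ratio(ct_telomere, ct_single_copy,
                      ref_ct_telomere, ref_ct_single_copy):
    """Relative telomere length as a T/S ratio by the delta-delta-Ct
    method: T/S = 2 ** -(dCt_sample - dCt_reference), where each
    dCt = Ct(telomere assay) - Ct(single-copy gene assay).
    Assumes 100% amplification efficiency in both assays."""
    delta_sample = ct_telomere - ct_single_copy
    delta_ref = ref_ct_telomere - ref_ct_single_copy
    return 2.0 ** -(delta_sample - delta_ref)

# Hypothetical Ct values for one whale skin sample vs. a reference sample.
print(round(relative_ts_ratio(14.0, 20.0, 15.0, 20.0), 3))
```

The biological and experimental variability the abstract describes enters precisely through scatter in these Ct values, which the exponentiation amplifies.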

  15. Radon and radon daughter measurements and methods utilized by EPA's Eastern Environmental Radiation Facility

    International Nuclear Information System (INIS)

    Phillips, C.R.

    1977-01-01

    The Eastern Environmental Radiation Facility (EERF), Office of Radiation Programs, has the responsibility for conducting the Environmental Protection Agency's study of the radiological impact of the phosphate industry. Numerous measurements in structures constructed on land reclaimed from phosphate mining showed that working levels in these structures range from 0.001 to 0.9 WL. Sampling is performed by drawing air through a 0.8 micrometer pore size, 25 mm diameter filter at a flow rate of 10 to 15 liters/minute for 5 to 20 minutes, depending on the daughter levels anticipated. The detection system consists of a ruggedized silicon surface barrier detector (450 mm² active area, 100 micrometer depletion depth) connected through an appropriate preamplifier-amplifier chain to a 1024-channel multichannel analyzer. Other measurement methods are also discussed

  16. Heat Loss Measurements in Buildings Utilizing a U-value Meter

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    Heating of buildings in Denmark accounts for approximately 40% of the entire national energy consumption. For this reason, a reduction of heat losses from building envelopes is of great importance in order to reach the Bologna CO2 emission reduction targets. Global energy efficiency can be obtained in two ordinary ways: one way is to improve the energy production and supply; the other is to reduce the consumption of energy for heating and cooling of buildings. There is a huge energy-saving potential in this area for reducing both the global climate problems and the economic challenges. To establish the best basis for upgrading the energy performance, it is important to measure the heat losses at different locations on a building facade. The author has invented a U-value meter, enabling measurements of heat transfer coefficients. The meter has been used...
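The quantity such a meter reports is the thermal transmittance, U = q / (T_in − T_out). A minimal steady-state sketch with hypothetical readings; the meter's internal sensing and calibration are not described in the abstract:

```python
def u_value(heat_flux_w_per_m2, t_inside_c, t_outside_c):
    """Thermal transmittance U = q / (T_in - T_out), in W/(m^2 K).
    A simplified steady-state relation: q is the measured heat flux
    density through the envelope element."""
    return heat_flux_w_per_m2 / (t_inside_c - t_outside_c)

# Hypothetical reading: 9 W/m^2 through a wall, 20 C inside, 2 C outside.
print(round(u_value(9.0, 20.0, 2.0), 2))  # 0.5 W/(m^2 K)
```

Repeating this at different facade locations, as the abstract suggests, pinpoints where envelope upgrades pay off most.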

  17. Patient-Reported Shoulder Outcome Measures Utilized in Breast Cancer Survivors: A Systematic Review

    Science.gov (United States)

    Harrington, Shana; Michener, Lori A; Kendig, Tiffany; Miale, Susan; George, Steven Z

    2014-01-01

    Objective 1) To identify patient-reported upper extremity outcome measures published in English and used in breast cancer research and 2) To examine construct validity and responsiveness of patient-reported upper extremity outcome measures used in breast cancer research. Data Sources PubMed, CINAHL and ProQuest MEDLINE® databases were searched up to February 5, 2013. Study Selection Studies were included if a patient-reported upper extremity outcome measure was administered, the participants were diagnosed with breast cancer, and the study was published in English. Data Extraction Eight hundred and sixty-five articles were screened. Fifty-nine full-text articles were assessed for eligibility. A total of 46 articles met the initial eligibility criteria for aim 1. Eleven of these articles reported means and standard deviations for the outcome scores and included a comparison-group analysis for aim 2. Data Synthesis Construct validity was evaluated by calculating effect sizes for known-group differences in 6 studies using the Disabilities of the Arm, Shoulder and Hand (DASH), Penn Shoulder Score, Shoulder Disability Questionnaire-Dutch, and 10 Questions by Wingate (Wingate). Responsiveness was analyzed by comparing a treatment and a control group and calculating the coefficient of responsiveness in 5 studies for the DASH and Wingate. Conclusions Eight different patient-reported upper extremity outcome measures have been reported in the peer-reviewed literature for women with breast cancer; some (n=3) were specifically developed for breast cancer survivors and others (n=5) were not. Based on the current evidence, we recommend administering the DASH to assess patient-reported upper extremity function in breast cancer survivors, because the DASH had the most consistently large effect sizes for construct validity and responsiveness. Future large studies are needed for more definitive recommendations. PMID:23932969

  18. The Utility of Personality Measures in the Admissions Process at the United States Naval Academy

    Science.gov (United States)

    2002-06-01

    ability to comply with the remaining second tier requirements to determine the strength of each applicant’s record. If their record is strong enough...when taken four at a time, form a matrix of sixteen possible personality types. The sixteen types are as follows: ISTJ, ISFJ, INFJ, INTJ, ISTP...detailed information on 16 primary personality traits. It emphasizes an individual’s strengths through measurement of such personality dimensions as

  19. Measurement and modeling of advanced coal conversion processes

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G. (Advanced Fuel Research, Inc., East Hartford, CT (United States)); Smoot, L.D.; Brewster, B.S. (Brigham Young Univ., Provo, UT (United States))

    1990-01-01

    The overall objective of this program is the development of predictive capability for the design, scale-up, simulation, control and feedstock evaluation in advanced coal conversion devices. This technology is important to reduce the technical and economic risks inherent in utilizing coal, a feedstock whose variable and often unexpected behavior presents a significant challenge. This program will merge significant advances made at Advanced Fuel Research, Inc. (AFR) in measuring and quantitatively describing the mechanisms in coal conversion behavior with technology being developed at Brigham Young University (BYU) in comprehensive computer codes for mechanistic modeling of entrained-bed gasification. Additional capabilities in predicting pollutant formation will be implemented, and the technology will be expanded to fixed-bed reactors. The foundation for describing coal-specific conversion behavior is AFR's Functional Group (FG) and Devolatilization, Vaporization, and Crosslinking (DVC) models, developed under previous and ongoing METC-sponsored programs. These models have demonstrated the capability to describe the time-dependent evolution of individual gas species, and the amount and characteristics of tar and char. The combined FG-DVC model will be integrated with BYU's comprehensive two-dimensional reactor model, PCGC-2, which is currently the most widely used reactor simulation for combustion or gasification. The program includes: (1) validation of the submodels by comparison with laboratory data obtained in this program, (2) extensive validation of the modified comprehensive code by comparison of predicted results with data from bench-scale and process-scale investigations of gasification, mild gasification and combustion of coal or coal-derived products in heat engines, and (3) development of well-documented, user-friendly software applicable to a "workstation" environment.

  20. Utilizing Electroencephalography Measurements for Comparison of Task-Specific Neural Efficiencies: Spatial Intelligence Tasks.

    Science.gov (United States)

    Call, Benjamin J; Goodridge, Wade; Villanueva, Idalis; Wan, Nicholas; Jordan, Kerry

    2016-08-09

    Spatial intelligence is often linked to success in engineering education and engineering professions. The use of electroencephalography enables comparative calculation of individuals' neural efficiency as they perform successive tasks requiring spatial ability to derive solutions. Neural efficiency here is defined as having less beta activation, and therefore expending fewer neural resources, to perform a task in comparison to other groups or other tasks. For inter-task comparisons of tasks with similar durations, these measurements may enable a comparison of task-type difficulty. For intra-participant and inter-participant comparisons, these measurements provide potential insight into the participant's level of spatial ability on different engineering problem-solving tasks. Performance on the selected tasks can be analyzed and correlated with beta activities. This work presents a detailed research protocol studying the neural efficiency of students engaged in solving typical spatial ability and Statics problems. Students completed problems specific to the Mental Cutting Test (MCT), the Purdue Spatial Visualization Test of Rotations (PSVT:R), and Statics. While engaged in solving these problems, participants' brain waves were measured with EEG, allowing data to be collected regarding alpha and beta brain wave activation and use. The work seeks to correlate performance on pure spatial tasks with spatially intensive engineering tasks to identify the pathways to successful performance in engineering and the resulting improvements in engineering education that may follow.

  1. Modeling and Analysis Compute Environments, Utilizing Virtualization Technology in the Climate and Earth Systems Science domain

    Science.gov (United States)

    Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.

    2010-12-01

    Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archivable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the cost of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead, costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.

  2. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudorange and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of the IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations
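The proportionality between TEC and the signal effects is, to first order, delay = 40.3·TEC/f² metres (a group delay on the code, an equal-magnitude advance on the carrier phase). A small sketch of that conversion, with 5 TECU as an illustrative value:

```python
def iono_delay_m(tec_el_per_m2, freq_hz):
    """First-order ionospheric group delay on a GNSS signal in metres:
    delay = 40.3 * TEC / f**2. The carrier phase is advanced by the
    same magnitude."""
    return 40.3 * tec_el_per_m2 / freq_hz ** 2

TECU = 1.0e16      # electrons/m^2 in one TEC unit
L1 = 1575.42e6     # GPS L1 carrier frequency, Hz
print(round(iono_delay_m(5 * TECU, L1), 3))
```

One TECU thus corresponds to roughly 16 cm of delay at L1, which is why map accuracies of a few TECUs matter for positioning.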

  3. On the Path to SunShot. Utility Regulatory and Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Miller, John [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sigrin, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Reiter, Emerson [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cory, Karlynn [National Renewable Energy Lab. (NREL), Golden, CO (United States); McLaren, Joyce [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Darghouth, Naim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    Net-energy metering (NEM) has helped drive the rapid growth of distributed PV (DPV) but has raised concerns about electricity cost shifts, utility financial losses, and inefficient resource allocation. These concerns have motivated real and proposed reforms to utility regulatory and business models. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy's SunShot Initiative. Most of the reforms to date address NEM concerns by reducing the benefits provided to DPV customers and thus constraining DPV deployment. Eliminating NEM nationwide, by compensating exports of PV electricity at wholesale rather than retail rates, could cut cumulative DPV deployment by 20% in 2050 compared with a continuation of current policies. This would slow the PV cost reductions that arise from larger scale and market certainty. It could also thwart achievement of the SunShot deployment goals even if the initiative's cost targets are achieved. This undesirable prospect is stimulating the development of alternative reform strategies that address concerns about distributed PV compensation without inordinately harming PV economics and growth. These alternatives fall into the categories of facilitating higher-value DPV deployment, broadening customer access to solar, and aligning utility profits and earnings with DPV. Specific strategies include utility ownership and financing of DPV, community solar, distribution network operators, services-driven utilities, performance-based incentives, enhanced utility system planning, pricing structures that incentivize high-value DPV configurations, and decoupling and other ratemaking reforms that reduce regulatory lag. These approaches represent near- and long-term solutions for preserving the legacy of the SunShot Initiative.

  4. Measures of metacognition on signal-detection theoretic models.

    Science.gov (United States)

    Barrett, Adam B; Dienes, Zoltan; Seth, Anil K

    2013-12-01

    Analyzing metacognition, specifically knowledge of accuracy of internal perceptual, memorial, or other knowledge states, is vital for many strands of psychology, including determining the accuracy of feelings of knowing and discriminating conscious from unconscious cognition. Quantifying metacognitive sensitivity is however more challenging than quantifying basic stimulus sensitivity. Under popular signal-detection theory (SDT) models for stimulus classification tasks, approaches based on Type II receiver-operating characteristic (ROC) curves or Type II d-prime risk confounding metacognition with response biases in either the Type I (classification) or Type II (metacognitive) tasks. A new approach introduces meta-d': The Type I d-prime that would have led to the observed Type II data had the subject used all the Type I information. Here, we (a) further establish the inconsistency of the Type II d-prime and ROC approaches with new explicit analyses of the standard SDT model and (b) analyze, for the first time, the behavior of meta-d' under nontrivial scenarios, such as when metacognitive judgments utilize enhanced or degraded versions of the Type I evidence. Analytically, meta-d' values typically reflect the underlying model well and are stable under changes in decision criteria; however, in relatively extreme cases, meta-d' can become unstable. We explore bias and variance of in-sample measurements of meta-d' and supply MATLAB code for estimation in general cases. Our results support meta-d' as a useful measure of metacognition and provide rigorous methodology for its application. Our recommendations are useful for any researchers interested in assessing metacognitive accuracy. PsycINFO Database Record (c) 2014 APA, all rights reserved.
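For context, the Type I sensitivity that meta-d' is benchmarked against is the standard equal-variance SDT measure d' = z(H) − z(F). A minimal sketch with invented hit and false-alarm rates; computing meta-d' itself requires fitting the full Type II model (as in the authors' MATLAB code) and is not attempted here:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Type I sensitivity under equal-variance signal-detection theory:
    d' = z(hit rate) - z(false-alarm rate), where z is the inverse
    standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical classification data: 80% hits, 20% false alarms.
print(round(d_prime(0.8, 0.2), 3))
```

Meta-d' is then the value of d' that would reproduce the observed Type II (confidence) data if the observer used all the Type I information; metacognitive efficiency is often summarized as the ratio meta-d'/d'.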

  5. The emperor’s new measurement model

    NARCIS (Netherlands)

    Zand Scholten, A.; Maris, G.; Borsboom, D.

    2011-01-01

    In this article the author discusses professor Stephen M. Humphry's critical attitude with respect to psychometric modeling. The author criticizes Humphry's model stating that the model is theoretically interesting but cannot be tested as it is not identified. The author also states that Humphry's

  6. A numerical model for ultrasonic measurements of swelling and mechanical properties of a swollen PVA hydrogel.

    Science.gov (United States)

    Lohakan, M; Jamnongkan, T; Pintavirooj, C; Kaewpirom, S; Boonsang, S

    2010-08-01

    This paper presents a numerical model for the evaluation of mechanical properties of a relatively thin hydrogel. The model utilizes a system identification method to evaluate the acoustical parameters from ultrasonic measurement data. The model involves the calculation of the forward model based on ultrasonic wave propagation incorporating the diffraction effect. Ultrasonic measurements of a hydrogel are also performed in a reflection mode. A nonlinear least squares (NLS) algorithm is employed to minimize the difference between the results from the model and the experimental data. The acoustical parameters associated with the model are effectively modified to achieve the minimum error. As a result, the parameters of PVA hydrogels, namely thickness, density, ultrasonic attenuation coefficient and dispersion velocity, are effectively determined. In order to validate the model, conventional density measurements of hydrogels were also performed. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  7. Measurement of the dynamic viscosity of hybrid engine oil -Cuo-MWCNT nanofluid, development of a practical viscosity correlation and utilizing the artificial neural network

    Science.gov (United States)

    Aghaei, Alireza; Khorasanizadeh, Hossein; Sheikhzadeh, Ghanbar Ali

    2018-01-01

    The main objectives of this study have been measurement of the dynamic viscosity of CuO-MWCNTs/SAE 5w-50 hybrid nanofluid, utilization of artificial neural networks (ANN) and development of a new viscosity model. The new nanofluid has been prepared by a two-stage procedure with volume fractions of 0.05, 0.1, 0.25, 0.5, 0.75 and 1%. Then, utilizing a Brookfield viscometer, its dynamic viscosity has been measured at temperatures of 5, 15, 25, 35, 45 and 55 °C. The experimental results demonstrate that the viscosity increases with increasing nanoparticle volume fraction and decreases with increasing temperature. Based on the experimental data, the maximum and minimum nanofluid viscosity enhancements, when the volume fraction increases from 0.05% to 1%, are 35.52% and 12.92% at constant temperatures of 55 and 15 °C, respectively. The higher viscosity of engine oil at higher temperatures is an advantage, so this result is important. The nanofluid viscosity magnitudes measured at various shear rates show that this hybrid nanofluid is Newtonian. An ANN model has been employed to predict the viscosity of the CuO-MWCNTs/SAE 5w-50 hybrid nanofluid, and the results showed that the ANN can estimate the viscosity efficiently and accurately. Eventually, for viscosity estimation, a new temperature- and volume-fraction-based third-degree polynomial empirical model has been developed. The comparison shows that this model is in good agreement with the experimental data.
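A third-degree polynomial correlation in temperature and volume fraction has the form μ(T, φ) = Σ_{i+j≤3} c_ij·T^i·φ^j. A sketch of evaluating such a correlation; the coefficient values below are entirely hypothetical, since the paper's fitted coefficients are not given in the abstract:

```python
def viscosity_correlation(temp_c, phi_pct, coeffs):
    """Evaluate a polynomial correlation
    mu(T, phi) = sum over (i, j) of c[i, j] * T**i * phi**j,
    with coeffs a dict mapping exponent pairs (i, j) to coefficients.
    Coefficient values here are illustrative only."""
    return sum(c * temp_c ** i * phi_pct ** j
               for (i, j), c in coeffs.items())

# Hypothetical third-degree coefficients (units: mPa*s, T in C, phi in %).
coeffs = {(0, 0): 250.0, (1, 0): -3.0, (0, 1): 40.0,
          (2, 0): 0.01, (1, 1): -0.2, (0, 2): 5.0, (0, 3): 2.0}
print(round(viscosity_correlation(25.0, 0.5, coeffs), 2))
```

The actual coefficients would be obtained by least-squares fitting to the measured viscosity data over the reported temperature and volume-fraction ranges.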

  8. Analytic model comparing the cost utility of TVT versus duloxetine in women with urinary stress incontinence.

    Science.gov (United States)

    Jacklin, Paul; Duckett, Jonathan; Renganathan, Arasee

    2010-08-01

    The purpose of this study was to assess the cost utility of duloxetine versus tension-free vaginal tape (TVT) as a second-line treatment for urinary stress incontinence. A Markov model was used to compare the cost utility based on a 2-year follow-up period. Quality-adjusted life year (QALY) estimation was performed by assuming a disutility rate of 0.05. Under base-case assumptions, although duloxetine was a cheaper option, TVT gave a considerably higher QALY gain. When a longer follow-up period was considered, TVT had an incremental cost-effectiveness ratio (ICER) of £7,710 ($12,651) at 10 years. If the QALY gain from cure was 0.09, then the ICERs for duloxetine and TVT would both fall within the indicative National Institute for Health and Clinical Excellence willingness-to-pay threshold at 2 years, but TVT would be the cost-effective option, having extended dominance over duloxetine. This model suggests that TVT is a cost-effective treatment for stress incontinence.
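The headline statistic here is the incremental cost-effectiveness ratio, ICER = (C₁ − C₀)/(Q₁ − Q₀). A minimal sketch of that arithmetic; the costs, cure probabilities, and horizon below are invented, and only the 0.05 disutility assumption comes from the abstract:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio:
    ICER = (C1 - C0) / (Q1 - Q0), in cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def qaly_gain(cure_prob, disutility, years):
    """QALYs gained if cure removes a fixed annual disutility."""
    return cure_prob * disutility * years

# Hypothetical comparison over 10 years: a surgical option with an 80%
# cure rate vs. a drug option with 30%, 0.05 disutility of incontinence.
dq = qaly_gain(0.8, 0.05, 10) - qaly_gain(0.3, 0.05, 10)
print(round(icer(4000.0, 2000.0, dq, 0.0), 0))
```

The full Markov model additionally discounts costs and QALYs and tracks transitions between health states each cycle; the sketch above only shows how an ICER is formed from the aggregated outputs.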

  9. Modeling and design of light powered biomimicry micropump utilizing transporter proteins

    Science.gov (United States)

    Liu, Jin; Sze, Tsun-Kay Jackie; Dutta, Prashanta

    2014-11-01

    The creation of compact micropumps to provide steady flow has been an ongoing challenge in the field of microfluidics. We present a mathematical model for a micropump utilizing bacteriorhodopsin and sugar transporter proteins. This micropump utilizes transporter proteins as a method to drive fluid flow by converting light energy into chemical potential. The fluid flow through a microchannel is simulated using the Nernst-Planck, Navier-Stokes, and continuity equations. Numerical results show that the micropump is capable of generating usable pressure. Design parameters influencing the performance of the micropump are investigated, including membrane fraction, lipid proton permeability, illumination, and channel height. The results show that there is a substantial membrane-fraction region at which fluid flow is maximized. The use of lipids with low membrane proton permeability allows illumination to be used as a method to turn the pump on and off. This capability allows the micropump to be activated and shut off remotely without bulky support equipment. This modeling work provides new insights on mechanisms potentially useful for fluidic pumping in self-sustained biomimetic microfluidic pumps. This work is supported in part by the National Science Foundation Grant CBET-1250107.

  10. Utilization of building information modeling in infrastructure’s design and construction

    Science.gov (United States)

    Zak, Josef; Macadam, Helen

    2017-09-01

    Building Information Modeling (BIM) is a concept that has gained its place in the design, construction and maintenance of buildings in the Czech Republic during recent years. This paper describes the usage, applications, and potential benefits and disadvantages connected with the implementation of BIM principles in the preparation and construction of infrastructure projects. Part of the paper describes the status of BIM implementation in the Czech Republic, and there is a review of several virtual design and construction practices in the Czech Republic. Examples of best practice are presented from current infrastructure projects. The paper further summarizes experiences with new technologies gained from the application of BIM-related workflows. The focus is on BIM model utilization for machine control systems on site, quality assurance, quality management and construction management.

  11. Grid-connection of large offshore windfarms utilizing VSC-HVDC: Modeling and grid impact

    DEFF Research Database (Denmark)

    Xue, Yijing; Akhmatov, Vladislav

    2009-01-01

    Utilization of Voltage Source Converter (VSC) – High Voltage Direct Current (HVDC) systems for grid-connection of large offshore windfarms becomes relevant as installed power capacities as well as distances to the connection points of the on-land transmission systems increase. At the same time...... for grid-connection of large offshore windfarms. The VSC-HVDC model is implemented using a general approach of independent control of active and reactive power in normal operation situations. The on-land VSC inverter, which is also called a grid-side inverter, provides voltage support to the transmission...... system and comprises a LVFRT solution in short-circuit faults. The presented model, LVFRT solution and impact on the system stability are investigated as a case study of a 1,000 MW offshore windfarm grid-connected through four parallel VSC-HVDC systems each with a 280 MVA power rating. The investigation...

  12. Performance measurement of three production lines installed in advanced manufacturing environments utilizing the SAPROV model

    Directory of Open Access Journals (Sweden)

    Cosmo Severiano Filho

    1998-12-01

    Full Text Available Performance evaluation is a matter of significant interest for organizations that have implemented new production and management technologies. This article evaluates the performance, in terms of productivity, flexibility and quality, of three production lines installed in advanced manufacturing environments, utilizing the methodology of the SAPROV system (Vectorial Productivity Evaluation System). The research carried out to test and validate the SAPROV methodology processed the evaluation through four basic procedures: (1) determination of the manufacturing criteria of value; (2) determination of the referential performance standards for each criterion of value; (3) an audit to evaluate the performance of the criteria of value adopted in manufacturing; and (4) application of the indicators for technical and economic evaluation of manufacturing performance. The study showed that the correlation between productivity, flexibility and quality is not directly proportional to the investments made in these functional areas of the factory. In this sense, the article analyzes the evaluation procedure carried out, discussing the results obtained for each of the lines, so as to confirm the validity of the SAPROV model.

  13. Mapping to Estimate Health-State Utility from Non-Preference-Based Outcome Measures: An ISPOR Good Practices for Outcomes Research Task Force Report.

    Science.gov (United States)

    Wailoo, Allan J; Hernandez-Alava, Monica; Manca, Andrea; Mejia, Aurelio; Ray, Joshua; Crawford, Bruce; Botteman, Marc; Busschbach, Jan

    2017-01-01

    Economic evaluation conducted in terms of cost per quality-adjusted life-year (QALY) provides information that decision makers find useful in many parts of the world. Ideally, clinical studies designed to assess the effectiveness of health technologies would include outcome measures that are directly linked to health utility to calculate QALYs. Often this does not happen, and even when it does, clinical studies may be insufficient for a cost-utility assessment. Mapping can solve this problem. It uses an additional data set to estimate the relationship between outcomes measured in clinical studies and health utility. This bridges the evidence gap between available evidence on the effect of a health technology in one metric and the requirement for decision makers to express it in a different one (QALYs). In 2014, ISPOR established a Good Practices for Outcome Research Task Force for mapping studies. This task force report provides recommendations to analysts undertaking mapping studies, those that use the results in cost-utility analysis, and those that need to critically review such studies. The recommendations cover all areas of mapping practice: the selection of data sets for the mapping estimation, model selection and performance assessment, reporting standards, and the use of results including the appropriate reflection of variability and uncertainty. This report is unique because it takes an international perspective, is comprehensive in its coverage of the aspects of mapping practice, and reflects the current state of the art. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  14. Baseline comparison of three health utility measures and the feeling thermometer among participants in the action to control cardiovascular risk in diabetes trial

    Directory of Open Access Journals (Sweden)

    Raisch Dennis W

    2012-07-01

    Full Text Available Abstract Background Health utility (HU) measures are used as overall measures of quality of life and to determine quality-adjusted life years (QALYs) in economic analyses. We compared baseline values of three HUs, the Short Form 6 Dimensions (SF-6D) and the Health Utilities Index Mark II and Mark III (HUI2 and HUI3), and the feeling thermometer (FT) among type 2 diabetes participants in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial. We assessed relationships between HU and FT values and patient demographics and clinical variables. Methods ACCORD was a randomized clinical trial to test whether intensive control of glucose, blood pressure and lipids can reduce the risk of major cardiovascular disease (CVD) events in type 2 diabetes patients at high risk of CVD. The health-related quality of life (HRQOL) sub-study includes 2,053 randomly selected participants. Interclass correlations (ICCs) and agreement between measures by quartile were used to evaluate relationships between HUs and the FT. Multivariable regression models specified relationships between patient variables and each HU and the FT. Results The ICCs were 0.245 for FT/SF-6D, 0.313 for HUI3/SF-6D, 0.437 for HUI2/SF-6D, 0.338 for FT/HUI2, 0.337 for FT/HUI3 and 0.751 for HUI2/HUI3. Conclusions The agreements between the different HUs were poor except for the two HUI measures; therefore, HU values derived from different measures may not be comparable. The FT had low agreement with the HUs. The relationships between HUs and demographic and clinical measures demonstrate how severity of diabetes and other clinical and demographic factors are associated with HU and FT measures. Trial registration ClinicalTrials.gov Identifier: NCT00000620
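    An interclass correlation of the kind reported above is commonly computed as a two-way random-effects, single-measure ICC, i.e. ICC(2,1), from an n-subjects × k-measures matrix. A minimal sketch on synthetic data (not the ACCORD HRQOL data; the function name is ours):

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    Y is an (n subjects x k measures) array of scores.
    """
    n, k = Y.shape
    grand = Y.mean()
    subj_means = Y.mean(axis=1)
    meas_means = Y.mean(axis=0)
    ss_subj = k * ((subj_means - grand) ** 2).sum()   # between-subjects SS
    ss_meas = n * ((meas_means - grand) ** 2).sum()   # between-measures SS
    ss_tot = ((Y - grand) ** 2).sum()
    ss_err = ss_tot - ss_subj - ss_meas               # residual SS
    msr = ss_subj / (n - 1)
    msc = ss_meas / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# synthetic example: two measures sharing a common subject effect
rng = np.random.default_rng(1)
subj = rng.normal(0.0, 1.0, 100)
Y = np.column_stack([subj + rng.normal(0.0, 0.3, 100),
                     subj + rng.normal(0.0, 0.3, 100)])
icc = icc2_1(Y)
```

    Two identical columns give an ICC of exactly 1; independent columns give an ICC near 0, which is the sense in which the FT/SF-6D value of 0.245 above indicates poor agreement.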

  15. Measuring equity in utilization of emergency obstetric care at Wolisso Hospital in Oromiya, Ethiopia: a cross sectional study

    OpenAIRE

    Wilunda, Calistus; Putoto, Giovanni; Manenti, Fabio; Castiglioni, Maria; Azzimonti, Gaetano; Edessa, Wagari; Atzori, Andrea; Merialdi, Mario; Betrán, Ana Pilar; Vogel, Joshua; Criel, Bart

    2013-01-01

    Introduction Improving equity in access to services for the treatment of complications that arise during pregnancy and childbirth, namely Emergency Obstetric Care (EmOC), is fundamental if maternal and neonatal mortality are to be reduced. Consequently, there is a growing need to monitor equity in access to EmOC. The objective of this study was to develop a simple questionnaire to measure equity in utilization of EmOC at Wolisso Hospital, Ethiopia and compare the wealth status of EmOC users w...

  16. Utilization of synchronization measures in the fast gas-dynamic experiment

    Science.gov (United States)

    Gilev, V. M.; Vnuchkov, D. A.; Nalivajchenko, D. G.; Shpak, S. I.

    2017-10-01

    This paper presents a synchronization system for automated fast gas-dynamic experiments, including experiments with combustion. The system supports studying both supersonic and hypersonic processes in pulse mode, with durations ranging from several milliseconds to seconds. Individual elements of the system and the technique for performing fast experiments are considered, and the developed software is described. As an example, a weight test of a model of a small-size high-speed aircraft carried out with this system is presented.

  17. Lack of utility of measuring serum bilirubin concentration in distinguishing perforation status of pediatric appendicitis.

    Science.gov (United States)

    Bonadio, William; Bruno, Santina; Attaway, David; Dharmar, Logesh; Tam, Derek; Homel, Peter

    2017-06-01

    Pediatric appendicitis is a common, potentially serious condition. Determining perforation status is crucial to planning effective management. The objective was to determine the efficacy of serum total bilirubin concentration [STBC] in distinguishing perforation status in children with appendicitis. Retrospective review of 257 children with appendicitis who received an abdominal CT scan and measurement of STBC. There were 109 with perforation vs 148 without perforation. Although elevated STBC was significantly more common in those with [36%] vs without perforation [22%], the mean difference in elevated values between groups [0.1 mg/dL] was clinically insignificant. Higher degrees of hyperbilirubinemia [>2 mg/dL] were rarely encountered [5%]. Predictive values for elevated STBC in distinguishing perforation outcome were imprecise [sensitivity 38.5%, specificity 78.4%, PPV 56.8%, NPV 63.4%]. ROC curve analysis of multiple clinical and other laboratory factors for predicting perforation status was unenhanced by adding the STBC variable. Specific analysis of those with perforated appendicitis and a percutaneously drained intra-abdominal abscess that was culture-positive for Escherichia coli showed an identical rate of STBC elevation compared to all with perforation. Routine measurement of STBC does not accurately distinguish perforation status in children with appendicitis, nor discern the infecting organism in those with perforation and intra-abdominal abscess. Copyright © 2017 Elsevier Inc. All rights reserved.
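    The predictive values quoted above follow directly from the 2×2 diagnostic table. A sketch using counts back-calculated from the reported percentages (42 of 109 perforated with elevated STBC, 116 of 148 non-perforated without elevation; an assumption consistent with the rounded figures, not raw study data):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy metrics for a binary test."""
    return {
        "sensitivity": tp / (tp + fn),  # elevated STBC among perforated
        "specificity": tn / (tn + fp),  # normal STBC among non-perforated
        "ppv": tp / (tp + fp),          # perforated among elevated STBC
        "npv": tn / (tn + fn),          # non-perforated among normal STBC
    }

stats = diagnostic_stats(tp=42, fp=32, fn=67, tn=116)
```

    These counts reproduce the abstract's figures (sensitivity 38.5%, specificity 78.4%, PPV 56.8%, NPV 63.4%).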

  18. Development of a deep inspiration breath-hold system for radiotherapy utilizing a laser distance measurer.

    Science.gov (United States)

    Jensen, Christer Andre; Skottner, Nils; Frengen, Jomar; Lund, Jo-Åsmund

    2017-01-01

    Deep inspiration breath-hold (DIBH) is a technique for treating left-sided breast cancer (LSBC). In modern radiotherapy, one of the main aims is to exclude the heart from the beam aperture with an individualized beam design for LSBC. A deep inhalation raises the chest wall while the volume of the lungs increases, which in turn pushes the heart away from the breast to be treated. There are a few commercial DIBH systems, both invasive and noninvasive. We present an alternative noninvasive DIBH system based upon an industrial laser distance measurer. This system can be installed in a treatment room at low cost; it is very easy to use and requires a limited amount of training for the personnel and the patient. The system is capable of measuring the position of the chest wall with high frequency and precision in real time. The patient views their breathing curve through video glasses and receives instructions during the treatment session. The system is well tolerated by test subjects due to its noninvasiveness. © 2016 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  19. Utility of common bile duct measurement in emergency department point of care ultrasound: A prospective study.

    Science.gov (United States)

    Lahham, Shadi; Becker, Brent A; Gari, Abdulatif; Bunch, Steven; Alvarado, Maili; Anderson, Craig L; Viquez, Eric; Spann, Sophia C; Fox, John C

    2017-11-18

    Measurement of the common bile duct (CBD) is considered a fundamental component of biliary point-of-care ultrasound (POCUS), but can be technically challenging. The primary objective of this study was to determine whether CBD diameter contributes to the diagnosis of complicated biliary pathology in emergency department (ED) patients with normal laboratory values and no abnormal biliary POCUS findings aside from cholelithiasis. We performed a prospective, observational study of adult ED patients undergoing POCUS of the right upper quadrant (RUQ) and serum laboratory studies for suspected biliary pathology. The primary outcome was complicated biliary pathology occurring in the setting of normal laboratory values and a POCUS demonstrating the absence of gallbladder wall thickening (GWT), pericholecystic fluid (PCF) and sonographic Murphy's sign (SMS). The association between CBD dilation and complicated biliary pathology was assessed using logistic regression to control for other factors, including laboratory findings, cholelithiasis and other sonographic abnormalities. A total of 158 patients were included in the study. 76 (48.1%) received non-biliary diagnoses and 82 (51.9%) were diagnosed with biliary pathology. Complicated biliary pathology was diagnosed in 39 patients. Sensitivity of CBD dilation for complicated biliary pathology was 23.7% and specificity was 77.9%. Of patients diagnosed with biliary pathology, none had isolated CBD dilatation. In the absence of abnormal laboratory values and GWT, PCF or SMS on POCUS, obtaining a CBD measurement is unlikely to contribute to the evaluation of this patient population. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Smith-Purcell experiment utilizing a field-emitter array cathode measurements of radiation

    CERN Document Server

    Ishizuka, H; Yokoo, K; Shimawaki, H; Hosono, A

    2001-01-01

    Smith-Purcell (SP) radiation at wavelengths of 350-750 nm was produced in a tabletop experiment using a field-emitter array (FEA) cathode. The electron gun was 5 cm long, and a 25 mm×25 mm holographic replica grating was placed behind the slit provided in the anode. A regulated DC power supply accelerated electron beams in excess of 10 μA up to 45 keV, while a small Van de Graaff generator accelerated smaller currents to higher energies. The grating had a 0.556 μm period, 30° blaze and a 0.2 μm thick aluminum coating. Spectral characteristics of the radiation were measured both manually and automatically; in the latter case, the spectrometer was driven by a stepping motor to scan the wavelength, and AD-converted signals from a photomultiplier tube were processed by a personal computer. The measurement, made at 80° relative to the electron beam, showed good agreement with theoretical wavelengths of the SP radiation. Diffraction orders were -2 and -3 for beam energies higher than 45 keV, -3 to -5 ...

  1. Smith-Purcell experiment utilizing a field-emitter array cathode: measurements of radiation

    Science.gov (United States)

    Ishizuka, H.; Kawamura, Y.; Yokoo, K.; Shimawaki, H.; Hosono, A.

    2001-12-01

    Smith-Purcell (SP) radiation at wavelengths of 350-750 nm was produced in a tabletop experiment using a field-emitter array (FEA) cathode. The electron gun was 5 cm long, and a 25 mm×25 mm holographic replica grating was placed behind the slit provided in the anode. A regulated DC power supply accelerated electron beams in excess of 10 μA up to 45 keV, while a small Van de Graaff generator accelerated smaller currents to higher energies. The grating had a 0.556 μm period, 30° blaze and a 0.2 μm thick aluminum coating. Spectral characteristics of the radiation were measured both manually and automatically; in the latter case, the spectrometer was driven by a stepping motor to scan the wavelength, and AD-converted signals from a photomultiplier tube were processed by a personal computer. The measurement, made at 80° relative to the electron beam, showed good agreement with theoretical wavelengths of the SP radiation. Diffraction orders were -2 and -3 for beam energies higher than 45 keV, -3 to -5 at 15-25 keV, and -2 to -4 in between. The experiment has thus provided evidence for the practical applicability of FEAs to compact radiation sources.

  2. Smith-Purcell experiment utilizing a field-emitter array cathode: measurements of radiation

    International Nuclear Information System (INIS)

    Ishizuka, H.; Kawamura, Y.; Yokoo, K.; Shimawaki, H.; Hosono, A.

    2001-01-01

    Smith-Purcell (SP) radiation at wavelengths of 350-750 nm was produced in a tabletop experiment using a field-emitter array (FEA) cathode. The electron gun was 5 cm long, and a 25 mm×25 mm holographic replica grating was placed behind the slit provided in the anode. A regulated DC power supply accelerated electron beams in excess of 10 μA up to 45 keV, while a small Van de Graaff generator accelerated smaller currents to higher energies. The grating had a 0.556 μm period, 30° blaze and a 0.2 μm thick aluminum coating. Spectral characteristics of the radiation were measured both manually and automatically; in the latter case, the spectrometer was driven by a stepping motor to scan the wavelength, and AD-converted signals from a photomultiplier tube were processed by a personal computer. The measurement, made at 80° relative to the electron beam, showed good agreement with theoretical wavelengths of the SP radiation. Diffraction orders were -2 and -3 for beam energies higher than 45 keV, -3 to -5 at 15-25 keV, and -2 to -4 in between. The experiment has thus provided evidence for the practical applicability of FEAs to compact radiation sources
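    The "theoretical wavelengths" mentioned above follow from the standard Smith-Purcell dispersion relation, λ = (d/|n|)(1/β − cos θ). A minimal sketch using the parameters quoted in the abstract (0.556 μm grating period, observation at 80° to the beam); the function name is ours:

```python
import math

def sp_wavelength(order, period_um, beam_kev, theta_deg, me_kev=511.0):
    """Smith-Purcell wavelength: lambda = (d/|n|) * (1/beta - cos(theta)).

    order     -- diffraction order |n|
    period_um -- grating period d in micrometers
    beam_kev  -- electron kinetic energy in keV (me_kev: electron rest energy)
    theta_deg -- observation angle relative to the electron beam
    """
    gamma = 1.0 + beam_kev / me_kev           # Lorentz factor
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)  # v/c
    return (period_um / abs(order)) * (1.0 / beta
                                       - math.cos(math.radians(theta_deg)))
```

    For a 45 keV beam this puts orders |n| = 2 and 3 near 657 nm and 438 nm, both inside the 350-750 nm window reported above, consistent with the observation of orders -2 and -3 at that energy.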

  3. Cost/schedule performance measurement system utilized on the Fast Flux Test Facility project

    International Nuclear Information System (INIS)

    Brown, R.K.; Frost, R.A.; Zimmerman, F.M.

    1976-01-01

    An Earned Value-Integrated Cost/Schedule Performance Measurement System has been applied to a major nonmilitary nuclear design and construction project. This system is similar to the Department of Defense Cost/Schedule Performance Measurement System. The project is the Fast Flux Test Facility (a Fuels and Materials test reactor for the Liquid Metal Fast Breeder Reactor Program) being built at the Hanford Engineering Development Laboratory, Richland, Washington, by Westinghouse Hanford Company for the U. S. Energy Research and Development Administration. Because the project was well into the construction phase when the Earned Value System was being considered, it was decided that the principles of DOD's Cost/Schedule Control System Criteria would be applied to the extent possible, but no major changes in accounting practices or management systems were imposed. Implementation of this system enabled the following questions to be answered: For work performed, how do actual costs compare with the budget for that work? What is the impact of cost and schedule variances at an overall project level composed of different kinds of activities? Without the Earned Value system, these questions could be answered only in a qualitative, subjective manner at best
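    The first question above ("for work performed, how do actual costs compare with the budget for that work?") is exactly the earned-value cost variance. A sketch of the standard metrics using the classic planned value (BCWS), earned value (BCWP) and actual cost (ACWP) terminology; the figures are illustrative, not FFTF data:

```python
def earned_value_metrics(pv, ev, ac):
    """Standard earned-value metrics.

    pv -- planned value (BCWS), budgeted cost of work scheduled
    ev -- earned value (BCWP), budgeted cost of work performed
    ac -- actual cost (ACWP), actual cost of work performed
    """
    return {
        "cost_variance": ev - ac,      # negative -> over budget
        "schedule_variance": ev - pv,  # negative -> behind schedule
        "cpi": ev / ac,                # cost performance index
        "spi": ev / pv,                # schedule performance index
    }

# illustrative figures: $90k of work earned against $100k planned, $95k spent
m = earned_value_metrics(pv=100.0, ev=90.0, ac=95.0)
```

    A CPI or SPI below 1.0 flags the cost or schedule variances that the system described above surfaces at the overall project level.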

  4. Effects of atmospheric variability on energy utilization and conservation. [Space heating energy demand modeling; Program HEATLOAD

    Energy Technology Data Exchange (ETDEWEB)

    Reiter, E.R.; Johnson, G.R.; Somervell, W.L. Jr.; Sparling, E.W.; Dreiseitly, E.; Macdonald, B.C.; McGuirk, J.P.; Starr, A.M.

    1976-11-01

    Research conducted between 1 July 1975 and 31 October 1976 is reported. A "physical-adaptive" model of the space-conditioning demand for energy and its response to changes in weather regimes was developed. This model includes parameters pertaining to engineering factors of building construction, to weather-related factors, and to socio-economic factors. Preliminary testing of several components of the model on the city of Greeley, Colorado, yielded most encouraging results. Other components, especially those pertaining to socio-economic factors, are still under development. Expansion of model applications to different types of structures and larger regions is presently underway. A CRT-display model for energy demand within the conterminous United States also has passed preliminary tests. A major effort was expended to obtain disaggregated data on energy use from utility companies throughout the United States. The study of atmospheric variability revealed that the 22- to 26-day vacillation in the potential and kinetic energy modes of the Northern Hemisphere is related to the behavior of the planetary long-waves, and that the midwinter dip in zonal available potential energy is reflected in the development of blocking highs. Attempts to classify weather patterns over the eastern and central United States have proceeded satisfactorily to the point where testing of our method for longer time periods appears desirable.

  5. Improved utilization of ADAS-cog assessment data through item response theory based pharmacometric modeling.

    Science.gov (United States)

    Ueckert, Sebastian; Plan, Elodie L; Ito, Kaori; Karlsson, Mats O; Corrigan, Brian; Hooker, Andrew C

    2014-08-01

    This work investigates improved utilization of ADAS-cog data (the primary outcome in Alzheimer's disease (AD) trials of mild and moderate AD) by combining pharmacometric modeling and item response theory (IRT). A baseline IRT model characterizing the ADAS-cog was built based on data from 2,744 individuals. Pharmacometric methods were used to extend the baseline IRT model to describe longitudinal ADAS-cog scores from an 18-month clinical study with 322 patients. Sensitivity of the ADAS-cog items in different patient populations, as well as the power to detect a drug effect relative to total-score-based methods, were assessed with the IRT-based model. IRT analysis was able to describe both total and item-level baseline ADAS-cog data. Longitudinal data were also well described. Differences in the information content of the item-level components could be quantitatively characterized and ranked for mild cognitive impairment and mild AD populations. Based on clinical trial simulations with a theoretical drug effect, the IRT method demonstrated a significantly higher power to detect a drug effect compared to the traditional method of analysis. A combined framework of IRT and pharmacometric modeling permits a more effective and precise analysis than total-score-based methods and therefore increases the value of ADAS-cog data.
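    The item-level information ranking described above rests on standard IRT machinery. The study itself modeled polytomous ADAS-cog items; as a simplified illustration only, here is a dichotomous two-parameter logistic (2PL) item and its Fisher information, which is the quantity that lets items be ranked by where on the disability scale they are most informative:

```python
import math

def p_2pl(theta, a, b):
    """2PL IRT model: probability of endorsing an item, given latent
    trait theta, discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * p * (1 - p).
    Maximal at theta = b, so an item is most sensitive for patients
    whose disability level is near its difficulty."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)
```

    Summing item information across the scale gives the test information curve, which is how the differing sensitivity of ADAS-cog items in mild cognitive impairment versus mild AD populations can be quantified.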

  6. Brain in flames – animal models of psychosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Mattei D

    2015-05-01

    Full Text Available Daniele Mattei,¹ Regina Schweibold,¹,² Susanne A Wolf¹ (¹Department of Cellular Neuroscience, Max-Delbrueck-Center for Molecular Medicine, Berlin, Germany; ²Department of Neurosurgery, Helios Clinics, Berlin, Germany) Abstract: The neurodevelopmental hypothesis of schizophrenia posits that schizophrenia is a psychopathological condition resulting from aberrations in neurodevelopmental processes caused by a combination of environmental and genetic factors which proceed long before the onset of clinical symptoms. Many studies discuss an immunological component in the onset and progression of schizophrenia. We here review studies utilizing animal models of schizophrenia with manipulations of genetic, pharmacologic, and immunological origin. We focus on the immunological component to bridge the studies in terms of evaluation and treatment options of negative, positive, and cognitive symptoms. Throughout the review we link certain aspects of each model to the situation in human schizophrenic patients. In conclusion we suggest a combination of existing models to better represent the human situation. Moreover, we emphasize that animal models represent defined single or multiple symptoms or hallmarks of a given disease. Keywords: inflammation, schizophrenia, microglia, animal models

  7. On the utility of land surface models for agricultural drought monitoring

    Directory of Open Access Journals (Sweden)

    W. T. Crow

    2012-09-01

    Full Text Available The lagged rank cross-correlation between model-derived root-zone soil moisture estimates and remotely sensed vegetation indices (VI) is examined between January 2000 and December 2010 to quantify the skill of various soil moisture models for agricultural drought monitoring. Examined modeling strategies range from a simple antecedent precipitation index to the application of modern land surface models (LSMs) based on complex water and energy balance formulations. A quasi-global evaluation of lagged VI/soil moisture cross-correlation suggests that, when globally averaged across the entire annual cycle, soil moisture estimates obtained from complex LSMs provide little added skill (<5% in relative terms) in anticipating variations in vegetation condition relative to a simplified water accounting procedure based solely on observed precipitation. However, larger amounts of added skill (5-15% in relative terms) can be identified when focusing exclusively on the extra-tropical growing season and/or utilizing soil moisture values acquired by averaging across a multi-model ensemble.
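    The lagged rank cross-correlation at the heart of this evaluation can be sketched as a Spearman correlation between soil moisture at time t and VI at time t + lag. A minimal sketch on synthetic series (the 3-step lag and the series themselves are invented for illustration):

```python
import numpy as np

def _ranks(v):
    """Rank transform (no tie handling; adequate for continuous data)."""
    order = np.argsort(v)
    ranks = np.empty(len(v))
    ranks[order] = np.arange(len(v))
    return ranks

def lagged_spearman(x, y, lag):
    """Spearman rank correlation between x[t] and y[t + lag], lag >= 0."""
    if lag:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(_ranks(x), _ranks(y))[0, 1]

# synthetic example: VI responds (monotonically) to soil moisture 3 steps earlier
rng = np.random.default_rng(0)
sm = rng.normal(size=200)
vi = np.empty(200)
vi[3:] = sm[:-3] ** 3
vi[:3] = rng.normal(size=3)
r3 = lagged_spearman(sm, vi, 3)
```

    Because the rank correlation is invariant to monotone transformations, it captures the soil-moisture/VI coupling without assuming a linear response, which is why it suits a skill comparison across very different model formulations.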

  8. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Directory of Open Access Journals (Sweden)

    Seung-Hwan Yang

    2016-03-01

    Full Text Available If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined from the results of previous research and from experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by estimation error assessment and linear regression analysis with the dynamic coefficients held fixed. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE.

  9. 9-year distributions of rain intensities measured in Prague and their utilization in telecommunications

    Science.gov (United States)

    Kvicera, V.; Grabner, M.

    2012-04-01

    Experimental research in the Department of Frequency Engineering of the Czech Metrology Institute (CMI) in Prague, the Czech Republic, is focused on the stability of received signals on terrestrial radio and optical communication paths. Hydrometeors (rain, snow, fog, hail) can cause serious attenuation of electromagnetic waves in the frequency bands above 10 GHz, and the availability performance of terrestrial radio communication systems is seriously affected by heavy hydrometeor events. Rain intensity data is usually used for the calculation of attenuation due to rain on terrestrial radio links in accordance with either the relevant ITU-R Recommendation or other methods. Therefore, our experimental research is also focused on our own meteorological measurements in the vicinity of experimental radio and optical paths. A heated tipping-bucket rain gauge with a collector area of 500 cm² and a rain amount per tip of 0.1 mm is used at CMI for the measurement of hydrometeor intensities. The time of each tip is recorded with an uncertainty of 1 second. Hydrometeor intensity data obtained from January 2003 to December 2011 (9 years of observation) was statistically processed over the individual years. All recorded individual hydrometeor events were compared with the concurrent meteorological conditions and were carefully categorized according to the types of individual hydrometeors, i.e. rain, rain with snow, rain with hail, snow, fog, fog with rain, fog with snow, and fog with rain and snow. The obtained cumulative distributions (CDs) of intensities of individual hydrometeors over 9 years of observation will be presented and compared with the CD of intensities of all hydrometeors together. The rain amounts were examined too. The obtained rain amounts for individual years and the average rain amounts for individual months over the 9-year period will be given.
The obtained CD of average 1-minute rain intensities for the average year over the 9-year period of

  10. Dispersion modeling of accidental releases of toxic gases - Comparison of the models and their utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-04-01

    In the case of an accidental release of hazardous gases into the atmosphere, emergency responders need a reliable and fast tool to assess the possible consequences and apply the optimal countermeasures. For hazard prediction and simulation of the hazard zones a number of air dispersion models are available. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for displaying the results; they are easy to use and can operate fast and effectively during stress situations. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. There are also possibilities for directly coupling the models to automatic meteorological stations, in order to avoid uncertainties in the model output due to insufficient or incorrect meteorological data. Another key problem in coping with accidental toxic releases is the relatively wide spectrum of regulations and threshold values, like IDLH, ERPG, AEGL, MAK etc., and the different criteria for their application. Since the various emergency responders and organizations require unequal regulations and values for their purposes, it is quite difficult to predict the individual hazard areas. A number of research studies and investigations address this problem; in any case the final decision is up to the authorities.
The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and

  11. Assessing outcomes for cost-utility analysis in mental health interventions: mapping mental health specific outcome measure GHQ-12 onto EQ-5D-3L.

    Science.gov (United States)

    Lindkvist, Marie; Feldman, Inna

    2016-09-20

    Many intervention-based studies aiming to improve mental health do not include a multi-attribute utility instrument (MAUI) that produces quality-adjusted life-years (QALYs), which limits the applicability of health economic analyses. This study aims to develop a 'crosswalk' transformation algorithm between the General Health Questionnaire (GHQ-12), a measure of psychological distress, and the MAUI EuroQol EQ-5D-3L. The study is based on a survey questionnaire sent to a random sample in four counties in Sweden in 2012. The survey included the GHQ-12 and EQ-5D instruments, as well as a question about self-rated health. The EQ-5D index was calculated using the UK and the Swedish tariff values. Two OLS models were used to estimate the EQ-5D health state values using the GHQ-12 as exposure, based on the respondents (n = 17,101) of two counties. The algorithms were applied to the data from the two other counties (n = 15,447) to check the predictive capacity of the models. The final models included gender, age, self-rated health and GHQ-12 scores as a quantitative variable. The regression equations explained 40% (UK tariff) and 46% (Swedish tariff) of the variance. The models showed a satisfactory predictive capacity between the observed and the predicted EQ-5D index scores, with Pearson correlation = 0.65 and 0.69 for the UK and Swedish models, respectively. The algorithms developed in this study can be used to determine the cost-effectiveness of services or interventions that use the GHQ-12 as a primary outcome where utility measures are not collected.
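    The OLS 'crosswalk' described above can be sketched in a few lines: regress the EQ-5D index on GHQ-12 score plus covariates on an estimation sample, then apply the fitted coefficients to map new GHQ-12 responses to utilities. The data-generating coefficients below are invented for illustration and are not the published algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
ghq = rng.integers(0, 37, n).astype(float)     # GHQ-12 Likert score, 0-36
age = rng.integers(18, 85, n).astype(float)
female = rng.integers(0, 2, n).astype(float)

# hypothetical data-generating process standing in for survey responses:
# higher distress -> lower utility, plus age/gender effects and noise
eq5d = (0.95 - 0.012 * ghq - 0.001 * (age - 50.0) - 0.02 * female
        + rng.normal(0.0, 0.10, n))

X = np.column_stack([np.ones(n), ghq, age, female])  # design matrix
beta, *_ = np.linalg.lstsq(X, eq5d, rcond=None)      # OLS fit
pred = X @ beta                                      # mapped EQ-5D index
r = np.corrcoef(pred, eq5d)[0, 1]                    # predictive check
```

    In practice the estimation and validation samples should be distinct (as the two-county split above does), and the Pearson correlation between predicted and observed index scores is the predictive-capacity check the study reports.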

  12. Energy reduction in buildings in temperate and tropic regions utilizing a heat loss measuring device

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    2012-01-01

    There exist two ordinary ways to obtain global energy efficiency. One way is to make improvements on the energy production and supply side, and the other way is, in general, to reduce the consumption of energy in society. This paper focuses on the latter, especially the consumption of energy for heating and cooling our houses. There is a huge energy-saving potential in this area, reducing both world climate problems and economic challenges as well. Heating of buildings in Denmark accounts for approximately 40% of the entire national energy consumption. For this reason a reduction of heat … to ACMV in the "warm countries" contributes to an enormous energy consumption and corresponding CO2 emission. In order to establish the best basis for energy renovation, it is important to have measures of the heat losses on a building façade, for optimizing the energy renovation. This paper will present …

  13. Differential Absorption Lidar (DIAL) Measurements of Atmospheric Water Vapor Utilizing Robotic Aircraft

    Science.gov (United States)

    Hoang, Ngoc; DeYoung, Russell J.; Prasad, Coorg R.; Laufer, Gabriel

    1998-01-01

    A new unpiloted air vehicle (UAV) based water vapor DIAL system will be described. This system is expected to offer lower operating costs, longer test duration and severe weather capabilities. A new high-efficiency, compact, light weight, diode-pumped, tunable Cr:LiSAF laser will be developed to meet the UAV payload weight and size limitations and its constraints in cooling capacity, physical size and payload. Similarly, a new receiver system using a single mirror telescope and an avalanche photo diode (APD) will be developed. Projected UAV parameters are expected to allow operation at altitudes up to 20 km, endurance of 24 hrs and speed of 400 km/hr. At these conditions measurements of water vapor at an uncertainty of 2-10% with a vertical resolution of 200 m and horizontal resolution of 10 km will be possible.

  14. Measuring relative utilization of aerobic glycolysis in breast cancer cells by positional isotopic discrimination.

    Science.gov (United States)

    Yang, Da-Qing; Freund, Dana M; Harris, Benjamin R E; Wang, Defeng; Cleary, Margot P; Hegeman, Adrian D

    2016-09-01

    The ability of cancer cells to produce lactate through aerobic glycolysis is a hallmark of cancer. In this study, we established a positional isotopic labeling and LC-MS-based method that can specifically measure the conversion of glucose to lactate in glycolysis. We show that the rate of aerobic glycolysis is closely correlated with glucose uptake and lactate production in breast cancer cells. We also found that the production of [3-¹³C]lactate is significantly elevated in metastatic breast cancer cells and in early stage metastatic mammary tumors in mice. Our findings may enable the development of a biomarker for the diagnosis of aggressive breast cancer. © 2016 Federation of European Biochemical Societies.

  15. Functional outcome measures in a surgical model of hip osteoarthritis in dogs

    OpenAIRE

    Little, Dianne; Johnson, Stephen; Hash, Jonathan; Olson, Steven A.; Estes, Bradley T.; Moutos, Franklin T.; Lascelles, B. Duncan X.; Guilak, Farshid

    2016-01-01

    Background The hip is one of the most common sites of osteoarthritis in the body, second only to the knee in prevalence. However, current animal models of hip osteoarthritis have not been assessed using many of the functional outcome measures used in orthopaedics, a characteristic that could increase their utility in the evaluation of therapeutic interventions. The canine hip shares similarities with the human hip, and functional outcome measures are well documented in veterinary medicine, pr...

  16. Computational model of precision grip in Parkinson’s disease: A Utility based approach

    Directory of Open Access Journals (Sweden)

    Ankur eGupta

    2013-12-01

    We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Fellows et al., 1998; Ingvarsson et al., 1997). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest a contribution of the Basal Ganglia, a deep brain system with a crucial role in translating dopamine signals into decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning, with the significant difference that action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model accurately accounts for the precision grip results from normal subjects and PD patients (Fellows et al., 1998; Ingvarsson et al., 1997). To our knowledge this is the first model of precision grip in PD conditions.
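One common way to implement risk-sensitive, utility-based action selection of the kind described above is to penalize an action's expected value by the spread of its outcomes. A hedged sketch of that idea (the specific utility form, parameters and action names are illustrative assumptions, not the authors' exact formulation):

```python
import math

def utility(mean_value, variance, risk_aversion):
    """Risk-sensitive utility: expected value penalized by outcome
    spread. A positive risk_aversion models risk-averse selection."""
    return mean_value - risk_aversion * math.sqrt(variance)

def select_action(actions, risk_aversion=0.5):
    """Pick the action with the highest utility rather than the
    highest raw value (pure value-based selection)."""
    return max(actions, key=lambda a: utility(a["value"], a["var"], risk_aversion))

actions = [
    {"name": "strong_grip", "value": 1.0, "var": 0.9},  # high value, risky
    {"name": "light_grip",  "value": 0.8, "var": 0.1},  # lower value, reliable
]
best = select_action(actions)  # with risk_aversion > 0, the reliable action wins
```

At `risk_aversion = 0` this reduces to pure value-based selection (the high-value action wins); a positive risk term flips the choice toward the low-variance action, which is the qualitative difference the utility distribution introduces.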

  17. Measuring equity in utilization of emergency obstetric care at Wolisso Hospital in Oromiya, Ethiopia: a cross sectional study.

    Science.gov (United States)

    Wilunda, Calistus; Putoto, Giovanni; Manenti, Fabio; Castiglioni, Maria; Azzimonti, Gaetano; Edessa, Wagari; Atzori, Andrea; Merialdi, Mario; Betrán, Ana Pilar; Vogel, Joshua; Criel, Bart

    2013-04-22

    Improving equity in access to services for the treatment of complications that arise during pregnancy and childbirth, namely Emergency Obstetric Care (EmOC), is fundamental if maternal and neonatal mortality are to be reduced. Consequently, there is a growing need to monitor equity in access to EmOC. The objective of this study was to develop a simple questionnaire to measure equity in utilization of EmOC at Wolisso Hospital, Ethiopia, and to compare the wealth status of EmOC users with that of women in the general population. Women in the Ethiopia 2005 Demographic and Health Survey (DHS) constituted our reference population. We cross-tabulated DHS wealth variables against wealth quintiles. Five variables that differentiated well across quintiles were selected to create a questionnaire that was administered to women at discharge from the maternity ward from January to August 2010. This was used to identify inequities in utilization of EmOC by comparison with the reference population. 760 women were surveyed. An a posteriori comparison of these 2010 data with the 2011 DHS dataset indicated that women using EmOC were wealthier and more likely to be urban dwellers. On a scale from 0 (poorest) to 15 (wealthiest), 31% of women in the 2011 DHS sample scored less than 1, compared with 0.7% in the study population. 70% of women accessing EmOC belonged to the richest quintile, with only 4% belonging to the poorest two quintiles. Transportation costs seem to play an important role. We found inequity in utilization of EmOC in favour of the wealthiest. Assessing and monitoring equitable utilization of maternity services is feasible using this simple tool.

  18. Validation of the MEASURE automobile emissions model: a statistical analysis

    Science.gov (United States)

    2000-09-01

    The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for the hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized e...

  19. Development of a probing system for a micro-coordinate measuring machine by utilizing shear-force detection

    International Nuclear Information System (INIS)

    Ito, So; Kodama, Issei; Gao, Wei

    2014-01-01

    This paper introduces a newly developed probing system for a micro-coordinate measuring machine (micro-CMM) based on the interaction force generated by the water layer on the surface of the measured object. In order to measure the dimensions of micrometric structures, a probing system using a nanopipette ball stylus has been developed. A glass microsphere with a diameter of 9 µm is used as the stylus tip of the probing system. The glass nanopipette, which is fabricated from a capillary glass tube by a thermal pulling process, is employed as the stylus shaft to improve the fixation strength of the stylus tip. The approach between the stylus tip and the surface of the measured object is detected by the method of shear-force detection. The stylus is oscillated in the lateral direction at its resonant frequency to detect the interaction force owing to the viscoelasticity of the meniscus layer on the surface of the measured object. The oscillation amplitude is decreased by the shear force applied to the stylus tip. In this study, the basic characteristics of the probing system, including sensitivity, resolution and reproducibility, are investigated. An experimental result of dimensional measurement of a micrometer-scale structure is presented. (paper)

  20. Radiation budget measurement/model interface research

    Science.gov (United States)

    Vonderhaar, T. H.

    1981-01-01

    The NIMBUS 6 data were analyzed to form an up-to-date climatology of the Earth radiation budget as a basis for numerical model definition studies. Global maps depicting infrared emitted flux, net flux and albedo from processed NIMBUS 6 data for July, 1977, are presented. Zonal averages of net radiation flux for April, May, and June and zonal mean emitted flux and net flux for the December to January period are also presented. The development of two models is reported. The first is a statistical dynamical model with vertical and horizontal resolution. The second model is a two-level global linear balance model. The results of time integration of the model for up to 120 days to simulate the January circulation are discussed. Average zonal wind, meridional wind component, vertical velocity, and moisture budget are among the parameters addressed.

  1. Measuring Change with the Rating Scale Model.

    Science.gov (United States)

    Ludlow, Larry H.; And Others

    The Rehabilitation Research and Development Laboratory at the United States Veterans Administration Hines Hospital is engaged in a long-term evaluation of blind rehabilitation. One aspect of the evaluation project focuses on the measurement of attitudes toward blindness. Our aim is to measure changes in attitudes toward blindness from…

  2. Refining Change Measure with the Rasch Model

    Science.gov (United States)

    Zaporozhets, Olga; Fox, Christine M.; Beltyukova, Svetlana A.; Laux, John M.; Piazza, Nick J.; Salyers, Kathleen

    2015-01-01

    The purpose of this study was to develop a linear measure of change using University of Rhode Island Change Assessment items that represented Prochaska and DiClemente's theory. The resulting Toledo Measure of Change is short, is easy to use, and provides reliable scores for identification of individuals' stage of change and progression within that stage.

  3. Migration Flows: Measurement, Analysis and Modeling

    NARCIS (Netherlands)

    Willekens, F.J.; White, Michael J.

    2016-01-01

    This chapter is an introduction to the study of migration flows. It starts with a review of major definition and measurement issues. Comparative studies of migration are particularly difficult because different countries define migration differently and measurement methods are not harmonized.

  4. Interaural multiple frequency tympanometry measures: clinical utility for unilateral conductive hearing loss.

    Science.gov (United States)

    Norrix, Linda W; Burgan, Briana; Ramirez, Nicholas; Velenovsky, David S

    2013-03-01

    Tympanometry is a routine clinical measurement of the acoustic immittance of the ear as a function of ear canal air pressure. The 226 Hz tympanogram can provide clinical evidence for conditions such as a tympanic membrane perforation, Eustachian tube dysfunction, middle ear fluid, and ossicular discontinuity. Multiple frequency tympanometry, using a range of probe tone frequencies from low to high, has been shown to be more sensitive than a single probe tone tympanogram in distinguishing between mass- and stiffness-related middle ear pathologies (Colletti, 1975; Funasaka et al, 1984; Van Camp et al, 1986). In this study we obtained normative measures of middle ear resonance by using multiple probe tone frequency tympanometry. Ninety percent ranges for middle ear resonance and for interaural differences were calculated. In a mixed design, normative data were collected from both ears of male and female adults. Twelve male and 12 female adults with normal hearing and normal middle ear function participated in the study. Multiple frequency tympanograms were recorded with a commercially available immittance instrument (GSI Tympstar) to obtain estimates of middle ear resonant frequency (RF) using ΔB, positive tail, and negative tail methods. Data were analyzed using three-way mixed analyses of variance with gender as a between-subject variable and ear and method as within-subject variables. T-tests were performed, using the Bonferroni adjustment, to determine significant differences between means. Using the positive and negative tail methods, a wide range of approximately 500 Hz was found for middle ear resonance in adults with normal hearing and normal middle ear function. The difference in RF between an individual's ears is small, with 90% ranges of approximately ±200 Hz, indicating that the right ear RF should fall within approximately 200 Hz, higher or lower, of the left ear RF. This was true for both negative and positive tail methods. Ninety percent ranges were

  5. Full utilization of silt density index (SDI) measurements for seawater pre-treatment

    KAUST Repository

    Wei, Chunhai

    2012-07-01

    In order to clarify the fouling mechanism during silt density index (SDI) measurements of seawater in the seawater reverse osmosis (SWRO) desalination process, 11 runs were conducted under constant-pressure (207 kPa) dead-end filtration mode according to the standard protocol for SDI measurement, in which two kinds of 0.45 μm membranes of different materials and seawater samples from the Mediterranean, including raw seawater and seawater pre-treated by coagulation followed by sand filtration (CSF) and coagulation followed by microfiltration (CMF) technologies, were tested. Fouling mechanisms based on the constant-pressure filtration equation were fully analyzed. For all runs, only t/(V/A) ~ t showed very good linearity (correlation coefficient R² > 0.99) from the first moment of the filtration, indicating that standard blocking rather than cake filtration was the dominant fouling mechanism during the entire filtration process. The very low concentration of suspended solids rejected by the 0.45 μm MF membrane in seawater was the main reason why a cake layer was not formed. High turbidity removal during filtration indicated that organic colloids retained on and/or adsorbed in membrane pores governed the filtration process (i.e., standard blocking), due to the important contribution of organic substances to seawater turbidity in this study. Therefore the standard blocking coefficient k_s, i.e., the slope of t/(V/A) ~ t, could be used as a good fouling index for seawater because it showed good linearity with feed seawater turbidity. The correlation of SDI with k_s and feed seawater quality indicated that SDI could be reliably used for seawater with low fouling potential (SDI_15min < 5), like the pre-treated seawater in this study. From both k_s and SDI, the order of fouling potential was raw seawater > seawater pre-treated by CSF > seawater pre-treated by CMF, indicating the better performance of CMF over CSF. © 2012 Elsevier B.V.
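The standard-blocking diagnostic above reduces to a linear fit: under constant pressure, standard blocking predicts t/(V/A) = 1/Q0 + (k_s/2)·t, so the fouling index k_s is twice the slope of t/(V/A) versus t. A minimal sketch with synthetic data (the variable names and synthetic values are illustrative, not the paper's measurements):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return a, b, r2

# Standard blocking predicts t/(V/A) = 1/Q0 + (k_s/2)*t: a straight line
# in t whose intercept is the inverse initial flux and whose slope gives k_s.
t = [60.0 * i for i in range(1, 11)]               # filtration time, s
k_s = 2e-4                                          # synthetic blocking coefficient
tv = [1.0 / 0.01 + (k_s / 2.0) * ti for ti in t]    # t/(V/A) with Q0 = 0.01 m/s

intercept, slope, r2 = fit_line(t, tv)
estimated_ks = 2.0 * slope                          # recover k_s from the slope
```

On real SDI runs, the check that only this line is straight (R² > 0.99) from the first moment of filtration is what distinguishes standard blocking from cake filtration, whose linear coordinate would be t/V versus V instead.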

  6. Utilizing a Wristband Sensor to Measure the Stress Level for People with Dementia

    Directory of Open Access Journals (Sweden)

    Basel Kikhia

    2016-11-01

    Stress is a common problem that affects most people with dementia and their caregivers. Stress symptoms in people with dementia are often assessed through a checklist of questions answered by the clinical staff who work closely with the person. This process requires considerable effort, with continuous observation of the person with dementia over the long term. This article investigates the effectiveness of a straightforward method, based on a single wristband sensor, for classifying events as "Stressed" or "Not stressed" for people with dementia. The presented system calculates the stress level as an integer value from zero to five, providing clinical information on behavioral patterns to the clinical staff. Thirty staff members participated in the experiment, together with six residents with dementia from two nursing homes. The residents wore the wristband sensor during the day, and the staff wrote observation notes during the experiment to serve as ground truth. Experimental evaluation showed relationships between the staff observations and the sensor analysis, and stress-level thresholds adjusted to each individual can serve different scenarios.

  7. Utility of Accelerometers to Measure Physical Activity in Children Attending an Obesity Treatment Intervention

    Directory of Open Access Journals (Sweden)

    Wendy Robertson

    2011-01-01

    Objectives. To investigate the use of accelerometers to monitor change in physical activity in a childhood obesity treatment intervention. Methods. 28 children aged 7-13 taking part in "Families for Health" were asked to wear an accelerometer (Actigraph) for 7 days, and to complete an accompanying activity diary, at baseline, 3 months and 9 months. Interviews with 12 parents asked about the research measurements. Results. Over 90% of children provided 4 days of accelerometer data, and around half of the children provided 7 days. Adequately completed diaries were collected from 60% of children. Children partake in a wide range of physical activities which uniaxial monitors may undermonitor (cycling, non-motorised scootering) or overmonitor (trampolining). Two different cutoffs (4 METs or 3200 counts·min⁻¹) for minutes spent in moderate and vigorous physical activity (MVPA) yielded very different results, although they reached the same conclusion regarding a lack of change in MVPA after the intervention. Some children were unwilling to wear accelerometers at school and during sport because they felt they put them at risk of stigma and bullying. Conclusion. Accelerometers are acceptable to a majority of children, although their use at school is problematic for some, and they may underestimate children's physical activity.
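The cutoff sensitivity reported above is easy to reproduce: MVPA minutes are simply the number of one-minute epochs at or above a counts-per-minute threshold. A sketch with invented epoch data (the 3200 counts·min⁻¹ cutoff is from the abstract; the lower 1952 counts·min⁻¹ threshold is a commonly used alternative, included here as an assumption):

```python
def mvpa_minutes(counts_per_min, cutoff):
    """Minutes of moderate-to-vigorous physical activity: number of
    one-minute epochs at or above the chosen cutoff."""
    return sum(1 for c in counts_per_min if c >= cutoff)

# Synthetic record of one-minute epochs (counts per minute).
epochs = [500, 1500, 2100, 3300, 4000, 2600, 800, 3500]

low_cutoff = mvpa_minutes(epochs, 1952)   # assumed, commonly used lower threshold
high_cutoff = mvpa_minutes(epochs, 3200)  # the stricter cutoff in the study
# The same accelerometer record yields 5 MVPA minutes under the lower
# cutoff but only 3 under the stricter one.
```

This is why studies that compare interventions on MVPA must hold the cutoff fixed: absolute minute counts are threshold-dependent even when the direction of change is not.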

  8. Assessment of energy utilization and leakages in buildings with building information model energy

    Directory of Open Access Journals (Sweden)

    Egwunatum I. Samuel

    2017-03-01

    Given the ability of building information models (BIM) to serve as a multidisciplinary data repository, this study attempts to explore and exploit the sustainability value of BIM in delivering buildings that require less energy for operation, emit less carbon dioxide, and provide conducive living environments for occupants. This objective was attained by a critical and extensive literature review that covers the following: (1) building energy consumption, (2) building energy performance and analysis, and (3) BIM and energy assessment. Literature cited in this paper shows that linking an energy analysis tool with a BIM model has helped project design teams to predict and optimize energy consumption by conducting building energy performance analysis utilizing key performance indicators on average thermal transmittance, resulting heat demand, lighting power, solar heat gains, and ventilation heat losses. An in-depth analysis was conducted on a completed BIM-integrated construction project, the Arboleda Project in the Dominican Republic, to validate the aforementioned findings. Results show that the BIM-based energy analysis helped the design team attain the world's first positive-energy building. This study concludes that linking an energy analysis tool with a BIM model helps to expedite the energy analysis process, provide more detailed and accurate results, and deliver energy-efficient buildings. This study further recommends that the adoption of level 2 BIM and BIM integration in energy optimization analysis be demanded by building regulatory agencies for all projects regardless of procurement method (i.e., government funded or otherwise) or size.

  9. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Background: Bayesian Networks (BN) are a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results: We introduce a new method to incorporate quantitative information from multiple sources of prior knowledge. It first uses the Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge; in this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, which samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including data from a yeast cell cycle study and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge, BN modeling is not always better than random selection, demonstrating the necessity in network modeling of supplementing gene expression data with additional information. Conclusion: Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves performance.
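The candidate-edge reservoir described above biases MCMC proposals toward edges with strong prior support by storing each edge with a copy number proportional to its estimated likelihood. A minimal sketch of building such a reservoir and drawing a proposal from it (names and the scale factor are illustrative assumptions, not the authors' implementation):

```python
import random

def build_reservoir(edge_likelihoods, scale=10):
    """Candidate-edge reservoir: each edge appears with a copy number
    proportional to its prior likelihood of functional linkage."""
    reservoir = []
    for edge, likelihood in edge_likelihoods.items():
        copies = max(1, round(likelihood * scale))
        reservoir.extend([edge] * copies)
    return reservoir

# Prior likelihoods of linkage, e.g. from co-citation and GO similarity.
edge_likelihoods = {("geneA", "geneB"): 0.9,   # strong prior support
                    ("geneA", "geneC"): 0.3,
                    ("geneB", "geneC"): 0.1}
reservoir = build_reservoir(edge_likelihoods)

# An MCMC proposal step samples a candidate edge from the reservoir,
# so well-supported edges are proposed more often than weakly supported ones.
candidate = random.choice(reservoir)
```

Because the reservoir only shapes the proposal distribution, the expression data still decides, through the MCMC acceptance step, whether a proposed edge survives in the network.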

  10. The utility of measuring anti-Müllerian hormone in predicting menopause.

    Science.gov (United States)

    Aydogan, B; Mirkin, S

    2015-01-01

    Menopause is a relevant phase in a woman's reproductive life. Accurate estimation of the time of menopause could improve the preventive management of women's health. Reproductive hormones reflect the activity of follicle pools and provide information about ovarian aging. Anti-Müllerian hormone (AMH) is secreted from small antral follicles and its level is correlated with the ovarian reserve. AMH declines with age, and data suggest that it can provide information on menopausal age and reproductive lifespan. Serum AMH levels become low approximately 5 years before the final menstrual period and are undetectable in postmenopausal women. The majority of studies indicate that AMH is relatively stable throughout the menstrual cycle; however, there are interindividual variabilities of serum AMH concentration under different conditions. AMH is an independent predictor of time to menopause. AMH coupled with age provides stronger information for menopause prediction than age alone. Ongoing research is focused on constructing a multivariate model including AMH values, genes related to follicular recruitment, and maternal age at menopause that would predict time to menopause more precisely.

  11. Utilization of Aerial Photograph and Geographic Information System for Deposit Measurement of Wuryantoro Watershed, Wonogiri

    Directory of Open Access Journals (Sweden)

    Sugiharto Budi Santoso

    2004-01-01

    This research was carried out in the Wuryantoro Watershed, Wonogiri, Central Java. The goal of this study is to examine the capability of remote sensing technology to obtain the physical land parameters used in the prediction of sediment yield. The approach is landscape-based, with the land unit as the mapping unit, using the MUSLE (Modified Universal Soil Loss Equation) model. The data analysis combines infrared aerial photo interpretation with Geographical Information Systems (GIS). Infrared aerial photos at a scale of 1:10,000 from 1991 are used as the primary data source for the physical land parameters. The sediment yield is not predicted directly. First, the runoff characteristics, consisting of the runoff coefficient, runoff volume and peak discharge, are predicted. Then the runoff characteristics, together with other influential factors (slope, soil, land cover and conservation practice), are used to predict the sediment yield. The result of the prediction is tested by comparing it with field measurements. The accuracy of the aerial photo interpretation result for predicting sediment yield is 89.45%.

  12. Measurement and modeling of indoor radon concentrations in residential buildings

    Directory of Open Access Journals (Sweden)

    Ji Hyun Park

    2018-01-01

    Radon, the primary constituent of natural radiation, is the second leading environmental cause of lung cancer after smoking. To confirm a relationship between indoor radon exposure and lung cancer, estimating cumulative levels of exposure to indoor radon for an individual or population is necessary. This study sought to develop a model to estimate indoor radon concentrations in Korea. In particular, our model and method may have wider application to other residences rather than to a specific site, and can be used in situations where actual measurements for input variables are lacking. In order to develop the model, indoor radon concentrations were measured at 196 ground-floor residences using passive alpha-track detectors between January and April 2016. The arithmetic mean (AM) and geometric mean (GM) of the indoor radon concentrations were 117.86±72.03 and 95.13±2.02 Bq/m3, respectively. Questionnaires were administered to assess the characteristics of each residence, the environment around the measuring equipment, and the lifestyles of the residents. Also, national data on indoor radon concentrations at 7643 detached houses for 2011-2014 were reviewed to determine radon concentrations in the soil, and meteorological data on temperature and wind speed were utilized to approximate ventilation rates. The estimated ventilation rates and radon exhalation rates from the soil varied from 0.18 to 0.98/hr (AM, 0.59±0.17/hr) and from 326.33 to 1392.77 Bq/m2/hr (AM, 777.45±257.39 Bq/m2/hr; GM, 735.67±1.40 Bq/m2/hr), respectively. With these results, the developed model was applied to estimate indoor radon concentrations for 157 randomly sampled residences (80% of all 196 residences). The results were in better agreement for Gyeonggi and Seoul than for other regions of Korea. Overall, the actual and estimated radon concentrations were in good agreement, except for a few low-concentration residences.
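Models of this kind generally build on a single-zone, steady-state mass balance in which the soil exhalation source is balanced by ventilation removal. The sketch below shows that generic balance, not necessarily the authors' exact model; the room geometry is invented, while the exhalation and ventilation values fall in the ranges reported above:

```python
def indoor_radon_steady_state(exhalation, area, volume, ventilation_rate):
    """Steady-state indoor radon concentration (Bq/m^3) from a
    single-zone mass balance: the source (exhalation rate * emitting
    area) is balanced by ventilation removal (air-change rate * volume).
    Radioactive decay (~0.00755 /hr for Rn-222) is small relative to
    typical ventilation rates and is neglected in this sketch."""
    return exhalation * area / (ventilation_rate * volume)

# Illustrative single-room case: exhalation of 777 Bq/m^2/hr through a
# 20 m^2 floor, a 50 m^3 room, and 0.59 air changes per hour.
c = indoor_radon_steady_state(777.0, 20.0, 50.0, 0.59)
```

A full model of the kind developed in the paper would additionally calibrate the effective emitting area and account for occupant behaviour, which is why the estimate here is orders-of-magnitude illustration rather than a prediction.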

  13. Flexible simulation framework to couple processes in complex 3D models for subsurface utilization assessment

    Science.gov (United States)

    Kempka, Thomas; Nakaten, Benjamin; De Lucia, Marco; Nakaten, Natalie; Otto, Christopher; Pohl, Maik; Tillner, Elena; Kühn, Michael

    2016-04-01

    Utilization of the geological subsurface for production and storage of hydrocarbons, chemical energy and heat, as well as for waste disposal, requires the quantification and mitigation of environmental impacts as well as the improvement of georesource utilization in terms of efficiency and sustainability. The development of tools for coupled process simulations is essential to tackle these challenges, since reliable assessments are only feasible by integrative numerical computations. Coupled processes at reservoir to regional scale determine the behaviour of reservoirs, faults and caprocks, generally demanding complex 3D geological models to be considered, alongside available monitoring and experimental data, in coupled numerical simulations. We have been developing a flexible numerical simulation framework that provides efficient workflows for integrating the required data and software packages to carry out coupled process simulations considering, e.g., multiphase fluid flow, geomechanics, geochemistry and heat. Simulation results are stored in structured data formats to allow for integrated 3D visualization and result interpretation as well as data archiving and its provision to collaborators. The main benefits of the flexible simulation framework are the integration of geological and grid data from any third-party software package as well as data export to generic 3D visualization tools and archiving formats. The coupling of the required process simulators in time and space is feasible, and different spatial dimensions can be integrated in the coupled simulations, e.g., 0D batch with 3D dynamic simulations. User interaction is established via high-level programming languages, while computational efficiency is achieved by using low-level programming languages. We present three case studies on the assessment of geological subsurface utilization based on different process coupling approaches and numerical simulations.

  14. Analysis on misconducts and inappropriate practices by Japan's Nuclear Power Utilities and Assessment of their corrective measures

    International Nuclear Information System (INIS)

    Torikai, Seishi; Ozawa, Michihiro; Kanegae, Naomichi; Tani, Masaaki; Miyakoshi, Naoki; Madarame, Haruki

    2010-01-01

    On March 30, 2007, Japan's electric utilities reported the results of a complete review of their power-generating units to the Nuclear and Industrial Safety Agency of the Ministry of Economy, Trade and Industry (METI). The Ethics Committee of the Atomic Energy Society of Japan (AESJ) then recommended an assessment method to analyze the seriousness of the problems from multiple perspectives in order to support the public's understanding of the reported problems. Accordingly, the Ethics Committee conducted the assessment. The assessment considered each reported problem associated with nuclear power-generating units and the preventive measures completed between June 2007 and September 2008 (corrective measures continued beyond that period). The results were presented at the autumn conferences of the AESJ in 2007 and 2008, and are discussed in this report. (author)

  15. Utilized social support and self-esteem mediate the relationship between perceived social support and suicide ideation. A test of a multiple mediator model.

    Science.gov (United States)

    Kleiman, Evan M; Riskind, John H

    2013-01-01

    While perceived social support has received considerable attention as a protective factor against suicide ideation, little attention has been given to the mechanisms that mediate its effects. We integrated two theoretical models, Joiner's (2005) interpersonal theory of suicide and Leary's sociometer theory of self-esteem (Leary, Tambor, Terdal, & Downs, 1995), to investigate two hypothesized mechanisms: utilization of social support and self-esteem. Specifically, we hypothesized that individuals must actually utilize the social support they perceive, which increases self-esteem and in turn buffers them from suicide ideation. Participants were 172 college students who completed measures of social support, self-esteem, and suicide ideation. Tests of simple mediation indicate that utilization of social support and self-esteem may each individually help to mediate the perceived social support/suicide ideation relationship. Additionally, a test of multiple mediators using bootstrapping supported the hypothesized multiple-mediator model. The use of a cross-sectional design limited our ability to establish true cause-and-effect relationships. Results suggested that utilized social support and self-esteem both operate as individual mediators of the perceived social support/suicide ideation relationship. Results further suggested, in a comprehensive model, that perceived social support buffers suicide ideation through utilization of social support and increases in self-esteem.
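The bootstrapped mediation test mentioned above resamples the data and re-estimates the indirect effect in each resample to form a percentile confidence interval that excludes zero when mediation is supported. Below is a generic sketch for a single indirect effect a·b on synthetic data (not the authors' analysis; a is the slope of mediator on predictor, b the slope of outcome on mediator controlling for the predictor):

```python
import random

def slope(x, y):
    """Simple OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def slope_controlling(x1, x2, y):
    """OLS coefficient of x1 in y ~ x1 + x2 (normal equations, two predictors)."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    return (s1y * s22 - s2y * s12) / (s11 * s22 - s12 ** 2)

def bootstrap_indirect(x, m, y, n_boot=1000, seed=1):
    """Percentile-bootstrap 95% CI for the indirect effect a*b."""
    rng = random.Random(seed)
    n, estimates = len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        estimates.append(slope(xs, ms) * slope_controlling(ms, xs, ys))
    estimates.sort()
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

# Synthetic data in which the mediation path truly exists (x -> m -> y).
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(60)]
m = [0.8 * xi + rng.gauss(0, 0.5) for xi in x]
y = [0.7 * mi + rng.gauss(0, 0.5) for mi in m]
lo, hi = bootstrap_indirect(x, m, y)  # CI excluding zero supports mediation
```

A multiple-mediator version repeats the same resampling while estimating the indirect effect through each mediator jointly; the percentile logic is unchanged.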

  16. Use of forecasted assessment of quality of life to validate time-trade-off utilities and a prostate cancer screening decision-analytic model.

    Science.gov (United States)

    Cantor, Scott B; Deshmukh, Ashish A; Krahn, Murray D; Volk, Robert J

    2015-10-01

    To determine whether a forecasted assessment of how someone would feel in a future health state can be predictive of utilities (e.g., as elicited by the time-trade-off method) and also predictive of optimal decisions as determined by a decision-analytic model. We elicited time-trade-off utilities for prostate cancer treatment outcomes from 168 men. We also elicited forecasted assessments, that is, informal, non-quantitative, descriptive evaluations, of impotence and incontinence from these men. We used multivariate regression analysis to explore the relationship between forecasted assessment and reluctance to trade length of life for improved quality of life in the time-trade-off utility assessment, and the relationship between the forecasted assessments and the optimal decision of whether to undergo screening for prostate cancer as determined by a previously published decision-analytic model. Importance of sexual function was strongly related to the impotence utilities. Anticipated difficulty adjusting to adverse health effects was highly related to preferences and could be used as a proxy measure of utility. Similarly, the importance of sexual functioning, a future preference, was highly related to the optimal decision, which validates our previously published decision-analytic model. © 2013 John Wiley & Sons Ltd.
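The time-trade-off utilities referred to above come from a simple ratio: if a respondent is indifferent between x years in full health and t years in the impaired health state, the state's utility is x/t. A minimal sketch of that standard scoring (generic TTO arithmetic, not specific to this study):

```python
def tto_utility(years_in_full_health, years_in_health_state):
    """Time-trade-off utility: the fraction of the time horizon a
    respondent would keep in exchange for full health. 1.0 means no
    time is traded (the state is valued as full health); values near
    0.0 mean nearly all remaining time would be given up."""
    if years_in_health_state <= 0:
        raise ValueError("time horizon must be positive")
    return years_in_full_health / years_in_health_state

# A respondent indifferent between 8 years in full health and 10 years
# living with impotence assigns that state a utility of 0.8.
u = tto_utility(8.0, 10.0)
```

Reluctance to trade, as analyzed in the study, corresponds to utilities at or near 1.0: the respondent will not give up any length of life for improved quality.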

  17. Emergency biliary sonography: utility of common bile duct measurement in the diagnosis of cholecystitis and choledocholithiasis.

    Science.gov (United States)

    Becker, Brent A; Chin, Eric; Mervis, Eric; Anderson, Craig L; Oshita, Masaru H; Fox, J Christian

    2014-01-01

    Measurement of the common bile duct (CBD) has traditionally been considered an integral part of gallbladder sonography, but accurate identification of the CBD can be difficult for novice sonographers. We sought to determine the prevalence of isolated sonographic CBD dilation in emergency department (ED) patients with cholecystitis or choledocholithiasis without laboratory abnormalities or other pathologic findings on biliary ultrasound. We conducted a retrospective chart review of two separate ED patient cohorts between June 2000 and June 2010. The first cohort comprised all ED patients undergoing a biliary ultrasound and subsequent cholecystectomy for presumed cholecystitis. The second cohort consisted of all ED patients receiving a biliary ultrasound who were ultimately diagnosed with choledocholithiasis. Ultrasound data and contemporaneous laboratory values were collected. Postoperative gallbladder pathology reports and endoscopic retrograde cholangiopancreatography (ERCP) reports were used as the criterion standard for final diagnosis. Of 666 cases of cholecystitis, there were 251 (37.7%) with a dilated CBD > 6 mm and only 2 cases (0.3%; 95% confidence interval [CI] 0.0-0.7%) of isolated CBD dilation with an otherwise negative ultrasound and normal laboratory values. Of 111 cases of choledocholithiasis, there were 80 (72.0%) with a dilated CBD and only 1 case (0.9%; 95% CI 0.0-2.7%) with an otherwise negative ultrasound and normal laboratory values. The prevalence of isolated sonographic CBD dilation in cholecystitis and choledocholithiasis is low in the setting of a routine ED evaluation with an otherwise normal ultrasound and normal laboratory values. Copyright © 2014 Elsevier Inc. All rights reserved.
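The confidence intervals reported above are consistent with a simple normal-approximation (Wald) interval truncated at zero. Assuming that is the method used (the abstract does not say), the figures can be reproduced as follows:

```python
import math

def wald_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, truncated to [0, 1]."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# 2/666 cholecystitis and 1/111 choledocholithiasis cases with isolated CBD dilation
for k, n in [(2, 666), (1, 111)]:
    lo, hi = wald_ci(k, n)
    print(f"{100 * k / n:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
# -> 0.3% (95% CI 0.0-0.7%)
# -> 0.9% (95% CI 0.0-2.7%)
```

For counts this small, an exact (Clopper-Pearson) or Wilson interval would be wider; the Wald form is shown only because it matches the published numbers.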

  18. Tomographic TR-PIV measurement of coherent structure spatial topology utilizing an improved quadrant splitting method

    Science.gov (United States)

    Yang, ShaoQiong; Jiang, Nan

    2012-10-01

    In this paper, we calculated the spatial local-averaged velocity strains along the streamwise direction at four spatial scales, according to the concept of the spatial local-averaged velocity structure function, using a three-dimensional, three-component database of time-series velocity vector fields in the turbulent boundary layer measured by tomographic time-resolved particle image velocimetry. An improved quadrant splitting method, based on the spatial local-averaged velocity strains together with a new conditional sampling phase-average technique, was introduced as a criterion to detect coherent structure topology. Furthermore, we used these tools to detect and extract the spatial topologies of fluctuating velocity and fluctuating vorticity whose center is a strong second-quadrant event (Q2) or fourth-quadrant event (Q4). Results illustrate that the multi-scale coherent structures show closer similarity in the wall-normal direction than in the other two directions. The relationships among such topological coherent structures, Reynolds stress bursting events, and the fluctuating vorticity are discussed. When the other bursting events are surveyed (the first-quadrant event Q1 and the third-quadrant event Q3), a recurring bursting sequence appears in the center of such topological structures along the streamwise direction: Q4-S-Q2-Q3-Q2-Q1-Q4-S-Q2-Q3-Q2-Q1. In addition, the probability of Q2 bursting events is slightly higher than that of Q4 events. A spatially unstable singularity that appears almost simultaneously with typical Q2 or Q4 events was also observed; this is the main feature of the mutual-induction and vortex auto-generation mechanisms that explain how turbulence is produced and maintained.

  19. Three-dimensional cell culture model utilization in cancer stem cell research.

    Science.gov (United States)

    Bielecka, Zofia F; Maliszewska-Olejniczak, Kamila; Safir, Ilan J; Szczylik, Cezary; Czarnecka, Anna M

    2017-08-01

    Three-dimensional (3D) cell culture models are becoming increasingly popular in contemporary cancer research and drug resistance studies. Recently, scientists have begun incorporating cancer stem cells (CSCs) into 3D models and modifying culture components in order to mimic in vivo conditions better. Currently, the global cell culture market is primarily focused on either 3D cancer cell cultures or stem cell cultures, with less focus on CSCs. This is evident in the low product availability officially indicated for 3D CSC model research. This review discusses the currently available commercial products for CSC 3D culture model research. Additionally, we discuss different culture media and components that result in higher levels of stem cell subpopulations while better recreating the tumor microenvironment. In summary, although progress has been made applying 3D technology to CSC research, this technology could be further utilized and a greater number of 3D kits dedicated specifically to CSCs should be implemented. © 2016 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.

  20. Model of sustainable utilization of organic solids waste in Cundinamarca, Colombia

    Directory of Open Access Journals (Sweden)

    Solanyi Castañeda Torres

    2017-05-01

    Full Text Available Introduction: This article proposes a model for the utilization of organic solid waste in the department of Cundinamarca, responding to the need for a tool to support decision-making in the planning and management of organic solid waste. Objective: To develop an approximation of a conceptual, technical, and mathematical optimization model to support decision-making and minimize environmental impacts. Materials and methods: A descriptive study was applied, since some fundamental characteristics of the homogeneous phenomenon under study are presented; the design is also considered quasi-experimental. The calculation of the model for the department's plants is based on three axes (environmental, economic, and social) that are present in the general optimization equation. Results: A model is obtained for harnessing organic solid waste through the biological treatment techniques of aerobic composting and vermiculture, optimizing the system with respect to the savings in greenhouse gas emissions released into the atmosphere and the reduction of the overall cost of final disposal of organic solid waste in sanitary landfills. Based on the economic principle of utility, which determines the environmental feasibility and sustainability of the department's organic solid waste treatment plants, organic fertilizers such as compost and humus capture carbon and nitrogen, reducing tons of CO2.

  1. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    Science.gov (United States)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem of investing in financial assets is to choose a combination of portfolio weights that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization with risk tolerance when the utility function is quadratic. It is assumed that asset returns follow a given distribution and that portfolio risk is measured by Value-at-Risk (VaR). The portfolio optimization is therefore based on the Mean-VaR model, solved using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weighting vector equation that depends on the mean return vector of the assets, the identity (unit) vector, the covariance matrix of asset returns, and the risk tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the return data of these five stocks, the weight composition vector and the efficient frontier of the portfolio are obtained. The weight compositions and efficient frontier charts can serve as a guide for investors making investment decisions.
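The matrix-algebra/Lagrange-multiplier step has a closed form in the plain mean-variance (quadratic utility) case: maximize tau*mu'w - 0.5*w'Sigma*w subject to the budget constraint 1'w = 1. A minimal sketch with hypothetical data follows; the VaR adjustment, which depends on the assumed return distribution, is omitted:

```python
import numpy as np

def mean_variance_weights(mu, sigma, tau):
    """Closed-form solution of max tau*mu'w - 0.5*w'Sigma*w  s.t. 1'w = 1.
    Setting the gradient to zero gives w = Sigma^-1 (tau*mu + lam*1), with the
    Lagrange multiplier lam chosen so the weights sum to one."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    lam = (1 - tau * ones @ inv @ mu) / (ones @ inv @ ones)
    return inv @ (tau * mu + lam * ones)

# Hypothetical expected returns and covariance matrix for three assets
mu = np.array([0.10, 0.08, 0.12])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.02, 0.00],
                  [0.00, 0.00, 0.09]])

w = mean_variance_weights(mu, sigma, tau=0.5)
print(w, w.sum())  # weights satisfy the budget constraint (sum to 1)
```

Sweeping tau from zero (minimum-variance portfolio) upward traces out the efficient frontier mentioned in the abstract.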

  2. Optimal parametric modelling of measured short waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    The spectral analysis of measured short waves can efficiently be carried out by the fast Fourier transform technique. Even though many present techniques can be used for the simulation of time series waves, these may not provide accurate...

  3. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation of existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves accuracy similar to the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  4. The comparison of environmental effects on michelson and fabry-perot interferometers utilized for the displacement measurement.

    Science.gov (United States)

    Wang, Yung-Cheng; Shyu, Lih-Horng; Chang, Chung-Ping

    2010-01-01

    The optical structure of general commercial interferometers, e.g., Michelson interferometers, is based on a non-common optical path. Such interferometers suffer from environmental effects because different phase changes are induced in the different optical paths, so measurement precision is significantly influenced by tiny variations in environmental conditions. Fabry-Perot interferometers, which feature common optical paths, are insensitive to environmental disturbances. This is advantageous for precision displacement measurements under ordinary environmental conditions. To verify and analyze this influence, displacement measurements with the two types of interferometers, i.e., a self-fabricated Fabry-Perot interferometer and a commercial Michelson interferometer, were performed and compared under various environmental disturbance scenarios. Under several test conditions, the self-fabricated Fabry-Perot interferometer was markedly less sensitive to environmental disturbances than the commercial Michelson interferometer. Experimental results have shown that errors induced by environmental disturbances in a Fabry-Perot interferometer are one fifth of those in a Michelson interferometer. This proves that an interferometer with a common optical path structure is much more independent of environmental disturbances than one with a non-common optical path structure, making it well suited for precision displacement measurements in ordinary measurement environments.

  5. Utilizing evolutionary information and gene expression data for estimating gene networks with bayesian network models.

    Science.gov (United States)

    Tamada, Yoshinori; Bannai, Hideo; Imoto, Seiya; Katayama, Toshiaki; Kanehisa, Minoru; Miyano, Satoru

    2005-12-01

    Since microarray gene expression data do not contain sufficient information for estimating accurate gene networks, other biological information has been considered to improve the estimated networks. Recent studies have revealed that highly conserved proteins that exhibit similar expression patterns in different organisms, have almost the same function in each organism. Such conserved proteins are also known to play similar roles in terms of the regulation of genes. Therefore, this evolutionary information can be used to refine regulatory relationships among genes, which are estimated from gene expression data. We propose a statistical method for estimating gene networks from gene expression data by utilizing evolutionarily conserved relationships between genes. Our method simultaneously estimates two gene networks of two distinct organisms, with a Bayesian network model utilizing the evolutionary information so that gene expression data of one organism helps to estimate the gene network of the other. We show the effectiveness of the method through the analysis on Saccharomyces cerevisiae and Homo sapiens cell cycle gene expression data. Our method was successful in estimating gene networks that capture many known relationships as well as several unknown relationships which are likely to be novel. Supplementary information is available at http://bonsai.ims.u-tokyo.ac.jp/~tamada/bayesnet/.

  6. Explaining regional variations in health care utilization between Swiss cantons using panel econometric models.

    Science.gov (United States)

    Camenzind, Paul A

    2012-03-13

    Despite a detailed, nation-wide legislative framework, there are large cantonal disparities in the quantities of health care services consumed in Switzerland. In this study, the most important factors causing these regional disparities are determined. The findings are also useful for discussing the containment of health care consumption in other countries. Based on the literature, relevant factors that cause geographic disparities in quantities and costs in Western health care systems are identified. Using a selected set of these factors, individual panel econometric models are estimated to explain the variation in utilization of each of the six largest health care service groups (general practitioners, specialist doctors, hospital inpatient, hospital outpatient, medication, and nursing homes) in Swiss mandatory health insurance (MHI). The main data source is 'Datenpool santésuisse', a database of Swiss health insurers. For all six health care service groups, significant factors influencing utilization frequency over time and across cantons are found. A greater supply of service providers tends to be strongly associated with higher per capita consumption of MHI services. On the demand side, older populations and higher population densities are the clearest driving factors. Strategies to contain consumption and costs in health care should include several elements. In the federalist Swiss system, the structure of regional health care supply appears to generate significant effects. However, driving factors on the demand side (e.g., social deprivation) and financing instruments (e.g., high deductibles) should also be considered.

  7. Integrating utilization-focused evaluation with business process modeling for clinical research improvement.

    Science.gov (United States)

    Kagan, Jonathan M; Rosas, Scott; Trochim, William M K

    2010-10-01

    New discoveries in basic science are creating extraordinary opportunities to design novel biomedical preventions and therapeutics for human disease. But the clinical evaluation of these new interventions is, in many instances, being hindered by a variety of legal, regulatory, policy and operational factors, few of which enhance research quality, the safety of study participants or research ethics. With the goal of helping increase the efficiency and effectiveness of clinical research, we have examined how the integration of utilization-focused evaluation with elements of business process modeling can reveal opportunities for systematic improvements in clinical research. Using data from the NIH global HIV/AIDS clinical trials networks, we analyzed the absolute and relative times required to traverse defined phases associated with specific activities within the clinical protocol lifecycle. Using simple median durations and Kaplan-Meier survival analysis, we show how such time-based analyses can provide a rationale for the prioritization of research process analysis and re-engineering, as well as a means for statistically assessing the impact of policy modifications, resource utilization, re-engineered processes and best practices. Successfully applied, this approach can help researchers be more efficient in capitalizing on new science to speed the development of improved interventions for human disease.
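The survival-analysis step described above can be illustrated with a minimal product-limit (Kaplan-Meier) estimator, where protocol phases still open at analysis time are treated as right-censored. The durations below are invented for illustration, not the NIH network data:

```python
def kaplan_meier(durations, observed):
    """Product-limit estimate of S(t) for right-censored durations.
    observed[i] = 1 if the phase completed, 0 if still open (censored).
    Returns (event_times, survival_probabilities)."""
    pairs = sorted(zip(durations, observed))
    n_at_risk = len(pairs)
    times, surv, s = [], [], 1.0
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        completions = leaving = 0
        while i < len(pairs) and pairs[i][0] == t:  # group ties at the same time
            completions += pairs[i][1]
            leaving += 1
            i += 1
        if completions:
            s *= 1 - completions / n_at_risk        # multiply in conditional survival
            times.append(t)
            surv.append(s)
        n_at_risk -= leaving
    return times, surv

# Protocol phase durations in days; 0 marks a still-open (censored) phase
durations = [120, 200, 200, 340, 400, 460]
observed  = [1,   1,   0,   1,   0,   1]
times, surv = kaplan_meier(durations, observed)
print(times, [round(s, 3) for s in surv])
```

The median phase duration is then read off as the first time at which the survival curve drops to 0.5 or below.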

  8. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used
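The spreadsheet-plus-SOLVER workflow described above amounts to least-squares fitting of a tracer kinetic model to measured data. A minimal Python analogue is sketched below for a hypothetical mono-exponential isotope-disappearance curve (the model form, parameter values, and data are illustrative assumptions, not the RCM's model):

```python
import math

def fit_monoexponential(t, y):
    """Least-squares fit of y = A * exp(-k * t) via log-linear regression,
    the same objective an Excel SOLVER setup would minimize on the log scale."""
    n = len(t)
    ly = [math.log(v) for v in y]
    tbar = sum(t) / n
    lbar = sum(ly) / n
    slope = sum((ti - tbar) * (li - lbar) for ti, li in zip(t, ly)) / \
            sum((ti - tbar) ** 2 for ti in t)
    A = math.exp(lbar - slope * tbar)
    return A, -slope  # amplitude A and elimination rate k

# Hypothetical enrichment data generated with A=100, k=0.25 per day
t = [0, 1, 2, 4, 8]
y = [100 * math.exp(-0.25 * ti) for ti in t]
A, k = fit_monoexponential(t, y)
print(round(A, 3), round(k, 3))  # -> 100.0 0.25
```

For noisy multi-compartment data a nonlinear solver (SOLVER's actual role) would replace the log-linear shortcut, but the fitted-parameters-minimizing-residuals principle is the same.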

  9. 4M Overturned Pyramid (MOP) Model Utilization: Case Studies on Collision in Indonesian and Japanese Maritime Traffic Systems (MTS)

    OpenAIRE

    Wanginingastuti Mutmainnah; Masao Furusho

    2016-01-01

    The 4M Overturned Pyramid (MOP) model is a new model, proposed by the authors, to characterize maritime traffic systems (MTS); it adopts an epidemiological model that determines the causes of accidents, including not only active failures but also latent failures and barriers. The model is still being developed. One application of the MOP model is characterizing accidents in MTS, i.e., collisions in Indonesia and Japan, which is described in this paper. The aim of this paper is to show the characteristics of ship collision accidents...

  10. Global precipitation measurements for validating climate models

    Science.gov (United States)

    Tapiador, F. J.; Navarro, A.; Levizzani, V.; García-Ortega, E.; Huffman, G. J.; Kidd, C.; Kucera, P. A.; Kummerow, C. D.; Masunaga, H.; Petersen, W. A.; Roca, R.; Sánchez, J.-L.; Tao, W.-K.; Turk, F. J.

    2017-11-01

    The advent of global precipitation data sets with increasing temporal span has made it possible to use them for validating climate models. In order to fulfill the requirement of global coverage, existing products integrate satellite-derived retrievals from many sensors with direct ground observations (gauges, disdrometers, radars), which are used as reference for the satellites. While the resulting product can be deemed as the best-available source of quality validation data, awareness of the limitations of such data sets is important to avoid extracting wrong or unsubstantiated conclusions when assessing climate model abilities. This paper provides guidance on the use of precipitation data sets for climate research, including model validation and verification for improving physical parameterizations. The strengths and limitations of the data sets for climate modeling applications are presented, and a protocol for quality assurance of both observational databases and models is discussed. The paper helps elaborating the recent IPCC AR5 acknowledgment of large observational uncertainties in precipitation observations for climate model validation.

  11. Measuring and modelling the structure of chocolate

    Science.gov (United States)

    Le Révérend, Benjamin J. D.; Fryer, Peter J.; Smart, Ian; Bakalis, Serafim

    2015-01-01

    The cocoa butter present in chocolate exists as six different polymorphs. To achieve the desired crystal form (βV), traditional chocolate manufacturers use relatively slow cooling. This work models the temperature of chocolate products during processing as well as the crystal structure of cocoa butter throughout the process. A set of ordinary differential equations describes the kinetics of fat crystallisation. The parameters were obtained by fitting the model to a set of DSC curves. The heat transfer equations were coupled to the kinetic model and solved using commercially available CFD software. A method using single-crystal XRD was developed, employing a novel subtraction approach to quantify the cocoa butter structure in chocolate directly, and the results were compared to those predicted by the model. The model was shown to predict the phase-change temperature during processing accurately (±1°C). Furthermore, it was possible to correctly predict phase changes and polymorphic transitions. The good agreement between the model and experimental data on the model geometry allows better design and control of industrial processes.
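The ODE-based crystallisation kinetics can be illustrated with an Avrami-type rate law integrated by forward Euler. The rate constant and exponent below are invented for illustration and are not the paper's DSC-fitted values:

```python
def crystallise(k, n, t_end, dt=0.001):
    """Integrate dX/dt = k * n * (1 - X) * (k*t)**(n-1), an Avrami-type rate law
    whose exact solution is X(t) = 1 - exp(-(k*t)**n), with forward Euler.
    X is the crystallised fraction of fat."""
    X, t = 0.0, 0.0
    while t < t_end:
        rate = k * n * (1 - X) * (k * t) ** (n - 1)
        X += rate * dt
        t += dt
    return X

# Hypothetical rate constant and Avrami exponent for a single polymorph
X = crystallise(k=0.5, n=2.0, t_end=5.0)
print(round(X, 3))  # crystallised fraction approaching 1 at long times
```

In the paper's full model a set of such ODEs (one per polymorph, with temperature-dependent rate constants) is coupled to the heat-transfer equations; this sketch shows only the kinetic building block.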

  12. Modeling menopause: The utility of rodents in translational behavioral endocrinology research.

    Science.gov (United States)

    Koebele, Stephanie V; Bimonte-Nelson, Heather A

    2016-05-01

    The human menopause transition and aging are each associated with an increase in a variety of health risk factors including, but not limited to, cardiovascular disease, osteoporosis, cancer, diabetes, stroke, sexual dysfunction, affective disorders, sleep disturbances, and cognitive decline. It is challenging to systematically evaluate the biological underpinnings associated with the menopause transition in the human population. For this reason, rodent models have been invaluable tools for studying the impact of gonadal hormone fluctuations and eventual decline on a variety of body systems. While it is essential to keep in mind that some of the mechanisms associated with aging and the transition into a reproductively senescent state can differ when translating from one species to another, animal models provide researchers with opportunities to gain a fundamental understanding of the key elements underlying reproduction and aging processes, paving the way to explore novel pathways for intervention associated with known health risks. Here, we discuss the utility of several rodent models used in the laboratory for translational menopause research, examining the benefits and drawbacks in helping us to better understand aging and the menopause transition in women. The rodent models discussed are ovary-intact, ovariectomy, and 4-vinylcyclohexene diepoxide for the menopause transition. We then describe how these models may be implemented in the laboratory, particularly in the context of cognition. Ultimately, we aim to use these animal models to elucidate novel perspectives and interventions for maintaining a high quality of life in women, and to potentially prevent or postpone the onset of negative health consequences associated with these significant life changes during aging. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Modeling menopause: The utility of rodents in translational behavioral endocrinology research

    Science.gov (United States)

    Koebele, Stephanie V.; Bimonte-Nelson, Heather A.

    2016-01-01

    The human menopause transition and aging are each associated with an increase in a variety of health risk factors including, but not limited to, cardiovascular disease, osteoporosis, cancer, diabetes, stroke, sexual dysfunction, affective disorders, sleep disturbances, and cognitive decline. It is often challenging to systematically evaluate the biological underpinnings associated with the menopause transition in the human population. For this reason, rodent models have been invaluable tools for studying the impact of gonadal hormone fluctuations and eventual decline on a variety of body systems. While it is essential to keep in mind that some of the mechanisms associated with aging and the transition into a reproductively senescent state can differ when translating from one species to another, animal models provide researchers with opportunities to gain a fundamental understanding of the key elements underlying reproduction and aging processes, paving the way to explore novel pathways for intervention associated with known health risks. Here, we discuss the utility of several rodent models used in the laboratory for translational menopause research, examining the benefits and drawbacks in helping us to better understand aging and the menopause transition in women. The rodent models discussed are ovary-intact, ovariectomy, and 4-vinylcyclohexene diepoxide for the menopause transition. We then describe how these models may be implemented in the laboratory, particularly in the context of cognition. Ultimately, we aim to use these animal models to elucidate novel perspectives and interventions for maintaining a high quality of life in women, and to potentially prevent or postpone the onset of negative health consequences associated with these significant life changes during aging. PMID:27013283

  14. Measures and procedures utilized to determine the added value of microprocessor-controlled prosthetic knee joints: a systematic review.

    Science.gov (United States)

    Theeven, Patrick J R; Hemmen, Bea; Brink, Peter R G; Smeets, Rob J E M; Seelen, Henk A M

    2013-11-27

    The effectiveness of microprocessor-controlled prosthetic knee joints (MPKs) has been assessed using a variety of outcome measures in a variety of health and health-related domains. However, if the patient is to receive a prosthetic knee joint that enables him to function optimally in daily life, it is vital that the clinician has adequate information about the effects of that particular component on all aspects of persons' functioning. Especially information concerning activities and participation is of high importance, as this component of functioning closely describes the person's ability to function with the prosthesis in daily life. The present study aimed to review the outcome measures that have been utilized to assess the effects of microprocessor-controlled prosthetic knee joints (MPK), in comparison with mechanically controlled prosthetic knee joints, and aimed to classify these measures according to the components and categories of functioning defined by the International Classification of Functioning, Disability and Health (ICF). Subsequently, the gaps in the scientific evidence regarding the effectiveness of MPKs were determined. A systematic literature search in 6 databases (i.e. PubMed, CINAHL, Cochrane Library, Embase, Medline and PsychInfo) identified scientific studies that compared the effects of using MPKs with mechanically controlled prosthetic knee joints on persons' functioning. The outcome measures that have been utilized in those studies were extracted and categorized according to the ICF framework. Also, a descriptive analysis regarding all studies has been performed. A total of 37 studies and 72 outcome measures have been identified. The majority (67%) of the outcome measures that described the effects of using an MPK on persons' actual performance with the prosthesis covered the ICF body functions component. Only 31% of the measures on persons' actual performance investigated how an MPK may affect performance in daily life. Research also ...

  15. Mathematical model of a utility firm. Final technical report, Part I

    Energy Technology Data Exchange (ETDEWEB)

    1983-08-21

    Utility companies are in the predicament of having to make forecasts and draw up plans for the future in an increasingly fluid and volatile socio-economic environment. The project reported here contributes to an understanding of the economic and behavioral processes that take place within a firm and outside it. Three main topics are treated. The first is the representation of the characteristics of the members of an organization, to the extent that those characteristics seem pertinent to the processes of interest. The second is the appropriate management of processes of change by an organization. The third deals with the competitive striving towards an economic equilibrium among the members of a society at large, on the theory that this process might be modeled in a way similar to that used for the intra-organizational processes. This volume covers mainly the first topic.

  16. Prediction of Adequate Prenatal Care Utilization Based on the Extended Parallel Process Model.

    Science.gov (United States)

    Hajian, Sepideh; Imani, Fatemeh; Riazi, Hedyeh; Salmani, Fatemeh

    2017-10-01

    Pregnancy complications are one of the major public health concerns. One of the main causes of preventable complications is the absence or inadequate provision of prenatal care. The present study was conducted to investigate whether the Extended Parallel Process Model's constructs can predict the utilization of prenatal care services. This longitudinal prospective study was conducted on 192 pregnant women selected through multi-stage sampling of health facilities in Qeshm, Hormozgan province, from April to June 2015. Participants were followed from the first half of pregnancy until childbirth to assess adequate versus inadequate/non-utilization of prenatal care services. Data were collected using the structured Risk Behavior Diagnosis Scale. The analysis of the data was carried out in SPSS-22 using one-way ANOVA, linear regression, and logistic regression analysis. The level of significance was set at 0.05. In total, 178 pregnant women with a mean age of 25.31±5.42 years completed the study. Perceived self-efficacy (OR=25.23) predicted adequate utilization of prenatal care. Husband's occupation in the labor market (OR=0.43; P=0.02), unwanted pregnancy (OR=0.352), and caring for minors or the elderly at home (OR=0.35; P=0.045) were associated with lower odds of receiving prenatal care. The model showed that when the perceived efficacy of prenatal care services overcomes the perceived threat, the likelihood of prenatal care usage increases. This study identified some modifiable factors associated with prenatal care usage by women, providing key targets for appropriate clinical interventions.
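Odds ratios like those reported above translate into probabilities only relative to a baseline. A small sketch makes the interpretation concrete; the 50% baseline chance of adequate care is an assumption chosen for illustration, not a figure from the study:

```python
def prob_from_or(base_prob, odds_ratio):
    """Convert a baseline probability plus an odds ratio into the implied probability."""
    base_odds = base_prob / (1 - base_prob)  # probability -> odds
    odds = base_odds * odds_ratio            # apply the odds ratio
    return odds / (1 + odds)                 # odds -> probability

# With a hypothetical 50% baseline chance of adequate prenatal care,
# an odds ratio of 0.43 implies roughly a 30% chance.
p = prob_from_or(0.5, 0.43)
print(round(p, 3))  # -> 0.301
```

This is why an OR of 25.23 for perceived self-efficacy signals a strong effect: at any moderate baseline it pushes the implied probability of adequate care close to one.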

  17. Modeling and optimization of processes for clean and efficient pulverized coal combustion in utility boilers

    Directory of Open Access Journals (Sweden)

    Belošević Srđan V.

    2016-01-01

Full Text Available Pulverized coal-fired power plants should provide higher efficiency of energy conversion, flexibility in terms of boiler loads and fuel characteristics, and emission reduction of pollutants like nitrogen oxides. Modification of the combustion process is a cost-effective technology for NOx control. For optimization of complex processes, such as turbulent reactive flow in coal-fired furnaces, mathematical modeling is regularly used. The paper investigates NOx emission reduction by combustion modifications in the 350 MWe Kostolac B boiler furnace, tangentially fired by pulverized Serbian lignite. Numerical experiments were done with an in-house developed three-dimensional differential comprehensive combustion code, with a fuel- and thermal-NO formation/destruction reactions model. The code was developed to be easily used by engineering staff for process analysis in boiler units. A broad range of operating conditions was examined, such as fuel and preheated air distribution over the burners and tiers, operation mode of the burners, grinding fineness and quality of coal, boiler loads, cold air ingress, recirculation of flue gases, water-wall ash deposition and the combined effect of different parameters. The predictions show that a NOx emission reduction of up to 30% can be achieved by proper combustion organization in the case-study furnace, with flame position control. The impact of combustion modifications on boiler operation was evaluated by boiler thermal calculations, suggesting that the facility should be controlled within narrow limits of operating parameters. Such a complex approach to pollutant control enables evaluating alternative solutions to achieve efficient and low-emission operation of utility boiler units. [Project of the Ministry of Science of the Republic of Serbia, no. TR-33018: Increase in energy and ecology efficiency of processes in pulverized coal-fired furnace and optimization of utility steam boiler air preheater by using in

  18. A gentle introduction to Rasch measurement models for metrologists

    International Nuclear Information System (INIS)

    Mari, Luca; Wilson, Mark

    2013-01-01

The talk introduces the basics of Rasch models by systematically interpreting them in the conceptual and lexical framework of the International Vocabulary of Metrology, third edition (VIM3). An admittedly simple example of physical measurement highlights the analogies between physical transducers and tests, which can both be understood as measuring instruments, in Rasch models and psychometrics in general. From the talk, natural scientists and engineers might learn something of Rasch models, as a specifically relevant case of social measurement, and social scientists might re-interpret something of their knowledge of measurement in the light of current physical measurement models.
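
The analogy can be made concrete with the dichotomous Rasch model itself, whose item response function is a logistic in the difference between person ability and item difficulty (a standard textbook form, not code from the talk):

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person with ability
    theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the response probability is exactly 0.5,
# the balance point analogous to a null reading on a comparator.
print(rasch_p(0.0, 0.0))             # 0.5
print(round(rasch_p(1.0, 0.0), 3))   # above 0.5: item is easy for this person
```

The item thus behaves like a transducer whose "reading" (the response probability) depends only on the difference between the measurand and the instrument's calibration constant.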

  19. Modeling Utility Load and Temperature Relationships for Use with Long-Lead Forecasts.

    Science.gov (United States)

    Robinson, Peter J.

    1997-05-01

    Models relating system-wide average temperature to total system load were developed for the Virginia Power and Duke Power service areas in the southeastern United States. Daily data for the 1985-91 period were used. The influence of temperature on load was at a minimum around 18°C and increased more rapidly with increasing temperatures than with decreasing ones. The response was sensitive to the day of the week, and models using separate weekdays as well as one using pooled data were created. None adequately accounted for civic holidays or for extreme temperatures. Estimates of average loads over a 3-month period, however, were accurate to within ±3%. The models were used to transform the probability distribution of 3-month average temperatures for each system, derived from the historical record, into load probabilities. These were used with the categorical temperature probabilities given by the National Weather Service long-lead forecasts to estimate the forecast load probabilities. In summer and winter the resultant change in distribution is sufficient to have an impact on the advance fuel purchase decisions of the utilities. Results in spring and fall are more ambiguous.
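
The described approach, a V-shaped load-temperature response with a minimum near 18°C combined with a temperature probability distribution, can be sketched as follows; the slopes and probabilities are illustrative, not the fitted Virginia Power or Duke Power coefficients.

```python
def system_load(temp_c, base=10.0, cool_slope=0.8, heat_slope=0.5, pivot=18.0):
    """V-shaped load response: minimum near 18 degC, rising faster for warm
    departures (cooling) than cold ones (heating). Illustrative parameters."""
    if temp_c >= pivot:
        return base + cool_slope * (temp_c - pivot)
    return base + heat_slope * (pivot - temp_c)

# Transform a discrete 3-month temperature probability distribution into
# an expected load, mirroring how the paper maps forecast temperature
# probabilities into load probabilities. The probabilities are hypothetical.
temp_probs = {14.0: 0.3, 18.0: 0.4, 24.0: 0.3}
expected_load = sum(p * system_load(t) for t, p in temp_probs.items())
print(round(expected_load, 2))   # 12.04 for these illustrative inputs
```

Shifting probability mass toward either tail raises the expected load, which is the mechanism by which a seasonal temperature forecast changes advance fuel purchase decisions.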

  20. Evaluation of remedial alternative of a LNAPL plume utilizing groundwater modeling

    International Nuclear Information System (INIS)

    Johnson, T.; Way, S.; Powell, G.

    1997-01-01

The TIMES model was utilized to evaluate remedial options for a large LNAPL spill that was impacting the North Platte River in Glenrock, Wyoming. LNAPL was found discharging into the river from the adjoining alluvial aquifer. Subsequent investigations discovered an 18-hectare plume extending across the alluvium and into a sandstone bedrock outcrop to the south of the river. The TIMES model was used to estimate the LNAPL volume and to evaluate options for optimizing LNAPL recovery. Data collected from recovery and monitoring wells were used for model calibration. A LNAPL volume of 5.5 million L was estimated, over 3.0 million L of which is in the sandstone bedrock. An existing product recovery system was evaluated for its effectiveness. Three alternative recovery scenarios were also evaluated to aid in selecting the most cost-effective and efficient recovery system for the site. An active wellfield hydraulically upgradient of the existing recovery system was selected as most appropriate to augment the existing system in recovering LNAPL efficiently.

  1. Classification models of child molesters utilizing the Abel Assessment for sexual interest.

    Science.gov (United States)

    Abel, G G; Jordan, A; Hand, C G; Holland, L A; Phipps, A

    2001-05-01

    The aims of this study are to demonstrate 1) the criterion validity of the Abel Assessment for sexual interest (AASI) based on its ability to discriminate between non child molesters and admitting child molesters, and 2) its resistance to falsification based on its ability to discriminate between liar-denier child molesters and non child molesters. A group of 747 participants matched by age, race, and income was used to develop three logistic regression equations. The models compare a group of non child molesting patients under evaluation for other paraphilias to three groups: 1) a group of admitting molesters of girls under 14 years of age, 2) a group of admitting molesters of boys under 14 years of age, and 3) a group believed to be concealing or denying having molested. Both of the equations designed to discriminate between admitting child molesters and non child molesters were statistically significant. The equation contrasting child molesters attempting to conceal or deny their behavior and non child molesting patients was also statistically significant. The models classifying admitting child molesters versus non child molesters demonstrate criterion validity, while the third model provides evidence of the AASI's resistance to falsification and its utility as a tool in the detection of child molesters who deny the behavior. Results of the equations are reported and suggestions for their use are discussed.

  2. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
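
A small simulation illustrates the central phenomenon such measurement-error models address: classical error in a regressor (e.g., a reconstructed radiation dose) attenuates the naive regression slope toward zero. All numbers below are illustrative, not from the monograph.

```python
import random

random.seed(42)

# Classical measurement error in a regressor attenuates the naive OLS
# slope toward zero by the reliability ratio
#   lambda = var(x) / (var(x) + var(measurement error)).
# Here lambda = 1 / (1 + 1) = 0.5, so the naive slope should be near 1.0
# even though the true slope is 2.0.
n = 20_000
true_slope = 2.0
x = [random.gauss(0.0, 1.0) for _ in range(n)]          # true exposure
y = [true_slope * xi + random.gauss(0.0, 0.5) for xi in x]
x_obs = [xi + random.gauss(0.0, 1.0) for xi in x]       # measured with error

def ols_slope(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

naive = ols_slope(x_obs, y)
print(round(naive, 2))   # close to 1.0 (attenuated), not the true 2.0
```

Correcting for this bias, rather than merely demonstrating it, is what the monograph's efficient estimators are designed to do.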

  3. Multivariate linear models and repeated measurements revisited

    DEFF Research Database (Denmark)

    Dalgaard, Peter

    2009-01-01

Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations lead to s...

  4. Model-based cartilage thickness measurement in the submillimeter range

    International Nuclear Information System (INIS)

    Streekstra, G. J.; Strackee, S. D.; Maas, M.; Wee, R. ter; Venema, H. W.

    2007-01-01

    Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical

  5. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

Model A: 8.0, 2.0, 94.52%, 88.46%, 76, 108, 12, 12, 0.86, 0.91, 0.78, 0.94. Model B: 2.0, 2.0, 93.18%, 89.33%, 64, 95, 10, 9, 0.88, 0.90, 0.75, 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding the 32 negatives of the MiRTif dataset to our testing set (MiRecords) ...

  6. Modelling of power-reactivity coefficient measurement

    International Nuclear Information System (INIS)

    Strmensky, C.; Petenyi, V.; Jagrik, J.; Minarcin, M.; Hascik, R.; Toth, L.

    2005-01-01

The report describes the results of modelling a power-reactivity coefficient measurement at power level. We calculate the discrepancies arising during the transient process. These discrepancies can arise from the evaluation of the experiment and can be caused by disregarding 3D effects on the neutron distribution. The results are critically discussed (Authors)

  7. Measuring productivity differences in equilibrium search models

    DEFF Research Database (Denmark)

    Lanot, Gauthier; Neumann, George R.

    1996-01-01

    Equilibrium search models require unobserved heterogeneity in productivity to fit observed wage distribution data, but provide no guidance about the location parameter of the heterogeneity. In this paper we show that the location of the productivity heterogeneity implies a mode in a kernel density...
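
The kernel-density step mentioned in the abstract can be sketched as follows, locating the mode of a Gaussian KDE over simulated productivity draws; the data and bandwidth are hypothetical.

```python
import math
import random

random.seed(7)

def gaussian_kde(data, h):
    """Return a Gaussian kernel density estimate with bandwidth h."""
    c = 1.0 / (len(data) * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

# Hypothetical productivity draws with a clear mode near 1.0.
data = [random.gauss(1.0, 0.2) for _ in range(2000)]
density = gaussian_kde(data, h=0.1)

# Locate the KDE mode on a grid, as an estimate of the location
# parameter of the productivity heterogeneity.
grid = [i * 0.01 for i in range(201)]   # 0.00 .. 2.00
mode = max(grid, key=density)
print(round(mode, 2))   # close to 1.0
```

The mode, unlike the mean, is robust to the long upper tail typical of productivity distributions, which is what makes it a usable location estimate here.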

  8. Modelling of landfill gas adsorption with bottom ash for utilization of renewable energy

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Chen

    2011-10-06

Energy crisis, environmental pollution and climate change are serious challenges for people worldwide. In the 21st century, research is turning to new renewable energy technologies, so as to slow down global warming and develop society in an environmentally sustainable way. Landfill gas, produced by biodegradable municipal solid waste in landfills, is a renewable energy source. In this work, landfill gas utilization for energy generation is introduced. Hydrogen can be produced from landfill gas by steam reforming reactions; the fuel cell system contains a steam reformer unit. A sewage plant in Cologne, Germany, has successfully run a phosphoric acid fuel cell power station on biogas for more than 50,000 hours. Landfill gas thus may be used as fuel for electricity generation via a fuel cell system. To explain the possibility of landfill gas utilization via fuel cells, the thermodynamics of landfill gas steam reforming are discussed by means of simulations. In practice, methane-rich gas can be obtained by landfill gas purification and upgrading. This work experimentally investigates a new method for upgrading: landfill gas adsorption with bottom ash. Bottom ash is a by-product of municipal solid waste incineration; some of its physical and chemical properties are analysed in this work. The landfill gas adsorption experimental data show that bottom ash can be used as a potential adsorbent for landfill gas adsorption to remove CO{sub 2}. In addition, the alkalinity of the bottom ash eluate can be reduced in these adsorption processes. The interactions between landfill gas and bottom ash can therefore be explained by a series of reactions. Furthermore, a conceptual model involving landfill gas adsorption with bottom ash is developed. In this thesis, the parameters of the landfill gas adsorption equilibrium equations are obtained by fitting experimental data; these functions can also be deduced with a theoretical approach.
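
As an illustration of fitting adsorption equilibrium parameters to data, the sketch below assumes a Langmuir isotherm (the abstract does not specify the functional form) and recovers its parameters from synthetic, noise-free measurements via the standard linearisation.

```python
def langmuir(p, q_max, b):
    """Langmuir isotherm: equilibrium loading q at partial pressure p."""
    return q_max * b * p / (1.0 + b * p)

# Synthetic, noise-free CO2-on-bottom-ash equilibrium data (illustrative).
true_qmax, true_b = 2.0, 0.5
pressures = [0.2, 0.5, 1.0, 2.0, 4.0, 8.0]
loadings = [langmuir(p, true_qmax, true_b) for p in pressures]

# Linearised Langmuir: p/q = 1/(q_max*b) + p/q_max, a straight line in p.
ys = [p / q for p, q in zip(pressures, loadings)]
n = len(pressures)
mx = sum(pressures) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(pressures, ys)) / \
        sum((x - mx) ** 2 for x in pressures)
intercept = my - slope * mx
q_max_fit = 1.0 / slope        # slope = 1/q_max
b_fit = slope / intercept      # intercept = 1/(q_max*b)
print(round(q_max_fit, 3), round(b_fit, 3))   # 2.0 0.5
```

With real (noisy) data one would fit the nonlinear form directly by least squares, but the linearised plot remains a quick diagnostic of whether a Langmuir-type model is appropriate at all.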

  9. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission from commercial space tourism, to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting edge rocketry with the assumption that the astronauts could be trained and will adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. 
While the model is the primary tangible product from this research, the more interesting outcome of

  10. Innovative practice model to optimize resource utilization and improve access to care for high-risk and BRCA+ patients.

    Science.gov (United States)

    Head, Linden; Nessim, Carolyn; Usher Boyd, Kirsty

    2017-02-01

Bilateral prophylactic mastectomy (BPM) has demonstrated breast cancer risk reduction in high-risk/BRCA+ patients. However, the priority of active cancers coupled with inefficient use of operating room (OR) resources presents challenges in offering BPM in a timely manner. To address these challenges, a rapid access prophylactic mastectomy and immediate reconstruction (RAPMIR) program was developed. The purpose of this study was to evaluate RAPMIR with regards to access to care and efficiency. We retrospectively reviewed the cases of all high-risk/BRCA+ patients having had BPM between September 2012 and August 2014. Patients were divided into 2 groups: those managed through the traditional model and those managed through the RAPMIR model. RAPMIR leverages 2 concurrently running ORs, with surgical oncology and plastic surgery moving between rooms to complete 3 combined BPMs with immediate reconstruction in addition to 1-2 independent cases each operative day. RAPMIR eligibility criteria included high-risk/BRCA+ status; BPM with immediate, implant-based reconstruction; and day surgery candidacy. Wait times, case volumes and patient throughput were measured and compared. There were 16 traditional patients and 13 RAPMIR patients. Mean wait time (days from referral to surgery) for RAPMIR was significantly shorter than for the traditional model (165.4 v. 309.2 d, p = 0.027). Daily patient throughput (4.3 v. 2.8), plastic surgery case volume (3.7 v. 1.6) and surgical oncology case volume (3.0 v. 2.2) were significantly greater in the RAPMIR model than the traditional model (p = 0.003, p < 0.001 and p = 0.015, respectively). A multidisciplinary model with optimized scheduling has the potential to improve access to care and optimize resource utilization.

  11. Measurement Models for Reasoned Action Theory

    OpenAIRE

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-01-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are ...

  12. Modeling Late-Stage Serpentinization on Enceladus and Implications for Methane-Utilizing Microbial Metabolisms

    Science.gov (United States)

    Hart, R.; Cardace, D.

    2017-12-01

    Modeling investigations of Enceladus and other icy satellites have included physicochemical properties (Sohl et al., 2010; Glein et al., 2015; Neveu et al., 2015), geophysical prospects of serpentinization (Malamud and Prialnik, 2016; Vance et al., 2016), and aqueous geochemistry across different antifreeze fluid-rock scenarios (Neveu et al., 2017). To more effectively evaluate the habitability of Enceladus, in the context of recent observations (Waite et al., 2017), we model the potential bioenergetic pathways that would be thermodynamically favorable at the interface of hydrothermal water-rock reactions resulting from late-stage serpentinization (>90% serpentinized), hypothesized on Enceladus. Building on previous geochemical model outputs of Enceladus (Neveu et al., 2017), and bioenergetic modeling (as in Amend and Shock, 2001; Cardace et al., 2015), we present a model of late-stage serpentinization possible at the water-rock interface of Enceladus, and report changing activities of chemical species related to methane utilization by microbes over the course of serpentinization using the Geochemist's Workbench REACT code [modified Extended Debye-Hückel (Helgeson, 1969) using the thermodynamic database of SUPCRT92 (Johnson et al., 1992)]. Using a model protolith speculated to exist at Enceladus's water-rock boundary, constrained by extraterrestrial analog analytical data for subsurface serpentinites of the Coast Range Ophiolite (Lower Lake, CA, USA) mélange rocks, we deduce evolving habitability conditions as the model protolith reacts with feasible, though hypothetical, planetary ocean chemistries (from Glein et al., 2015, and Neveu et al., 2017). Major components of modeled oceans, Na-Cl, Mg-Cl, and Ca-Cl, show shifts in the feasibility of CO2-CH4-H2 driven microbial habitability, occurring early in the reaction progress, with methanogenesis being bioenergetically favored. 
Methanotrophy was favored late in the reaction progress of some Na-Cl systems and in the
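
The activity-coefficient machinery referenced above (modified extended Debye-Hückel) reduces, in its simplest form without the b-dot correction, to a one-line formula. The constants below are the usual 25°C values and the brine composition is hypothetical; at the resulting ionic strength the simple form is only a rough approximation.

```python
import math

def log_gamma_dh(z, ionic_strength, a_ring, A=0.509, B=0.328):
    """Extended Debye-Hueckel (without the b-dot term): log10 of the
    activity coefficient for an ion of charge z at 25 degC; a_ring is
    the ion-size parameter in angstroms."""
    sqrt_i = math.sqrt(ionic_strength)
    return -A * z ** 2 * sqrt_i / (1.0 + B * a_ring * sqrt_i)

# Hypothetical Ca-Cl ocean brine: 0.5 mol/kg CaCl2.
i_strength = 0.5 * (0.5 * 2 ** 2 + 1.0 * 1 ** 2)   # I = 1/2 sum m_i z_i^2
log_g_ca = log_gamma_dh(2, i_strength, a_ring=6.0)
gamma_ca = 10.0 ** log_g_ca
activity_ca = 0.5 * gamma_ca    # a = gamma * molality
print(round(gamma_ca, 3))       # well below 1: strongly non-ideal solution
```

Activities rather than raw concentrations are what enter the bioenergetic Gibbs-energy calculations, which is why the ocean's major-ion matrix (Na-Cl vs. Mg-Cl vs. Ca-Cl) shifts the feasibility of each metabolism.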

  13. Utilization and cost of a new model of care for managing acute knee injuries: the Calgary acute knee injury clinic

    Directory of Open Access Journals (Sweden)

    Lau Breda HF

    2012-12-01

    Full Text Available Abstract Background Musculoskeletal disorders (MSDs) affect a large proportion of the Canadian population and present a huge problem that continues to strain primary healthcare resources. Currently, the Canadian healthcare system depicts a clinical care pathway for MSDs that is inefficient and ineffective. Therefore, a new inter-disciplinary team-based model of care for managing acute knee injuries was developed in Calgary, Alberta, Canada: the Calgary Acute Knee Injury Clinic (C-AKIC). The goal of this paper is to evaluate and report on the appropriateness, efficiency, and effectiveness of the C-AKIC through healthcare utilization and costs associated with acute knee injuries. Methods This quasi-experimental study measured and evaluated cost and utilization associated with specific healthcare services for patients presenting with acute knee injuries. The goal was to compare patients receiving care from two clinical care pathways: the existing pathway (i.e., the comparison group) and a new model, the C-AKIC (i.e., the experimental group). This was accomplished through the use of a Healthcare Access and Patient Satisfaction Questionnaire (HAPSQ). Results Data from 138 questionnaires were analyzed in the experimental group and 136 in the comparison group. A post-hoc analysis determined that both groups were statistically similar in socio-demographic characteristics. With respect to utilization, patients receiving care through the C-AKIC used significantly fewer resources. Overall, patients receiving care through the C-AKIC incurred 37% of the cost of patients with knee injuries in the comparison group and incurred significantly lower costs when compared to the comparison group. The total aggregate average cost for the C-AKIC group was $2,549.59 compared to $6,954.33 for the comparison group (p Conclusions The Calgary Acute Knee Injury Clinic was able to manage and treat knee-injured patients for less cost than the existing state of healthcare delivery. The

  14. Measuring Quality Satisfaction with Servqual Model

    Directory of Open Access Journals (Sweden)

    Dan Păuna

    2012-05-01

    Full Text Available The orientation to customer satisfaction is not a recent phenomenon; many very successful businesspeople from the beginning of the 20th century, such as Sir Henry Royce, a name synonymous with Rolls-Royce vehicles, stated the first principle regarding customer satisfaction: "Our interest in the Rolls-Royce cars does not end at the moment when the owner pays for and takes delivery of the car. Our interest in the car never wanes. Our ambition is that every purchaser of the Rolls-Royce car shall continue to be more than satisfied" (Rolls-Royce). The following paper tries to deal with the important qualities of the concept for measuring the gap between expected customer service satisfaction and perceived services, like a routine customer feedback process, by means of a relatively new model, the Servqual model.
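
The Servqual model scores service quality as the gap between perceptions and expectations across its five standard dimensions; a minimal sketch with hypothetical ratings on a 7-point scale:

```python
# Servqual gap score: perception minus expectation, per dimension.
# The dimension names are the standard Servqual five; the ratings
# themselves are invented for illustration.
expectations = {"tangibles": 6.1, "reliability": 6.5, "responsiveness": 6.3,
                "assurance": 6.0, "empathy": 5.8}
perceptions = {"tangibles": 5.9, "reliability": 5.4, "responsiveness": 5.6,
               "assurance": 6.1, "empathy": 5.5}

gaps = {dim: round(perceptions[dim] - expectations[dim], 2)
        for dim in expectations}
overall_gap = sum(gaps.values()) / len(gaps)
print(gaps)
print(round(overall_gap, 2))   # negative: service falls short of expectations
```

A negative gap on a dimension (here, most visibly reliability) pinpoints where the routine customer feedback process should direct improvement effort.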

  15. Artificial intelligence model for sustainability measurement

    International Nuclear Information System (INIS)

    Navickiene, R.; Navickas, K.

    2012-01-01

    The article analyses the main dimensions of organizational sustainability and their possible integration into an artificial neural network. The authors analyse organizational internal and external environments, their possible correlations with the four components of sustainability, and the principal models for determining the sustainability of organizations. Based on the general principles of sustainable development of organizations, an artificial intelligence model for the determination of organizational sustainability has been developed. The use of self-organizing neural networks allows the identification of organizational sustainability and the exploration of vital, social, anthropogenic and economic efficiency. The determination of forest enterprise sustainability is expected to help manage sustainability better. (Authors)

  16. Achieving Success in Measurement and Reliability Modeling

    OpenAIRE

    Keller, Ted; Munson, John C.; Schneidewind, Norman; Stark, George

    1993-01-01

    Panel Session at the International Symposium on Software Reliability Engineering 1993, Saturday: 6 November 1993, 0830-1000 and 1030-1200 The NASA Space Shuttle on-board software is one of the nation's most safety-critical software systems. The process which produces this software has been rated at maturity level five. Among the quality assurance methods that are used to ensure the software is free of safety-critical faults is the use of reliability modelling and predi...

  17. Validation of the measurement model concept for error structure identification

    International Nuclear Information System (INIS)

    Shukla, Pavan K.; Orazem, Mark E.; Crisalle, Oscar D.

    2004-01-01

    The development of different forms of measurement models for impedance has allowed examination of the key assumptions on which the use of such models to assess error structure is based. The stochastic error structures obtained using the transfer-function and Voigt measurement models were identical, even when non-stationary phenomena caused some of the data to be inconsistent with the Kramers-Kronig relations. The suitability of the measurement model for assessment of consistency with the Kramers-Kronig relations, however, was found to be more sensitive to the confidence interval for the parameter estimates than to the number of parameters in the model. A tighter confidence interval was obtained for the Voigt measurement model, which made it a more sensitive tool for identification of inconsistencies with the Kramers-Kronig relations.
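
The Voigt measurement model referred to above is a series resistance plus a ladder of parallel RC elements; a minimal sketch with hypothetical parameter values. By construction such a model satisfies the Kramers-Kronig relations, which is what makes it useful as a consistency check on impedance data.

```python
import math

def voigt_impedance(freq_hz, r0, rc_pairs):
    """Voigt measurement model: series resistance R0 plus K parallel RC
    (Voigt) elements, Z(w) = R0 + sum_k Rk / (1 + j*w*Rk*Ck)."""
    w = 2.0 * math.pi * freq_hz
    z = complex(r0, 0.0)
    for r, c in rc_pairs:
        z += r / complex(1.0, w * r * c)
    return z

# Two-element Voigt model with invented parameters (ohms, farads).
pairs = [(100.0, 1e-6), (50.0, 1e-4)]
z_lo = voigt_impedance(0.01, 10.0, pairs)   # low frequency: |Z| -> R0 + sum Rk
z_hi = voigt_impedance(1e6, 10.0, pairs)    # high frequency: |Z| -> R0
print(round(abs(z_lo), 1), round(abs(z_hi), 1))   # 160.0 10.0
```

Fitting such a model to measured spectra and inspecting the residuals is the practical route to the stochastic error structure the abstract discusses.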

  18. Mars Colony in situ resource utilization: An integrated architecture and economics model

    Science.gov (United States)

    Shishko, Robert; Fradet, René; Do, Sydney; Saydam, Serkan; Tapia-Cortez, Carlos; Dempster, Andrew G.; Coulton, Jeff

    2017-09-01

    This paper reports on our effort to develop an ensemble of specialized models to explore the commercial potential of mining water/ice on Mars in support of a Mars Colony. This ensemble starts with a formal systems architecting framework to describe a Mars Colony and capture its artifacts' parameters and technical attributes. The resulting database is then linked to a variety of "downstream" analytic models. In particular, we integrated an extraction process (i.e., "mining") model, a simulation of the colony's environmental control and life support infrastructure known as HabNet, and a risk-based economics model. The mining model focuses on the technologies associated with in situ resource extraction, processing, storage and handling, and delivery. This model computes the production rate as a function of the systems' technical parameters and the local Mars environment. HabNet simulates the fundamental sustainability relationships associated with establishing and maintaining the colony's population. The economics model brings together market information, investment and operating costs, along with measures of market uncertainty and Monte Carlo techniques, with the objective of determining the profitability of commercial water/ice in situ mining operations. All told, over 50 market and technical parameters can be varied in order to address "what-if" questions, including colony location.
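
The risk-based economics step can be sketched as a Monte Carlo net-present-value simulation; all cost, price and discount-rate figures below are invented placeholders, not the paper's inputs.

```python
import random

random.seed(1)

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_venture():
    """One Monte Carlo draw for a hypothetical water/ice mining venture:
    uncertain up-front cost and market price (all figures invented)."""
    capex = random.uniform(80.0, 120.0)    # $M, up-front investment
    price = random.uniform(0.8, 1.6)       # market-uncertainty factor
    yearly = 30.0 * price - 12.0           # revenue minus operating cost, $M
    return npv([-capex] + [yearly] * 10, rate=0.08)

runs = [simulate_venture() for _ in range(10_000)]
p_profit = sum(1 for v in runs if v > 0.0) / len(runs)
print(round(p_profit, 2))   # fraction of draws with positive NPV
```

Reporting a probability of profitability rather than a single NPV is what distinguishes the risk-based treatment from a deterministic business case.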

  19. Utilizing Visual Effects Software for Efficient and Flexible Isostatic Adjustment Modelling

    Science.gov (United States)

    Meldgaard, A.; Nielsen, L.; Iaffaldano, G.

    2017-12-01

    The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model for 3D viscoelastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural work flow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time-consuming steps associated with the authoring of efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of commercial dedicated finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored for the study of local isostatic adjustment patterns, in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in the uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models, and may require the inclusion of local topographic features. We use the presented algorithm to study a near field area where field observations are abundant, namely, Disko Bay in West Greenland with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local

  20. Spot markets vs. long-term contracts - modelling tools for regional electricity generating utilities

    International Nuclear Information System (INIS)

    Grohnheit, P.E.

    1999-01-01

    A properly organised market for electricity requires that some information be available to all market participants. A range of generally available modelling tools is also necessary. This paper describes a set of simple models, based on published data, for analysing the long-term revenues of regional utilities with combined heat and power generation (CHP) that will operate in a competitive international electricity market and a local heat market. The future revenues from trade on the spot market are analysed using a load curve model, in which marginal costs are calculated on the basis of the short-term costs of the available units and chronological hourly variations in the demands for electricity and heat. Assumptions on prices, marginal costs and electricity generation by the different types of generating units are studied for selected types of local electricity generators. The long-term revenue requirements to be met by long-term contracts are analysed using a traditional techno-economic optimisation model focusing on technology choice and competition among technologies over 20-30 years. A possible conclusion from this discussion is that it is important for the economic and environmental efficiency of the electricity market that local or regional generators of CHP, who are able to react to price signals, do not conclude long-term contracts that include a fixed time-of-day tariff for the sale of electricity. Optimisation results for a CHP region (represented by the structure of the Danish electricity and CHP market in 1995) also indicate that a market for CO2 tradable permits is unlikely to attract major non-fossil fuel technologies for electricity generation, e.g. wind power. (au)
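
The load-curve model's core idea, that the hourly spot price is set by the short-run marginal cost of the last unit dispatched, can be sketched as a merit-order calculation; the unit data are illustrative, not the Danish 1995 system.

```python
def spot_price(demand_mw, units):
    """Merit-order dispatch: sort units by marginal cost and return the
    short-run marginal cost of the last unit needed to cover demand."""
    remaining = demand_mw
    for capacity, marginal_cost in sorted(units, key=lambda u: u[1]):
        remaining -= capacity
        if remaining <= 0:
            return marginal_cost
    raise ValueError("demand exceeds available capacity")

# Hypothetical generating system: (capacity MW, marginal cost EUR/MWh).
units = [(400, 5.0),    # wind, near-zero fuel cost
         (600, 18.0),   # CHP back-pressure units
         (500, 30.0),   # coal condensing plant
         (300, 55.0)]   # gas-turbine peakers

# A toy chronological 4-hour load curve.
prices = [spot_price(d, units) for d in (350, 900, 1400, 1550)]
print(prices)   # [5.0, 18.0, 30.0, 55.0]
```

A CHP generator that can react to this hourly price pattern captures the peak-hour margins, which is exactly the revenue a fixed time-of-day tariff would forfeit.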

  1. Protein (multi-)location prediction: utilizing interdependencies via a generative model

    Science.gov (United States)

    Shatkay, Hagit

    2015-01-01

    Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins; however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier, MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505

  2. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to identify space debris, but present methods of photometric measurement place many constraints on the star image and need complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate measurement noise. First, the known stars on the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to calculate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitude.
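
    The train/test split described above can be sketched as follows. The linear zero-point model and the synthetic star data are assumptions for illustration, not the paper's actual model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    catalog_mag = rng.uniform(6.0, 12.0, 40)                  # known catalog magnitudes
    instr_mag = catalog_mag - 2.5 + rng.normal(0, 0.05, 40)   # instrumental mags + noise

    # Split the known stars into training and testing stars.
    train, test = np.arange(30), np.arange(30, 40)

    # Least-squares fit of catalog magnitude against instrumental magnitude
    # on the training stars (a, b are the fitted model parameters).
    a, b = np.polyfit(instr_mag[train], catalog_mag[train], 1)

    # Measurement accuracy evaluated on the held-out testing stars.
    pred = a * instr_mag[test] + b
    rms = np.sqrt(np.mean((pred - catalog_mag[test]) ** 2))
    print(round(rms, 3))   # RMS error on the order of the 0.05 mag noise level
    ```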

  3. Seismoelectric fluid/porous-medium interface response model and measurements

    NARCIS (Netherlands)

    Schakel, M.D.; Smeulders, D.M.J.; Slob, E.C.; Heller, H.K.J.

    2011-01-01

    Coupled seismic and electromagnetic (EM) wave effects in fluid-saturated porous media have been measured for decades. However, direct comparisons between theoretical seismoelectric wavefields and measurements are scarce. A seismoelectric full-waveform numerical model is developed, which predicts both

  4. Crew Autonomy Measures and Models (CAMM), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SA Technologies will employ a two-part solution including measures and models for evaluating crew autonomy in exploratory space missions. An integrated measurement...

  5. Measurements of the Backstreaming Proton IONS in the Self-Magnetic Pinch (SMP) Diode Utilizing Copper Activation Technique

    Science.gov (United States)

    Mazarakis, Michael; Cuneo, Michael; Fournier, Sean; Johnston, Mark; Kiefer, Mark; Leckbee, Joshua; Simpson, Sean; Renk, Timothy; Webb, Timothy; Bennett, Nichelle

    2016-10-01

    The results presented here were obtained with an SMP diode mounted at the front high voltage end of the 8-10-MV RITS Self-Magnetically Insulated Transmission Line (MITL) voltage adder. Our experiments had two objectives: first, to measure the contribution of the back-streaming proton currents emitted from the anode target, and second, to evaluate the energy of those ions and hence the actual Anode-Cathode (A-K) gap voltage. The accelerating voltage quoted in the literature is estimated utilizing para-potential flow theories. Thus, it is interesting to have another independent measurement of the A-K voltage. We have measured the back-streaming protons emitted from the anode and propagating through a hollow cathode tip for various diode configurations and different techniques of target cleaning treatment, namely, heating at very high temperatures with DC and pulsed current, with RF plasma cleaning, and with both plasma cleaning and heating. We have also evaluated the A-K gap voltage by energy filtering techniques. Sandia is operated by Sandia Corporation, a subsidiary of Lockheed Martin Company, for the US DOE NNSA under Contract No. DE-AC04-94AL85000.

  6. Effectiveness and Utility of a Case-Based Model for Delivering Engineering Ethics Professional Development Units

    Directory of Open Access Journals (Sweden)

    Heidi Ann Hahn

    2014-10-01

    Full Text Available This article describes an action research project conducted at Los Alamos National Laboratory (LANL) to resolve a problem with the ability of licensed and/or certified engineers to obtain the ethics-related professional development units or hours (PDUs or PDHs) needed to maintain their credentials. Because of the recurring requirement and the static nature of the information, an initial, in-depth training followed by annually updated refresher training was proposed. A case model approach, with online delivery, was selected as the optimal pedagogical model for the refresher training. In the first two years, the only data that was collected was throughput and information retention. Response rates indicated that the approach was effective in helping licensed professional engineers obtain the needed PDUs. The rates of correct responses suggested that knowledge transfer regarding ethical reasoning had occurred in the initial training and had been retained in the refresher. In FY13, after completing the refresher, learners received a survey asking their opinion of the effectiveness and utility of the course, as well as their impressions of the case study format vs. the typical presentation format. Results indicate that the courses have been favorably received and that the case study method supports most of the pedagogical needs of adult learners as well as, if not better than, presentation-based instruction. Future plans for improvement are focused on identifying and evaluating methods for enriching online delivery of the engineering ethics cases.

  8. Cost-Utility Analysis of Bariatric Surgery in Italy: Results of Decision-Analytic Modelling

    Directory of Open Access Journals (Sweden)

    Marcello Lucchese

    2017-06-01

    Full Text Available Objective: To evaluate the cost-effectiveness of bariatric surgery in Italy from a third-party payer perspective over a medium-term (10 years) and a long-term (lifetime) horizon. Methods: A state-transition Markov model was developed, in which patients may experience surgery, post-surgery complications, diabetes mellitus type 2, cardiovascular diseases or die. Transition probabilities, costs, and utilities were obtained from the Italian and international literature. Three types of surgeries were considered: gastric bypass, sleeve gastrectomy, and adjustable gastric banding. A base-case analysis was performed for the population, the characteristics of which were obtained from surgery candidates in Italy. Results: In the base-case analysis, over 10 years, bariatric surgery led to a cost increment of EUR 2,661 and generated an additional 1.1 quality-adjusted life years (QALYs). Over a lifetime, surgery led to savings of EUR 8,649, an additional 0.5 life years and 3.2 QALYs. Bariatric surgery was cost-effective at 10 years with an incremental cost-effectiveness ratio of EUR 2,412/QALY and dominant over conservative management over a lifetime. Conclusion: In a comprehensive decision analytic model, a current mix of surgical methods for bariatric surgery was cost-effective at 10 years and cost-saving over the lifetime of the Italian patient cohort considered in this analysis.

  9. Cost-Utility Analysis of Bariatric Surgery in Italy: Results of Decision-Analytic Modelling.

    Science.gov (United States)

    Lucchese, Marcello; Borisenko, Oleg; Mantovani, Lorenzo Giovanni; Cortesi, Paolo Angelo; Cesana, Giancarlo; Adam, Daniel; Burdukova, Elisabeth; Lukyanov, Vasily; Di Lorenzo, Nicola

    2017-01-01

    To evaluate the cost-effectiveness of bariatric surgery in Italy from a third-party payer perspective over a medium-term (10 years) and a long-term (lifetime) horizon. A state-transition Markov model was developed, in which patients may experience surgery, post-surgery complications, diabetes mellitus type 2, cardiovascular diseases or die. Transition probabilities, costs, and utilities were obtained from the Italian and international literature. Three types of surgeries were considered: gastric bypass, sleeve gastrectomy, and adjustable gastric banding. A base-case analysis was performed for the population, the characteristics of which were obtained from surgery candidates in Italy. In the base-case analysis, over 10 years, bariatric surgery led to a cost increment of EUR 2,661 and generated an additional 1.1 quality-adjusted life years (QALYs). Over a lifetime, surgery led to savings of EUR 8,649, an additional 0.5 life years and 3.2 QALYs. Bariatric surgery was cost-effective at 10 years with an incremental cost-effectiveness ratio of EUR 2,412/QALY and dominant over conservative management over a lifetime. In a comprehensive decision analytic model, a current mix of surgical methods for bariatric surgery was cost-effective at 10 years and cost-saving over the lifetime of the Italian patient cohort considered in this analysis. © 2017 The Author(s) Published by S. Karger GmbH, Freiburg.
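
    The cost-effectiveness arithmetic behind these figures can be checked directly. The small helper below is an illustrative sketch of the ICER and dominance definitions, not the authors' Markov model.

    ```python
    # ICER = incremental cost / incremental effectiveness (QALYs gained).
    # An intervention that saves money while adding QALYs is "dominant".
    def icer(delta_cost_eur, delta_qaly):
        if delta_cost_eur <= 0 and delta_qaly > 0:
            return "dominant"          # cheaper and more effective
        return delta_cost_eur / delta_qaly

    # 10-year horizon: +EUR 2,661 cost, +1.1 QALYs (figures from the abstract)
    print(round(icer(2661, 1.1)))      # → 2419 (the abstract reports EUR 2,412/QALY,
                                       #   presumably computed on unrounded inputs)

    # lifetime horizon: EUR 8,649 saved, +3.2 QALYs
    print(icer(-8649, 3.2))            # → dominant
    ```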

  10. Practical utilization of modeling and simulation in laboratory process waste assessments

    International Nuclear Information System (INIS)

    Lyttle, T.W.; Smith, D.M.; Weinrach, J.B.; Burns, M.L.

    1993-01-01

    At Los Alamos National Laboratory (LANL), facility waste streams tend to be small but highly diverse. Initial characterization of such waste streams is difficult in part due to a lack of tools to assist the waste generators in completing such assessments. A methodology has been developed at LANL to allow process-knowledgeable field personnel to develop baseline waste generation assessments and to evaluate potential waste minimization technology. This process waste assessment (PWA) system is an application constructed within the process modeling system. The Process Modeling System (PMS) is an object-oriented, mass balance-based, discrete-event simulation using the Common LISP Object System (CLOS). Analytical capabilities supported within the PWA system include: complete mass balance specifications, historical characterization of selected waste streams and generation of facility profiles for materials consumption, resource utilization and worker exposure. Anticipated development activities include provisions for a best available technologies (BAT) database and integration with the LANL facilities management Geographic Information System (GIS). The environments used to develop these assessment tools will be discussed in addition to a review of initial implementation results.

  11. Inferring the most probable maps of underground utilities using Bayesian mapping model

    Science.gov (United States)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model, integrating the knowledge extracted from the sensors' raw data with the available statutory records. The statutory records were combined with the hypotheses from the sensors to give an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
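
    The fusion of an unreliable statutory-record prior with sensor evidence can be sketched as a sequence of Bayes updates over the hypothesis "a pipe is present here". The detection probabilities below are illustrative assumptions, not MTU sensor characteristics.

    ```python
    def posterior(prior, p_detect_given_pipe, p_detect_given_none, detected):
        """One Bayes update of P(pipe) from a single sensor reading."""
        like_pipe = p_detect_given_pipe if detected else 1 - p_detect_given_pipe
        like_none = p_detect_given_none if detected else 1 - p_detect_given_none
        num = like_pipe * prior
        return num / (num + like_none * (1 - prior))

    p = 0.6                          # prior from (inaccurate) statutory records
    for hit in [True, True, False]:  # three independent sensor passes
        p = posterior(p, 0.9, 0.2, hit)
    print(round(p, 3))               # → 0.792: belief rises with hits, falls with a miss
    ```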

  12. Optimal energy-utilization ratio for long-distance cruising of a model fish

    Science.gov (United States)

    Liu, Geng; Yu, Yong-Liang; Tong, Bing-Gang

    2012-07-01

    The efficiency of total energy utilization and its optimization for the long-distance migration of fish have attracted much attention in the past. This paper presents theoretical and computational research clarifying these well-known classic questions. Here, we specify the energy-utilization ratio (fη) as a measure of cruising efficiency, defined as the swimming speed divided by the sum of the standard metabolic rate and the energy consumption rate of muscle activities per unit mass. A theoretical formulation of the function fη is given, and a basic dimensional analysis shows that the main dimensionless parameters for our simplified model are the Reynolds number (Re) and the dimensionless standard metabolic rate per unit mass (Rpm). The swimming speed and the hydrodynamic power output in various conditions can be computed by solving the coupled Navier-Stokes equations and the fish locomotion dynamic equations. The energy consumption rate of muscle activities can then be estimated by dividing the hydrodynamic power by the muscle efficiency studied by previous researchers. The present results show the following: (1) When the value of fη attains a maximum, the dimensionless parameter Rpm remains almost constant for the same fish species in different sizes. (2) In these cases, the tail beat period is an exponential function of the fish body length when cruising is optimal; e.g., the optimal tail beat period of Sockeye salmon is approximately proportional to the body length to the power of 0.78. Larger fish are also better suited to long-distance cruising than smaller fish. (3) The optimal swimming speed we obtained is consistent with previous researchers' estimations.
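
    A toy version of the energy-utilization ratio can be sketched as follows. The cubic drag law and all parameter values are assumptions for illustration, not the paper's Navier-Stokes computation.

    ```python
    import numpy as np

    # f_eta = speed / (standard metabolic rate + muscle power per unit mass),
    # with a simplistic hydrodynamic cost P ∝ U^3 (illustrative assumption).
    def f_eta(speed, metabolic_rate=1.0, drag_coeff=0.05, muscle_eff=0.25):
        hydro_power = drag_coeff * speed**3       # hydrodynamic power output
        muscle_power = hydro_power / muscle_eff   # metabolic cost of muscle activity
        return speed / (metabolic_rate + muscle_power)

    speeds = np.linspace(0.1, 5.0, 500)
    ratios = f_eta(speeds)
    best = speeds[np.argmax(ratios)]
    print(round(best, 2))   # optimal cruising speed: distance per unit energy is maximised
    ```

    For this toy model the optimum has a closed form, U* = (R/(2c))^(1/3) with c = drag_coeff/muscle_eff, so the grid search can be checked analytically.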

  13. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.

  14. Comparisons of Academic Researchers' and Physical Education Teachers' Perspectives on the Utilization of the Tactical Games Model

    Science.gov (United States)

    Harvey, Stephen; Pill, Shane

    2016-01-01

    Research commentary suggests the utilization of Tactical Games Models (TGMs) only exists in isolated instances, particularly where teachers demonstrate true fidelity to these models. In contrast, many academics have adopted TGMs into their courses. Consequently, the purpose of this study was to investigate reasons for this disparity. Participants…

  15. Transport services quality measurement using the SERVQUAL model

    Directory of Open Access Journals (Sweden)

    Maksimović Mlađan V.

    2017-01-01

    Full Text Available Quality is widely regarded as one of the most important phenomena of our age, with a permanent and growing emphasis placed upon it. Many companies have come to the conclusion that high-quality services can provide them with a potential competitive advantage, leading to superior sales results and profitability. The aim of this paper is to test the applicability of the SERVQUAL service-quality dimensions and to measure the quality of services in the public transport of passengers. Based on data obtained by surveying the views of public transport users in Kragujevac using the SERVQUAL methodology, and on statistical analysis of the defined service quality dimensions, this research shows the level of quality of urban transport services in Kragujevac and, on that basis, makes recommendations for improving service quality.
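
    SERVQUAL scoring reduces to perception-minus-expectation gaps per dimension. The sketch below uses illustrative scores on a 7-point scale, not the Kragujevac survey data.

    ```python
    # The five standard SERVQUAL dimensions; negative gaps mean service
    # quality falls short of expectations.
    dimensions = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]
    expectations = {"tangibles": 6.2, "reliability": 6.5, "responsiveness": 6.1,
                    "assurance": 6.3, "empathy": 5.9}
    perceptions  = {"tangibles": 5.1, "reliability": 4.8, "responsiveness": 5.0,
                    "assurance": 5.4, "empathy": 5.2}

    gaps = {d: round(perceptions[d] - expectations[d], 2) for d in dimensions}
    overall = round(sum(gaps.values()) / len(gaps), 2)
    print(gaps)
    print(overall)   # → -1.1 (unweighted overall SERVQUAL score)
    ```

    In practice each dimension aggregates several questionnaire items, and dimensions may be weighted by importance; this sketch keeps one score per dimension with equal weights.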

  16. Measurement Models for Reasoned Action Theory.

    Science.gov (United States)

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-03-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.

  17. Factor Structure, Reliability and Measurement Invariance of the Alberta Context Tool and the Conceptual Research Utilization Scale, for German Residential Long Term Care

    Science.gov (United States)

    Hoben, Matthias; Estabrooks, Carole A.; Squires, Janet E.; Behrens, Johann

    2016-01-01

    We translated the Canadian residential long term care versions of the Alberta Context Tool (ACT) and the Conceptual Research Utilization (CRU) Scale into German, to study the association between organizational context factors and research utilization in German nursing homes. The rigorous translation process was based on best practice guidelines for tool translation, and we previously published methods and results of this process in two papers. Both instruments are self-report questionnaires used with care providers working in nursing homes. The aim of this study was to assess the factor structure, reliability, and measurement invariance (MI) between care provider groups responding to these instruments. In a stratified random sample of 38 nursing homes in one German region (Metropolregion Rhein-Neckar), we collected questionnaires from 273 care aides, 196 regulated nurses, 152 allied health providers, 6 quality improvement specialists, 129 clinical leaders, and 65 nursing students. The factor structure was assessed using confirmatory factor models. The first model included all 10 ACT concepts. We also decided a priori to run two separate models for the scale-based and the count-based ACT concepts as suggested by the instrument developers. The fourth model included the five CRU Scale items. Reliability scores were calculated based on the parameters of the best-fitting factor models. Multiple-group confirmatory factor models were used to assess MI between provider groups. Rather than the hypothesized ten-factor structure of the ACT, confirmatory factor models suggested 13 factors. The one-factor solution of the CRU Scale was confirmed. The reliability was acceptable (>0.7 in the entire sample and in all provider groups) for 10 of 13 ACT concepts, and high (0.90–0.96) for the CRU Scale. We could demonstrate partial strong MI for both ACT models and partial strict MI for the CRU Scale. 
Our results suggest that the scores of the German ACT and the CRU Scale for nursing
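
    Alpha-type reliability coefficients like those reported above can be computed from item scores. The sketch below uses simulated data for a five-item scale (the CRU Scale has five items) and the standard Cronbach's alpha formula; the data and parameters are illustrative, not the study's.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) array of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    trait = rng.normal(0, 1, 300)                           # latent construct per respondent
    scores = trait[:, None] + rng.normal(0, 0.5, (300, 5))  # 5 correlated items
    print(round(cronbach_alpha(scores), 2))                 # high reliability, ≈0.95
    ```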

  19. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Abbreviations: C = Cost; G = Gamma; CV = Cross Validation; MCC = Matthews Correlation Coefficient. Test 1 reports C, G, CV, accuracy, TP, TN, FP and FN values. Conclusion: Without considering the MirTif negative dataset for training Model A and B classifiers, our Model A and B ...

  20. The utility of behavioral economics in expanding the free-feed model of obesity.

    Science.gov (United States)

    Rasmussen, Erin B; Robertson, Stephen H; Rodriguez, Luis R

    2016-06-01

    Animal models of obesity are numerous and diverse in terms of identifying specific neural and peripheral mechanisms related to obesity; however, they are limited when it comes to behavior. The standard behavioral measure of food intake in most animal models occurs in a free-feeding environment. While easy and cost-effective for the researcher, the free-feeding environment omits some of the most important features of obesity-related food consumption, namely properties of food availability such as the effort and delay involved in obtaining food. Behavioral economics expands the behavioral measures of obesity animal models by identifying such behavioral mechanisms. First, economic demand analysis allows researchers to understand the role of effort in food procurement, and how physiological and neural mechanisms are related. Second, studies on delay discounting contribute to a growing literature showing that sensitivity to delayed food and food-related outcomes is likely a fundamental process of obesity. Together, these data expand the animal model in a manner that better characterizes how environmental factors influence food consumption. Copyright © 2016 Elsevier B.V. All rights reserved.
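
    Delay discounting is commonly modeled with Mazur's hyperbolic equation V = A / (1 + kD). The sketch below, with illustrative parameters, shows how a larger k produces steeper devaluation of delayed food rewards.

    ```python
    # V = A / (1 + k*D): subjective value V of amount A delayed by D,
    # where k indexes discounting steepness (higher k = steeper discounting).
    def discounted_value(amount, delay, k):
        return amount / (1 + k * delay)

    delays = [0, 5, 20, 60]   # delay in seconds (illustrative)
    for k, label in [(0.05, "shallow discounter"), (0.5, "steep discounter")]:
        values = [round(discounted_value(10.0, d, k), 2) for d in delays]
        print(label, values)
    # → shallow discounter [10.0, 8.0, 5.0, 2.5]
    # → steep discounter [10.0, 2.86, 0.91, 0.32]
    ```

    Fitting k to choice data between immediate and delayed food rewards is what lets such studies compare discounting across diet conditions or phenotypes.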

  1. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.

  2. Laser alignment measurement model with double beam

    Science.gov (United States)

    Mo, Changtao; Zhang, Lili; Hou, Xianglin; Wang, Ming; Lv, Jia; Du, Xin; He, Ping

    2012-10-01

    The double LD-double PSD scheme employs a symmetric structure, with a laser and a PSD receiver on each axis. Using the double LD-double PSD arrangement, a rectangular coordinate system is established from the relationship between the coordinates of two arbitrary points, and the parameter formulae are then derived using solid geometry. With the data acquisition system and the data processing model of a laser alignment meter with two laser beams and two detectors, and with the installation parameters entered into the computer, the state parameters between the two shafts can be obtained through calculation and correction. The correction data for the four feet of the adjusted machine, moved in the horizontal and vertical planes, can then be computed, guiding movement of the machine to align the shafts.

  3. Measuring and Modeling Shared Visual Attention

    Science.gov (United States)

    Mulligan, Jeffrey B.; Gontar, Patrick

    2016-01-01

    Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
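
    The AOI-vector construction described above can be sketched as follows. The AOI sequences are illustrative, and only the correlation-based comparison is shown; the paper also uses a chi-square test and time-windowed averaging.

    ```python
    import numpy as np

    N = 4   # number of areas of interest (AOIs)

    def aoi_proportions(fixations, n_aoi=N):
        """Fixation sequence of AOI indices -> one-hot vectors -> time-averaged
        vector of the proportion of time spent on each AOI."""
        onehot = np.eye(n_aoi)[np.asarray(fixations)]
        return onehot.mean(axis=0)

    pilot   = [0, 0, 1, 2, 1, 0, 3, 3, 1, 0]   # quantized gaze samples
    copilot = [0, 1, 1, 2, 1, 0, 3, 2, 1, 0]

    p1, p2 = aoi_proportions(pilot), aoi_proportions(copilot)
    shared = np.corrcoef(p1, p2)[0, 1]   # correlation of the gaze-proportion vectors
    print(p1, p2, round(shared, 2))      # higher correlation = more shared attention
    ```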

  4. Measurement of Function Post Hip Fracture: Testing a Comprehensive Measurement Model of Physical Function.

    Science.gov (United States)

    Resnick, Barbara; Gruber-Baldini, Ann L; Hicks, Gregory; Ostir, Glen; Klinedinst, N Jennifer; Orwig, Denise; Magaziner, Jay

    2016-07-01

    Measurement of physical function post hip fracture has been conceptualized using multiple different measures. This study tested a comprehensive measurement model of physical function. This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Using structural equation modeling, a measurement model of physical function which included grip strength, activities of daily living, instrumental activities of daily living, and performance was tested for fit at 2 and 12 months post hip fracture, and among male and female participants. Validity of the measurement model of physical function was evaluated based on how well the model explained physical activity, exercise, and social activities post hip fracture. The measurement model of physical function fit the data. The amount of variance the model or individual factors of the model explained varied depending on the activity. Decisions about the ideal way in which to measure physical function should be based on outcomes considered and participants. The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. © 2015 Association of Rehabilitation Nurses.

  5. Evaluating measurement of dynamic constructs: defining a measurement model of derivatives.

    Science.gov (United States)

    Estabrook, Ryne

    2015-03-01

    While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This article defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications, and future directions are discussed. (c) 2015 APA, all rights reserved.

  6. Resource planning for gas utilities: Using a model to analyze pivotal issues

    Energy Technology Data Exchange (ETDEWEB)

    Busch, J.F.; Comnes, G.A.

    1995-11-01

    With the advent of wellhead price decontrols that began in the late 1970s and the development of open-access pipelines in the 1980s and 90s, gas local distribution companies (LDCs) now have increased responsibility for their gas supplies and face an increasingly complex array of supply and capacity choices. Heretofore this responsibility had been shared with the interstate pipelines that provide bundled firm gas supplies. Moreover, gas supply and deliverability (capacity) options have multiplied as the pipeline network becomes increasingly interconnected and as new storage projects are developed. There is now a fully functioning financial market for commodity price hedging instruments and, on interstate pipelines, a secondary market (called capacity release) now exists. As a result of these changes in the natural gas industry, interest in resource planning and computer modeling tools for LDCs is increasing. Although in some ways the planning time horizon has become shorter for the gas LDC, the responsibility conferred on the LDC and the complexity of the planning problem have increased. We examine current gas resource planning issues in the wake of the Federal Energy Regulatory Commission's (FERC) Order 636. Our goal is twofold: (1) to illustrate the types of resource planning methods and models used in the industry and (2) to illustrate some of the key tradeoffs among types of resources, reliability, and system costs. To assist us, we utilize a commercially available dispatch and resource planning model and examine four types of resource planning problems: the evaluation of new storage resources, the evaluation of buyback contracts, the computation of avoided costs, and the optimal tradeoff between reliability and system costs. To make the illustration of methods meaningful yet tractable, we developed a prototype LDC and used it for the majority of our analysis.
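
    The merit-order logic at the core of such dispatch models can be illustrated with a toy example (the resource names, capacities, and prices below are hypothetical, not values from the study):

```python
def dispatch(demand, resources):
    """Greedy merit-order dispatch: meet demand from the cheapest resources
    first. resources is a list of (name, capacity, unit_cost) tuples.
    Returns (plan, total_cost, unserved_demand)."""
    plan, cost, remaining = {}, 0.0, demand
    for name, capacity, unit_cost in sorted(resources, key=lambda r: r[2]):
        take = min(capacity, remaining)
        if take > 0:
            plan[name] = take
            cost += take * unit_cost
            remaining -= take
    return plan, cost, remaining

# Hypothetical LDC supply portfolio (MMcf/d capacity, $/MMcf cost)
portfolio = [
    ("firm_pipeline", 60.0, 3.0),
    ("storage_withdrawal", 30.0, 2.0),
    ("spot_purchase", 40.0, 4.5),
]
plan, cost, unserved = dispatch(100.0, portfolio)
```

    Any positive `unserved` quantity can then be priced at an outage cost, which is exactly the reliability-versus-system-cost tradeoff the paper examines.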

  7. A novel measurement method for the thermal properties of liquids by utilizing a bridge-based micromachined sensor

    International Nuclear Information System (INIS)

    Beigelbeck, Roman; Nachtnebel, Herbert; Kohl, Franz; Jakoby, Bernhard

    2011-01-01

    In recent decades, the demands for online monitoring of liquids in various applications have increased significantly. In this context, sensing the thermal transport parameters of liquids (i.e. thermal conductivity and diffusivity) may be an interesting alternative to well-established monitoring parameters like permittivity, mass density or shear viscosity. We developed a micromachined thermal property sensor, applicable to non-flowing liquids, featuring three parallel microbridges, each carrying either a heater or one of two thermistors. Its active sensing region was designed to achieve almost negligible spurious thermal shunts between heater and thermistors. This enables the adoption of a simple two-dimensional model to describe the heat transfer from the heater to the thermistors, which is mainly governed by the thermal properties of the sample liquid. Based on this theoretical model, a novel measurement method for the thermal parameters was devised that relies solely on the frequency response of the measured peak temperature and allows simultaneous extraction of the thermal conductivity and diffusivity of liquids. In this contribution, we describe the device prototype, the model, the deduced measurement method and the experimental verification by means of test measurements carried out on five sample liquids.
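
    As a simplified stand-in for extracting diffusivity from a frequency response, the sketch below uses the textbook 1-D thermal-wave phase relation φ = r·√(ω/2α); the paper's own model is two-dimensional and based on peak temperature, so the geometry and numbers here are illustrative assumptions only:

```python
import math

def phase_lag(r, omega, alpha):
    """Phase lag of a 1-D thermal wave at distance r from a harmonic heater."""
    return r * math.sqrt(omega / (2.0 * alpha))

def diffusivity_from_phase(r, omega, phi):
    """Invert the phase relation: alpha = omega * r^2 / (2 * phi^2)."""
    return omega * r * r / (2.0 * phi * phi)

# Hypothetical geometry and properties
r = 100e-6            # heater-thermistor gap, m
alpha_true = 1.43e-7  # thermal diffusivity of water near 25 C, m^2/s
omega = 2 * math.pi * 10.0  # 10 Hz excitation

phi = phase_lag(r, omega, alpha_true)       # "measured" phase lag
alpha_est = diffusivity_from_phase(r, omega, phi)
```

    Repeating the inversion at several excitation frequencies, as the frequency-response method does, over-determines α and exposes deviations from the assumed model.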

  8. Modelling, Measuring and Compensating Color Weak Vision.

    Science.gov (United States)

    Oshima, Satoshi; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui

    2016-03-08

    We use methods from Riemann geometry to investigate transformations between the color spaces of color-normal and color-weak observers. The two main applications are the simulation of the perception of a color-weak observer for a color-normal observer, and the compensation of color images such that a color-weak observer has approximately the same perception as a color-normal observer. The metrics in the color spaces of interest are characterized with the help of ellipsoids defined by the just-noticeable differences between colors, which are measured in color-matching experiments. The constructed mappings are isometries of Riemann spaces that preserve the perceived color differences for both observers. Among the two approaches to build such an isometry, we introduce normal coordinates in Riemann spaces as a tool to construct a global color-weak compensation map. Compared to previously used methods this method is free from approximation errors due to local linearizations, and it avoids the problem of shifting locations of the origin of the local coordinate system. We analyse the variations of the Riemann metrics for different observers obtained from new color-matching experiments and describe three variations of the basic method. The performance of the methods is evaluated with the help of semantic differential (SD) tests.

  9. Simultaneous, quantitative measurement of local blood flow and glucose utilization in tissue samples in normal and injured feline brain.

    Science.gov (United States)

    DeWitt, D S; Yuan, X Q; Becker, D P; Hayes, R L

    1988-01-01

    Cerebral blood flow (CBF) and local cerebral glucose utilization (LCGU) were measured using radioactive microspheres and [14C]2-deoxyglucose, respectively, in 26 brain regions in control animals (n = 8) and in animals (n = 4) sustaining low-level experimental brain injury. Examination of the initial (resting) CBF measurement in the uninjured cats revealed two subgroups with significantly (p less than 0.01) different CBF levels. In uninjured cats with normal CBF levels (33.4 +/- 1.8 ml/100 g/min) there was a close linear relationship between CBF and LCGU (r = 0.71, p less than 0.01). In contrast, the remainder of the uninjured cats exhibited abnormally high levels of CBF (72.6 +/- 9.9 ml/100 g/min) and the absence of a close relationship between CBF and LCGU (r = 0.27). One hour following low-level (2.0 atm) fluid percussion brain injury, CBF was increased and LCGU was decreased, though not significantly. The relationship between CBF and LCGU remained intact (r = 0.66, p less than 0.01) in most brain regions. However, the relationship between CBF and LCGU in the hippocampus differed significantly from the relationship between the two parameters in the rest of the brain. Thus, the use of the radioactive microsphere method for CBF measurements allows multiple measurements of CBF and permits the assessment of the status of the cerebral vasculature prior to experimental manipulations such as traumatic brain injury. In view of our current findings of an abnormal relationship between CBF and LCGU in cats with high resting CBF levels, this is an important advantage. In addition, the combination of the microsphere and 2-DG techniques within the same tissue samples allows for the investigation of the effects of traumatic injury on the important relationship between CBF and LCGU.

  10. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...
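
    The buffer-occupancy bookkeeping described above can be sketched with a minimal discrete-event loop (a single FIFO buffer drained by one processing unit, with hypothetical timing parameters; the actual ATLAS simulation tool is far more detailed):

```python
import heapq

def simulate(arrival_interval, service_time, n_events):
    """Minimal discrete-event simulation of one FIFO buffer drained by a
    single processing unit. Returns (peak buffer occupancy, makespan)."""
    events = []  # priority queue of (time, kind); "arrive" sorts before "depart"
    for i in range(n_events):
        heapq.heappush(events, (i * arrival_interval, "arrive"))
    occupancy, peak, busy_until = 0, 0, 0.0
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrive":
            occupancy += 1
            peak = max(peak, occupancy)
            start = max(t, busy_until)      # wait if the unit is busy
            busy_until = start + service_time
            heapq.heappush(events, (busy_until, "depart"))
        else:
            occupancy -= 1                  # element leaves the buffer
    return peak, busy_until

# Overloaded case: fragments arrive faster than they can be processed
peak, makespan = simulate(1.0, 2.0, 10)
```

    In the overloaded configuration the buffer grows steadily (peak occupancy 6 here), while halving the load keeps occupancy at 1; comparing such counters over many short windows against real counters is the validation strategy the abstract describes.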

  12. Case studies of community relations on DOE's Formerly Utilized Sites Remedial Action Program as models for Superfund sites

    International Nuclear Information System (INIS)

    Plant, S.W.; Adler, D.G.

    1995-01-01

    Ever since the US Department of Energy (DOE) created its Formerly Utilized Sites Remedial Action Program (FUSRAP) in 1974, there has been a community relations program. The community relations effort has grown as FUSRAP has grown. With 20 of 46 sites now cleaned up, considerable experience in working with FUSRAP stakeholders has been gained. Why not share that experience with others who labor on the Superfund sites? Many similarities exist between the Superfund sites and FUSRAP. FUSRAP is a large, multiple-site environmental restoration program. The challenges range from small sites requiring remedial actions measurable in weeks to major sites requiring the full remedial investigation/feasibility study process. The numerous Superfund sites throughout the United States offer the same diversity, both geographically and technically. But before DOE offers FUSRAP's community relations experience as a model, it needs to make clear that this will be a realistic model. As experiences are shared, DOE will certainly speak of the efforts that achieved its goals. But many of the problems that DOE encountered along the way will also be related. FUSRAP relies on a variety of one- and two-way communication techniques for involving stakeholders in the DOE decision-making process. Some of the techniques and experiences from the case studies are presented

  13. An econometric analysis of changes in arable land utilization using multinomial logit model in Pinggu district, Beijing, China.

    Science.gov (United States)

    Xu, Yueqing; McNamara, Paul; Wu, Yanfang; Dong, Yue

    2013-10-15

    Arable land in China has been decreasing as a result of rapid population growth and economic development as well as urban expansion, especially in developed regions around cities where quality farmland quickly disappears. This paper analyzed changes in arable land utilization during 1993-2008 in the Pinggu district, Beijing, China, developed a multinomial logit (MNL) model to determine spatial driving factors influencing arable land-use change, and simulated arable land transition probabilities. Land-use maps, as well as socioeconomic and geographical data, were used in the study. The results indicated that arable land decreased significantly between 1993 and 2008. Lost arable land shifted into orchard, forestland, settlement, and transportation land. Significant differences existed in arable land transitions among different landform areas. Slope, elevation, population density, urbanization rate, distance to settlements, and distance to roadways were strong drivers influencing arable land transition to other uses. The MNL model proved effective for predicting transition probabilities in land use from arable land to other land-use types, and thus can be used for scenario analysis to develop land-use policies and land-management measures in this metropolitan area. Copyright © 2013 Elsevier Ltd. All rights reserved.
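
    Transition probabilities in an MNL model follow the standard softmax form, sketched below with hypothetical coefficients and predictors (not the fitted values from this study):

```python
import math

def mnl_probabilities(x, coefs):
    """Multinomial logit: P(k | x) = exp(b_k . x) / sum_j exp(b_j . x).
    coefs maps each land-use class to a coefficient vector (incl. intercept)."""
    utilities = {k: sum(b * xi for b, xi in zip(beta, x))
                 for k, beta in coefs.items()}
    m = max(utilities.values())              # stabilize the softmax numerically
    exps = {k: math.exp(u - m) for k, u in utilities.items()}
    z = sum(exps.values())
    return {k: e / z for k, e in exps.items()}

# Hypothetical coefficients for illustration only; predictor vector is
# [1 (intercept), slope (deg), distance to road (km)]
coefs = {
    "arable": [0.0, 0.0, 0.0],               # reference class
    "orchard": [0.5, 0.10, -0.20],
    "settlement": [-1.0, -0.30, -0.80],
}
probs = mnl_probabilities([1.0, 5.0, 2.0], coefs)
```

    Evaluating these probabilities on a grid of parcels, with coefficients estimated from observed transitions, is what produces the transition-probability maps used for scenario analysis.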

  14. Biological darkening of ice: measurements and models

    Science.gov (United States)

    Cook, J.; Tedstone, A.; Hodson, A. J.; Williamson, C.; McCutcheon, J.; Tranter, M.

    2017-12-01

    Biological growth occurs in the ablation zones of glaciers and ice sheets, resulting in a reduction of the ice albedo. Given the critical role of albedo in determining the surface energy balance - and therefore melt rate - of a mass of ice, understanding and quantifying biological albedo reduction is fundamental to predicting future ice dynamics. This may be particularly important on ablating ice on the western Greenland Ice Sheet, where a 'dark ice zone' of varying spatial extent may be partly or mostly explained by biological growth. However, our ability to quantify and predict the contribution of biological impurities to the overall energy balance of glacial systems is currently limited by a lack of understanding of the mechanisms of biological darkening, difficulties in determining the spatial extent of biological impurities and uncertainty in isolating biological from non-biological albedo reduction. Here, new spectral measurements are presented for ice containing varying amounts of biological impurities which were obtained on the ground using a field spectrometer and from the air using a purpose built UAV on the Greenland Ice Sheet in summer 2016 and 2017. Distinctive spectral signatures are identified and used to map the spatial extent of algal blooms on the ice surface. A new radiative transfer scheme (BioSNICAR) for predicting the albedo of snow or ice discolored by microbial life is also described, offering insight into the mechanisms of biological darkening. Together, these demonstrate the critical role played by pigmented algae in darkening ice surfaces and provide a framework for predicting biological albedo reduction in future climate scenarios.

  15. Optical 3D Deformation Measurement Utilizing Non-planar Surface for the Development of an “Intelligent Tire”

    Science.gov (United States)

    Matsuzaki, Ryosuke; Hiraoka, Naoki; Todoroki, Akira; Mizutani, Yoshihiro

    Intelligent tires, also known as smart tires, are equipped with sensors to monitor the strain of the interior surface and the rolling radius of the tire, and are expected to improve the reliability of tires and tire control systems such as anti-lock braking systems (ABS). However, the high stiffness of an attached sensor such as a strain gauge causes the sensor to debond from the tire rubber. In the present study, a novel optical method is used for the concurrent monitoring of in-plane strain and out-of-plane displacement (rolling radius) utilizing the non-planar surface of the monitored object. The optical method enables noncontact measurement of strain distribution. The in-plane strain and out-of-plane displacement are calculated by image processing applied to an image of the interior surface of the tire taken with a single CCD camera fixed on the wheel rim. This new monitoring system is applied to an aluminum beam and a commercially available radial tire. As a result, the monitoring system provides concurrent measurement of in-plane strain, out-of-plane displacement and tire pressure, and is shown to be an effective monitoring system for intelligent tires.
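
    The two measurement principles, in-plane strain from the change in imaged marker spacing and out-of-plane displacement from apparent feature size under a pinhole-camera model, can be sketched numerically (all values hypothetical; the paper's method recovers full strain distributions via image processing, not single scalars):

```python
def in_plane_strain(l0_px, l_px):
    """Engineering strain from the change in imaged marker spacing (pixels)."""
    return (l_px - l0_px) / l0_px

def object_distance(focal_mm, feature_mm, feature_px, pixel_pitch_mm):
    """Pinhole-camera depth estimate: z = f * W / w, with w the feature's
    size on the sensor in mm."""
    w_mm = feature_px * pixel_pitch_mm
    return focal_mm * feature_mm / w_mm

# Hypothetical numbers for illustration
eps = in_plane_strain(200.0, 201.0)            # marker pair stretches by 1 px
z0 = object_distance(8.0, 10.0, 100.0, 0.005)  # 8 mm lens, 10 mm feature
z1 = object_distance(8.0, 10.0, 98.0, 0.005)   # feature appears smaller
radial_change = z1 - z0                        # surface moved away from camera
```

    A camera fixed on the wheel rim sees the tire's interior surface move radially as the rolling radius changes, so a depth estimate of this kind is one plausible route to the out-of-plane component.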

  16. Using global magnetospheric models for simulation and interpretation of Swarm external field measurements

    DEFF Research Database (Denmark)

    Moretto, T.; Vennerstrøm, Susanne; Olsen, Nils

    2006-01-01

    …simulated external contributions relevant for internal field modeling. These have proven very valuable for the design and planning of the upcoming multi-satellite Swarm mission. In addition, a real event simulation was carried out for a moderately active time interval when observations from the Orsted… it consistently underestimates the dayside region 2 currents and overestimates the horizontal ionospheric closure currents in the dayside polar cap. Furthermore, with this example we illustrate the great benefit of utilizing the global model for the interpretation of Swarm external field observations and…, likewise, the potential of using Swarm measurements to test and improve the global model…

  17. Information support model and its impact on utility, satisfaction and loyalty of users

    Directory of Open Access Journals (Sweden)

    Sead Šadić

    2016-11-01

    In today's world, information systems are of vital importance for the successful performance of any organization. The most important role of any information system is its information support. This paper develops an information support model and presents the results of a survey examining the effects of such a model. The survey was performed among the employees of the Brčko District Government and comprised three phases. The first phase assesses the influence of the quality of information support and information on information support in decision making. The second phase examines the impact of information support in decision making on the perceived availability of, and user satisfaction with, information support. The third phase examines the effects of perceived usefulness and information support satisfaction on user loyalty. The model is presented using six hypotheses, which were tested by means of a multivariate regression analysis. The demonstrated model shows that the quality of information support and information is of vital importance in the decision-making process. The perceived usefulness and customer satisfaction are of vital importance for continuous usage of information support. The model is universal and, if slightly modified, can be used in any sphere of life where satisfaction is measured for the clients and users of a service.

  18. CO{sub 2}-mitigation measures through reduction of fossil fuel burning in power utilities. Which road to go?

    Energy Technology Data Exchange (ETDEWEB)

    Kaupp, A. [Energetica International Inc., Suva (Fiji)]

    1996-12-31

    Five conditions, at minimum, should be examined in the comparative analysis of CO{sub 2}-mitigation options for the power sector. Under the continuing constraint of scarce financial resources for any private or public investment in the power sector, the following combination of requirements characterises a successful CO{sub 2}-mitigation project: (1) Financial attractiveness for private or public investors. (2) Low, or even negative, long-range marginal costs per ton of 'CO{sub 2} saved'. (3) High impact on CO{sub 2}-mitigation, which indicates a large market potential for the measure. (4) The number of individual investments required to achieve the impact is relatively small; in other words, logistical difficulties in project implementation are minimised. (5) The projects are 'socially fair' and have minimal negative impact on any segment of society. This paper deals with options to reduce carbonaceous fuel burning in the power sector. Part I explains how projects should be selected and classified. Part II describes the technical options. Since reduction of carbonaceous fuel burning may be achieved through Demand Side Management (DSM) and Supply Side Management (SSM), both are treated. Within the context of this paper, SSM does not mean expanding power supply as demand grows; it means generating and distributing power as economically and efficiently as possible. In too many instances DSM has degenerated into efficient lighting programs and utility-managed incentive and rebate programs. To what extent this is a desirable situation for utilities in developing countries, which face totally different problems from their counterparts in highly industrialised countries, remains to be seen. Which road to go is the topic of this paper.

  19. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…

  20. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, in contrast to other, simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
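
    The vectorial composition of length error by axis can be illustrated with a toy calculation, assuming a simple linear scale error on each axis (the paper's model is considerably more detailed, and these displacement and error values are hypothetical):

```python
import math

def compensated_length(deltas, scale_errors):
    """Vectorial composition of per-axis length errors: each measured
    displacement component is corrected by its axis scale error, then
    the components are recombined into a 3-D length."""
    return math.sqrt(sum(((1.0 + e) * d) ** 2
                         for d, e in zip(deltas, scale_errors)))

# Hypothetical probed displacement (mm) and per-axis scale errors
deltas = (100.0, 40.0, 10.0)
errors = (25e-6, -18e-6, 30e-6)   # e.g. +25 um/m on X, -18 um/m on Y

raw = math.sqrt(sum(d * d for d in deltas))
corrected = compensated_length(deltas, errors)
correction_um = (corrected - raw) * 1000.0
```

    Here the compensation shifts the measured length by roughly 2 µm; the spread of such residuals across the working volume is what would feed the uncertainty budget.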

  2. An evaluation of the utility of physiologically based models of pharmacokinetics in early drug discovery.

    Science.gov (United States)

    Parrott, Neil; Paquereau, Nicolas; Coassolo, Philippe; Lavé, Thierry

    2005-10-01

    Generic physiologically based models of pharmacokinetics were evaluated for early drug discovery. Plasma profiles after intravenous and oral dosing were simulated in rat for 68 compounds from six chemical classes. Input data consisted of structure-based predictions of lipophilicity, ionization, and protein binding, plus intrinsic clearance measured in rat hepatocytes, single measured values of aqueous solubility, and artificial membrane permeability. LogP of the compounds was high, with a mean of 3.9, while free fraction in plasma (mean 9%) and solubility (mean 37 microg/mL) were low. Predicted and observed clearance showed a mean fold-error of 1.8 and R2 of 0.56; for volume, the corresponding values were 1.9 and 0.25. Predicted bioavailability showed a strong bias toward underprediction, correlated with very low aqueous solubility, and a theoretical correction for bile salt solubilization in vivo brought some improvement in average prediction error (to 31%). Overall, this evaluation shows that generic simulation may be applicable for typical drug-like compounds to predict differences in pharmacokinetic parameters of more than twofold based upon minimal measured input data. However, verification of the simulations with in vivo data for a few compounds of each compound class is recommended, since recent discovery compounds may have properties beyond the scope of the current generic models. Copyright (c) 2005 Wiley-Liss, Inc. and the American Pharmacists Association
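
    The fold-error metric quoted above is conventionally computed as 10 raised to the mean absolute log10 ratio of predicted to observed values; a small sketch with hypothetical data:

```python
import math

def average_fold_error(predicted, observed):
    """Average fold error: 10 ** mean(|log10(pred/obs)|).
    1.0 means perfect prediction; 2.0 means twofold error on average."""
    logs = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10.0 ** (sum(logs) / len(logs))

# Hypothetical clearance predictions vs. in vivo observations (mL/min/kg)
pred = [10.0, 30.0, 5.0, 80.0]
obs = [12.0, 15.0, 5.0, 100.0]
afe = average_fold_error(pred, obs)
```

    Because both over- and under-prediction contribute positively, this metric complements R2, which measures rank/linear agreement rather than absolute accuracy.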

  3. Utilizing a rat delayed implantation model to teach integrative endocrinology and reproductive biology.

    Science.gov (United States)

    Geisert, Rodney D; Smith, Michael F; Schmelzle, Amanda L; Green, Jonathan A

    2018-03-01

    In this teaching laboratory, the students are directed in an exercise that involves designing and performing an experiment to determine estrogen's role in regulating delayed implantation (diapause) in female rats. To encourage active participation, a discussion question is provided before the laboratory exercise in which each student is asked to search the literature, provide written answers to questions, and formulate an experiment to test the role of ovarian estrogen in inducing implantation in female rats. One week before the laboratory exercise, students discuss their answers with the instructor to develop an experiment to test the hypothesis that estrogen is involved in inducing implantation in the rat. A rat delayed implantation model was established that utilizes an estrogen receptor antagonist (ICI 182,780), which inhibits the action of ovarian estrogens. Groups of mated females are treated with either carrier (control) or ICI 182,780 (ICI) every other day, starting on day 2 postcoitus (pc), until day 8 pc. One-half of the females receiving ICI are injected with estradiol-17β on day 8 pc to induce implantation 4 days after the controls. If the ICI-treated females are not administered estradiol, embryo implantation occurs spontaneously ~4 days after the last ICI injection on day 8. This simple protocol is highly effective and provides an excellent basis for student discussion of hormone action and the use of agonists and antagonists.

  4. Research utilization in the building industry: decision model and preliminary assessment

    Energy Technology Data Exchange (ETDEWEB)

    Watts, R.L.; Johnson, D.R.; Smith, S.A.; Westergard, E.J.

    1985-10-01

    The Research Utilization Program was conceived as a far-reaching means for managing the interactions of the private sector and the federal research sector as they deal with energy conservation in buildings. The program emphasizes a private-public partnership in planning a research agenda and in applying the results of ongoing and completed research. The results of this task support the hypothesis that the transfer of R and D results to the buildings industry can be accomplished more efficiently and quickly by a systematic approach to technology transfer. This systematic approach involves targeting decision makers, assessing research and information needs, properly formating information, and then transmitting the information through trusted channels. The purpose of this report is to introduce elements of a market-oriented knowledge base, which would be useful to the Building Systems Division, the Office of Buildings and Community Systems and their associated laboratories in managing a private-public research partnership on a rational systematic basis. This report presents conceptual models and data bases that can be used in formulating a technology transfer strategy and in planning technology transfer programs.

  5. Towards utilizing GPUs in information visualization: a model and implementation of image-space operations.

    Science.gov (United States)

    McDonnel, Bryan; Elmqvist, Niklas

    2009-01-01

    Modern programmable GPUs represent a vast potential in terms of performance and visual flexibility for information visualization research, but surprisingly few applications even begin to utilize this potential. In this paper, we conjecture that this may be due to the mismatch between the high-level abstract data types commonly visualized in our field, and the low-level floating-point model supported by current GPU shader languages. To help remedy this situation, we present a refinement of the traditional information visualization pipeline that is amenable to implementation using GPU shaders. The refinement consists of a final image-space step in the pipeline where the multivariate data of the visualization is sampled in the resolution of the current view. To concretize the theoretical aspects of this work, we also present a visual programming environment for constructing visualization shaders using a simple drag-and-drop interface. Finally, we give some examples of the use of shaders for well-known visualization techniques.

  6. Optimal urban water conservation strategies considering embedded energy: coupling end-use and utility water-energy models.

    Science.gov (United States)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Spang, E. S.; Loge, F. J.

    2014-12-01

    Although most freshwater resources are used in agriculture, a greater amount of energy is consumed per unit of water supplied to urban areas. Efforts to reduce the carbon footprint of water in cities, including the energy embedded within household uses, can therefore be an order of magnitude larger than for other water uses. This characteristic of urban water systems creates a promising opportunity to reduce global greenhouse gas emissions, particularly given rapidly growing urbanization worldwide. Building on a previous Water-Energy-CO2 emissions model for household water end uses, this research introduces a probabilistic two-stage optimization model with technical and behavioral decision variables to obtain the most economical strategies for minimizing household water and water-related energy bills under both water and energy price shocks. Results show that adoption rates of less energy-intensive appliances increase significantly, yielding an overall 20% growth in indoor water conservation, when household dwellers account for the energy cost of their water use. To analyze the consequences at utility scale, we developed an hourly water-energy model based on data from the East Bay Municipal Utility District (EBMUD) in California, including residential consumption, finding that water end uses account for roughly 90% of total water-related energy, while the 10% managed by the utility is worth over 12 million annually. Once the combined end-use and utility model was completed, several demand-side management conservation strategies were simulated for the city of San Ramon. In this smaller water district, which accounts for roughly 5% of total EBMUD water use, we found that the optimal household strategies can reduce total GHG emissions by 4% and cut the utility's energy costs by over 70,000/yr. Especially interesting from the utility perspective could be the "smoothing" of water use peaks by avoiding daytime irrigation, which among other benefits might reduce utility energy costs by 0.5% according to our
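
The two-stage idea in the abstract can be illustrated with a toy household model: choose appliance retrofits that maximize expected net savings over water/energy price scenarios. All appliance names, prices, and savings below are invented for illustration and are not from the EBMUD study:

```python
import itertools

# Each appliance: (retrofit cost, water saved m3/yr, energy saved kWh/yr).
appliances = {
    "low-flow shower": (50.0, 10.0, 500.0),
    "efficient washer": (400.0, 15.0, 300.0),
}
# Price scenarios: (probability, water price per m3, energy price per kWh).
scenarios = [
    (0.5, 2.0, 0.15),
    (0.5, 3.0, 0.30),  # price-shock scenario
]

def expected_savings(chosen):
    """Expected annual water + embedded-energy savings over all scenarios."""
    total = 0.0
    for prob, p_water, p_energy in scenarios:
        total += prob * sum(appliances[a][1] * p_water +
                            appliances[a][2] * p_energy for a in chosen)
    return total

# First stage: enumerate retrofit portfolios; pick the best expected net value.
best = max(
    (set(combo) for r in range(len(appliances) + 1)
     for combo in itertools.combinations(appliances, r)),
    key=lambda c: expected_savings(c) - sum(appliances[a][0] for a in c),
)
```

With these invented numbers, only the cheap, energy-heavy retrofit pays for itself once embedded energy is priced in, mirroring the abstract's finding that counting energy costs shifts adoption.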

  7. Parametric model measurement: reframing traditional measurement ideas in neuropsychological practice and research.

    Science.gov (United States)

    Brown, Gregory G; Thomas, Michael L; Patt, Virginie

    Neuropsychology is an applied measurement field whose psychometric work is built primarily upon classical test theory (CTT). We describe a series of psychometric models to supplement the use of CTT in neuropsychological research and test development. We introduce increasingly complex psychometric models as measurement algebras that include model parameters representing abilities and item properties. Within this framework of parametric model measurement (PMM), neuropsychological assessment involves the estimation of model parameters, with ability parameter values assuming the role of test 'scores'. Moreover, the traditional notion of measurement error is replaced by the notion of parameter estimation error, and the definition of reliability becomes linked to notions of item and test information. The more complex PMM approaches incorporate formal parametric models of behavior, validated in the experimental psychology literature, into the assessment of neuropsychological performance, along with item parameters. These PMM approaches endorse the use of experimental manipulations of model parameters to assess a test's construct representation. Strengths and weaknesses of these models are evaluated by their implications for measurement error conditional upon ability level, sensitivity to sample characteristics, computational challenges to parameter estimation, and construct validity. A family of parametric psychometric models can be used to assess latent processes of interest to neuropsychologists. By modeling latent abilities at the item level, psychometric studies in neuropsychology can investigate construct validity and measurement precision within a single framework and contribute to a unification of statistical methods within the framework of generalized latent variable modeling.
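
A minimal sketch of the PMM idea, assuming a normal-ogive IRT model with known item parameters: the examinee's ability is estimated by maximum likelihood, so the 'score' is a model parameter rather than a count of correct answers. The item values are illustrative:

```python
import math

def p_correct(theta, a, b):
    """Normal-ogive probability of a correct response.

    a = item discrimination, b = item difficulty, theta = latent ability.
    """
    return 0.5 * (1.0 + math.erf(a * (theta - b) / math.sqrt(2.0)))

def estimate_ability(responses, items):
    """Grid-search maximum-likelihood estimate of theta."""
    grid = [i / 100.0 for i in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for x, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p if x else 1.0 - p)
        return ll
    return max(grid, key=loglik)

# Three calibrated items (a, b) -- illustrative values, not from the paper.
items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]
theta_hat = estimate_ability([1, 1, 0], items)  # easy items right, hard one wrong
```

Note how the estimate sits between the difficulties of the hardest item passed and the item failed, and how precision would be quantified by the information function at `theta_hat` rather than by a single test-wide reliability coefficient.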

  8. Measurement system and model for simultaneously measuring 6DOF geometric errors.

    Science.gov (United States)

    Zhao, Yuqiong; Zhang, Bin; Feng, Qibo

    2017-09-04

    A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which improves the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.

  9. Finite element model updating of the UCF grid benchmark using measured frequency response functions

    Science.gov (United States)

    Sipple, Jesse D.; Sanayei, Masoud

    2014-05-01

    A frequency response function based finite element model updating method is presented and used to perform parameter estimation of the University of Central Florida Grid Benchmark Structure. The proposed method is used to calibrate the initial finite element model using measured frequency response functions from the undamaged, intact structure. Stiffness properties, mass properties, and boundary conditions of the initial model were estimated and updated. Model updating was then performed using measured frequency response functions from the damaged structure to detect physical structural change. Grouping and ungrouping were utilized to determine the exact location and magnitude of the damage. The fixity in rotation of two boundary condition nodes was accurately and successfully estimated. The usefulness of the proposed method for finite element model updating is shown by being able to detect, locate, and quantify change in structural properties.
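
The updating loop can be illustrated with a one-degree-of-freedom stand-in (an assumption, not the UCF grid model): adjust a stiffness parameter until the analytical frequency response function matches a "measured" one:

```python
import numpy as np

def frf(k, m, c, omegas):
    """Receptance magnitude |X/F| of a single-DOF oscillator."""
    return 1.0 / np.abs(k - m * omegas**2 + 1j * c * omegas)

m, c = 2.0, 0.5
omegas = np.linspace(0.1, 5.0, 200)

# Synthetic "measured" FRF from the true stiffness k = 10 (stands in for
# the intact-structure measurement used to calibrate the initial model).
measured = frf(10.0, m, c, omegas)

# Parameter estimation: pick the stiffness minimizing the FRF residual.
candidates = np.linspace(5.0, 15.0, 1001)
errors = [np.sum((frf(k, m, c, omegas) - measured) ** 2) for k in candidates]
k_updated = float(candidates[int(np.argmin(errors))])
```

Damage detection then repeats the same fit against FRFs from the damaged structure; a drop in the estimated stiffness localizes and quantifies the change, analogous to the paper's grouping/ungrouping of parameters.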

  10. Constraining new physics with collider measurements of Standard Model signatures

    Energy Technology Data Exchange (ETDEWEB)

    Butterworth, Jonathan M. [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom); Grellscheid, David [IPPP, Department of Physics, Durham University,Durham, DH1 3LE (United Kingdom); Krämer, Michael; Sarrazin, Björn [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, 52056 Aachen (Germany); Yallup, David [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom)

    2017-03-14

    A new method providing general consistency constraints for Beyond-the-Standard-Model (BSM) theories, using measurements at particle colliders, is presented. The method, ‘Constraints On New Theories Using Rivet’, CONTUR, exploits the fact that particle-level differential measurements made in fiducial regions of phase-space have a high degree of model-independence. These measurements can therefore be compared to BSM physics implemented in Monte Carlo generators in a very generic way, allowing a wider array of final states to be considered than is typically the case. The CONTUR approach should be seen as complementary to the discovery potential of direct searches, being designed to eliminate inconsistent BSM proposals in a context where many (but perhaps not all) measurements are consistent with the Standard Model. We demonstrate, using a competitive simplified dark matter model, the power of this approach. The CONTUR method is highly scalable to other models and future measurements.
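
The CONTUR logic can be sketched as a goodness-of-fit comparison: inject the BSM signal into the Standard Model prediction for a fiducial measurement and ask whether the agreement with data degrades beyond a threshold. The binned numbers and the simple Δχ² criterion below are illustrative assumptions, not CONTUR's actual statistical treatment:

```python
# Fiducial differential cross-section in three bins (all values invented).
measured = [100.0, 80.0, 60.0]
sm_prediction = [98.0, 82.0, 59.0]
bsm_signal = [30.0, 5.0, 1.0]   # extra events the BSM model would contribute
sigma = [10.0, 9.0, 8.0]        # measurement uncertainties per bin

def chi2(prediction):
    """Chi-square of data against a prediction, uncorrelated bins assumed."""
    return sum(((d - p) / s) ** 2
               for d, p, s in zip(measured, prediction, sigma))

# Adding the BSM signal on top of the SM should not spoil the agreement.
delta = chi2([p + b for p, b in zip(sm_prediction, bsm_signal)]) - chi2(sm_prediction)
excluded = delta > 3.84  # ~95% CL threshold for one degree of freedom
```

Because the measurements are particle-level and fiducial, the same data bins can be reused against any generator-implemented BSM model, which is what makes the approach generic.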

  11. Utilizing remote sensing data for modeling water and heat regimes of the Black Earth Region territory of the European Russia

    Science.gov (United States)

    Muzylev, Eugene; Startseva, Zoya; Uspensky, Alexander; Volkova, Elena; Uspensky, Sergey

    2014-05-01

    At present, physical-mathematical modeling of the processes of water and heat exchange between vegetation-covered land surfaces and the atmosphere is the most appropriate method to describe the peculiarities of water and heat regime formation over large territories. The developed model of such processes (Land Surface Model, LSM) is intended for calculating evaporation, transpiration by vegetation, soil water content, and other water and heat regime characteristics, as well as the depth distributions of soil temperature and moisture, utilizing satellite remote sensing data on the land surface and meteorological conditions. The model parameters are the soil and vegetation characteristics, and the input variables are the meteorological characteristics. Their values have been determined from ground-based observations or from satellite-based measurements by the radiometers AVHRR/NOAA, MODIS/EOS Terra and Aqua, and SEVIRI/Meteosat-9, -10. The case study has been carried out for the part of the agricultural Central Black Earth region with coordinates 49.5 deg. - 54 deg. N, 31 deg. - 43 deg. E and a total area of 227,300 km2, located in the steppe-forest zone of European Russia, for the vegetation seasons of 2009-2012. From AVHRR data, estimates have been derived of three types of land surface temperature (LST): land surface skin temperature Tsg, air-foliage temperature Ta, and efficient radiation temperature Ts.eff, as well as emissivity E, normalized difference vegetation index NDVI, vegetation cover fraction B, leaf area index LAI, cloudiness, and precipitation. From MODIS data, estimates of LST Tls, E, NDVI, and LAI have been obtained. The SEVIRI data have been used to build estimates of Tls, Ta, E, LAI, and precipitation. The previously developed method and technology for the above AVHRR-derived estimates have been improved and adapted to the study area. To check the reliability of the Ts.eff and Ta estimations for the named seasons, the error statistics of their determination have been analyzed through
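
Of the quantities retrieved above, NDVI has a simple closed form, NDVI = (NIR − red)/(NIR + red), computed per pixel from two reflectance bands; the reflectance values in this sketch are made up:

```python
import numpy as np

# Toy 2x2 scene: per-pixel reflectances in the red and near-infrared bands.
red = np.array([[0.10, 0.30],
                [0.05, 0.20]])
nir = np.array([[0.50, 0.35],
                [0.45, 0.20]])

# Normalized difference vegetation index: dense vegetation -> values near 1,
# bare soil or water -> values near or below 0.
ndvi = (nir - red) / (nir + red)
```

Vegetation cover fraction and LAI are then typically derived from NDVI via empirical relations calibrated for the sensor and region, which is where the adaptation to the study area mentioned above comes in.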

  12. Utilization of Integrated Assessment Modeling for determining geologic CO2 storage security

    Science.gov (United States)

    Pawar, R.

    2017-12-01

    Geologic storage of carbon dioxide (CO2) has been extensively studied as a potential technology to mitigate the atmospheric concentration of CO2. Multiple international research and development efforts, large-scale demonstrations, and commercial projects are helping advance the technology. One of the critical areas of active investigation is the prediction of long-term CO2 storage security and risks. A quantitative methodology for predicting a storage site's long-term performance is critical for making the key decisions necessary for successful deployment of commercial-scale projects, which will require quantitative assessments of potential long-term liabilities. These predictions are challenging, given that they require simulating CO2 and in-situ fluid movements and interactions through the primary storage reservoir, potential leakage pathways (such as wellbores and faults), and shallow resources such as groundwater aquifers. They must also take into account the inherent variability and uncertainties of geologic sites. This talk will provide an overview of an approach based on integrated assessment modeling (IAM) to predict the long-term performance of a geologic storage site, including the storage reservoir, potential leakage pathways, and shallow groundwater aquifers. The approach utilizes reduced order models (ROMs) that capture the complex physical and chemical interactions resulting from CO2 movement while remaining computationally very efficient. The applicability of the approach will be demonstrated through examples focused on key storage security questions, such as: What is the probability of CO2 leakage from a storage reservoir? How does storage security vary for different geologic environments and operational conditions? How do site parameter variability and uncertainties affect storage security?
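
The IAM workflow can be sketched as Monte Carlo sampling of uncertain site parameters through a cheap reduced-order model; the ROM formula, parameter distributions, and threshold below are illustrative assumptions, not any published site model:

```python
import random

def rom_leak_rate(perm, pressure):
    """Toy reduced-order model: leak rate through a wellbore pathway.

    Stands in for a ROM trained on full physics simulations; leakage only
    occurs once reservoir overpressure exceeds a unit threshold.
    """
    return perm * max(pressure - 1.0, 0.0)

random.seed(42)
n, leaks = 10000, 0
for _ in range(n):
    perm = random.lognormvariate(0.0, 0.5)   # uncertain pathway permeability
    pressure = random.uniform(0.5, 2.0)      # uncertain overpressure
    if rom_leak_rate(perm, pressure) > 0.8:  # illustrative regulatory limit
        leaks += 1
p_leak = leaks / n
```

Because each ROM evaluation is trivial, tens of thousands of samples are affordable, which is exactly why ROMs replace full reservoir simulators inside the IAM loop; the same loop answers the "how does security vary" questions by re-running with different parameter distributions.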

  13. Effective UV radiation from model calculations and measurements

    Science.gov (United States)

    Feister, Uwe; Grewe, Rolf

    1994-01-01

    Model calculations have been made to simulate the effects of atmospheric ozone and of geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values measured by Dobson spectrophotometer and Brewer spectrometer, as well as turbidity, were used as input to the model calculation. The performance of the model was tested against spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, the uncertainty of the input data to the model, and the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.
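
The "effective radiation" mentioned above is spectral irradiance weighted by a biological action spectrum. This sketch applies the widely used CIE erythema (McKinlay-Diffey) weighting to an invented spectrum; it is an illustration, not the authors' model:

```python
def erythema_weight(wl_nm):
    """CIE erythema action spectrum (McKinlay-Diffey piecewise form)."""
    if wl_nm <= 298.0:
        return 1.0
    if wl_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wl_nm))
    return 10.0 ** (0.015 * (139.0 - wl_nm))

# Toy ground-level spectrum: (wavelength nm, irradiance W m^-2 nm^-1)
# on a 5 nm grid; the numbers are made up for illustration.
spectrum = [(295, 0.001), (300, 0.01), (305, 0.05), (310, 0.12), (315, 0.2)]
step_nm = 5.0

# Effective (erythemally weighted) irradiance: sum of weight x irradiance x dλ.
effective_irradiance = sum(e * erythema_weight(wl) for wl, e in spectrum) * step_nm
```

Because the weighting falls by roughly an order of magnitude per 10 nm through the UV-B, small ozone-driven shifts in the short-wavelength tail dominate the effective dose, which is why comparability of action spectra matters for the comparisons discussed above.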

  14. Validation of theoretical models through measured pavement response

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1999-01-01

    mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value.On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were...... then used to derive the elastic parameters of the pavement layeres, that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the position of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile...
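
The backcalculation step described in the abstract, tuning layer parameters until predicted deflections match the measured bowl, can be sketched with a one-parameter toy deflection model (an assumption, not one of the analytical models compared in the paper):

```python
import numpy as np

def predicted_deflection(modulus, offsets_m, load=50e3):
    """Toy deflection bowl: deflection falls with modulus and sensor offset.

    A stand-in for an analytical pavement response model evaluated at the
    FWD sensor positions.
    """
    return load / (modulus * (1.0 + offsets_m))

offsets = np.array([0.0, 0.3, 0.6, 0.9])          # sensor offsets from load, m
measured = predicted_deflection(200e6, offsets)    # synthetic "FWD" deflections

# Backcalculation: pick the modulus whose predicted bowl best matches the
# measured one in a least-squares sense.
grid = np.linspace(50e6, 400e6, 701)
errors = [np.sum((predicted_deflection(E, offsets) - measured) ** 2) for E in grid]
E_backcalculated = float(grid[int(np.argmin(errors))])
```

With several layers and a mechanistic response model, the same matching loop yields per-layer moduli, after which stresses and strains at the gauge depths can be computed and compared with the instrumented measurements.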

  15. Modelling the effects of road traffic safety measures.

    Science.gov (United States)

    Lu, Meng

    2006-05-01

    A model is presented for assessing the effects of traffic safety measures, based on a breakdown of the process into underlying components of traffic safety (risk and consequence) and five (speed- and conflict-related) variables that influence these components and are influenced by traffic safety measures. The relationships between measures, variables, and components are modelled as coefficients. The focus is on probabilities rather than historical statistics, although in practice statistics may be needed to find values for the coefficients. The model may in general contribute to improving insight into the mechanisms linking traffic safety measures and their safety effects. More specifically, it allows comparative analysis of different types of measures by defining an effectiveness index based on the coefficients. This index can be used to estimate absolute effects of measures related to advanced driver assistance systems (ADAS) from the absolute effects of substitutional (in terms of safety effects) infrastructure measures.
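
Since Lu's actual coefficient structure is not reproduced here, this sketch only illustrates the general shape of such an effectiveness index: per-variable risk multipliers combined into one number so that an ADAS measure can be compared against a substitutional infrastructure measure. The coefficient values are invented:

```python
def effectiveness_index(coefficients):
    """Combine per-variable risk multipliers into one index (< 1 = safer).

    Each coefficient is the factor by which a measure scales the risk
    contribution of one speed- or conflict-related variable.
    """
    index = 1.0
    for c in coefficients.values():
        index *= c
    return index

# Illustrative multipliers for two measures acting on the same variables.
adas_measure = {"speed": 0.9, "conflict": 0.8}
infrastructure_measure = {"speed": 0.85, "conflict": 0.95}

more_effective = (effectiveness_index(adas_measure)
                  < effectiveness_index(infrastructure_measure))
```

Comparing indices in this way is what lets the absolute effect of an ADAS measure be estimated from the known absolute effect of an infrastructure measure with a comparable index.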

  16. Conditioning a segmented stem profile model for two diameter measurements

    Science.gov (United States)

    Raymond L. Czaplewski; Joe P. Mcclure

    1988-01-01

    The stem profile model of Max and Burkhart (1976) is conditioned for dbh and a second upper stem measurement. This model was applied to a loblolly pine data set using diameter outside bark at 5.3m (i.e., height of 17.3 foot Girard form class) as the second upper stem measurement, and then compared to the original, unconditioned model. Variance of residuals was reduced...

  17. Comparison of Echo 7 field line length measurements to magnetospheric model predictions

    International Nuclear Information System (INIS)

    Nemzek, R.J.; Winckler, J.R.; Malcolm, P.R.

    1992-01-01

    The Echo 7 sounding rocket experiment injected electron beams on central tail field lines near L = 6.5. Numerous injections returned to the payload as conjugate echoes after mirroring in the southern hemisphere. The authors compare field line lengths calculated from measured conjugate echo bounce times and energies to predictions made by integrating electron trajectories through various magnetospheric models: the Olson-Pfitzer Quiet and Dynamic models and the Tsyganenko-Usmanov model. Although Kp at launch was 3-, quiet-time magnetic models best fit the echo measurements. Geosynchronous satellite magnetometer measurements near the Echo 7 field lines during the flight were best modeled by the Olson-Pfitzer Dynamic model and the Tsyganenko-Usmanov model for Kp = 3. The discrepancy between the models that best fit the Echo 7 data and those that fit the satellite data was most likely due to uncertainties in the small-scale configuration of the magnetospheric models. The field line length measured by the conjugate echoes showed some temporal variation in the magnetic field, also indicated by the satellite magnetometers. This demonstrates the utility that an Echo-style experiment could have in substorm studies.
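
The kinematics behind the field-line-length measurement can be sketched: the beam energy fixes the relativistic electron speed, and half the conjugate-echo round-trip time gives the one-way path length along the field line (pitch-angle corrections are ignored here). The numbers are illustrative, not Echo 7 data:

```python
import math

C = 2.998e8       # speed of light, m/s
MC2_KEV = 511.0   # electron rest energy, keV

def electron_speed(ke_kev):
    """Relativistic speed of an electron with the given kinetic energy."""
    gamma = 1.0 + ke_kev / MC2_KEV
    return C * math.sqrt(1.0 - 1.0 / gamma**2)

def field_line_length(bounce_time_s, ke_kev):
    """One-way path length: half the mirror-and-return round trip."""
    return electron_speed(ke_kev) * bounce_time_s / 2.0

# Illustrative case: a 20 keV beam whose echo returns after 0.6 s.
length_km = field_line_length(0.6, 20.0) / 1e3
```

Comparing such echo-derived lengths against lengths obtained by tracing electron trajectories through a magnetospheric field model is what discriminates between the quiet-time and disturbed-time models above.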

  18. Measurement and Modeling of Particle Radiation in Coal Flames

    DEFF Research Database (Denmark)

    Bäckström, Daniel; Johansson, Robert; Andersson, Klas Jerker

    2014-01-01

    flame. Spectral radiation, total radiative intensity, gas temperature, and gas composition were measured, and the radiative intensity in the furnace was modeled with an axisymmetric cylindrical radiation model using Mie theory for the particle properties and a statistical narrow-band model for the gas...

  19. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  20. Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad

    2013-01-01

    The problem of Model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined...