WorldWideScience

Sample records for values calculated based

  1. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Calculation of normal value based on constructed value. 351.405 Section 351.405 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and...

  2. [Calculation on ecological security baseline based on the ecosystem services value and the food security].

    Science.gov (United States)

    He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao

    2016-01-01

    The rapid development of the coastal economy in Hebei Province has caused a rapid transition in coastal land use structure, which threatens land ecological security. Calculating the ecosystem service value of land use and exploring the ecological security baseline can therefore provide a basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that the ecosystem service values per unit area, from maximum to minimum, were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, construction land. The contribution rates of the ecological function values, from high to low, were nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. By 2081, ecological security will reach the bottom line and the human-dominated ecological system will be on the verge of collapse. According to its ecological security status, Huanghua can be divided into four zones: an ecological core protection zone, an ecological buffer zone, an ecological restoration zone and a human activity core zone.

  3. Using 3d Bim Model for the Value-Based Land Share Calculations

    Science.gov (United States)

    Çelik Şimşek, N.; Uzun, B.

    2017-11-01

    According to the Turkish condominium ownership system, 3D physical buildings and their condominium units are registered in the condominium ownership books via 2D survey plans. Currently, this 2D representation of 3D physical objects causes inaccurate and deficient implementations in the determination of land shares. Condominium ownership and easement rights are established with a clear indication of land shares (condominium ownership law, article no. 3), so the land share of each condominium unit has to be determined in a way that reflects the value differences among the units. The main problem, however, is that land shares have often been determined on an area basis from the project documents before construction of the building. The objective of this study is to propose a new approach to value-based land share calculation for condominium units subject to condominium ownership. The current approaches to determining land shares, and their shortcomings, are examined, and the factors that affect the values of condominium units are identified according to legal decisions. This study shows that 3D BIM models can provide important approaches to the valuation problems involved in the determination of land shares.

  4. Shapley Value-Based Payment Calculation for Energy Exchange between Micro- and Utility Grids

    Directory of Open Access Journals (Sweden)

    Robin Pilling

    2017-10-01

    In recent years, microgrids have developed as important parts of power systems and have provided affordable, reliable, and sustainable supplies of electricity. Each microgrid is managed as a single controllable entity with respect to the existing power system, but there is demand for joint operation and benefit sharing between a microgrid and its hosting utility. This paper focuses on the joint operation of a microgrid and its hosting utility, which cooperatively minimize daily generation costs through energy exchange, and presents a payment calculation scheme for power transactions based on a fair allocation of the reduced generation costs. To fairly compensate for energy exchange between the micro- and utility grids, we adopt the cooperative game theoretic solution concept of the Shapley value. We design a case study for a fictitious interconnection model between the Mueller microgrid in Austin, Texas and the utility grid in Taiwan. Our case study shows that, compared to standalone generation, both the micro- and utility grids are better off when they collaborate in power exchange, regardless of their individual contributions to the power exchange coalition.
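
    A toy illustration of the allocation concept adopted above: the sketch computes the Shapley value for the two-player coalition of a microgrid and its hosting utility. The characteristic function (daily generation-cost savings from energy exchange) and every figure are invented for the example, not taken from the paper.

    ```python
    from itertools import permutations
    from math import factorial

    # Coalition savings (in $/day); standalone operation saves nothing.
    savings = {
        frozenset(): 0.0,
        frozenset({"micro"}): 0.0,
        frozenset({"utility"}): 0.0,
        frozenset({"micro", "utility"}): 1200.0,  # hypothetical joint saving
    }
    players = ["micro", "utility"]

    def shapley(players, v):
        """Average each player's marginal contribution over all join orders."""
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            s = frozenset()
            for p in order:
                phi[p] += v[s | {p}] - v[s]
                s = s | {p}
        return {p: phi[p] / factorial(len(players)) for p in players}

    print(shapley(players, savings))  # {'micro': 600.0, 'utility': 600.0}
    ```

    With symmetric standalone values the split is even; an asymmetric characteristic function would shift the payments, which is how a Shapley-based scheme can reward individual contributions to the coalition.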

  5. The calculation of the chemical exergies of coal-based fuels by using the higher heating values

    International Nuclear Information System (INIS)

    Bilgen, Selcuk; Kaygusuz, Kamil

    2008-01-01

    This paper demonstrates the application of exergy to gain a better understanding of coal properties, especially chemical exergy and specific chemical exergy. In this study, a BASIC computer program was used to calculate the chemical exergies of coal-based fuels. The calculations showed that the chemical composition of a coal strongly influences its chemical exergy. The exergy value of a coal is closely related to its H:C and O:C ratios: high proportions of hydrogen and/or oxygen relative to carbon generally reduce the exergy value, and high moisture and/or ash contents lead to low chemical exergy values. The aim of this paper is to calculate the chemical exergy of coals by using equations given in the literature and to detect and quantitatively evaluate the effect of the irreversible phenomena that increase the thermodynamic imperfection of the processes. The calculated exergy values of the fuels should be useful for energy experts working in coal mining and in coal-fired power plants
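
    The abstract does not reproduce its equations, but correlations of this family are well known. The sketch below uses a Szargut-Styrylska-type ratio between chemical exergy and lower heating value for dry solid fuels; the coefficients, the validity bound, and the sample ultimate analysis are assumptions to be checked against the paper's own formulas.

    ```python
    # Hedged sketch: specific chemical exergy of a dry solid fuel from its LHV
    # and ultimate analysis (mass fractions), via a Szargut-Styrylska-type
    # correlation commonly quoted for O/C mass ratios up to about 0.667.
    def chemical_exergy(lhv_mj_per_kg, c, h, o, n):
        if o / c > 0.667:
            raise ValueError("correlation assumed invalid for O/C > 0.667")
        phi = 1.0437 + 0.1882 * (h / c) + 0.0610 * (o / c) + 0.0404 * (n / c)
        return phi * lhv_mj_per_kg  # MJ/kg, dry basis

    # Invented bituminous-coal analysis: C 72%, H 5%, O 10%, N 1.5%, LHV 27 MJ/kg
    print(f"{chemical_exergy(27.0, c=0.72, h=0.05, o=0.10, n=0.015):.1f} MJ/kg")
    ```

    The H:C and O:C ratios enter the correlation directly, matching the abstract's observation that the elemental composition drives the exergy value.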

  6. A study to determine the differences between the displayed dose values for two full-field digital mammography units and values calculated using a range of Monte-Carlo-based techniques: A phantom study

    International Nuclear Information System (INIS)

    Borg, M.; Badr, I.; Royle, G. J.

    2013-01-01

    Modern full-field digital mammography (FFDM) units display the mean glandular dose (MGD) and the entrance or incident air kerma (K) to the breast following each exposure. Information on how these values are calculated is limited, and knowing how displayed MGD values compare and correlate with values calculated by conventional Monte-Carlo-based methods is useful. Measurements on polymethyl methacrylate (PMMA) phantoms showed that displayed and calculated MGD values are similar for thin to medium thicknesses and appear to differ for larger PMMA thicknesses. As a result, a multiple linear regression analysis of the data was performed to generate models by which displayed MGD values on the two FFDM units included in the study may be converted to the Monte Carlo values calculated by conventional methods. These models should be a useful tool for medical physicists requiring MGD data from the FFDM units included in this paper and should reduce the survey time spent on dose calculations. (authors)

  7. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Science.gov (United States)

    2010-07-01

    ...-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later...-based and HFET-based fuel economy and carbon-related exhaust emission values for a model type. (a) Fuel...

  8. Theoretical calculation of G-value

    International Nuclear Information System (INIS)

    Sato, Shin

    1979-01-01

    The slowing-down spectra of secondary electrons seem to be the most important concept when considering the initial processes of radiation chemistry. This paper describes considerations on this topic and the approximation method used. The G-value can be determined by integrating, over electron energy, the product of the whole slowing-down spectrum and the total production cross section of the product to be determined. After the relation of G-values to electron beam irradiation and γ-ray decomposition is described, calculated and experimental values are compared, and unexpectedly good agreement is obtained. Why such plausible G-values result from this rough calculation is not known. From these G-values, the production of O₃ from O₂, the radiolysis of NO, the chemical ionization of excited acetylene and other processes were estimated. The most interesting object in radiation chemistry is the condensed phase. A simple but important problem in radiation chemistry is the definition of ionization in the condensed phase, that is, how far an electron must move from its original molecule for the event to be regarded as ionization. Considerations on the size of the spur produced in water by γ-irradiation, the distribution of ion pairs in a spur, and the Jesse effect are also presented. (Wakatsuki, Y.)

  9. The pKa Cooperative: a collaborative effort to advance structure-based calculations of pKa values and electrostatic effects in proteins.

    Science.gov (United States)

    Nielsen, Jens E; Gunner, M R; García-Moreno, Bertrand E

    2011-12-01

    The pKa Cooperative (http://www.pkacoop.org) was organized to advance development of accurate and useful computational methods for structure-based calculation of pKa values and electrostatic energies in proteins. The Cooperative brings together laboratories with expertise and interest in theoretical, computational, and experimental studies of protein electrostatics. To improve structure-based energy calculations, it is necessary to better understand the physical character and molecular determinants of electrostatic effects. Thus, the Cooperative intends to foment experimental research into fundamental aspects of proteins that depend on electrostatic interactions. It will maintain a depository for experimental data useful for critical assessment of methods for structure-based electrostatics calculations. To help guide the development of computational methods, the Cooperative will organize blind prediction exercises. As a first step, computational laboratories were invited to reproduce an unpublished set of experimental pKa values of acidic and basic residues introduced in the interior of staphylococcal nuclease by site-directed mutagenesis. The pKa values of these groups are unique and challenging to simulate owing to the large magnitude of their shifts relative to normal pKa values in water. Many computational methods were tested in this first Blind Prediction Challenge and critical assessment exercise. A workshop was organized in the Telluride Science Research Center to objectively assess the performance of many computational methods tested on this one extensive data set. This volume of Proteins: Structure, Function, and Bioinformatics introduces the pKa Cooperative, presents reports submitted by participants in the Blind Prediction Challenge, and highlights some of the problems in structure-based calculations identified during this exercise. Copyright © 2011 Wiley-Liss, Inc.

  10. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Science.gov (United States)

    2010-07-01

    ... EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures... economy value exists for an electric vehicle configuration, all values for that vehicle configuration are... HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of...

  11. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ... the original base level fuel economy values); and (iii) All subconfigurations within the new base... separating subconfigurations from an existing base level and placing them into a new base level. The new base... this paragraph, as containing a new basic engine. The manufacturer will be permitted to designate such...

  12. Energy spectrum based calculation of the half and the tenth value layers for brachytherapy sources using a semiempirical parametrized mass attenuation coefficient formulism

    International Nuclear Information System (INIS)

    Yue, Ning J.

    2008-01-01

    As different types of radionuclides (e.g., the ¹³¹Cs source) are introduced for clinical use in brachytherapy, the question is raised whether a relatively simple method exists for the derivation of values of the half value layer (HVL) or the tenth value layer (TVL). For radionuclides that have been clinically used for years, such as ¹²⁵I and ¹⁰³Pd, the sources have been manufactured and marketed by several vendors with different designs and structures. Because these radionuclides emit low energy photons, the energy spectra of the sources depend strongly on their individual designs. Though values of the HVL or the TVL in certain commonly used shielding materials are relatively small for these low energy photon emitting sources, the question remains how the variations in energy spectra affect the HVL (or TVL) values and whether these values can be calculated with a relatively simple method. A more fundamental question is whether a method can be established to derive the HVL (TVL) values for any brachytherapy source and for different materials in a relatively straightforward fashion. This study was undertaken to answer these questions. Based on energy spectra, a well established semiempirical mass attenuation coefficient computing scheme was utilized to derive the HVL (TVL) values of different materials for different types of brachytherapy sources. The method presented in this study may be useful to estimate HVL (TVL) values of different materials for brachytherapy sources of different designs and containing different radionuclides.
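
    A minimal sketch of the spectrum-based idea: given line energies, relative intensities, and linear attenuation coefficients of a shield material, find the thickness at which the transmitted intensity falls to one half (HVL) or one tenth (TVL) by bisection. The three-line spectrum and the coefficients below are invented placeholders, not the paper's parametrized mass attenuation coefficients, and broad-beam effects are ignored.

    ```python
    import math

    weights = [0.45, 0.35, 0.20]     # hypothetical relative line intensities
    mu_per_cm = [80.0, 55.0, 40.0]   # hypothetical linear attenuation in lead, 1/cm

    def transmission(t_cm):
        """Spectrum-weighted narrow-beam transmission through thickness t."""
        return sum(w * math.exp(-mu * t_cm) for w, mu in zip(weights, mu_per_cm))

    def value_layer(fraction, lo=0.0, hi=5.0, tol=1e-9):
        """Bisection for the thickness where transmission equals `fraction`."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if transmission(mid) > fraction else (lo, mid)
        return 0.5 * (lo + hi)

    print(f"HVL ~ {value_layer(0.5):.4f} cm, TVL ~ {value_layer(0.1):.4f} cm")
    ```

    Because the transmitted spectrum hardens with depth, the TVL found this way exceeds log₂10 ≈ 3.32 times the HVL, which is precisely why a spectrum-aware calculation matters for these sources.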

  13. Value-based pricing

    OpenAIRE

    Netseva-Porcheva Tatyana

    2010-01-01

    The main aim of the paper is to present value-based pricing. A comparison between two approaches to pricing is therefore made: cost-based pricing and value-based pricing. The 'price sensitivity meter' is presented. The other topic of the paper is perceived value: the meaning of perceived value, its components, its determination, and ways of increasing it. In addition, the best company strategies in the 'value-cost' matrix are outlined.

  14. Stochastic LMP (Locational marginal price) calculation method in distribution systems to minimize loss and emission based on Shapley value and two-point estimate method

    International Nuclear Information System (INIS)

    Azad-Farsani, Ehsan; Agah, S.M.M.; Askarian-Abyaneh, Hossein; Abedi, Mehrdad; Hosseinian, S.H.

    2016-01-01

    LMP (locational marginal price) calculation is a serious impediment in distribution operation when private DG (distributed generation) units are connected to the network. A novel policy is developed in this study to guide the distribution company (DISCO) in exerting control over private units so that power loss and greenhouse gas emissions are minimized. The LMP at each DG bus is calculated according to the contribution of the DG to the reduction of loss and emission. An iterative algorithm based on the Shapley value method is proposed to allocate the loss and emission reduction. The proposed algorithm provides a robust state estimation tool for DISCOs in the next step of operation, giving the decision maker the ability to control private DG units while loss and emission are minimized. Also, a stochastic approach based on the PEM (point estimate method) is employed to capture uncertainty in the market price and load demand. The proposed methodology is applied to a realistic distribution network, and the efficiency and accuracy of the method are verified. - Highlights: • Reduction of loss and emission at the same time. • Fair allocation of loss and emission reduction. • Estimation of the system state using an iterative algorithm. • Ability of DISCOs to control DG units via the proposed policy. • Modeling the uncertainties to calculate the stochastic LMP.

  15. Value-based pricing

    Directory of Open Access Journals (Sweden)

    Netseva-Porcheva Tatyana

    2010-01-01

    The main aim of the paper is to present value-based pricing. A comparison between two approaches to pricing is therefore made: cost-based pricing and value-based pricing. The 'price sensitivity meter' is presented. The other topic of the paper is perceived value: the meaning of perceived value, its components, its determination, and ways of increasing it. In addition, the best company strategies in the 'value-cost' matrix are outlined.

  16. The principal axis approach to value-added calculation.

    OpenAIRE

    He, Q.; Tymms, P.

    2014-01-01

    The assessment of the achievement of students and the quality of schools has drawn increasing attention from educational researchers, policy makers, and practitioners. Various test-based accountability and feedback systems involving the use of value-added techniques have been developed for evaluating the effectiveness of individual teaching professionals and schools. A variety of models have been employed for calculating value-added measures, including the use of linear regression models whic...

  17. Calculation of Biological Assets Fair Value and Their Transformations Results

    OpenAIRE

    Ludmyla Khoruzhiy

    2013-01-01

    In the article, the terminology of IAS 41 'Agriculture' (fair value of biological assets and agricultural products) is considered within the Russian theory and practice of accounting. A multifactor model for calculating the fair value of assets and liabilities is proposed. It was found that applying fair value to measure biological assets and agricultural produce may be burdensome due to the requirement to calculate fair value at each balance sheet date. In addition, some r...

  18. Value-based genomics.

    Science.gov (United States)

    Gong, Jun; Pan, Kathy; Fakih, Marwan; Pal, Sumanta; Salgia, Ravi

    2018-03-20

    Advancements in next-generation sequencing have greatly enhanced the development of biomarker-driven cancer therapies. The affordability and availability of next-generation sequencers have allowed for the commercialization of next-generation sequencing platforms that have found widespread use for clinical-decision making and research purposes. Despite the greater availability of tumor molecular profiling by next-generation sequencing at our doorsteps, the achievement of value-based care, or improving patient outcomes while reducing overall costs or risks, in the era of precision oncology remains a looming challenge. In this review, we highlight available data through a pre-established and conceptualized framework for evaluating value-based medicine to assess the cost (efficiency), clinical benefit (effectiveness), and toxicity (safety) of genomic profiling in cancer care. We also provide perspectives on future directions of next-generation sequencing from targeted panels to whole-exome or whole-genome sequencing and describe potential strategies needed to attain value-based genomics.

  19. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    ...a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady state) should be estimated from a set of previous samples but, in practice, decisions based on the reference change value are often based on only two consecutive results. The original reference change value ... false-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of the reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of an estimated set point) performed worst both on normally...
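
    The truncation above hides the formulas, but the classic reference change value for two consecutive results is usually quoted as RCV = √2 · z · √(CV_A² + CV_I²), with CV_A the analytical and CV_I the within-subject biological coefficient of variation. The sketch applies it with invented CVs; treating this as the "original" method the abstract mentions is an assumption.

    ```python
    import math

    def rcv(cv_a, cv_i, z=1.96):
        """Two-sided 95% reference change value for two consecutive results."""
        return math.sqrt(2.0) * z * math.hypot(cv_a, cv_i)

    cv_a, cv_i = 0.03, 0.07   # hypothetical: 3% analytical, 7% within-subject
    print(f"significant change if |relative delta| > {rcv(cv_a, cv_i):.1%}")
    ```

    The study's comparison concerns how formulas of this kind behave, in terms of false positives, when the set point is or is not estimated from several prior samples.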

  20. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Science.gov (United States)

    2010-07-01

    ...-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year... for each vehicle under § 600.114-08 and as approved in § 600.008-08 (c), are used to determine vehicle... fuel economy value exists for an electric vehicle configuration, all values for that vehicle...

  1. Correct fair market value calculation needed to avoid regulatory challenges.

    Science.gov (United States)

    Dietrich, M O

    1997-09-01

    In valuing a physician practice for acquisition, it is important for buyers and sellers to distinguish between fair market value and strategic value. Although many buyers would willingly pay for the strategic value of a practice, tax-exempt buyers are required by law to consider only the fair market value in setting a bid price. Valuators must adjust group earnings to exclude items that do not apply to any willing seller and include items that do apply to any willing seller to arrive at the fair market value of the practice. In addition, the weighted average cost of capital (WACC), which becomes the discount rate in the valuation model, is critical to the measure of value of the practice. Small medical practices are assumed to have few hard assets and little long-term debt, and the WACC is calculated on the basis of those assumptions. When a small practice has considerable debt, however, this calculated WACC may be inappropriate for valuing the practice. In every case, evidence that shows that a transaction has been negotiated "at arm's length" should stave off any regulatory challenge.
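
    A minimal sketch of the WACC computation referenced above, WACC = w_d·k_d·(1 − T) + w_e·k_e, applied to a hypothetical small practice with few hard assets and little long-term debt; every figure is invented.

    ```python
    def wacc(debt, equity, cost_debt, cost_equity, tax_rate):
        """Weighted average cost of capital from market-value weights."""
        total = debt + equity
        return (debt / total) * cost_debt * (1 - tax_rate) \
             + (equity / total) * cost_equity

    # Hypothetical practice: $50k debt against $950k equity value
    rate = wacc(50_000, 950_000, cost_debt=0.08, cost_equity=0.18, tax_rate=0.35)
    print(f"discount rate for the valuation model: {rate:.2%}")  # ~17.36%
    ```

    As the abstract warns, assuming negligible debt bakes a high equity weight into the rate; a practice carrying considerable debt needs the weights, and thus the WACC, recomputed.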

  2. Programmable calculator programs to solve softwood volume and value equations.

    Science.gov (United States)

    Janet K. Ayer Sachet

    1982-01-01

    This paper presents product value and product volume equations as programs for handheld calculators. These tree equations are for inland Douglas-fir, young-growth Douglas-fir, western white pine, ponderosa pine, and western larch. Operating instructions and an example are included.

  3. Comparison of calculated integral values using measured and calculated neutron spectra for fusion neutronics analyses

    International Nuclear Information System (INIS)

    Sekimoto, H.

    1987-01-01

    The kerma heat production density, tritium production density, and dose in a lithium-fluoride pile with a deuterium-tritium neutron source were calculated with a data processing code, UFO, from the pulse height distribution of a miniature NE213 neutron spectrometer, and compared with the values calculated with a Monte Carlo code, MORSE-CV. The UFO and MORSE-CV values agreed within the statistical error (less than 6%) of the MORSE-CV calculations, except for the outermost point in the pile. The MORSE-CV values were slightly smaller than the UFO values in almost all cases, and this tendency increased with increasing distance from the neutron source

  4. Calculation of critical level value for radioactivity detection in gamma spectrometric analysis on the base of semiconductor detectors under the Chernobyl' conditions in 1986-1987

    International Nuclear Information System (INIS)

    Glazunov, V.O.; Rusyaev, R.V.

    1989-01-01

    The problem of determining the critical level of radioactivity in a sample by means of a gamma spectrometer with a semiconductor detector is studied theoretically. A formula for the critical level is derived; it shows that the background pulse counting rate must be known in order to determine the minimum gamma photon pulse counting rates. Calculations of the critical level for the Chernobyl conditions in the period from October 1986 to July 1987 are presented. 8 refs.; 7 figs.; 17 tabs

  5. Actuarial values calculated using the incomplete Gamma function

    Directory of Open Access Journals (Sweden)

    Giovanni Mingari Scarpello

    2013-03-01

    The complete expectation of life for a person and the actuarial present value of continuous life annuities are defined by integrals, in each of which at least one factor is a ratio of survival function values. If de Moivre's law of mortality is chosen, such integrals can easily be evaluated, but if the Makeham survival function is adopted, they usually have to be calculated numerically. For the above actuarial figures, closed-form integrations are provided here by means of the incomplete Gamma function.
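
    A small numeric sketch of the contrast drawn above, under assumed parameters: de Moivre's law yields the complete expectation of life in closed form, (ω − x)/2, while under Makeham's law the defining integral is evaluated numerically here. The paper's contribution is to express such integrals through the incomplete Gamma function instead; that closed form is not reproduced. The Makeham parameters A, B, c are invented.

    ```python
    import math
    from scipy.integrate import quad

    A, B, c = 0.0007, 0.00005, 1.10   # hypothetical Makeham parameters

    def makeham_tpx(x, t):
        """t-year survival for a life aged x under mu(x) = A + B*c**x."""
        m = B / math.log(c)
        return math.exp(-A * t - m * c**x * (c**t - 1.0))

    def e_makeham(x, horizon=120.0):
        return quad(lambda t: makeham_tpx(x, t), 0.0, horizon - x)[0]

    def e_de_moivre(x, omega=100.0):
        return (omega - x) / 2.0

    print(e_de_moivre(40.0))   # 30.0 exactly, by the closed form
    print(e_makeham(40.0))     # numerical quadrature stand-in
    ```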

  6. Influence of the Different Primary Cancers and Different Types of Bone Metastasis on the Lesion-based Artificial Neural Network Value Calculated by a Computer-aided Diagnostic System, BONENAVI, on Bone Scintigraphy Images

    Directory of Open Access Journals (Sweden)

    TAKURO ISODA

    2017-01-01

    ...2% for lung cancer. The corresponding possibilities were 14.7% for osteoblastic metastases, 23.9% for mildly osteoblastic metastases, 7.14% for mixed-type metastases, and 16.0% for osteolytic metastases. Conclusion: The lesion-based ANN values calculated by BONENAVI can be influenced by the type of primary cancer and bone metastasis.

  7. The cost of nuclear electricity: economic values and political calculations

    International Nuclear Information System (INIS)

    Stauffer, T.

    1985-01-01

    The subject is covered in sections: introduction (monetary inflation; US-style rate-base formula; cost escalation); electricity generation costs (rate-base calculation formula; regulatory versus economic costs; inflationary case; cost-of-service rates versus inflation; first year electricity costs); rate shock (A. comparison with oil; B. nuclear case; C. comparison with coal/nuclear system; vintaged electricity costs versus growth and inflation); conclusions. (U.K.)

  8. Method of allowing for resonances in calculating reactivity values

    International Nuclear Information System (INIS)

    Kumpf, H.

    1985-01-01

    On the basis of the integral transport equation for the source density an expression has been derived for calculating reactivity values taking resonances in the core and in the sample into account. The model has been used for evaluating reactivities measured in the Rossendorf SEG IV configuration. It is shown that the influence of resonances in the core can be kept tolerable, if a sufficiently thick buffer zone of only slightly absorbing non-resonant material is arranged between the sample and the core. (author)

  9. Calculation of cut-off values based on the Autoimmune Bullous Skin Disorder Intensity Score (ABSIS) and Pemphigus Disease Area Index (PDAI) pemphigus scoring systems for defining moderate, significant and extensive types of pemphigus.

    Science.gov (United States)

    Boulard, C; Duvert Lehembre, S; Picard-Dahan, C; Kern, J S; Zambruno, G; Feliciani, C; Marinovic, B; Vabres, P; Borradori, L; Prost-Squarcioni, C; Labeille, B; Richard, M A; Ingen-Housz-Oro, S; Houivet, E; Werth, V P; Murrell, D F; Hertl, M; Benichou, J; Joly, P

    2016-07-01

    Two pemphigus severity scores, the Autoimmune Bullous Skin Disorder Intensity Score (ABSIS) and the Pemphigus Disease Area Index (PDAI), have been proposed to provide an objective measure of disease activity. However, the use of these scores in clinical practice is limited by the absence of cut-off values that allow differentiation between moderate, significant and extensive types of pemphigus. The objective was to calculate cut-off values defining moderate, significant and extensive pemphigus based on the ABSIS and PDAI scores. In 31 dermatology departments in six countries, consecutive patients with newly diagnosed pemphigus were assessed for pemphigus severity using the ABSIS, PDAI, Physician's Global Assessment (PGA) and Dermatology Life Quality Index (DLQI) scores. Cut-off values defining moderate, significant and extensive subgroups were calculated based on the 25th and 75th percentiles of the ABSIS and PDAI scores. The median ABSIS, PDAI, PGA and DLQI scores of the three severity subgroups were compared in order to validate these subgroups. Ninety-six patients with pemphigus vulgaris (n = 77) or pemphigus foliaceus (n = 19) were included. The median PDAI activity and ABSIS total scores were 27·5 (range 3-84) and 34·8 points (range 0·5-90·5), respectively. The respective cut-off values corresponding to the first and third quartiles of the scores were 15 and 45 for the PDAI, and 17 and 53 for ABSIS. The moderate, significant and extensive subgroups were thus defined, and had distinguishing median ABSIS (P ...) scores. The study suggests cut-off values of 15 and 45 for PDAI and 17 and 53 for ABSIS to distinguish moderate, significant and extensive pemphigus forms. Identifying these pemphigus activity subgroups should help physicians to classify and manage patients with pemphigus. © 2016 British Association of Dermatologists.
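
    A brief sketch of the quartile step described above: the 25th and 75th percentiles of baseline activity scores split patients into moderate, significant and extensive subgroups. The scores are simulated here, not study data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pdai = rng.gamma(shape=2.0, scale=15.0, size=96)  # simulated PDAI scores

    q1, q3 = np.percentile(pdai, [25, 75])
    groups = np.select([pdai < q1, pdai > q3],
                       ["moderate", "extensive"], default="significant")
    print(f"cut-offs: {q1:.1f} and {q3:.1f}")
    print({g: int((groups == g).sum())
           for g in ("moderate", "significant", "extensive")})
    ```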

  10. 40 CFR 600.207-86 - Calculation of fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ... calculation of the original base level fuel economy values), and (iii) All subconfigurations within the new... a new base level. The new base level is identical to the existing base level except that it shall be considered, for the purposes of this paragraph, as containing a new basic engine. The manufacturer will be...

  11. 40 CFR 600.207-93 - Calculation of fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ... calculation of the original base level fuel economy values); and (iii) All subconfigurations within the new... a new base level. The new base level is identical to the existing base level except that it shall be considered, for the purposes of this paragraph, as containing a new basic engine. The manufacturer will be...

  12. Value representations: a value based dialogue tool

    DEFF Research Database (Denmark)

    Petersen, Marianne Graves; Rasmussen, Majken Kirkegaard

    2011-01-01

    Stereotypic presumptions about gender affect the design process, both in relation to how users are understood and how products are designed. As a way to decrease the influence of stereotypic presumptions in the design process, we propose not to disregard the aspect of gender, as this perspective brings valuable insights into different approaches to technology, but instead to view gender through a value lens. Contributing to this perspective, we have developed Value Representations as a design-oriented instrument for staging a reflective dialogue with users. Value Representations...

  13. Calculation of weighted averages approach for the estimation of ping tolerance values

    Science.gov (United States)

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
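
    A compact sketch of the weighted-averages idea: a taxon's tolerance value is its abundance-weighted mean position along a pollution gradient, rescaled to the 0-10 range used by the Ping index. A single combined gradient and invented counts stand in for the study's six chemical constituents.

    ```python
    import numpy as np

    site_score = np.array([1.0, 3.0, 5.0, 8.0])   # pollution gradient per site
    counts = {
        "mayfly_taxon":     np.array([40, 20, 5, 0]),   # pollution-sensitive
        "chironomid_taxon": np.array([2, 10, 30, 60]),  # pollution-tolerant
    }

    def tolerance_value(n, score):
        optimum = (n * score).sum() / n.sum()           # weighted average
        return 10.0 * (optimum - score.min()) / (score.max() - score.min())

    for taxon, n in counts.items():
        print(taxon, round(tolerance_value(n, site_score), 1))
    # sensitive taxa land near 0, tolerant taxa near 10
    ```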

  14. Groebner bases in perturbative calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gerdt, Vladimir P. [Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2004-10-01

    In this paper we outline the most general and universal algorithmic approach to reduction of loop integrals to basic integrals. The approach is based on computation of Groebner bases for recurrence relations derived from the integration by parts method. In doing so we consider generic recurrence relations when propagators have arbitrary integer powers treated as symbolic variables (indices) for the relations.

  15. Groebner bases in perturbative calculations

    International Nuclear Information System (INIS)

    Gerdt, Vladimir P.

    2004-01-01

    In this paper we outline the most general and universal algorithmic approach to reduction of loop integrals to basic integrals. The approach is based on computation of Groebner bases for recurrence relations derived from the integration by parts method. In doing so we consider generic recurrence relations when propagators have arbitrary integer powers treated as symbolic variables (indices) for the relations

  16. Hospital Value-Based Purchasing

    Data.gov (United States)

    U.S. Department of Health & Human Services — Hospital Value-Based Purchasing (VBP) is part of the Centers for Medicare and Medicaid Services (CMS) long-standing effort to link Medicare's payment system to a...

  17. Calculating infrared contributions to vacuum expectation values of gluonic and quark fields

    International Nuclear Information System (INIS)

    Arbuzov, B.A.; Boos, E.E.; Turashvili, K.Sh.

    1986-01-01

    Based on the infrared asymptotics of the lower QCD Green's functions obtained previously, we propose a definition of, and elaborate a technique for calculating, non-perturbative vacuum expectation values of gluon and quark fields. In our calculations, we use only the known QCD parameters: the constituent quark masses, the confining potential slope and the QCD parameter Λ. The values obtained for the vacuum expectation values agree well with experiment. (orig.)

  18. Tariff based value of wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Raekkoelaeinen, J; Vilkko, M; Antila, H; Lautala, P [Tampere Univ. of Technology (Finland)

    1996-12-31

    In this article an approach for determining the value of wind energy is presented. The calculation is based on wholesale tariffs, i.e. the value of wind energy is defined in comparison with other purchases. This approach can be utilised as an aid in investment planning for defining the benefits of new wind generation capacity. A linear programming optimization method is used. A case study is presented for different wind scenarios. The value of wind energy can vary remarkably depending on the timing of power output. (author)

  19. Tariff based value of wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Raekkoelaeinen, J.; Vilkko, M.; Antila, H.; Lautala, P. [Tampere Univ. of Technology (Finland)

    1995-12-31

    In this article an approach for determining the value of wind energy is presented. The calculation is based on wholesale tariffs, i.e. the value of wind energy is defined in comparison with other purchases. This approach can be utilised as an aid in investment planning for defining the benefits of new wind generation capacity. A linear programming optimization method is used. A case study is presented for different wind scenarios. The value of wind energy can vary remarkably depending on the timing of power output. (author)

  20. HOW TO CALCULATE INFORMATION VALUE FOR EFFECTIVE SECURITY RISK ASSESSMENT

    Directory of Open Access Journals (Sweden)

    Mario Sajko

    2006-12-01

    The central problem of information security (infosec) risk assessment is determining the value of an information property or asset. This is particularly evident in quantitative methodologies, in which the information value must be stated in quantitative terms. The aim of this paper is to describe the possibilities for evaluating business information values and the criteria needed for determining the importance of information. For this purpose, the dimensions of information value are determined and the ways used to present the importance of information content are studied. There are two basic approaches that can be used in evaluation, qualitative and quantitative, and they are often combined to determine forms of information content. The proposed criterion is a three-dimensional model, which combines existing experience (i.e. possible solutions for information value assessment) with our own criteria. An attempt to structure information value in a business environment is made as well.

  1. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Science.gov (United States)

    2010-04-01

    ... ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and Normal Value § 351.407 Calculation of constructed value and cost of production. (a) Introduction. This... 19 Customs Duties 3 2010-04-01 2010-04-01 false Calculation of constructed value and cost of...

  2. Data base to compare calculations and observations

    International Nuclear Information System (INIS)

    Tichler, J.L.

    1985-01-01

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine whether calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed.

  3. 19 CFR 351.403 - Sales used in calculating normal value; transactions between affiliated parties.

    Science.gov (United States)

    2010-04-01

    ..., Constructed Export Price, Fair Value, and Normal Value § 351.403 Sales used in calculating normal value... 19 Customs Duties 3 2010-04-01 2010-04-01 false Sales used in calculating normal value... offers for sale in determining normal value. Additionally, this section clarifies the authority of the...

  4. Customer Lifetime and After Lifetime Value - Calculations from an Iranian perspective

    DEFF Research Database (Denmark)

    Hollensen, Svend; Wilson, Jonathan A.J.; Ebrahimi, Mehdi

    2011-01-01

    Customer Lifetime Value (CLV) is an established relationship-marketing-centric approach to evaluating the significance of a customer and what resources should be allocated towards maintaining relations, beyond short-term transactional views. The conceptual argument presented in this paper... contributes one very simple, yet significant argument, which is both transactional and relational. Namely, a large portion of humanity believes in a life beyond current existence: the afterlife. Therefore, death in the psyche of such a person does not terminate benefit seeking, and there is value... in the afterlife. The aim here is to refine value-based calculations, drawing from varying religious perspectives: reincarnation, heaven, and enlightenment, amongst others.

  5. 21 CFR 868.1890 - Predictive pulmonary-function value calculator.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Predictive pulmonary-function value calculator... SERVICES (CONTINUED) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1890 Predictive pulmonary-function value calculator. (a) Identification. A predictive pulmonary-function value calculator is...

  6. Exact-exchange-based quasiparticle calculations

    International Nuclear Information System (INIS)

    Aulbur, Wilfried G.; Staedele, Martin; Goerling, Andreas

    2000-01-01

    One-particle wave functions and energies from Kohn-Sham calculations with the exact local Kohn-Sham exchange and the local density approximation (LDA) correlation potential [EXX(c)] are used as input for quasiparticle calculations in the GW approximation (GWA) for eight semiconductors. Quasiparticle corrections to EXX(c) band gaps are small when EXX(c) band gaps are close to experiment. In the case of diamond, quasiparticle calculations are essential to remedy a 0.7 eV underestimate of the experimental band gap within EXX(c). The accuracy of EXX(c)-based GWA calculations for the determination of band gaps is as good as the accuracy of LDA-based GWA calculations. For the lowest valence band width a qualitatively different behavior is observed for medium- and wide-gap materials. The valence band width of medium- (wide-) gap materials is reduced (increased) in EXX(c) compared to the LDA. Quasiparticle corrections lead to a further reduction (increase). As a consequence, EXX(c)-based quasiparticle calculations give valence band widths that are generally 1-2 eV smaller (larger) than experiment for medium- (wide-) gap materials. (c) 2000 The American Physical Society

  7. Calculation for simulation of archery goal value using a web camera and ultrasonic sensor

    Science.gov (United States)

    Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti

    2017-08-01

    Development of a digital indoor archery simulator based on embedded systems is a solution to the limited availability of adequate fields or open space, especially in big cities. Development of the device requires a simulation that calculates the value achieved on the target, based on a parabolic-motion model parameterized by the initial velocity and the direction of motion of the arrow as it travels to the target. The simulator device should be complemented with a measuring device for initial velocity using ultrasonic sensors and a digital camera for measuring direction relative to the target. The methodology uses research and development of application software with a modeling and simulation approach. The research objective is to create a simulation application that calculates the value achieved by arrows, as a preliminary stage for the development of the archery simulator device. Implementing the score calculation in an application program produces an archery simulation game that can be used as a reference for developing the digital archery simulator for indoor use with embedded systems, ultrasonic sensors and web cameras. The application was developed with the simulation calculation compared against the outer radius of the circle captured by a camera from a distance of three meters.
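
    A rough sketch of the parabolic-motion step described above: from a measured launch speed (the ultrasonic sensor's role) and a launch angle, estimate where the arrow crosses the target plane and convert the radial miss into a ring score. The geometry, ring width and all numbers are invented placeholders.

    ```python
    import math

    def impact_offset(v0, angle_deg, distance_m, g=9.81):
        """Vertical offset (m) from the aim line at the target plane."""
        theta = math.radians(angle_deg)
        t = distance_m / (v0 * math.cos(theta))   # time to reach target plane
        return v0 * math.sin(theta) * t - 0.5 * g * t**2

    def ring_score(radial_miss_m, ring_width_m=0.04):
        """10 points in the innermost ring, one point less per ring outward."""
        return max(0, 10 - int(radial_miss_m // ring_width_m))

    miss = abs(impact_offset(v0=50.0, angle_deg=1.5, distance_m=18.0))
    print(ring_score(miss))   # hypothetical indoor 18 m shot
    ```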

  8. Contribution to the calculation of the alpha value in the study of optimization on radiological protection

    International Nuclear Information System (INIS)

    Perez, Clarice de Freitas Acosta

    2007-01-01

    The alpha value is an extremely important criterion because it determines the time each country takes to meet its goals for decreasing the doses to workers involved with ionizing radiation sources. Presently, countries adopt a single value for alpha based on the annual gross national product (GNP) per capita. The aim of this paper is to show that selecting a curve for alpha would be more efficient than a single value. This curve, in turn, should yield an alpha value constrained by the greatest individual doses present in each optimization process, applied to design and operation. These maximum individual doses should represent the dose distribution among the worker team. To build the curve, the suggested alpha values are based not on GNP per capita but on a distribution function of the maximum individual doses and on the time necessary to reach the goal of 1/10 of the annual dose limit, that is, the region where individual doses are considered acceptable. This new approach to the alpha value solves several problems raised by the present methodology, among which we emphasize: a) only one optimization needs to be accomplished for each set of radiological protection options; b) each country may have different constraint limits, which can create serious problems in international interchange; c) it avoids the possibility of calculating a probable death rate from the collective dose, a type of calculation that is undesirable to international organizations. (author)

  9. Tailoring Agility: Promiscuous Pair Story Authoring and Value Calculation

    Science.gov (United States)

    Tendon, Steve

    This chapter describes how a multi-national software organization created a business plan involving business units from eight countries by following an agile approach, after two previous attempts with traditional approaches had failed. The case is told by the consultant who initiated the implementation of agility in requirements gathering, estimation and planning processes in an international setting. The agile approach was inspired by XP but tailored to meet the organization's peculiar requirements. Two innovations were critical. The first was promiscuous pair story authoring, in which user stories were written by two people (similarly to pair programming) and the pairing changed very often (as frequently as every 15-20 minutes) to achieve promiscuity and cater for diverse points of view. The second was attributing an economic value (rather than a cost) to stories. Continuous recalculation of the financial value of the stories allowed the project's financial return to be assessed. In this case, implementing agility in an international context allowed the involved team members to reach consensus and unanimity of decisions, vision and purpose.

  10. Value concepts and value based collaboration in building projects

    DEFF Research Database (Denmark)

    Jensen, Per Anker

    2005-01-01

    Value has in recent years become a popular term in management theory and practice in general, as well as in economic theory and architectural management. This paper attempts to clarify the various uses and meanings of the concepts of value and values, and six different value concepts are identified. The origin and use of value concepts in classic and modern economic theory and in management theory is outlined. The question of objectivity and subjectivity is discussed in relation to economic value and customer value. Value creation is put in relation to developments in products and processes, and a number of design strategies are identified. The concept and methods of value-based management and collaboration are discussed in this context. The paper is mainly theoretical and based on work during an MBA study in 2002-04 as well as many years of experience as building client and facilities manager.

  11. Analysis of Solar Census Remote Solar Access Value Calculation Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Nangle, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dean, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Van Geet, O. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-03-01

    The costs of photovoltaic (PV) system hardware (PV panels, inverters, racking, etc.) have fallen dramatically over the past few years. Nonhardware (soft) costs, however, have failed to keep pace with the decrease in hardware costs, and soft costs have become a major driver of U.S. PV system prices. Upfront or 'sunken' customer acquisition costs make up a portion of an installation's soft costs and can be addressed through software solutions that aim to streamline sales and system design aspects of customer acquisition. One of the key soft costs associated with sales and system design is collecting information on solar access for a particular site. Solar access, reported in solar access values (SAVs), is a measurement of the available clear sky over a site and is used to characterize the impacts of local shading objects. Historically, onsite shading studies have been required to characterize the SAV of the proposed array and determine the potential energy production of a photovoltaic system.

  12. Assessing absorbed dose heterogeneities for organ S-value calculation in mice

    International Nuclear Information System (INIS)

    Mauxion, T.; Villoing, D.; Marcatili, S.; Garcia, M.P.; Poirot, M.; Bardies, M.; Suhard, J.; Barbet, J.

    2015-01-01

    Full text of publication follows. Introduction and aim: S-values calculated according to the MIRD scheme strongly depend on the size of the source/target regions and on particle ranges (1). Several mean organ S-values were recently calculated for mice in the context of targeted radionuclide therapy and molecular imaging (2). However, the heterogeneity of energy deposition at the sub-organ level is seldom taken into account, and the relevance of mean organ S-values is not systematically evaluated. This study aims at assessing the spatial variations associated with mean S-values for small animals, to estimate the heterogeneity of energy deposition at the sub-organ or voxel level. Materials and methods: a 29 g mouse model generated at high spatial sampling (200×200×200 μm³) with the Moby software was used to calculate S-values for several radionuclides of interest (3). Monte Carlo simulations were performed with GATE (v6.2), in which specific corrections were implemented and validated to improve the accuracy of voxel energy scoring. Mean S-values and standard deviations were calculated from 3D voxel-based energy deposition maps for several source/target organ pairs. As the standard deviation associated with the mean S-value in a given target organ includes both spatial and statistical fluctuations, we simulated an increasing number of primary particles (typically from 10⁶ to 10¹⁰) to estimate the impact of the relative statistical/spatial fluctuations for several source/target pairs. A spatial dispersion factor (HS-value, for Heterogeneity of S-value) was obtained when the standard deviation converged to a stable value. Results: several HS-values calculated for source organs were significant in the case of self-irradiation for all considered radionuclides, but remained very low compared to the values obtained for short and large source/target distances. For example, for ¹³¹I sources located in the thyroid, S(thyroid - thyroid) = 1.80×10⁻⁹ Gy·Bq⁻¹·s⁻¹ and HS(thyroid - thyroid) = 3.09×10⁻¹⁰ Gy...

  13. Which percentile-based approach should be preferred for calculating normalized citation impact values? An empirical comparison of five approaches including a newly developed citation-rank approach (P100)

    NARCIS (Netherlands)

    Bornmann, L.; Leydesdorff, L.; Wang, J.

    2013-01-01

    For comparisons of citation impacts across fields and over time, bibliometricians normalize the observed citation counts with reference to an expected citation value. Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics. Percentiles

  14. An evaluation of airline beta values and their application in calculating the cost of equity capital.

    OpenAIRE

    Turner, Sheelah; Morrell, Peter

    2003-01-01

    This paper focuses on the calculation of the cost of equity capital in a sample of airlines, in comparison to industry-calculated values. The approach usually taken is to apply the Capital Asset Pricing Model to airline stock prices and market indices. The research shows that the calculated β values are sensitive to the precise methodology and calculations used. Further, the low regression model fits indicate that the Capital Asset Pricing Model may not be the most suitable model for β value calcul...
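
    A small sketch of the approach the abstract describes: estimate β by regressing stock returns on market returns, then price equity with the CAPM, k_e = r_f + β(r_m − r_f). The return series are simulated and the rates are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    market = rng.normal(0.005, 0.04, 60)                    # 60 monthly returns
    stock = 0.002 + 1.3 * market + rng.normal(0, 0.05, 60)  # true beta = 1.3

    beta = np.cov(stock, market, ddof=1)[0, 1] / np.var(market, ddof=1)
    r_f, r_m = 0.04, 0.10                                   # assumed rates
    print(f"beta = {beta:.2f}, cost of equity = {r_f + beta * (r_m - r_f):.2%}")
    ```

    The sensitivity the paper reports follows directly: the estimated β, and hence k_e, shifts with the sampling frequency, window length and index chosen.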

  15. Market value calculation and the solution of circularity between value and the weighted average cost of capital WACC

    Directory of Open Access Journals (Sweden)

    Ignacio Vélez-Pareja

    2009-12-01

    Most finance textbooks present the Weighted Average Cost of Capital (WACC) calculation as WACC = Kd×(1-T)×D% + Ke×E%, where Kd is the cost of debt before taxes, T is the tax rate, D% is the percentage of debt on total value, Ke is the cost of equity and E% is the percentage of equity on total value. All of them state (but not with enough emphasis) that the values used to calculate D% and E% are market values. Although they devote special space and thought to calculating Kd and Ke, little effort is made toward the correct calculation of market values. This means that several points are not sufficiently dealt with: market values, their location in time, the occurrence of tax payments, changes of the WACC in time, and the circularity in calculating the WACC. The purpose of this note is to clear up these ideas, solve the circularity problem and emphasize some ideas that are usually overlooked. Some suggestions are also presented on how to calculate, or estimate, the equity cost of capital.
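
    A minimal sketch of the circularity itself (not the note's particular solution): firm value needs the WACC, but the WACC's weights need firm value. With an assumed level-perpetuity free cash flow and invented rates, a fixed-point iteration converges to a mutually consistent pair.

    ```python
    kd, ke, tax = 0.08, 0.15, 0.30   # invented costs of debt/equity, tax rate
    debt, fcf = 400.0, 150.0         # market value of debt; perpetual FCF

    value = 1000.0                   # initial guess of total firm value
    for _ in range(200):             # fixed-point iteration
        d_pct = debt / value
        wacc = kd * (1 - tax) * d_pct + ke * (1.0 - d_pct)
        value = fcf / wacc           # level-perpetuity valuation
    print(round(value, 2), round(wacc, 4))  # consistent value and WACC
    ```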

  16. What Is the Value of Value-Based Purchasing?

    Science.gov (United States)

    Tanenbaum, Sandra J

    2016-10-01

    Value-based purchasing (VBP) is a widely favored strategy for improving the US health care system. The meaning of value that predominates in VBP schemes is (1) conformance to selected process and/or outcome metrics, and sometimes (2) such conformance at the lowest possible cost. In other words, VBP schemes choose some number of "quality indicators" and financially incent providers to meet them (and not others). Process measures are usually based on clinical science that cannot determine the effects of a process on individual patients or patients with comorbidities, and do not necessarily measure effects that patients value; additionally, there is no provision for different patients valuing different things. Proximate outcome measures may or may not predict distal ones, and the more distal the outcome, the less reliably it can be attributed to health care. Outcome measures may be quite rudimentary, such as mortality rates, or highly contestable: survival or function after prostate surgery? When cost is an element of value-based purchasing, it is the cost to the value-based payer and not to other payers or patients' families. The greatest value of value-based purchasing may not be to patients or even payers, but to policy makers seeking a morally justifiable alternative to politically contested regulatory policies. Copyright © 2016 by Duke University Press.

  17. Intensity of emission lines of the quiescent solar corona: comparison between calculated and observed values

    Science.gov (United States)

    Krissinel, Boris

    2018-03-01

    The paper reports the results of calculations of the center-to-limb intensity of optically thin line emission in the EUV and FUV wavelength ranges. The calculations employ a multicomponent model of the quiescent solar corona that includes a collection of loops of various sizes, spicules, and free (inter-loop) matter. Theoretical intensity values are found from the probabilities of encountering parts of loops along the line of sight, combined with the probability of the absence of other coronal components. The model uses 12 loops with sizes from 3200 to 210,000 km and with different values of the rarefaction index and of the pressure at the loop base and apex; the temperature at the loop apices is 1,400,000 K. The calculations utilize the CHIANTI database. The comparison between theoretical and observed emission intensity values for coronal and transition region lines obtained by the SUMER, CDS, and EIS telescopes shows quite satisfactory agreement, particularly for the solar disk center. For the data acquired above the limb, the larger discrepancies found in the analysis are attributed to errors in the EIS measurements.

  18. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    Science.gov (United States)

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
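
    As a rough illustration of the nonparametric approach recommended for reference sample groups with n≥40, the sketch below computes a 95% reference interval with bootstrap 90% confidence intervals on simulated data. It is not Reference Value Advisor's own code, and the sample and bootstrap settings are assumptions.

```python
# A minimal sketch of a nonparametric 95% reference interval with
# bootstrap 90% CIs on the limits; the data are simulated.

import numpy as np

rng = np.random.default_rng(42)
values = rng.lognormal(mean=1.0, sigma=0.3, size=120)   # reference sample

# Nonparametric limits: 2.5th and 97.5th percentiles (recommended when n >= 40)
lower, upper = np.percentile(values, [2.5, 97.5])

# Bootstrap 90% confidence intervals around each limit
boots = np.array([np.percentile(rng.choice(values, values.size), [2.5, 97.5])
                  for _ in range(2000)])
ci_lower = np.percentile(boots[:, 0], [5, 95])
ci_upper = np.percentile(boots[:, 1], [5, 95])

print(f"reference interval: [{lower:.2f}, {upper:.2f}]")
print(f"90% CI of lower limit: {ci_lower.round(2)}, upper limit: {ci_upper.round(2)}")
```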

  19. 12 CFR 997.4 - Calculation of the quarterly present-value determination.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Calculation of the quarterly present-value determination. 997.4 Section 997.4 Banks and Banking FEDERAL HOUSING FINANCE BOARD NON-BANK SYSTEM ENTITIES RESOLUTION FUNDING CORPORATION OBLIGATIONS OF THE BANKS § 997.4 Calculation of the quarterly present-value...

  20. 40 CFR 600.209-85 - Calculation of fuel economy values for labeling.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.209... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for...

  1. 40 CFR 600.209-95 - Calculation of fuel economy values for labeling.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.209... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for...

  2. Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation

    Science.gov (United States)

    Ayre, Colin; Scally, Andrew John

    2014-01-01

    The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
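
    A minimal sketch of how such critical values can be recovered from exact binomial probabilities, under the usual null hypothesis that panelists rate an item "essential" by chance (p = 0.5); the one-tailed alpha and the panel size are assumptions for illustration, not values taken from the paper.

```python
# A minimal sketch: Lawshe's CVR and a one-tailed exact-binomial
# critical value; assumes scipy is available.

from scipy.stats import binom

def cvr(n_essential, n_panelists):
    """Lawshe's CVR = (ne - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

def critical_cvr(n_panelists, alpha=0.05):
    """Smallest CVR whose 'essential' count is improbable under chance
    agreement (p = 0.5), using the exact binomial upper tail."""
    for ne in range(n_panelists + 1):
        if binom.sf(ne - 1, n_panelists, 0.5) <= alpha:  # P(X >= ne)
            return cvr(ne, n_panelists)
    return 1.0

print(cvr(9, 10))        # CVR when 9 of 10 panelists rate an item essential
print(critical_cvr(10))  # minimum CVR deemed non-chance for N = 10
```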

  3. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  4. Analysis of a calculation method for the determination of the value of safety or control bars

    International Nuclear Information System (INIS)

    Aguilar H, F.; Torres A, C.; Filio L, C.

    1982-09-01

    Because the control or safety bars in a nuclear reactor are made of strongly absorbing materials, diffusion theory, as a tool for calculating bar values, is not directly applicable; transport theory should be used instead. However, the speed and economy of diffusion codes for reactor calculations make them attractive, and for this reason they are used to determine characteristic parameters and even bar values, though not without first making some theoretical developments that render the theory applicable. The application of diffusion theory to strongly absorbing media is based on the use of effective cross sections, distinct from the real ones, obtained by imposing that the ratio between the flux and its gradient at the external surface of such media (a control element in general, of bar or slab type) be the same as that obtained using transport theory, with the real cross sections, over the whole control region (multiplying and absorbing media). The effective cross sections were obtained from the Leopard-NUMICE cell code, which incorporates the corresponding theory for calculating effective cross sections. These constants were then used in the two-dimensional diffusion code Exterminator-II, simulating in it the distribution of safety or control bars. The constants of the homogeneous fuel cell were also obtained from the cell code. The bar values obtained from the diffusion code were compared with experimental results obtained in the Swedish Rφ reactor, fuelled with natural uranium and moderated with heavy water. In this work, the bar value of one of these bars is analysed in an attempt to determine the applicability of the method. (Author)

  5. Multivariate statistics employed as a proposal for calculating the market value and property taxation

    Directory of Open Access Journals (Sweden)

    Jonilson Heil

    2013-05-01

    Full Text Available It is well known that Brazilian municipalities aim to increase their own revenues and reduce dependence on state and federal financial transfers by optimizing their tax revenues. It is also known that the municipalities intend to carry out that mission with integrity and clarity, and to present their accounts straightforwardly to regulators as well as to their respective populations. This paper presents a study of the methodology employed in a town in the central-southern region of the state of Paraná to calculate venal (assessed) values and the consequent IPTU (property tax) and ITBI taxation of these properties. Based on municipal registration data, an analysis of the characteristics that most influence the monetary valuation of properties was developed by means of multivariate statistical techniques, and, applying multiple linear regression analysis, models are proposed to estimate venal values, allowing tax calculations to be predicted from them; a sketch of this regression step follows below. Finally, comparisons are presented between the results from the methodology used by the municipality and those obtained by the proposed models, which are suggested for general use.
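
    A minimal sketch of the regression step described, ordinary least squares on invented property features; the feature set, data and currency units are hypothetical, not those of the Paraná municipality.

```python
# A minimal sketch: estimate venal (assessed) property values by
# multiple linear regression; the data below are invented.

import numpy as np

# columns: built area (m2), lot area (m2), age (years)
X = np.array([[120, 300, 10],
              [ 80, 250, 25],
              [200, 450,  5],
              [ 95, 310, 18],
              [150, 380, 12]], dtype=float)
y = np.array([250_000, 140_000, 480_000, 190_000, 320_000], dtype=float)

# add an intercept column and solve ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_property = np.array([1, 110, 290, 15])   # intercept + features
print(f"estimated venal value: {new_property @ coef:,.0f}")
```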

  6. THE ACCOUNTING POSTEMPLOYMENT BENEFITS BASED ON ACTUARIAL CALCULATIONS

    Directory of Open Access Journals (Sweden)

    Anna CEBOTARI

    2017-11-01

    Full Text Available The accounting of post-employment benefits based on actuarial calculations at present remains a subject studied in Moldova only theoretically. Applying actuarial calculations in accounting in fact reflects its evolving character. Because national accounting standards have been adapted to international standards, which in turn require the valuation of assets and liabilities at fair value, there is a need to draw up exact calculations grounded in probability theory and mathematical statistics. One of the main objectives of accounting information is to be reflected in the entity's financial statements and provided to internal and external users. Hence arises the need to report highly reliable information, which can be provided by applying actuarial calculations.

  7. Approximate calculation method for integral of mean square value of nonstationary response

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Fukano, Azusa

    2010-01-01

    The response of a structure subjected to nonstationary random vibration, such as earthquake excitation, is itself nonstationary random vibration. Calculating the statistical characteristics of such a response is complicated. The mean square value of the response is usually used to evaluate random response, and the integral of the mean square value corresponds to the total energy of the response. In this paper, a simplified calculation method to obtain the integral of the mean square value of the response is proposed. As input excitation, nonstationary white noise and nonstationary filtered white noise are used. Integrals of the mean square value of the response are calculated for various values of the parameters. It is found that the proposed method gives the exact value of the integral of the mean square value of the response.
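
    For orientation, the sketch below estimates the quantity in question by brute force: it simulates a single-degree-of-freedom oscillator under envelope-modulated white noise, averages x² over a Monte Carlo ensemble, and integrates the result in time. All parameters are invented, and this is not the paper's simplified method.

```python
# A minimal sketch: integral of the mean square response of an SDOF
# oscillator to nonstationary (modulated) white noise, by Monte Carlo.

import numpy as np

dt, T, n_runs = 0.005, 10.0, 400
t = np.arange(0.0, T, dt)
wn, zeta = 2 * np.pi, 0.05                 # natural frequency, damping ratio
envelope = np.minimum(t / 2.0, 1.0)        # nonstationary modulation

ms = np.zeros(t.size)
rng = np.random.default_rng(0)
for _ in range(n_runs):
    x, v = 0.0, 0.0
    f = envelope * rng.normal(0.0, 1.0, t.size) / np.sqrt(dt)  # modulated white noise
    xs = np.empty(t.size)
    for i in range(t.size):                # semi-implicit Euler integration
        a = f[i] - 2 * zeta * wn * v - wn ** 2 * x
        v += a * dt
        x += v * dt
        xs[i] = x
    ms += xs ** 2
ms /= n_runs                               # ensemble estimate of E[x^2](t)

total = np.trapz(ms, t)                    # integral of the mean square response
print(f"integral of E[x^2(t)] over [0, {T}] s: {total:.4e}")
```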

  8. Value-based medicine: concepts and application

    OpenAIRE

    Jong-Myon Bae

    2015-01-01

    Global healthcare in the 21st century is characterized by evidence-based medicine (EBM), patient-centered care, and cost effectiveness. EBM involves clinical decisions being made by integrating patient preference with medical treatment evidence and physician experiences. The Center for Value-Based Medicine suggested value-based medicine (VBM) as the practice of medicine based upon the patient-perceived value conferred by an intervention. VBM starts with the best evidence-based data and converts it to patient value-based data, so that it allows clinicians to deliver higher quality patient care than EBM alone. The final goals of VBM are improving quality of healthcare and using healthcare resources efficiently. This paper introduces the concepts and application of VBM and suggests some strategies for promoting related research.

  9. Compilations of measured and calculated physicochemical property values for PCBs, PBDEs, PCDDs and PAHs

    Data.gov (United States)

    U.S. Environmental Protection Agency — The dataset consists of compilations of measured and calculated physicochemical property values for PCBs, PBDEs, PCDDs and PAHs. The properties included in this...

  10. The value of pulmonary vessel CT measuring and calculating of relative ratio

    International Nuclear Information System (INIS)

    Ji Jiansong; Xu Xiaoxiong; Lv Suzhen; Zhao Zhongwei; Wang Zufei; Xu Min; Gong Jianping

    2004-01-01

    Objective: To evaluate the value of CT measurement and calculation of the vessels of isolated pig lungs, by comparison with measurement and calculation of resin casts of the same vessels. Methods: Four isolated pig lungs, whose vessels were filled with ABS liquid or self-solidifying resin liquid, were CT scanned and measured, and the relative ratios of superior/inferior order and of vein/artery of the same order were calculated. After the resin casts were made, the same measurements and calculations were performed on them. Results: The second-order vein/artery ratio calculated by the two methods showed a statistically significant difference (P < 0.05). Conclusion: CT has high value in calculating the relative ratio of superior/inferior order.

  11. Calculation of the number of branches of multi-valued decision trees in computer aided importance rank of parameters

    Directory of Open Access Journals (Sweden)

    Tiszbierek Agnieszka

    2017-01-01

    Full Text Available An elaborate digital computer programme, which supports the time-consuming process of selecting the importance rank of construction and operation parameters by identifying optimum sets, is based on the Quine-McCluskey algorithm for minimizing individual partial multi-valued logic functions. An example with real time data, calculated by means of the programme, showed that among the obtained optimum sets there were some which had a different number of real branches when presented on a multi-valued logic decision tree. That is why the idea of elaborating another functionality of the programme was pursued: a module calculating the number of branches of the real multi-valued logic decision trees that present the optimum sets chosen by the programme. This paper presents the idea and the method for developing such a module, which calculates the number of real branches for each of the optimum sets indicated by the programme, as well as the calculation process itself.

  12. Values-based recruitment in health care.

    Science.gov (United States)

    Miller, Sam Louise

    2015-01-27

    Values-based recruitment is a process being introduced to student selection for nursing courses and appointment to registered nurse posts. This article discusses the process of values-based recruitment and demonstrates why it is important in health care today. It examines the implications of values-based recruitment for candidates applying to nursing courses and to newly qualified nurses applying for their first posts in England. To ensure the best chance of success, candidates should understand the principles and process of values-based recruitment and how to prepare for this type of interview.

  13. Decay hazard (Scheffer) index values calculated from 1971-2000 climate normal data

    Science.gov (United States)

    Charles G. Carll

    2009-01-01

    Climate index values for estimating decay hazard to wood exposed outdoors above ground (commonly known as Scheffer index values) were calculated for 280 locations in the United States (270 locations in the conterminous United States) using the most current climate normal data available from the National Climatic Data Center. These were data for the period 1971–2000. In...

  14. 19 CFR 351.408 - Calculation of normal value of merchandise from nonmarket economy countries.

    Science.gov (United States)

    2010-04-01

    ... nonmarket economy countries. 351.408 Section 351.408 Customs Duties INTERNATIONAL TRADE ADMINISTRATION... economy countries. (a) Introduction. In identifying dumping from a nonmarket economy country, the Secretary normally will calculate normal value by valuing the nonmarket economy producers' factors of...

  15. Sensitivity of NTCP parameter values against a change of dose calculation algorithm

    International Nuclear Information System (INIS)

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-01-01

    Optimization of radiation treatment planning requires estimates of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have been retrospectively recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations, the NTCP parameters were taken from previously published values for three different models. For the CC calculations, the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for the three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters, in order to ensure correct use of the models.

  16. GPU based acceleration of first principles calculation

    International Nuclear Information System (INIS)

    Tomono, H; Tsumuraya, K; Aoki, M; Iitaka, T

    2010-01-01

    We present Graphics Processing Unit (GPU) accelerated simulations of first-principles electronic structure calculations. The FFT, which is the most time-consuming part, is accelerated by a factor of about 10. As a result, the total computation time of a first-principles calculation is reduced to 15 percent of that of the CPU.

  17. CALCULATION OF LAND VALUE ON STATE FOREST ENTERPRISES BY USING FAUSTMANN FORMULA

    Directory of Open Access Journals (Sweden)

    Atakan Öztürk

    2000-04-01

    Full Text Available Land, one of the asset items of forest enterprises, is both an establishment space and a source and store of raw materials for forest enterprises. Land value ranks second, after the growing stock, among fixed and total capital items. The land value helps to determine the forest value, the total capital and the economic performance of forest enterprises. This study proposes and applies a method, the Faustmann formula, for calculating the land value. In conclusion, the land values of the Artvin State Forest Enterprise and the Ardanuç State Forest Enterprise were found to be -1 259 103 TL/ha and 408 194 TL/ha, respectively.
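
    A minimal sketch of the Faustmann land expectation value in its textbook formulation: one rotation's net cash flows are compounded to rotation age and treated as a perpetual periodic series. The cash flows, rotation length and discount rate below are invented, not the enterprises' data.

```python
# A minimal sketch of the Faustmann land expectation value (LEV):
# LEV = sum(a_t * (1+i)**(T - t)) / ((1+i)**T - 1), with hypothetical inputs.

def faustmann_lev(cash_flows, rotation, rate):
    """cash_flows: list of (year, net amount) within one rotation;
    rotation: rotation length T in years; rate: discount rate i."""
    future_value = sum(a * (1 + rate) ** (rotation - t) for t, a in cash_flows)
    return future_value / ((1 + rate) ** rotation - 1)

# planting cost in year 0, thinning revenue in year 30, final harvest in year 60
lev = faustmann_lev([(0, -1500.0), (30, 800.0), (60, 25000.0)],
                    rotation=60, rate=0.03)
print(f"land expectation value: {lev:,.2f} per ha")
```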

  18. Neurocognitive Mechanisms Underlying Value-Based Decision-Making: From Core Values to Economic Value

    Directory of Open Access Journals (Sweden)

    Tobias eBrosch

    2013-07-01

    Full Text Available Value plays a central role in practically every aspect of human life that requires a decision: whether we choose between different consumer goods, whether we decide which person we marry or which political candidate gets our vote, we choose the option that has more value to us. Over the last decade, neuroeconomic research has mapped the neural substrates of economic value, revealing that activation in brain regions such as ventromedial prefrontal cortex (VMPFC), ventral striatum or posterior cingulate cortex reflects how much an individual values an option and which of several options he/she will choose. However, while great progress has been made exploring the mechanisms underlying concrete decisions, neuroeconomic research has been less concerned with the questions of why people value what they value, and why different people value different things. Social psychologists and sociologists have long been interested in core values, motivational constructs that are intrinsically linked to the self-schema and are used to guide actions and decisions across different situations and different time points. Core value may thus be an important determinant of individual differences in economic value computation and decision-making. Based on a review of recent neuroimaging studies investigating the neural representation of core values and their interactions with neural systems representing economic value, we outline a common framework that integrates the core value concept and neuroeconomic research on value-based decision-making.

  19. Neurocognitive mechanisms underlying value-based decision-making: from core values to economic value.

    Science.gov (United States)

    Brosch, Tobias; Sander, David

    2013-01-01

    Value plays a central role in practically every aspect of human life that requires a decision: whether we choose between different consumer goods, whether we decide which person we marry or which political candidate gets our vote, we choose the option that has more value to us. Over the last decade, neuroeconomic research has mapped the neural substrates of economic value, revealing that activation in brain regions such as ventromedial prefrontal cortex (VMPFC), ventral striatum or posterior cingulate cortex reflects how much an individual values an option and which of several options he/she will choose. However, while great progress has been made exploring the mechanisms underlying concrete decisions, neuroeconomic research has been less concerned with the questions of why people value what they value, and why different people value different things. Social psychologists and sociologists have long been interested in core values, motivational constructs that are intrinsically linked to the self-schema and are used to guide actions and decisions across different situations and different time points. Core value may thus be an important determinant of individual differences in economic value computation and decision-making. Based on a review of recent neuroimaging studies investigating the neural representation of core values and their interactions with neural systems representing economic value, we outline a common framework that integrates the core value concept and neuroeconomic research on value-based decision-making.

  20. Neurocognitive mechanisms underlying value-based decision-making: from core values to economic value

    Science.gov (United States)

    Brosch, Tobias; Sander, David

    2013-01-01

    Value plays a central role in practically every aspect of human life that requires a decision: whether we choose between different consumer goods, whether we decide which person we marry or which political candidate gets our vote, we choose the option that has more value to us. Over the last decade, neuroeconomic research has mapped the neural substrates of economic value, revealing that activation in brain regions such as ventromedial prefrontal cortex (VMPFC), ventral striatum or posterior cingulate cortex reflects how much an individual values an option and which of several options he/she will choose. However, while great progress has been made exploring the mechanisms underlying concrete decisions, neuroeconomic research has been less concerned with the questions of why people value what they value, and why different people value different things. Social psychologists and sociologists have long been interested in core values, motivational constructs that are intrinsically linked to the self-schema and are used to guide actions and decisions across different situations and different time points. Core value may thus be an important determinant of individual differences in economic value computation and decision-making. Based on a review of recent neuroimaging studies investigating the neural representation of core values and their interactions with neural systems representing economic value, we outline a common framework that integrates the core value concept and neuroeconomic research on value-based decision-making. PMID:23898252

  1. Value-based medicine: concepts and application

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2015-03-01

    Full Text Available Global healthcare in the 21st century is characterized by evidence-based medicine (EBM), patient-centered care, and cost effectiveness. EBM involves clinical decisions being made by integrating patient preference with medical treatment evidence and physician experiences. The Center for Value-Based Medicine suggested value-based medicine (VBM) as the practice of medicine based upon the patient-perceived value conferred by an intervention. VBM starts with the best evidence-based data and converts it to patient value-based data, so that it allows clinicians to deliver higher quality patient care than EBM alone. The final goals of VBM are improving quality of healthcare and using healthcare resources efficiently. This paper introduces the concepts and application of VBM and suggests some strategies for promoting related research.

  2. Value-based medicine: concepts and application

    Science.gov (United States)

    Bae, Jong-Myon

    2015-01-01

    Global healthcare in the 21st century is characterized by evidence-based medicine (EBM), patient-centered care, and cost effectiveness. EBM involves clinical decisions being made by integrating patient preference with medical treatment evidence and physician experiences. The Center for Value-Based Medicine suggested value-based medicine (VBM) as the practice of medicine based upon the patient-perceived value conferred by an intervention. VBM starts with the best evidence-based data and converts it to patient value-based data, so that it allows clinicians to deliver higher quality patient care than EBM alone. The final goals of VBM are improving quality of healthcare and using healthcare resources efficiently. This paper introduces the concepts and application of VBM and suggests some strategies for promoting related research. PMID:25773441

  3. Value-based medicine: concepts and application.

    Science.gov (United States)

    Bae, Jong-Myon

    2015-01-01

    Global healthcare in the 21st century is characterized by evidence-based medicine (EBM), patient-centered care, and cost effectiveness. EBM involves clinical decisions being made by integrating patient preference with medical treatment evidence and physician experiences. The Center for Value-Based Medicine suggested value-based medicine (VBM) as the practice of medicine based upon the patient-perceived value conferred by an intervention. VBM starts with the best evidence-based data and converts it to patient value-based data, so that it allows clinicians to deliver higher quality patient care than EBM alone. The final goals of VBM are improving quality of healthcare and using healthcare resources efficiently. This paper introduces the concepts and application of VBM and suggests some strategies for promoting related research.

  4. Application of maximum values for radiation exposure and principles for the calculation of radiation doses

    International Nuclear Information System (INIS)

    2007-08-01

    The guide presents the definitions of equivalent dose and effective dose, the principles for calculating these doses, and instructions for applying their maximum values. The limits (Annual Limit on Intake and Derived Air Concentration) derived from the dose limits are also presented for the purpose of monitoring exposure to internal radiation. The calculation of radiation doses caused to a patient by medical research and treatment involving exposure to ionizing radiation is beyond the scope of this ST Guide.

  5. Development of MATLAB Scripts for the Calculation of Thermal Manikin Regional Resistance Values

    Science.gov (United States)

    2016-01-01

    A software tool has been developed via MATLAB® scripts to reduce the amount of repetitive and time-consuming calculations that are...

  6. Evaluation bases for calculation methods in radioecology

    International Nuclear Information System (INIS)

    Bleck-Neuhaus, J.; Boikat, U.; Franke, B.; Hinrichsen, K.; Hoepfner, U.; Ratka, R.; Steinhilber-Schwab, B.; Teufel, D.; Urbach, M.

    1982-03-01

    The seven contributions in this book deal with the state and problems of radioecology. In particular, it analyses: the propagation of radioactive materials in the atmosphere; the transfer of radioactive substances from the soil into plants, and from animal feed into meat; the exposure pathways for, and high-risk groups of, the population; the uncertainties and the bandwidth of the ingestion factor; and the treatment of questions of radioecology in practice. The calculation model is assessed, and the difficulty of laying down data in the general calculation basis is evaluated. (DG) [de

  7. Calculating Heat of Formation Values of Energetic Compounds: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Michael S. Elioff

    2016-01-01

    Full Text Available Heat of formation is one of several important parameters used to assess the performance of energetic compounds. We evaluated the ability of six different methods to accurately calculate gas-phase heat of formation (ΔfH°298,g) values for a test set of 45 nitrogen-containing energetic compounds. Density functional theory coupled with the use of isodesmic or other balanced equations yielded calculated results in which 82% (37 of 45) of the ΔfH°298,g values were within ±2.0 kcal/mol of the most recently recommended experimental/reference values available. This was compared to a procedure using density functional theory (DFT) coupled with an atom and group contribution method, in which 51% (23 of 45) of the ΔfH°298,g values were within ±2.0 kcal/mol of these values. The T1 procedure and Benson's group additivity method yielded results in which 51% (23 of 45) and 64% (23 of 36) of the ΔfH°298,g values, respectively, were within ±2.0 kcal/mol of these values. We also compared two relatively new semiempirical approaches (PM7 and RM1) with regard to their ability to accurately calculate ΔfH°298,g. Although semiempirical methods continue to improve, they were found to be less accurate than the other approaches for the test set used in this investigation.

  8. Calculating Traffic based on Road Sensor Data

    NARCIS (Netherlands)

    Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Rata, Debanshu; Sikora, Monika

    2014-01-01

    Road sensors gather a lot of statistical data about traffic. In this paper, we discuss how a measure for the amount of traffic on the roads can be derived from this data, such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for

  9. Criticality criteria for submissions based on calculations

    International Nuclear Information System (INIS)

    Burgess, M.H.

    1975-06-01

    Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations as to its application are made. (author)

  10. A Unifying Theory of Value Based Management

    OpenAIRE

    Weaver, Samuel C.; Weston, J. Fred

    2003-01-01

    We identify four alternative performance metrics used in value based management (VBM). (1) The basic one is intrinsic value analysis (IVA), the discounted cash flow (DCF) methodology. (2) We show that this framework is consistent with returns to shareholders (RTS, capital gains plus dividends) measured over appropriate time horizons. (3) Economic profit (EP) [also called economic value added (EVA®)] takes from the DCF free cash flow valuation the net operating profits after taxes (NOPAT), divide...

  11. Measurement and estimation of dew point for SNG. [Comparison of calculated and measured values]

    Energy Technology Data Exchange (ETDEWEB)

    Furuyama, Y.

    1974-08-01

    Toho Gas measured and estimated SNG dew points in high-pressure deliveries by calculating the theoretical values by the high-pressure gas-liquid equilibrium theory using the pressure-extrapolation method to reach K = 1, and the BWR method to estimate fugacity, then verifying these values experimentally. The experimental values were measured at 161.7 to 367.5 psi using the conventional static and circulation methods, in addition to a newly developed method consisting of circulating a known composition of gas mixtures, partially freezing them, and monitoring the dew point by observing the droplets on a mirror cooled by blowing liquid nitrogen. Good agreement was found between the calculated and the experimental values.

  12. Comparison of integral values for measured and calculated fast neutron spectra in lithium fluoride piles

    International Nuclear Information System (INIS)

    Sekimoto, Hiroshi

    1989-01-01

    The tritium production density, kerma heat production density, dose and certain integral values of scalar neutron spectra in bare and graphite-reflected lithium-fluoride piles irradiated with D-T neutrons were evaluated from the pulse height distribution of a miniature NE213 neutron spectrometer with UFO data processing code, and compared with the values calculated with MORSE-CV Monte Carlo code. (author). 8 refs.; 1 fig.; 2 tabs

  13. Value-based management of design reuse

    Science.gov (United States)

    Carballo, Juan Antonio; Cohn, David L.; Belluomini, Wendy; Montoye, Robert K.

    2003-06-01

    Effective design reuse in electronic products has the potential to provide very large cost savings, substantial time-to-market reduction, and extra sources of revenue. Unfortunately, critical reuse opportunities are often missed because, although they provide clear value to the corporation, they may not benefit the business performance of an internal organization. It is therefore crucial to provide tools to help reuse partners participate in a reuse transaction when the transaction provides value to the corporation as a whole. Value-based Reuse Management (VRM) addresses this challenge by (a) ensuring that all parties can quickly assess the business performance impact of a reuse opportunity, and (b) encouraging high-value reuse opportunities by supplying value-based rewards to potential parties. In this paper we introduce the Value-Based Reuse Management approach and we describe key results on electronic designs that demonstrate its advantages. Our results indicate that Value-Based Reuse Management has the potential to significantly increase the success probability of high-value electronic design reuse.

  14. The internal radiation dose calculations based on Chinese mathematical phantom

    International Nuclear Information System (INIS)

    Wang Haiyan; Li Junli; Cheng Jianping; Fan Jiajin

    2006-01-01

    Internal radiation dose calculations based on Chinese data become more and more important with the development of nuclear medicine. The MIRD method, developed and perfected by the Society of Nuclear Medicine (America), is based on European and American mathematical phantoms and does not fit the Chinese population well. The transport of γ-rays in the Chinese mathematical phantom was simulated with the Monte Carlo method in programs such as MCNP4C. The specific absorbed fractions (Φ) for the Chinese phantom were calculated and a Chinese Φ database was created. The results were compared with the values recommended by ORNL. The method was shown to be correct by the coherence of the results when the target organ was the same as the source organ; otherwise, the differences were due to the different phantoms and the choice of different physical models. (authors)

  15. Adding Production Value Through Application of Value Based Scheduling

    DEFF Research Database (Denmark)

    Lindhard, Søren; Wandahl, Søren

    2012-01-01

    Customer value is a key goal in the Lean philosophy; essentially, only actions that add value should be conducted. In a transformation view, the basic lean approach is to remove waste, which indirectly increases value (or withstands value loss). Lean Construction acknowledges two different types o...... be minimized and management should seek towards democratic leadership.

  16. 19 CFR 351.406 - Calculation of normal value if sales are made at less than cost of production.

    Science.gov (United States)

    2010-04-01

    ... Price, Fair Value, and Normal Value § 351.406 Calculation of normal value if sales are made at less than cost of production. (a) Introduction. In determining normal value, the Secretary may disregard sales of... 19 Customs Duties 3 2010-04-01 2010-04-01 false Calculation of normal value if sales are made at...

  17. Event-based historical value-at-risk

    NARCIS (Netherlands)

    Hogenboom, F.P.; Winter, Michael; Hogenboom, A.C.; Jansen, Milan; Frasincar, F.; Kaymak, U.

    2012-01-01

    Value-at-Risk (VaR) is an important tool to assess portfolio risk. When calculating VaR based on historical stock return data, we hypothesize that this historical data is sensitive to outliers caused by news events in the sampled period. In this paper, we research whether the VaR accuracy can be
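
    A minimal sketch of the historical VaR calculation at issue, the empirical quantile of past returns, with a few injected outliers showing the sensitivity the paper hypothesizes; the return series is simulated, not market data.

```python
# A minimal sketch: historical Value-at-Risk as an empirical quantile,
# with injected news-event outliers; all data are simulated.

import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(0.0005, 0.02, size=750)   # ~3 years of daily returns
returns[::250] -= 0.10                         # inject a few news-event shocks

def historical_var(returns, alpha=0.05):
    """Loss threshold exceeded on a fraction alpha of historical days."""
    return -np.percentile(returns, 100 * alpha)

print(f"95% one-day VaR: {historical_var(returns):.2%}")
print(f"with outliers filtered: {historical_var(returns[returns > -0.05]):.2%}")
```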

  18. 40 CFR 600.210-08 - Calculation of fuel economy values for labeling.

    Science.gov (United States)

    2010-07-01

    ... including, but not limited to battery electric vehicles, fuel cell vehicles, plug-in hybrid electric... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for... (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy...

  19. Method of calculation of critical values of financial indicators for developing food security strategy

    Science.gov (United States)

    Aigyl Ilshatovna, Sabirova; Svetlana Fanilevna, Khasanova; Vildanovna, Nagumanova Regina

    2018-05-01

    On the basis of decision making theory (minimax and maximin approaches), the authors propose a technique and present the results of calculations of the critical values of effectiveness indicators for agricultural producers in the Republic of Tatarstan for 2013-2015. The necessity of monitoring the effectiveness of state support, and the directions of its improvement, is justified.

  20. Activity Based Costing in Value Stream Mapping

    Directory of Open Access Journals (Sweden)

    S. S. Abuthakeer

    2010-12-01

    Full Text Available This paper attempts to integrate the Value Stream Map (VSM) with cost aspects. A value stream map provides a blueprint for implementing lean manufacturing concepts by illustrating information and material flow in a value stream. The objective of the present work is to integrate the various cost aspects. The idea is to introduce a cost line, which enhances clarity in decision making. The redesigned map proves to be effective in highlighting the improvement areas in terms of quantitative data. A takt time calculation is carried out to set the pace of production. Target cost is set as a benchmark for product cost. The results of the study indicate that implementing VSM led to reductions in the following areas: processing lead time by 34%, processing cycle time by 35%, inventory level by 66%, and product cost from Rs 137 to Rs 125. It was found that adopting VSM in a small-scale industry can make significant improvements.
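
    A minimal sketch of the takt-time calculation mentioned above, available working time divided by customer demand; the shift length and demand are invented numbers, not the study's data.

```python
# A minimal sketch: takt time = available working time / customer demand.

def takt_time(available_seconds_per_day, daily_demand):
    """Pace at which one unit must be completed to meet demand."""
    return available_seconds_per_day / daily_demand

shift = 8 * 3600 - 2 * 15 * 60   # 8 h shift minus two 15 min breaks
print(f"takt time: {takt_time(shift, daily_demand=400):.1f} s per unit")
```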

  1. Update on value-based medicine.

    Science.gov (United States)

    Brown, Melissa M; Brown, Gary C

    2013-05-01

    To update concepts in Value-Based Medicine, especially in view of the Patient Protection and Affordable Care Act. The Patient Protection and Affordable Care Act assures that some variant of Value-Based Medicine cost-utility analysis will play a key role in the healthcare system. It identifies the highest quality care, thereby maximizing the most efficacious use of healthcare resources and empowering patients and physicians. Standardization is critical for the creation and acceptance of a Value-Based Medicine cost-utility analysis information system, since 27 million different input variants can go into a cost-utility analysis. Key among such standards is the use of patient preferences (utilities), as patients best understand the quality of life associated with their health states. The inclusion of societal costs, versus direct medical costs alone, demonstrates that medical interventions are more cost effective and, in many instances, provide a net financial return-on-investment to society referent to the direct medical costs expended. Value-Based Medicine provides a standardized methodology, integrating critical, patient, quality-of-life preferences, and societal costs, to allow the highest quality, most cost-effective care. Central to Value-Based Medicine is the concept that all patients deserve the interventions that provide the greatest patient value (improvement in quality of life and/or length of life).

  2. Benchmark Calculation of Radial Expectation Value for Confined Hydrogen-Like Atoms and Isotropic Harmonic Oscillators

    International Nuclear Information System (INIS)

    Yu, Rong Mei; Zan, Li Rong; Jiao, Li Guang; Ho, Yew Kam

    2017-01-01

    Spatially confined atoms have been extensively investigated to model atomic systems under extreme pressures. For the simplest hydrogen-like atoms and isotropic harmonic oscillators, numerous physical quantities have been established with very high accuracy. However, the radial expectation value, which is of practical importance in many applications, shows significant discrepancies among calculations by different methods. In this work we employed the basis expansion method with cut-off Slater-type orbitals to investigate these two confined systems. Accurate values for several low-lying bound states were obtained by carefully examining the convergence with respect to the size of the basis. A scaling law for this quantity was derived and used to verify the accuracy of the numerical results. Comparison with other calculations shows that the present results establish benchmark values for this quantity, which may be useful in future studies. (author)

  3. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
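
    A minimal sketch of the baseline the evaluated schemes are compared against, 1st-LSB embedding under ordinary binary decomposition; it does not implement the proposed 16-plane decomposition, and the cover pixels and message bits are invented.

```python
# A minimal sketch: LSB embedding/extraction under binary pixel
# decomposition; toy data, not the paper's proposed scheme.

def embed_lsb(pixels, bits, plane=0):
    """Replace bit `plane` of each pixel with one message bit."""
    mask = ~(1 << plane) & 0xFF
    return [(p & mask) | (b << plane) for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n_bits, plane=0):
    return [(p >> plane) & 1 for p in pixels[:n_bits]]

cover = [200, 13, 255, 64, 128, 77]
message = [1, 0, 1, 1, 0, 0]
stego = embed_lsb(cover, message)
assert extract_lsb(stego, len(message)) == message
print(stego)   # each pixel value changes by at most 1 grey level
```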

  4. [Biometric bases: basic concepts of probability calculation].

    Science.gov (United States)

    Dinya, E

    1998-04-26

    The author gives an outline of the basic concepts of probability theory. The bases of event algebra, the definition of probability, the classical probability model and the random variable are presented.

  5. Calculating p-values and their significances with the Energy Test for large datasets

    Science.gov (United States)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
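
    A minimal sketch of the T-value computation under one common normalization, with a Gaussian distance weighting; the width sigma and the toy samples are assumptions, and in practice the p-value comes from permutations of the pooled sample or, as the paper proposes, from scaling small-sample null distributions.

```python
# A minimal sketch of the energy test T-value for two samples,
# using a Gaussian weighting function psi(d) = exp(-d^2 / (2 sigma^2)).

import numpy as np

def energy_T(x, y, sigma=1.0):
    """T-value comparing samples x, y of shape (n, d)."""
    def mean_psi(a, b, same):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
        psi = np.exp(-d2 / (2 * sigma ** 2))
        if same:                      # exclude self-pairs i == j
            np.fill_diagonal(psi, 0.0)
            return psi.sum() / (len(a) * (len(a) - 1))
        return psi.sum() / (len(a) * len(b))

    return mean_psi(x, x, True) / 2 + mean_psi(y, y, True) / 2 - mean_psi(x, y, False)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(300, 2))
y = rng.normal(0.2, 1.0, size=(300, 2))   # slightly shifted population
print(f"T = {energy_T(x, y):.5f}")        # larger T means less consistent samples
```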

  6. The transition to value-based care.

    Science.gov (United States)

    Ray, Jordan C; Kusumoto, Fred

    2016-10-01

    Delivery of medical care is evolving rapidly worldwide. Over the past several years in the USA, there has been a rapid shift in reimbursement from a simple fee-for-service model to more complex models that attempt to link payment to quality and value. Change in any large system can be difficult, but with medicine, the transition to a value-based system has been particularly hard to implement because both quality and cost are difficult to quantify. Professional societies and other medical groups are developing different programs in an attempt to define high value care. However, applying a national standard of value for any treatment is challenging, since value varies from person to person, and the individual benefit must remain the central tenet for delivering best patient-centered medical care. Regardless of the specific operational features of the rapidly changing healthcare environment, physicians must first and foremost always remain patient advocates.

  7. THE VALUE-BASED MANAGEMENT APPROACH: FROM THE SHAREHOLDER VALUE TO THE STAKEHOLDER VALUE

    OpenAIRE

    VALENTIN MUNTEANU; DOINA DANAIATA; LUMINITA HURBEAN; ALICE BERGLER

    2012-01-01

    The ongoing discussion about the adequate form of management and the purpose of organizations in contemporary postmodern society has once again gained in importance and interest after the financial crisis of 2008. Different management concepts have been developed throughout time, which propose objectives for organizations and thus managerial goals, activities and decision making. Considering the value based management approach and the stakeholder theory, we propose a shift in the value ...

  8. Countervailing incentives in value-based payment.

    Science.gov (United States)

    Arnold, Daniel R

    2017-09-01

    Payment reform has been at the forefront of the movement toward higher-value care in the U.S. health care system. A common belief is that volume-based incentives embedded in fee-for-service need to be replaced with value-based payments. While this belief is well-intended, value-based payment also contains perverse incentives. In particular, behavioral economists have identified several features of individual decision making that reverse some of the typical recommendations for inducing desirable behavior through financial incentives. This paper discusses the countervailing incentives associated with four behavioral economic concepts: loss aversion, relative social ranking, inertia or status quo bias, and extrinsic vs. intrinsic motivation. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Valuing Convertible Bonds Based on LSRQM Method

    Directory of Open Access Journals (Sweden)

    Jian Liu

    2014-01-01

    Full Text Available Convertible bonds are one of the essential financial products for corporate finance, and pricing theory is the key problem in theoretical research on convertible bonds. This paper demonstrates how to price convertible bonds with call and put provisions using the Least-Squares Randomized Quasi-Monte Carlo (LSRQM) method. We consider a financial market with stochastic interest rates and credit risk and present a detailed description of the steps for calculating convertible bond values. The empirical results show that the model fits the market prices of convertible bonds in China's market well and that the LSRQM method is effective.

  10. Overstating values: medical facts, diverse values, bioethics and values-based medicine.

    Science.gov (United States)

    Parker, Malcolm

    2013-02-01

    Fulford has argued that (1) the medical concepts illness, disease and dysfunction are inescapably evaluative terms, (2) illness is conceptually prior to disease, and (3) a model conforming to (2) has greater explanatory power and practical utility than the conventional value-free medical model. This 'reverse' model employs Hare's distinction between description and evaluation, and the sliding relationship between descriptive and evaluative meaning. Fulford's derivative 'Values Based Medicine' (VBM) readjusts the imbalance between the predominance of facts over values in medicine. VBM allegedly responds to the increased choices made available by, inter alia, the progress of medical science itself. VBM attributes appropriate status to evaluative meaning, where strong consensus about descriptive meaning is lacking. According to Fulford, quasi-legal bioethics, while it can be retained as a kind of deliberative framework, is outcome-based and pursues 'the right answer', while VBM approximates a democratic, process-oriented method for dealing with diverse values, in partnership with necessary contributions from evidence-based medicine (EBM). I support the non-cognitivist underpinnings of VBM, and its emphasis on the importance of values in medicine. But VBM overstates the complexity and diversity of values, misrepresents EBM and VBM as responses to scientific and evaluative complexity, and mistakenly depicts 'quasi-legal bioethics' as a space of settled descriptive meaning. Bioethical reasoning can expose strategies that attempt to reduce authentic values to scientific facts, illustrating that VBM provides no advantage over bioethics in delineating the connections between facts and values in medicine. © 2011 Blackwell Publishing Ltd.

  11. Optimizing Discount Rates: Expressing Preferences for Sustainable Outcomes in Present Value Calculations

    OpenAIRE

    Axelrod, David

    2017-01-01

    This paper describes how the discount rate used in present value calculations expresses the preference for sustainability in decision making, and its implication for sustainable economic growth. In essence, the lower the discount rate, the greater the regard for the future, and the more likely we choose behaviors that lead to long-term sustainability. The theoretical framework combines behavioral economics and holonomics, which involve limitations of regard for the future due to constraints o...
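
    A minimal sketch of the point about discount rates: the present value of the same distant cash flow under three different rates. All numbers are purely illustrative.

```python
# A minimal sketch: PV = amount / (1 + rate) ** years; a lower discount
# rate gives distant outcomes far more weight in today's decisions.

def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

for rate in (0.01, 0.03, 0.07):
    pv = present_value(1_000_000, rate, years=50)
    print(f"rate {rate:.0%}: PV of $1M in 50 years = ${pv:,.0f}")
```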

  12. Calculation of climatic reference values and its use for automatic outlier detection in meteorological datasets

    Directory of Open Access Journals (Sweden)

    B. Téllez

    2008-04-01

    Full Text Available The climatic reference values for monthly and annual average air temperature and total precipitation in Catalonia – northeast of Spain – are calculated using a combination of statistical methods and geostatistical interpolation techniques. In order to estimate the uncertainty of the method, the initial dataset is split into two parts that are used, respectively, for estimation and validation. The resulting maps are then used for automatic outlier detection in meteorological datasets.
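
    A minimal sketch of the outlier-flagging idea: monthly reference means and standard deviations act as the reference values, and observations beyond k sigma are flagged. The data are simulated, and the real study derives its reference values by geostatistical interpolation, which is not reproduced here.

```python
# A minimal sketch: flag observations that deviate from monthly climatic
# reference values by more than k standard deviations; simulated data.

import numpy as np

rng = np.random.default_rng(3)
months = np.tile(np.arange(12), 30)   # 30 years of monthly observations
temps = 15 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1.2, months.size)
temps[100] += 9.0                     # inject one spurious value

ref_mean = np.array([temps[months == m].mean() for m in range(12)])
ref_std = np.array([temps[months == m].std(ddof=1) for m in range(12)])

k = 3.5
outliers = np.abs(temps - ref_mean[months]) > k * ref_std[months]
print(f"flagged {outliers.sum()} suspect value(s) at index {np.flatnonzero(outliers)}")
```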

  13. Entropy Evaluation Based on Value Validity

    Directory of Open Access Journals (Sweden)

    Tarald O. Kvålseth

    2014-09-01

    Full Text Available Besides its importance in statistical physics and information theory, the Boltzmann-Shannon entropy S has become one of the most widely used and misused summary measures of various attributes (characteristics) in diverse fields of study. It has also been the subject of extensive and perhaps excessive generalizations. This paper introduces the concept of, and criteria for, value validity as a means of determining if an entropy takes on values that reasonably reflect the attribute being measured and that permit different types of comparisons to be made for different probability distributions. While neither S nor its relative entropy equivalent S* meets the value-validity conditions, certain power functions of S and S* do to a considerable extent. No parametric generalization offers any advantage over S in this regard. A measure based on Euclidean distances between probability distributions is introduced as a potential entropy that does comply fully with the value-validity requirements, and its statistical inference procedure is discussed.

  14. View Estimation Based on Value System

    Science.gov (United States)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimation of a caregiver's view is one of the most important capabilities for a child in understanding the behavior demonstrated by the caregiver, that is, in inferring the intention of the behavior and/or learning the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: he/she updates his/her own view-estimation model during behavior imitated from observation of the caregiver, based on minimizing the estimation error of the reward during that behavior. From this viewpoint, this paper presents a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver, is discussed.
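
    A minimal sketch of the TD-error update the method builds on, tabular state-value learning on a toy chain; the same error signal would drive the view-estimation parameters, which are not modelled here, and all settings are invented.

```python
# A minimal sketch: tabular TD(0) on a deterministic chain; the TD error
# that updates V could equally update a view-estimation model.

import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)

for _ in range(2000):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                        # deterministic toy transition
        reward = 1.0 if s_next == n_states - 1 else 0.0
        td_error = reward + gamma * V[s_next] - V[s]
        V[s] += alpha * td_error              # same error signal, reused
        s = s_next

print(V.round(3))   # V[s] converges to 0.9 ** (3 - s) for s = 0..3
```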

  15. Value-based healthcare in Lynch syndrome

    NARCIS (Netherlands)

    Hennink, Simone D; Hofland, N.; Gopie, J.P.; van der Kaa, C.; de Koning, K.; Nielsen, M.; Tops, C.; Morreau, H.; de Vos Tot Nederveen Cappel, W.H.; Langers, A.M.; Hardwick, J.C.; Gaarenstroom, K.N.; Tollenaar, R.A.; Veenendaal, R.A.; Tibben, A.; Wijnen, J.; van Heck, M.; van Asperen, C.; Roukema, J.A.; Hommes, D.W.; Hes, F.J.; Vasen, H.F.A.

    2013-01-01

    Lynch syndrome (LS), one of the most frequent forms of hereditary colorectal cancer (CRC), is caused by a defect in one of the mismatch repair (MMR) genes. Carriers of MMR defects have a strongly increased risk of developing CRC and endometrial cancer. Over the last few years, value-based healthcare

  16. The value of innovation under value-based pricing

    Science.gov (United States)

    Moreno, Santiago G.; Ray, Joshua A.

    2016-01-01

    Objective: The role of cost-effectiveness analysis (CEA) in incentivizing innovation is controversial. Critics of CEA argue that its use for pricing purposes disregards the ‘value of innovation’ reflected in new drug development, whereas supporters of CEA highlight that the value of innovation is already accounted for. Our objective in this article is to outline the limitations of the conventional CEA approach, while proposing an alternative method of evaluation that captures the value of innovation more accurately. Method: The adoption of a new drug benefits present and future patients (with cost implications) for as long as the drug is part of clinical practice. Incidence patients and off-patent prices are identified as two key missing features preventing the conventional CEA approach from capturing 1) benefit to future patients and 2) future savings from off-patent prices. The proposed CEA approach incorporates these two features to derive the total lifetime value of an innovative drug (i.e., the value of innovation). Results: The conventional CEA approach tends to underestimate the value of innovative drugs by disregarding the benefit to future patients and savings from off-patent prices. As a result, innovative drugs are underpriced, only allowing manufacturers to capture approximately 15% of the total value of innovation during the patent protection period. In addition to including the incidence population and off-patent price, the alternative approach proposes pricing new drugs by first negotiating the share of the value of innovation to be appropriated by the manufacturer (>15%?) and payer, and then deriving a price that satisfies this condition. Conclusion: We argue for a modification to the conventional CEA approach that integrates the total lifetime value of innovative drugs into CEA, by taking into account off-patent pricing and future patients. The proposed approach derives a price that allows manufacturers to capture an agreed share of this value, thereby incentivizing

  17. The value of innovation under value-based pricing.

    Science.gov (United States)

    Moreno, Santiago G; Ray, Joshua A

    2016-01-01

    The role of cost-effectiveness analysis (CEA) in incentivizing innovation is controversial. Critics of CEA argue that its use for pricing purposes disregards the 'value of innovation' reflected in new drug development, whereas supporters of CEA highlight that the value of innovation is already accounted for. Our objective in this article is to outline the limitations of the conventional CEA approach, while proposing an alternative method of evaluation that captures the value of innovation more accurately. The adoption of a new drug benefits present and future patients (with cost implications) for as long as the drug is part of clinical practice. Incidence patients and off-patent prices are identified as two key missing features preventing the conventional CEA approach from capturing 1) benefit to future patients and 2) future savings from off-patent prices. The proposed CEA approach incorporates these two features to derive the total lifetime value of an innovative drug (i.e., the value of innovation). The conventional CEA approach tends to underestimate the value of innovative drugs by disregarding the benefit to future patients and savings from off-patent prices. As a result, innovative drugs are underpriced, only allowing manufacturers to capture approximately 15% of the total value of innovation during the patent protection period. In addition to including the incidence population and off-patent price, the alternative approach proposes pricing new drugs by first negotiating the share of the value of innovation to be appropriated by the manufacturer (>15%?) and payer (<85%?), and then deriving the price that satisfies this condition. We argue for a modification to the conventional CEA approach that integrates the total lifetime value of innovative drugs into CEA, by taking into account off-patent pricing and future patients. The proposed approach derives a price that allows manufacturers to capture an agreed share of this value, thereby incentivizing innovation, while supporting health
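
    The accounting sketched in the two records above (future incident cohorts, a patent-protected price followed by a lower off-patent price, all discounted to the present) can be illustrated with a toy calculation. Every number and name below is an assumption for illustration, not a figure from the paper:

        def lifetime_value(qaly_gain, wtp=50_000, incidence=10_000,
                           patent_price=20_000, generic_price=4_000,
                           patent_years=10, horizon=50, r=0.035):
            """Discounted monetized health gain minus drug cost, summed
            over all future incident cohorts, on- and off-patent."""
            total = 0.0
            for t in range(horizon):
                price = patent_price if t < patent_years else generic_price
                benefit = incidence * qaly_gain * wtp   # health gain of cohort t
                cost = incidence * price                # drug spend on cohort t
                total += (benefit - cost) / (1 + r) ** t
            return total

        print(f"illustrative lifetime value: {lifetime_value(0.5):,.0f}")

    Under such a scheme, the negotiated manufacturer share would then be applied to this total to back out a launch price.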

  18. A generally applicable lightweight method for calculating a value structure for tools and services in bioinformatics infrastructure projects.

    Science.gov (United States)

    Mayer, Gerhard; Quast, Christian; Felden, Janine; Lange, Matthias; Prinz, Manuel; Pühler, Alfred; Lawerenz, Chris; Scholz, Uwe; Glöckner, Frank Oliver; Müller, Wolfgang; Marcus, Katrin; Eisenacher, Martin

    2017-10-30

    Sustainable noncommercial bioinformatics infrastructures are a prerequisite to use and take advantage of the potential of big data analysis for research and the economy. Consequently, funders, universities and institutes, as well as users, ask for a transparent value model for the tools and services offered. In this article, a generally applicable lightweight method is described by which bioinformatics infrastructure projects can estimate the value of tools and services offered without determining exactly the total cost of ownership. Five representative scenarios for value estimation, from a rough estimate to a detailed breakdown of costs, are presented. To account for the diversity in bioinformatics applications and services, the notion of service-specific 'service provision units' is introduced, together with the factors influencing them and the main underlying assumptions for these 'value influencing factors'. Special attention is given to how to handle personnel costs and indirect costs such as electricity. Four examples are presented for the calculation of the value of tools and services provided by the German Network for Bioinformatics Infrastructure (de.NBI): one for tool usage, one for (Web-based) database analyses, one for consulting services and one for bioinformatics training events. Finally, from the discussed values, the costs of direct funding and the costs of payment of services by funded projects are calculated and compared. © The Author 2017. Published by Oxford University Press.
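
    A rough version of the per-unit arithmetic the article describes might look as follows; the formula and all numbers are assumptions for illustration, not the de.NBI cost model itself:

        def value_per_unit(personnel_cost, indirect_cost, units_delivered,
                           overhead_rate=0.2):
            """Value of one 'service provision unit': personnel plus
            indirect costs (e.g., electricity), with an illustrative
            overhead surcharge, spread over the units delivered."""
            total = (personnel_cost + indirect_cost) * (1 + overhead_rate)
            return total / units_delivered

        # e.g., one bioinformatician-year plus compute, over 500 analyses
        print(value_per_unit(70_000, 15_000, 500))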

  19. Value and limitations of transpulmonary pressure calculations during intra-abdominal hypertension.

    Science.gov (United States)

    Cortes-Puentes, Gustavo A; Gard, Kenneth E; Adams, Alexander B; Faltesek, Katherine A; Anderson, Christopher P; Dries, David J; Marini, John J

    2013-08-01

    To clarify the effect of progressively increasing intra-abdominal pressure on esophageal pressure, transpulmonary pressure, and functional residual capacity. Controlled application of increased intra-abdominal pressure at two positive end-expiratory pressure levels (1 and 10 cm H2O) in an anesthetized porcine model of controlled ventilation. Large animal laboratory of a university-affiliated hospital. Eleven deeply anesthetized swine (weight 46.2 ± 6.2 kg). Air-regulated intra-abdominal hypertension (0-25 mm Hg). Esophageal pressure, tidal compliance, bladder pressure, and end-expiratory lung aeration by gas dilution. Functional residual capacity was significantly reduced by increasing intra-abdominal pressure at both positive end-expiratory pressure levels (p ≤ 0.0001) without corresponding changes of end-expiratory esophageal pressure. Above an intra-abdominal pressure of 5 mm Hg, plateau airway pressure increased linearly by ~50% of the applied intra-abdominal pressure value, associated with commensurate changes of esophageal pressure. With tidal volume held constant, negligible changes occurred in transpulmonary pressure due to intra-abdominal pressure. Driving pressures calculated from airway pressures alone (plateau airway pressure minus positive end-expiratory pressure) did not equate to those computed from transpulmonary pressure (tidal changes in transpulmonary pressure). Increasing positive end-expiratory pressure shifted the predominantly negative end-expiratory transpulmonary pressure at positive end-expiratory pressure 1 cm H2O (mean -3.5 ± 0.4 cm H2O) into the positive range at positive end-expiratory pressure 10 cm H2O (mean 0.58 ± 1.2 cm H2O). Despite its insensitivity to changes in functional residual capacity, measuring transpulmonary pressure may be helpful in explaining how different levels of positive end-expiratory pressure influence recruitment and collapse during tidal ventilation in the presence of increased intra-abdominal pressure and in
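
    The two driving-pressure definitions contrasted in the abstract are standard respiratory mechanics; a small sketch with purely illustrative numbers (cm H2O):

        def driving_pressures(p_plat, peep, pes_insp, pes_exp):
            """Airway driving pressure (Pplat - PEEP) vs. the tidal swing
            in transpulmonary pressure (P_L = Paw - Pes at end-inspiration
            and end-expiration)."""
            dp_airway = p_plat - peep
            dp_transpulmonary = (p_plat - pes_insp) - (peep - pes_exp)
            return dp_airway, dp_transpulmonary

        # with high intra-abdominal pressure the two estimates diverge
        print(driving_pressures(p_plat=28, peep=10, pes_insp=14, pes_exp=8))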

  20. DEVELOPING AN INTERCULTURAL VALUE-BASED DIALOGUE

    Directory of Open Access Journals (Sweden)

    Tiziano Telleschi

    2015-01-01

    Full Text Available Peaceful co-existence and inclusion do not depend solely on the availability of goods and welfare systems, but primarily on shared cultural values. In order to build shared values, we propose a new concept, the worthy, as the pull-factor of the value. A value-based dialogue begins from making each one's worthies ‘speak to each other’, so that each actor can enter into Alter's point of view and afterward arrive at a sharing of values. Beginning from the worthy, we outline the path of an innovative integrative model: safeguarding some features of ‘diversity’ (multiculturalism) and building some ‘resemblances’ (interculturality). By this resemblances/differences trade-off, migrants, ethnic groups and autochthonous people absorb something of Alter's beliefs and values, and at the same time gain awareness of complementarity and interdependency with Alter, the core of an ‘otherness’ mind, and of the requirement to manage conflicts. In this way, each actor embraces a wider and wider network of Alter (linking bonds) without losing his own identity and belonging. Finally, this paper suggests operative ways involving, as game-changers of a ‘feasible’ society, school and social services on one side, and local political entities and civil society on the other (deliberative democracy).

  1. Value-Based Care and Strategic Priorities.

    Science.gov (United States)

    Gross, Wendy L; Cooper, Lebron; Boggs, Steven; Gold, Barbara

    2017-12-01

    The anesthesia market continues to undergo disruption. Financial margins are shrinking, and buyers are demanding that anesthesia services be provided in an efficient, low-cost manner. To help anesthesiologists analyze their market, Drucker and Porter's framework of buyers, suppliers, quality, barriers to entry, substitution, and strategic priorities allows for a structured analysis. Once this analysis is completed, anesthesiologists must articulate their value to other medical professionals and to hospitals. Anesthesiologists can survive and thrive in a value-based health care environment if they are capable of providing services differently and able to deliver cost-effective care. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Value of the energy data base

    Energy Technology Data Exchange (ETDEWEB)

    King, D.W.; Griffiths, J.M.; Roderer, N.K.; Wiederkehr, R.R.V.

    1982-03-31

    An assessment was made of the Energy Data Base (EDB) of the Department of Energy's Technical Information Center (TIC). As the major resource containing access information to the world's energy literature, EDB products and services are used extensively by energy researchers to identify journal articles, technical reports and other items of potential utility in their work. The approach taken to assessing value begins with the measurement of the extent of use of the EDB. Apparent value is measured in terms of willingness to pay. Consequential value is measured in terms of effect: for searching, the cost of reading which results; and for reading, the savings which result from the application of the information obtained in reading. Resulting estimates of value reflect value to the searchers, the readers, and the readers' organizations or funding sources. A survey of the 60,000 scientists and engineers funded by the DOE shows that annually they read about 7.1 million journal articles and 6.6 million technical reports. A wide range of savings values was reported for one-fourth of all article readings and three-fourths of all report readings. Averaged over all readings, the savings were $590 per journal article and $1280 per technical report. The total annual savings attributable to reading by DOE-funded scientists and engineers is estimated to be about $13 billion. An investment of $5.3 billion in the generation of information and about $500 million in processing and using information yields a partial return of about $13 billion. Overall, this partial return on investment is about 2.2 to 1. In determining the value of the EDB, only those searches and readings directly attributable to it are included in the analysis. The values are $20 million to the searchers, $117 million to the readers and $3.6 billion to DOE.
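
    The headline ratio can be reproduced from the abstract's own figures:

        article_savings = 7.1e6 * 590      # journal article readings x avg savings
        report_savings = 6.6e6 * 1280      # technical report readings x avg savings
        total_savings = article_savings + report_savings   # ~1.26e10, the "~$13 billion"
        investment = 5.3e9 + 0.5e9         # information generation + processing/use
        print(f"partial return on investment: {total_savings / investment:.1f} to 1")  # ~2.2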

  3. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    Science.gov (United States)

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of a multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.

  4. Application of the discounted value flows method in production cost calculations for Czechoslovak nuclear power plants

    International Nuclear Information System (INIS)

    Majer, P.

    1990-01-01

    The fundamentals are outlined of the discounted value flows method, which is used in industrial countries for calculating the specific electricity production costs. Actual calculations were performed for the first two units of the Temelin nuclear power plant. All costs associated with the construction, operation and decommissioning of this nuclear power plant were taken into account. With a high degree of certainty, the specific production costs of the Temelin nuclear power plant will lie within the range of 0.32 to 0.36 CSK/kWh. Nearly all results of the sensitivity analysis performed for the possible changes in the input values fall within this range. An increase in the interest rate to above 8% is an exception; this, however, can be regarded as rather improbable on a long-term basis. Sensitivity analysis gave evidence that the results of the electricity production cost calculations for the Temelin nuclear power plant can be considered sufficiently stable. (Z.M.). 7 figs., 2 tabs., 14 refs
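
    In textbook form, the discounted value flows method reduces to dividing discounted lifetime costs by discounted lifetime electricity output. A sketch with illustrative inputs (the numbers below are assumptions, not the Temelin figures):

        def specific_production_cost(costs, outputs, r=0.08):
            """Discounted costs over discounted electricity output;
            the units of the result follow the units of the inputs."""
            pv_cost = sum(c / (1 + r) ** t for t, c in enumerate(costs))
            pv_out = sum(q / (1 + r) ** t for t, q in enumerate(outputs))
            return pv_cost / pv_out

        # 2 years construction then 30 years operation (costs in MCSK, output in GWh)
        costs = [5000, 5000] + [300] * 30
        outputs = [0, 0] + [7000] * 30
        print(specific_production_cost(costs, outputs))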

  5. Value-Based Medicine and Pharmacoeconomics.

    Science.gov (United States)

    Brown, Gary C; Brown, Melissa M

    2016-01-01

    Pharmacoeconomics is assuming increasing importance in the pharmaceutical field since it is entering the public policy arena in many countries. Among the variants of pharmacoeconomic analysis are cost-minimization, cost-benefit, cost-effectiveness and cost-utility analyses. The latter is the most versatile and sophisticated in that it integrates the patient benefit (patient value) conferred by a drug in terms of improvement in length and/or quality of life. It also incorporates the costs expended for that benefit, as well as the dollars returned to patients and society from the use of a drug (financial value). Unfortunately, one cost-utility analysis in the literature is generally not comparable to another because of the lack of standardized formats and standardized input variables (costs, cost perspective, quality-of-life measurement instruments, quality-of-life respondents, discounting and so forth). Thus, millions of variants can be used. Value-based medicine® (VBM) cost-utility analysis standardizes these variants so that one VBM analysis is comparable to another. This system provides a highly rational methodology that allows providers and patients to quantify and compare the patient value and financial value gains associated with the use of pharmaceutical agents for example. © 2016 S. Karger AG, Basel.

  6. Value based pricing: the least valued pricing strategy

    OpenAIRE

    Hoenen, Bob

    2017-01-01

    Pricing has been one of the least researched topics in marketing; among pricing strategies, cost-plus pricing is considered the leading strategy worldwide. Why should companies use such an unprofitable strategy, where fighting for a higher market share through low prices is more the rule than the exception? VBP is one of the strategies most underestimated by organizations. The definition of VBP is: 'value pricing applies to products that have the potential of being differe...

  7. The PHREEQE Geochemical equilibrium code data base and calculations

    International Nuclear Information System (INIS)

    Andersoon, K.

    1987-01-01

    Compilation of a thermodynamic data base for actinides and fission products for use with PHREEQE has begun, and a preliminary set of actinide data has been tested for the PHREEQE code in a version run on an IBM XT computer. The work until now has shown that the PHREEQE code mostly gives satisfactory results for speciation of actinides in natural water environments. For U and Np under oxidizing conditions, however, the code has difficulty converging with pH and Eh conserved when a solubility limit is applied. For further calculations of actinide and fission product speciation and solubility in a waste repository and in the surrounding geosphere, more data are needed. It is necessary to evaluate the influence of the large uncertainties of some data. Quality assurance and a check on the consistency of the data base are also needed. Further work with data bases should include: an extension to fission products; an extension to engineering materials; an extension to ligands other than hydroxide and carbonate; inclusion of more mineral phases; inclusion of enthalpy data; a control of primary references in order to decide if values from different compilations are taken from the same primary reference; and contacts and discussions with other groups working with actinide data bases, e.g. at the OECD/NEA and at the IAEA. (author)

  8. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    International Nuclear Information System (INIS)

    2000-01-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  9. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  10. A Non-parametric Method for Calculating Conditional Stressed Value at Risk

    Directory of Open Access Journals (Sweden)

    Kohei Marumo

    2017-01-01

    Full Text Available We consider the Value at Risk (VaR) of a portfolio under stressed conditions. In practice, the stressed VaR (sVaR) is commonly calculated using a data set that includes the stressed period. It tells us how much the risk amount increases if we use the stressed data set. In this paper, we consider the VaR under stress scenarios. Technically, this can be done by deriving the distribution of profit or loss conditioned on the value of risk factors. We use two methods: one that uses a linear model and one that uses the Hermite expansion discussed by Marumo and Wolff (2013, 2016). Numerical examples show that the method using the Hermite expansion is capable of capturing non-linear effects such as correlation collapse and volatility clustering, which are often observed in the markets.
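
    The linear-model variant of the two methods named above can be sketched directly: regress profit and loss on the risk factor, move the mean to the stress scenario, and keep the residual risk (Gaussian residuals are assumed in this sketch; the Hermite-expansion variant is not reproduced here):

        import numpy as np
        from scipy.stats import norm

        def conditional_var(pnl, factor, factor_stress, alpha=0.99):
            """VaR of P&L conditioned on a stressed risk-factor value."""
            beta, intercept = np.polyfit(factor, pnl, 1)
            resid = pnl - (intercept + beta * factor)
            mean_stressed = intercept + beta * factor_stress
            return -(mean_stressed + resid.std(ddof=1) * norm.ppf(1 - alpha))

        rng = np.random.default_rng(0)
        f = rng.normal(size=1000)
        p = -2.0 * f + rng.normal(scale=0.5, size=1000)
        print(conditional_var(p, f, factor_stress=3.0))   # loss quantile under stress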

  11. A method for calculation of dose per unit concentration values for aquatic biota

    International Nuclear Information System (INIS)

    Batlle, J Vives i; Jones, S R; Gomez-Ros, J M

    2004-01-01

    A dose per unit concentration database has been generated for application to ecosystem assessments within the FASSET framework. Organisms are represented by ellipsoids of appropriate dimensions, and the proportion of radiation absorbed within the organisms is calculated using a numerical method implemented in a series of spreadsheet-based programs. Energy-dependent absorbed fraction functions have been derived for calculating the total dose per unit concentration of radionuclides present in biota or in the media they inhabit. All radionuclides and reference organism dimensions defined within FASSET for marine and freshwater ecosystems are included. The methodology has been validated against more complex dosimetric models and compared with human dosimetry based on ICRP 72. Ecosystem assessments for aquatic biota within the FASSET framework can now be performed simply, once radionuclide concentrations in target organisms are known, either directly or indirectly by deduction from radionuclide concentrations in the surrounding medium

  12. Strategies for defining traits when calculating economic values for livestock breeding: a review.

    Science.gov (United States)

    Wolfová, M; Wolf, J

    2013-09-01

    The objective of the present review was (i) to survey different approaches for choosing the complex of traits for which economic values (EVs) are calculated, (ii) to call attention to the proper definition of traits and (iii) to discuss the manner and extent to which relationships among traits have been considered in the calculation of EVs. For this purpose, papers dealing with the estimation of EVs of traits in livestock were reviewed. The most important reasons for incompatibility of EVs for similar traits estimated in different countries and by different authors were found to be inconsistencies in trait definitions and in assumptions being made about relationships among traits. An important problem identified was how to choose the most appropriate criterion to characterise production or functional ability for a particular class of animals. Accordingly, the review covered the following three topics: (i) which trait(s) would best characterise the growth ability of an animal; (ii) how to define traits expressed repeatedly in subsequent reproductive cycles of breeding females and (iii) how to deal with traits that differ in average value between sexes or among animal groups. Various approaches that have been used to solve these problems were discussed. Furthermore, the manner in which diverse authors chose one or more traits from a group of alternatives for describing a specific biological potential were reviewed and commented on. The consequences of including or excluding relationships among economically important traits when estimating the EV for a specific trait were also examined. An important conclusion of the review is that, for a better comparability and interpretability of estimated EVs in the literature, it is desirable that clear and unique definitions of the traits, complete information on assumptions used in analytical models and details on inter-relationships between traits are documented. Furthermore, the method and the model used for the genetic

  13. Value-based purchasing of medical devices.

    Science.gov (United States)

    Obremskey, William T; Dail, Teresa; Jahangir, A Alex

    2012-04-01

    Health care in the United States is known for its continued innovation and production of new devices and techniques. While the intention of these devices is to improve the delivery and outcome of patient care, they do not always achieve this goal. As new technologies enter the market, hospitals and physicians must determine which of these new devices to incorporate into practice, and it is important that these devices bring value to patient care. We provide a model of a physician-engaged process to decrease cost and increase review of physician preference items. We describe the challenges, implementation, and outcomes of cost reduction and product stabilization of a value-based process for purchasing medical devices at a major academic medical center. We implemented a physician-driven committee that standardized and utilized evidence-based, clinically sound, and financially responsible methods for introducing or consolidating new supplies, devices, and technology for patient care. This committee worked with institutional finance and administrative leaders to accomplish its goals. Utilizing this physician-driven committee, we provided access to new products, standardized some products, decreased costs of physician preference items by 11% to 26% across service lines, and achieved savings of greater than $8 million per year. The implementation of a facility-based technology assessment committee that critically evaluates new technology can decrease hospital costs on implants and standardize some product lines.

  14. Calculating a Continuous Metabolic Syndrome Score Using Nationally Representative Reference Values.

    Science.gov (United States)

    Guseman, Emily Hill; Eisenmann, Joey C; Laurson, Kelly R; Cook, Stephen R; Stratbucker, William

    2018-02-26

    The prevalence of metabolic syndrome in youth varies on the basis of the classification system used, prompting implementation of continuous scores; however, the use of these scores is limited to the sample from which they were derived. We sought to describe the derivation of a continuous metabolic syndrome score using nationally representative reference values in a sample of obese adolescents and a national sample obtained from the National Health and Nutrition Examination Survey (NHANES) 2011-2012. Clinical data were collected from 50 adolescents seeking obesity treatment at a stage 3 weight management center. A second analysis relied on data from adolescents included in NHANES 2011-2012, performed for illustrative purposes. The continuous metabolic syndrome score was calculated by regressing individual values onto nationally representative age- and sex-specific standards (NHANES III). The resultant z scores were summed to create a total score. The final sample included 42 obese adolescents (15 male and 35 female subjects; mean age, 14.8 ± 1.9 years) and an additional 445 participants from NHANES 2011-2012. Among the clinical sample, the mean continuous metabolic syndrome score was 4.16 ± 4.30, while the NHANES sample mean was substantially lower, at -0.24 ± 2.8. We provide a method to calculate the continuous metabolic syndrome score by comparing individual risk factor values to age- and sex-specific percentiles from a nationally representative sample. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
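
    The score construction (risk-factor z-scores against age- and sex-specific reference values, summed, with HDL inverted so that higher always means worse) can be sketched as follows; the reference means and SDs below are placeholders, not the NHANES III standards the paper uses:

        def cmets_score(values, ref_mean, ref_sd, invert=("hdl",)):
            """Continuous metabolic syndrome score: sum of risk-factor
            z-scores against reference values; inverted factors (HDL)
            are flipped so higher z always indicates higher risk."""
            score = 0.0
            for k, v in values.items():
                z = (v - ref_mean[k]) / ref_sd[k]
                score += -z if k in invert else z
            return score

        subject = {"waist": 102, "glucose": 105, "hdl": 38, "tg": 160, "map": 92}
        mean = {"waist": 78, "glucose": 90, "hdl": 50, "tg": 95, "map": 80}
        sd = {"waist": 11, "glucose": 8, "hdl": 10, "tg": 45, "map": 8}
        print(cmets_score(subject, mean, sd))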

  15. A free software for the calculation of T2* values for iron overload assessment.

    Science.gov (United States)

    Fernandes, Juliano Lara; Fioravante, Luciana Andrea Barozi; Verissimo, Monica P; Loggetto, Sandra R

    2017-06-01

    Background: Iron overload assessment with magnetic resonance imaging (MRI) using T2* has become a key diagnostic method in the management of many diseases. Quantitative analysis of the MRI images with a cost-effective tool has been a limitation to increased use of the method. Purpose: To provide a free software solution for this purpose, comparing the results with a commercial solution. Material and Methods: The free tool was developed as a standalone program to be directly downloaded and run on a common personal computer platform without the need of a dedicated workstation. Liver and cardiac T2* values were calculated using both tools, and the values obtained were compared between them in a group of 56 patients with suspected iron overload using Bland-Altman plots and concordance correlation coefficients (CCC). Results: In the heart, the mean T2* difference between the two methods was 0.46 ms (95% confidence interval [CI], -0.037 to 0.965) and in the liver 0.49 ms (95% CI, 0.257 to 0.722). The CCCs for both the heart and the liver were significantly high (0.98 [95% CI, 0.966-0.988] with a Pearson ρ of 0.9811, and 0.991 [95% CI, 0.986-0.994] with a Pearson ρ of 0.996, respectively). No significant differences were observed when analyzing only patients with abnormal concentrations of iron in both organs compared to the whole cohort. Conclusion: The proposed free software tool is accurate for calculation of T2* values of the liver and heart and might be a solution for centers that cannot use paid commercial solutions.
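
    T2* mapping of this kind generally rests on fitting a mono-exponential decay S(TE) = S0 * exp(-TE/T2*) to multi-echo signal; a generic sketch of that model (not the cited tool's specific pipeline):

        import numpy as np
        from scipy.optimize import curve_fit

        def fit_t2star(te_ms, signal):
            """Fit S(TE) = S0 * exp(-TE / T2*); returns T2* in ms."""
            def model(te, s0, t2s):
                return s0 * np.exp(-te / t2s)
            (s0, t2s), _ = curve_fit(model, te_ms, signal,
                                     p0=(float(signal[0]), 10.0))
            return t2s

        te = np.array([1.0, 3.0, 5.0, 7.0, 9.0, 11.0])
        sig = 1000 * np.exp(-te / 6.5)          # synthetic decay curve
        print(fit_t2star(te, sig))              # ~6.5 ms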

  16. Disclosed Values of Option-Based Compensation

    DEFF Research Database (Denmark)

    Bechmann, Ken L.; Hjortshøj, Toke Lilhauge

    New accounting standards require firms to expense the costs of option-based compensation (OBC), but the associated valuations offer many challenges for firms. Earlier research has documented that firms in the U.S. generally underreport the values of OBC by manipulating the inputs used for valuation. ... Black-Scholes parameters in their valuations. Furthermore, firms determine the expected time to maturity in a way that is generally consistent with the guidelines provided by the new accounting standards. The findings differ from those of the U.S., but are consistent with the more limited use of OBC and the lower level...
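
    The valuation inputs at issue feed the standard Black-Scholes formula, where shortening the expected time to maturity or lowering assumed volatility reduces the reported expense; a sketch of that sensitivity (parameters illustrative only):

        from math import exp, log, sqrt
        from statistics import NormalDist

        def bs_call(s, k, t, r, sigma):
            """Black-Scholes European call value."""
            d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
            d2 = d1 - sigma * sqrt(t)
            n = NormalDist().cdf
            return s * n(d1) - k * exp(-r * t) * n(d2)

        print(bs_call(100, 100, t=10, r=0.04, sigma=0.30))  # full contractual term
        print(bs_call(100, 100, t=5,  r=0.04, sigma=0.25))  # shorter term, lower vol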

  17. Value-based metrics and Internet-based enterprises

    Science.gov (United States)

    Gupta, Krishan M.

    2001-10-01

    Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFROI, and TSR has evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others, like EVA, are for external use. Value-based metrics are used for performing internal audits as well as comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine whether the firm is creating value for its stakeholders.
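
    EVA itself is simple arithmetic, which is part of its appeal as an external metric; the textbook definition in one line (numbers illustrative):

        def eva(nopat, invested_capital, wacc):
            """Economic Value Added: after-tax operating profit minus
            a charge for the capital employed."""
            return nopat - wacc * invested_capital

        print(eva(nopat=12e6, invested_capital=80e6, wacc=0.10))   # 4e6 of value created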

  18. The ratio of ICRP103 to ICRP60 calculated effective doses from CT: Monte Carlo calculations with the ADELAIDE voxel paediatric model and comparisons with published values

    International Nuclear Information System (INIS)

    Caon, Martin

    2013-01-01

    The ADELAIDE voxel model of paediatric anatomy was used with the EGSnrc Monte Carlo code to compare effective dose from computed tomography (CT) calculated with both the ICRP103 and ICRP60 definitions, which differ in their tissue weighting factors and in the included tissues. The new tissue weighting factors resulted in a lower effective dose for pelvis CT (than if calculated using ICRP60 tissue weighting factors), by 6.5%, but higher effective doses for all other examinations: CT abdomen + pelvis (higher by 4.6%), CT abdomen (by 9.5%), CT chest + abdomen + pelvis (by 6%), CT chest + abdomen (by 9.6%), CT chest (by 10.1%) and cardiac CT (by 11.5%). These values, along with published values of effective dose from CT that were calculated for both sets of tissue weighting factors, were used to determine single values of the ratio of ICRP103- to ICRP60-calculated effective dose for seven CT examinations. The following values of ICRP103:ICRP60 are suggested for converting ICRP60-calculated effective dose to ICRP103-calculated effective dose: pelvis CT, 0.75; abdomen CT, abdomen + pelvis CT and chest + abdomen + pelvis CT, 1.00; chest + abdomen CT and chest CT, 1.15; cardiac CT, 1.25.

  19. Inverse boundary element calculations based on structural modes

    DEFF Research Database (Denmark)

    Juhl, Peter Møller

    2007-01-01

    The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods sol...
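
    A generic version of the SVD-plus-regularization step mentioned above is truncated SVD: invert only the dominant singular values of the transfer matrix so that measurement noise does not blow up in the reconstruction. A sketch on a random synthetic transfer matrix (not the paper's structural-mode formulation):

        import numpy as np

        def tsvd_solve(G, p, k):
            """Reconstruct surface velocity v from field pressures p = G v,
            keeping only the k largest singular values of G."""
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            s_inv = np.zeros_like(s)
            s_inv[:k] = 1.0 / s[:k]
            return Vt.T @ (s_inv * (U.T @ p))

        rng = np.random.default_rng(1)
        G = rng.normal(size=(40, 20))
        v_true = rng.normal(size=20)
        p = G @ v_true + 0.01 * rng.normal(size=40)
        print(np.linalg.norm(tsvd_solve(G, p, k=15) - v_true))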

  20. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  1. Microscopic calculation of absolute values of two-nucleon transfer cross sections

    International Nuclear Information System (INIS)

    Potel, G.; Bayman, B. F.; Barranco, F.

    2009-01-01

    Arguably, the greatest achievement of many-body physics in the fifties was that of providing the basis for a complete description and a thorough understanding of superconductivity in metals. At the basis of it one finds BCS theory and the Josephson effect. The first recognized the central role played by the appearance of a macroscopic coherent field (usually viewed as a condensate of strongly overlapping Cooper pairs), the quasiparticle vacuum. The second realized that a true gap is not essential for such a state of matter to exist, but rather a finite expectation value of the pair field. Consequently, the specific probe to study the superconducting state is Cooper pair tunneling. Important progress in the understanding of pairing in atomic nuclei may arise from the systematic study of two-particle transfer reactions. Although this subject of research started about the time of the BCS papers, the quantitative calculation of absolute cross sections taking properly into account the full non-locality of the Cooper pairs (correlation length much larger than nuclear dimensions) is still an open problem. We present in this talk the results obtained within a second-order DWBA framework for two-nucleon transfer reactions around the Coulomb barrier induced by both heavy and light ions. The calculations were done using a computer code developed for this purpose, including the sequential and simultaneous contributions to the process, with microscopic form factors which take into account the relevant structure aspects of the process, such as the nature of the single-particle wavefunctions, the spectroscopic factors, and the interaction potential responsible for the transfer. Reasonable agreement with the experimental absolute values of the differential cross section is obtained without any parameter adjustment (see Figure 1). (author)

  2. Custom auroral electrojet indices calculated by using MANGO value-added services

    Science.gov (United States)

    Bargatze, L. F.; Moore, W. B.; King, T. A.

    2009-12-01

    A set of computational routines called MANGO, Magnetogram Analysis for the Network of Geophysical Observatories, is utilized to calculate customized versions of the auroral electrojet indices AE, AL, and AU. MANGO is part of an effort to enhance data services available to users of the Heliophysics VxOs, specifically for the Virtual Magnetospheric Observatory (VMO). The MANGO value-added service package is composed of a set of IDL routines that decompose ground magnetic field observations to isolate secular, diurnal, and disturbance variations of the magnetic field, station by station. Each MANGO subroutine has been written in modular fashion to allow "plug and play"-style flexibility, and each has been designed to account for failure modes and noisy data so that the programs will run to completion, producing as much derived data as possible. The capabilities of the MANGO service package will be demonstrated through their application to the study of auroral electrojet current flow during magnetic substorms. Traditionally, the AE indices are calculated by using data from about twelve ground stations located at northern auroral zone latitudes spread longitudinally around the world. Magnetogram data are corrected for secular variation prior to calculating the standard version of the indices, but the data are not corrected for diurnal variations. A custom version of the AE indices will be created by using the MANGO routines, including a step to subtract diurnal curves from the magnetic field data at each station. The custom AE indices provide more accurate measures of auroral electrojet activity due to isolation of the substorm electrojet magnetic field signature. The improvements in accuracy of the custom AE indices over the traditional indices are largest during the northern hemisphere summer, when the range of diurnal variation reaches its maximum.
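
    Once station records are baseline- and diurnal-corrected, the index arithmetic itself is simple: AU and AL are the upper and lower envelopes of the H-component disturbance across stations, and AE = AU - AL. A sketch on synthetic data (the MANGO IDL routines are not reproduced here):

        import numpy as np

        def ae_indices(h_disturbance):
            """h_disturbance: array of shape (n_stations, n_times),
            corrected H-component disturbance in nT."""
            au = h_disturbance.max(axis=0)   # upper envelope
            al = h_disturbance.min(axis=0)   # lower envelope
            return au, al, au - al           # AU, AL, AE

        rng = np.random.default_rng(2)
        h = rng.normal(scale=50, size=(12, 1440))   # 12 stations, 1 day of minutes
        au, al, ae = ae_indices(h)
        print(ae.mean())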

  3. Review of proposed values for carcinogenic effects of low dose irradiation: calculations and sensitivity analysis

    International Nuclear Information System (INIS)

    Hubert, P.

    1983-01-01

    The assessment of radiological risk generally relies on a no-threshold linear relationship, computed by the ICRP and the National Academy of Sciences in a former report (BEIR II). The last report of the NAS, as well as the publication by Loewe and Mendelsohn of new dose estimates for Hiroshima and Nagasaki, enhanced the controversy on the shape of the curve of the dose-effect relationship. The theoretical debate focuses on this shape (linear or quadratic, with or without threshold), which depends on the true impact of radiation in the carcinogenic process. This paper leaves aside the theoretical aspect of the problem. Instead, it describes the flow chart of the calculations which allow one to find numerical values for the coefficients of the relationship, starting from the observations on irradiated human populations. In this process, besides the theoretical hypotheses, pragmatic choices, and even the necessary simplifications in the calculation, can result in substantial changes in the risk coefficients. This paper aims to present these factors of variability, as well as some sensitivity analyses. These analyses are performed within the framework of practical problems like the assessment of the radiological impact of nuclear facilities or the optimisation of radioprotection. In this respect, the shape of the curve appears not to have a greater impact than other alternatives, such as the absolute vs. relative risk projection model or the choice of data source [fr]

  4. Electron density values of various human tissues: in vitro Compton scatter measurements and calculated ranges

    International Nuclear Information System (INIS)

    Shrimpton, P.C.

    1981-01-01

    Accurate direct measurements of electron density have been performed on specimens from 10 different tissue types of the human body, representing the major organs, using a Compton scatter technique. As a supplement to these experimental values, calculations have been carried out to determine the electron densities expected for these tissue types. The densities observed are in good agreement with the broad ranges deduced from the basic data previously published. The results of both the in vitro sample measurements and the approximate calculations indicate that the electron density of most normal healthy soft tissue can be expected to fall within the fairly restricted range of ±5% around 3.4 × 10²³ electrons per cm³. The obvious exception to this generalisation is the result for lung tissue, which falls considerably below this range owing to the high air content inherent in its construction. In view of such an overall limited variation with little difference between tissues, it would appear that electron density alone is likely to be a rather poor clinical parameter for tissue analysis, with high accuracy and precision being essential in any in vivo Compton measurements for imaging or diagnosis on specific organs. (author)

  5. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection. Its wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, this paper studied two different interpolation methods, Lagrange interpolation and Hermite interpolation of the electromagnetic parameters, based on the measured electromagnetic parameters of spherical and flaky carbonyl iron mixtures in a paraffin base. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • Interpolation methods can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
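
    Both interpolation schemes are available off the shelf; a sketch comparing them on made-up permittivity measurements (all values illustrative; since measured derivatives are rarely available, the Hermite slopes here come from finite differences):

        import numpy as np
        from scipy.interpolate import CubicHermiteSpline, lagrange

        freq = np.array([2.0, 6.0, 10.0, 14.0, 18.0])    # GHz
        eps = np.array([12.1, 10.4, 8.9, 8.1, 7.6])      # real permittivity
        deps = np.gradient(eps, freq)                    # slope estimates

        f_query = np.linspace(2.0, 18.0, 9)
        print(lagrange(freq, eps)(f_query))                   # global polynomial
        print(CubicHermiteSpline(freq, eps, deps)(f_query))   # piecewise, slope-matching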

  6. THE VALUE-BASED NATIONALISM OF PEGIDA

    Directory of Open Access Journals (Sweden)

    Malte Thran

    2015-06-01

    Full Text Available In the fall of 2014, the new German grassroots political protest movement “Pegida” emerged. The movement’s targets of criticism include the pronounced ‘failures’ of German asylum and immigration policy, refugees, and a so-called “Islamization.” This contribution is an analysis of Pegida's programmatic publications. Pegida claims to promote national values and already has a positive image of Germany in particular and of the nation state in general. Germany is seen as an agency of ‘humanity’ that fulfils a selfless moral mission in granting asylum to refugees. For Pegida, the consequences of this policy include ‘foreign infiltration,’ as well as so-called “Muslim parallel societies,” which are seen as dangerous imports of competing moralities. To save the ‘good order’ that said “patriotic Europeans” imagine as their home, the integration of foreign ‘national identities’ is seen as both necessary and hard to achieve. Therefore, Pegida demands the optimization of state force and its use to establish a requirement of integration and to enforce sanctions against criminal immigrants and “asylum betrayers.” According to this logic, the preservation of home through a “zero tolerance policy” is an end in itself, meant to prevent negative social consequences like poverty and unemployment. This analysis proposes an explanatory approach to the xenophobic logic of Pegida's value-based nationalism.

  7. In situ measurements for calculating evapotranspiration values using neutron moisture meter

    International Nuclear Information System (INIS)

    El-Gendy, R.W.; El-Moniem, M.; Massoud, M.

    2000-01-01

    A field experiment was conducted at the Wadi Sudr area, south Sinai, Egypt. Two types of farm animal residue (i.e., goat and camel manure) were applied to a wheat crop, alongside a control (no manure). The neutron scattering method and tensiometers were used to calculate the components of soil moisture depletion, evapotranspiration (ET) and drainage rate (DR). ET was determined by four methods: (i) soil moisture depletion (SMD); (ii) active rooting depth (ARD) at 80% SMD; (iii) ARD at the zero hydraulic potential gradient (dh/dz = 0); and (iv) the Blaney-Criddle formula (climatic data) using a published crop coefficient (Kc). ET values for the goat residue, camel residue and control treatments were found to be 5.59, 5.54 and 6.80 mm day⁻¹; 4.48, 4.43 and 5.44 mm day⁻¹; 5.01, 4.11 and 11.66 mm day⁻¹; and 4.5 mm day⁻¹ (all treatments) for the four methods, respectively. The data obtained also showed that ET values under the organic manure treatments were lower than under the control, while the dry weight of the wheat crop was higher in the manure-treated plots relative to the control. Hence less irrigation water needs to be applied to manure-treated plots, which should reduce the risk of soil deterioration if saline water is used.
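
    The soil moisture depletion method in the list above is a root-zone water balance; a sketch with illustrative numbers of roughly the right magnitude:

        def et_soil_moisture_depletion(storage_start_mm, storage_end_mm,
                                       irrigation_mm, rain_mm, drainage_mm, days):
            """ET = inputs - drainage - change in root-zone storage,
            averaged over the period (mm/day)."""
            delta_s = storage_end_mm - storage_start_mm
            return (irrigation_mm + rain_mm - drainage_mm - delta_s) / days

        print(et_soil_moisture_depletion(210, 185, 30, 0, 4, 10))   # ~5.1 mm/day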

  8. Communicating Customer Value Based on Modern Technologies

    Directory of Open Access Journals (Sweden)

    Slawomir Czarniewski

    2014-06-01

    Full Text Available The article presents the idea of companies gaining a position of competitive advantage in the knowledge-based economy and the age of modern technology. The rate of change in companies’ environments forces organizations to react quickly to clients’ needs. In recent years, a systematic increase in the importance of communicating customer value has been observed in Poland. This paper shows changes (trends) in the system of market communication in the age of modern technology and changes in the economy. The author presents the mechanisms and effects of communication in the age of new technology in Poland. Modern technologies enable the collection, storage and transmission of information. The reflections contained in the paper are not definitive and should be treated as a contribution to the discussion.

  9. Identifying Intraplate Mechanism by B-Value Calculations in the South of Java Island

    Science.gov (United States)

    Bagus Suananda Y., Ida; Aufa, Irfan; Harlianti, Ulvienin

    2018-03-01

    Java is the most populous island in Indonesia, with 50 million people living there. The island was formed geologically at the Eurasian plate margin by the subduction of the Australian oceanic crust. In the south part of Java, besides interplate earthquakes caused by the convergence of the two plates, intraplate earthquake activity also occurs. Research to distinguish these two different earthquake types is necessary for estimating the behavior of earthquakes that may occur. The aim of this research is to map the b-value in the south of Java using earthquake data from 1963 until 2008. The research area is divided into clusters based on epicenter mapping results with magnitudes above 4 and three different depth ranges (0-30 km, 30-60 km, 60-100 km). This location clustering indicates groups of earthquakes produced by the same structure or mechanism. For some clusters in the south of Java, the b-values obtained are between 0.8 and 1.25, consistent with the 0.72-1.2 range that indicates an intraplate earthquake zone. The final validation, determining the mechanism of a segment, is done by correlating the epicenter and b-value plots with the available structural geology data. Based on this research, we find that the earthquakes occurring in Java are not only interplate earthquakes; intraplate earthquakes also occur here. By identifying the mechanism of a segment in the south of Java, characterization of the earthquakes that may occur can be carried out for developing an accurate earthquake disaster mitigation system.
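
    The usual way to obtain a b-value from an earthquake catalog is the Aki (1965) maximum-likelihood estimator for the Gutenberg-Richter relation log10 N = a - b M; a sketch on a synthetic catalog (the study's own catalog is not reproduced):

        import numpy as np

        def b_value(magnitudes, mc):
            """Aki maximum-likelihood b-value for events at or above
            the magnitude of completeness mc."""
            m = np.asarray(magnitudes)
            m = m[m >= mc]
            return np.log10(np.e) / (m.mean() - mc)

        rng = np.random.default_rng(3)
        mags = 4.0 + rng.exponential(scale=1 / np.log(10), size=500)  # true b = 1
        print(b_value(mags, mc=4.0))   # ~1.0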

  10. Programmable calculator: alternative to minicomputer-based analyzer

    International Nuclear Information System (INIS)

    Hochel, R.C.

    1979-01-01

    Described are a number of typical field and laboratory counting systems that use standard stand-alone multichannel analyzers (MCAs) interfaced to a Hewlett-Packard Company (HP 9830) programmable calculator. Such systems can offer significant advantages in cost and flexibility over a minicomputer-based system. Because most laboratories tend to accumulate MCAs over the years, the programmable calculator also offers an easy way to upgrade the laboratory while making optimum use of existing systems. Software programs are easily tailored to fit a variety of general or specific applications. The only disadvantage of the calculator vs. a computer-based system is in speed of analyses; however, for most applications this handicap is minimal. The applications discussed give a brief overview of the power and flexibility of the MCA-calculator approach to automated counting and data reduction.

  11. Calculation of concrete shielding wall thickness for 450kVp X-ray tube with MCNP simulation and result comparison with half value layer method calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Heon; Lee, Eun Joong; Kim, Chan Kyu; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, KAIST, Daejeon (Korea, Republic of); Hur, Sam Suk [Sam Yong Inspection Engineering Co., Ltd., Seoul (Korea, Republic of)

    2016-11-15

    Radiation generating devices must be properly shielded for their safe application. Although institutes such as the US National Bureau of Standards and the National Council on Radiation Protection and Measurements (NCRP) have provided guidelines for shielding X-ray tubes of various purposes, industry practitioners tend to rely on the 'Half Value Layer (HVL) method', which requires relatively simple calculation compared to those guidelines. The method is based on the fact that the intensity, dose, and air kerma of a narrow beam incident on a shielding wall decrease by about half as the beam penetrates one HVL thickness of the wall. One can adjust the shielding wall thickness to satisfy outside-wall dose or air kerma requirements with this calculation. However, this may not always be valid because 1) the strict definition of HVL deals only with intensity, and 2) the situation is different when the beam is not 'narrow': the beam quality inside the wall is distorted, and related changes in outside-wall dose or air kerma, such as the buildup effect, occur. Therefore, more careful study is sometimes needed to verify the shielding of a specific radiation generating device. High-energy X-ray tubes operated at voltages above 400 kV, which are used for 'heavy' nondestructive inspection, are an example: people have less experience in running and shielding such devices than with the widely used low-energy X-ray tubes operated at voltages below 300 kV. In this study, the air kerma value per week outside concrete shielding walls of various thicknesses surrounding a 450 kVp X-ray tube was calculated using MCNP simulation with the aid of the Geometry Splitting method, a well-known variance reduction technique. The comparison between the simulated result, the HVL method result, and the NCRP Report 147 safety goal of 0.02 mGy wk⁻¹ air kerma for places where the public are free to pass showed that a concrete wall of thickness 80 cm is needed to achieve the

  12. Calculation of concrete shielding wall thickness for 450kVp X-ray tube with MCNP simulation and result comparison with half value layer method calculation

    International Nuclear Information System (INIS)

    Lee, Sang Heon; Lee, Eun Joong; Kim, Chan Kyu; Cho, Gyu Seong; Hur, Sam Suk

    2016-01-01

    Radiation generating devices must be properly shielded for their safe application. Although institutes such as the US National Bureau of Standards and the National Council on Radiation Protection and Measurements (NCRP) have provided guidelines for shielding X-ray tubes of various purposes, industry practitioners tend to rely on the 'Half Value Layer (HVL) method', which requires relatively simple calculation compared to those guidelines. The method is based on the fact that the intensity, dose, and air kerma of a narrow beam incident on a shielding wall decrease by about half as the beam penetrates one HVL thickness of the wall. One can adjust the shielding wall thickness to satisfy outside-wall dose or air kerma requirements with this calculation. However, this may not always be valid because 1) the strict definition of HVL deals only with intensity, and 2) the situation is different when the beam is not 'narrow': the beam quality inside the wall is distorted, and related changes in outside-wall dose or air kerma, such as the buildup effect, occur. Therefore, more careful study is sometimes needed to verify the shielding of a specific radiation generating device. High-energy X-ray tubes operated at voltages above 400 kV, which are used for 'heavy' nondestructive inspection, are an example: people have less experience in running and shielding such devices than with the widely used low-energy X-ray tubes operated at voltages below 300 kV. In this study, the air kerma value per week outside concrete shielding walls of various thicknesses surrounding a 450 kVp X-ray tube was calculated using MCNP simulation with the aid of the Geometry Splitting method, a well-known variance reduction technique. The comparison between the simulated result, the HVL method result, and the NCRP Report 147 safety goal of 0.02 mGy wk⁻¹ air kerma for places where the public are free to pass showed that a concrete wall of thickness 80 cm is needed to achieve the safety goal
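
    The HVL method the two records describe is one line of arithmetic: the number of half-value layers needed is log2 of the required attenuation factor. A sketch (the HVL value below is an assumption for illustration; as the study stresses, this narrow-beam estimate ignores buildup and spectral hardening, which is why it can fall well short of the MCNP answer):

        from math import log2

        def hvl_wall_thickness(k_unshielded, k_target, hvl_cm):
            """Wall thickness = (number of halvings needed) x HVL."""
            return log2(k_unshielded / k_target) * hvl_cm

        # e.g., 50 mGy/wk down to the 0.02 mGy/wk public goal, assumed ~3 cm HVL
        print(hvl_wall_thickness(50.0, 0.02, hvl_cm=3.0))   # ~33.9 cm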

  13. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.

    2011-01-05

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated calculations of the test statistic on a large number of simulated data sets for its significance level assessment, and thus it can become very computationally intensive. Here, we propose an efficient p-value evaluation procedure by adapting the stochastic approximation Markov chain Monte Carlo algorithm. The new procedure can be used easily for estimating the p-value for any resampling-based test. We show through numeric simulations that the proposed procedure can be 100 to 500,000 times as efficient (in terms of computing time) as the standard resampling-based procedure when evaluating a test statistic with a small p-value (e.g. less than 10⁻⁶). With its computational burden reduced by this proposed procedure, the versatile resampling-based test becomes computationally feasible for a much wider range of applications. We demonstrate the application of the new method by applying it to a large-scale genetic association study of prostate cancer.
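
    The baseline being accelerated is the ordinary permutation p-value, whose cost scales with the number of resamples; a sketch of that baseline (difference-of-means statistic, illustrative data):

        import numpy as np

        def perm_pvalue(x, y, n_perm=10_000, rng=None):
            """Two-sample permutation test p-value; resolving a p-value
            of size p needs on the order of 1/p resamples, which is what
            makes p < 1e-6 impractical by brute force."""
            rng = rng or np.random.default_rng()
            obs = abs(x.mean() - y.mean())
            pooled = np.concatenate([x, y])
            hits = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                hits += abs(pooled[:x.size].mean() - pooled[x.size:].mean()) >= obs
            return (hits + 1) / (n_perm + 1)

        rng = np.random.default_rng(4)
        print(perm_pvalue(rng.normal(0.5, 1, 50), rng.normal(0.0, 1, 50), rng=rng))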

  14. Absolute values of inelastic neutron scattering cross-sections calculated with account taken of the pre-equilibrium mechanism

    International Nuclear Information System (INIS)

    Jahn, H.

    1980-01-01

    Absolute values of secondary energy-dependent inelastic neutron scattering cross sections can be calculated either with the master equation pre-equilibrium formalism of Cline and Blann or with Blann's more recent geometry-dependent hybrid model. The master equation formalism was used at Dubna and Dresden to reproduce experimental results for 14 MeV incident energy. The geometry-dependent hybrid model was used at Karlsruhe to cover for a number of materials the whole range from 5 to 14 MeV incident energy and to reproduce smoothed experimental spectra at 7.45 and 14 MeV. Only the geometry-dependent hybrid model accounts for scattering in the diffuse nuclear surface and thus for a certain average over the direct interaction. It is also free of any fit parameters other than those of the usual optical model. The master equation calculations, on the other hand, are based on nucleon-nucleon scattering cross sections inserted into the high-energy approximation of Kikuchi and Kawai for the intranuclear transition rate. Other approaches require either mass- or energy-dependent or more global fit parameters for a satisfactory reproduction of experimental results, but a genuine prediction of the incident-energy dependence of the inelastic neutron cross section, especially below 14 MeV, is needed for transport and shielding calculations for instance in connection with fusion reactor design studies. (author)

  15. Methods, software and datasets to verify DVH calculations against analytical values: Twenty years late(r)

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Stambaugh, Cassandra [Department of Physics, University of South Florida, Tampa, Florida 33612 (United States); Hunt, Dylan; Tonner, Brian; Zhang, Geoffrey; Feygelman, Vladimir, E-mail: vladimir.feygelman@moffitt.org [Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2015-08-15

    Purpose: The authors designed data, methods, and metrics that can serve as a standard, independent of any software package, to evaluate dose-volume histogram (DVH) calculation accuracy and detect limitations. The authors use simple geometrical objects at different orientations combined with dose grids of varying spatial resolution with linear 1D dose gradients; when combined, ground truth DVH curves can be calculated analytically in closed form to serve as the absolute standards. Methods: DICOM RT structure sets containing a small sphere, cylinder, and cone were created programmatically with axial plane spacing varying from 0.2 to 3 mm. Cylinders and cones were modeled in two different orientations with respect to the IEC 1217 Y axis. The contours were designed to stringently but methodically test voxelation methods required for DVH. Synthetic RT dose files were generated with a 1D linear dose gradient and with grid resolution varying from 0.4 to 3 mm. Two commercial DVH algorithms—PINNACLE (Philips Radiation Oncology Systems) and PlanIQ (Sun Nuclear Corp.)—were tested against analytical values using custom, noncommercial analysis software. In Test 1, axial contour spacing was constant at 0.2 mm while dose grid resolution varied. In Tests 2 and 3, the dose grid resolution was matched to varying subsampled axial contours with spacing of 1, 2, and 3 mm, and difference analysis and metrics were employed: (1) histograms of the accuracy of various DVH parameters (total volume, Dmax, Dmin, and doses to % volume: D99, D95, D5, D1, D0.03 cm^3) and (2) volume errors extracted along the DVH curves were generated and summarized in tabular and graphical forms. Results: In Test 1, PINNACLE produced 52 deviations (15%) while PlanIQ produced 5 (1.5%). In Test 2, PINNACLE and PlanIQ differed from analytical by >3% in 93 (36%) and 18 (7%) times, respectively. Excluding Dmin and Dmax as least clinically relevant would result in 32 (15%) vs 5 (2%) scored deviations for PINNACLE vs PlanIQ in Test 1, while Test 2
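
    The voxel-counting idea behind a cumulative DVH can be sketched in a few lines: for each dose level, count the fraction of structure voxels receiving at least that dose. This is a toy illustration echoing the paper's linear-gradient-plus-sphere setup, not the commercial algorithms under test.

      import numpy as np

      def cumulative_dvh(dose, mask, bin_width=0.1):
          """Cumulative DVH: for each dose level, the percentage of the
          structure's voxels receiving at least that dose."""
          d = dose[mask]
          edges = np.arange(0.0, d.max() + bin_width, bin_width)
          volume_pct = np.array([(d >= e).mean() * 100.0 for e in edges])
          return edges, volume_pct

      # Toy example: a 1D linear dose gradient over a 40x40x40 grid and a
      # spherical "structure" of radius 15 voxels.
      z, y, x = np.mgrid[:40, :40, :40]
      dose = x / 39.0 * 70.0                                  # 0-70 Gy gradient
      sphere = (x - 20)**2 + (y - 20)**2 + (z - 20)**2 <= 15**2
      edges, vol = cumulative_dvh(dose, sphere)
      print("D95 ~", edges[np.searchsorted(-vol, -95.0)], "Gy")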

  17. OPAL shield design performance assessment. Comparison of measured dose rates against the corresponding design calculated values. A designer perspective

    Energy Technology Data Exchange (ETDEWEB)

    Brizuela, Martin; Albornoz, Felipe [INVAP SE, Av. Cmte. Piedrabuena, Bariloche (Argentina)

    2012-03-15

    A comparison of OPAL shielding calculations against measurements carried out during commissioning is presented for relevant structures such as the reactor block, primary shutters and neutron guide bunker. All the results obtained agree very well with the measured values and help to establish confidence in the calculation tools (MCNP4, DORT, etc.) and the methodology used for shielding design. (author)

  18. Development of a three dimensional homogeneous calculation model for the BFS-62 critical experiment. Preparation of adjusted equivalent measured values for sodium void reactivity values. Final report

    International Nuclear Information System (INIS)

    Manturov, G.; Semenov, M.; Seregin, A.; Lykova, L.

    2004-01-01

    The BFS-62 critical experiments are currently used as a 'benchmark' for verification of IPPE codes and nuclear data that have been used in studies of loading a significant amount of Pu in fast reactors. The BFS-62 experiments were performed at the BFS-2 critical facility of IPPE (Obninsk). The experimental program was arranged in such a way that the effect of replacing the uranium dioxide blanket by a steel reflector, as well as the effect of replacing UOX by MOX, on the main characteristics of the reactor model was studied. A wide experimental program, including measurements of the criticality (keff), spectral indices, radial and axial fission rate distributions, control rod mock-up worth, the sodium void reactivity effect (SVRE) and some other important nuclear physics parameters, was carried out in the core. A series of four BFS-62 critical assemblies was designed for studying the changes in BN-600 reactor physics from the existing state to a hybrid core. All the assemblies model the reactor state prior to refueling, i.e. with all control rod mock-ups withdrawn from the core. The following items are chosen for analysis in this report: description of the critical assembly BFS-62-3A as the third assembly in the series of four BFS critical assemblies studying the BN-600 reactor with a MOX-UOX hybrid zone and steel reflector; development of a 3D homogeneous calculation model for the BFS-62-3A critical experiment as the mock-up of the BN-600 reactor with hybrid zone and steel reflector; evaluation of the measured nuclear physics parameters keff and SVRE (sodium void reactivity effect); and preparation of adjusted equivalent measured values for keff and SVRE. The main series of calculations is performed using the 3D HEX-Z diffusion code TRIGEX in 26 groups, with the ABBN-93 cross-section set. In addition, precise calculations are made, in 299 groups and with the Ps approximation in scattering, by the Monte Carlo code MMKKENO and the discrete ordinate code TWODANT. All calculations are based on the common system

  19. Data base for terrestrial food pathways dose commitment calculations

    International Nuclear Information System (INIS)

    Bailey, C.E.

    1979-01-01

    A computer program is under development to allow calculation of the dose to man in Georgia and South Carolina from ingestion of radionuclides in terrestrial foods resulting from deposition of airborne radionuclides. This program is based on models described in Regulatory Guide 1.109 (USNRC, 1977). The data base describes the movement of radionuclides through the terrestrial food chain and gives growth and consumption factors for a variety of radionuclides.

  20. Software-Based Visual Loan Calculator For Banking Industry

    Science.gov (United States)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

    A visual loan calculator for the banking industry is very necessary in the modern banking system, which uses many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a graphical user interface (GUI) using VB.NET tools, and then to develop a working program that calculates the interest on any loan obtained. The VB.NET program was written and implemented, and the software proved satisfactory.
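
    The interest arithmetic behind such a calculator can be illustrated with the standard amortized-loan payment formula M = P r (1+r)^n / ((1+r)^n - 1); the paper's VB.NET implementation details are not given, so this Python sketch is a generic stand-in, not the authors' code.

      def monthly_payment(principal, annual_rate, years):
          """Standard amortized loan payment with monthly rate r and
          n monthly installments."""
          r = annual_rate / 12.0
          n = years * 12
          if r == 0:
              return principal / n
          return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

      # Illustrative loan: 250,000 at 6% nominal annual interest over 20 years.
      p = 250_000.0
      m = monthly_payment(p, 0.06, 20)
      print(f"payment: {m:,.2f}/month, total interest: {m * 240 - p:,.2f}")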

  1. Calculation of the a0 value of the unitary cell of garnets

    International Nuclear Information System (INIS)

    Baptista, N.R.

    1984-06-01

    The calculation of the a0 (angstrom) lattice constant of four garnet samples, collected in the states of Minas Gerais, Espírito Santo and Bahia in Brazil, is presented. The objective of this calculation is to determine the molecular composition of the samples to complement other experimental studies. (M.C.K.) [pt]

  2. 40 CFR 600.211-08 - Sample calculation of fuel economy values for labeling.

    Science.gov (United States)

    2010-07-01

    40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample calculation of fuel economy values for labeling. 600.211-08 Section 600.211-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy...

  3. Value-Based Leadership Approach: A Way for Principals to Revive the Value of Values in Schools

    Science.gov (United States)

    van Niekerk, Molly; Botha, Johan

    2017-01-01

    The qualitative research discussed in this article is based on the assumption that school principals as leaders need to establish, develop and maintain a core of shared values in their schools. Our focus is on principals' current perceptions of values in their schools. This is important because values underpin their decisions and actions and thus…

  4. The African interpretations of a set of values in a value-based HIV ...

    African Journals Online (AJOL)

    This article focuses on a qualitative exploration of six core values embedded in the Choose Life Training Programme (CLTP), a value-based HIV and AIDS prevention programme. The article is based on a study that explored the possibility that the African interpretations of these values are different to the Western definitions.

  5. Pricing for Higher Education Institutions: A Value-Based Approach

    Science.gov (United States)

    Amir, Amizawati Mohd; Auzair, Sofiah Md; Maelah, Ruhanita; Ahmad, Azlina

    2016-01-01

    Purpose: The purpose of this paper is to propose the concept of higher education institutions (HEIs) offering educational services based on value for money. The value is determined based on customers' (i.e. students) expectations of the service and the costs in comparison to the competitors. Understanding the value and creating customer value are…

  6. Calculations of accelerator-based neutron sources characteristics

    International Nuclear Information System (INIS)

    Tertytchnyi, R.G.; Shorin, V.S.

    2000-01-01

    Accelerator-based quasi-monoenergetic neutron sources (the T(p,n), D(d,n), T(d,n) and 7Li(p,n) reactions) are widely used in experiments measuring the interaction cross-sections of fast neutrons with nuclei. The present work describes a code for calculating the yields and spectra of neutrons generated in (p,n) and (d,n) reactions on targets of light nuclei (D, T, 7Li). The peculiarities of the stopping processes of charged particles (with incident energies up to 15 MeV) in multilayer and multicomponent targets are taken into account. The code is implemented as a 'SOURCE' subroutine for the well-known MCNP code. Some calculation results for the most popular accelerator-based neutron sources are given. (authors)

  7. Application of Risk within Net Present Value Calculations for Government Projects

    Science.gov (United States)

    Grandl, Paul R.; Youngblood, Alisha D.; Componation, Paul; Gholston, Sampson

    2007-01-01

    In January 2004, President Bush announced a new vision for space exploration. This included retirement of the current Space Shuttle fleet by 2010 and the development of a new set of launch vehicles. The President's vision did not include significant increases in the NASA budget, so these development programs need to be cost conscious. Current trade study procedures address factors such as performance, reliability, safety, manufacturing, maintainability, operations, and costs. It would be desirable, however, to have increased insight into the cost factors behind each of the proposed system architectures. This paper reports on a set of component trade studies completed on the upper stage engine for the new launch vehicles. Increased insight into architecture costs was developed by including a Net Present Value (NPV) method and applying a set of associated risks to the base parametric cost data. The use of the NPV method along with the risks was found to add fidelity to the trade study and provide additional information to support the selection of a more robust design architecture.
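
    The NPV machinery referred to here is the textbook discounted cash flow sum, NPV = sum over t of CF_t / (1 + r)^t. One simple way to attach risk, sketched below with made-up numbers (not the study's cost data), is to probability-weight the uncertain cash flows before discounting.

      def npv(rate, cash_flows):
          """NPV = sum of CF_t / (1 + r)**t, with t = 0 the initial outlay."""
          return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

      # Illustrative option: 100 up front, five years of benefits; risk is
      # folded in by weighting each year's inflow by a success probability.
      flows = [-100.0, 30.0, 30.0, 30.0, 30.0, 30.0]
      p_success = 0.8
      risked = [flows[0]] + [cf * p_success for cf in flows[1:]]
      print(f"unrisked NPV: {npv(0.07, flows):.1f}, risked NPV: {npv(0.07, risked):.1f}")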

  8. Calculating meal glycemic index by using measured and published food values compared with directly measured meal glycemic index.

    Science.gov (United States)

    Dodd, Hayley; Williams, Sheila; Brown, Rachel; Venn, Bernard

    2011-10-01

    Glycemic index (GI) testing is normally based on individual foods, whereas GIs for meals or diets are based on a formula using a weighted sum of the constituents. The accuracy with which the formula can predict a meal or diet GI is questionable. Our objective was to compare the GI of meals, obtained by using the formula and by using both measured food GI and published values, with directly measured meal GIs. The GIs of 7 foods were tested in 30 healthy people. The foods were combined into 3 meals, each of which provided 50 g available carbohydrate, including a staple (potato, rice, or spaghetti), vegetables, sauce, and pan-fried chicken. The mean (95% CI) meal GIs determined from individual food GI values and by direct measurement were as follows: potato meal [predicted, 63 (56, 70); measured, 53 (46, 62)], rice meal [predicted, 51 (45, 56); measured, 38 (33, 45)], and spaghetti meal [predicted, 54 (49, 60); measured, 38 (33, 44)]. The predicted meal GIs were all higher than the measured GIs (P < 0.001). The extent of the overestimation depended on the particular food, ie, 12, 15, and 19 GI units (or 22%, 40%, and 50%) for the potato, rice, and spaghetti meals, respectively. The formula overestimated the GI of the meals by between 22% and 50%. The use of published food values also overestimated the measured meal GIs. Investigators using the formula to calculate a meal or diet GI should be aware of limitations in the method. This trial is registered with the Australian and New Zealand Clinical Trials Registry as ACTRN12611000210976.
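
    The formula under test is the carbohydrate-weighted mean of the constituent food GIs. A minimal sketch with invented (GI, carbohydrate) pairs, not the paper's measured values:

      def meal_gi(foods):
          """Meal GI as the carbohydrate-weighted mean of food GIs:
          GI_meal = sum(GI_i * carb_i) / sum(carb_i)."""
          total_carb = sum(carb for _, carb in foods)
          return sum(gi * carb for gi, carb in foods) / total_carb

      # Illustrative (GI, grams of available carbohydrate) pairs for a meal
      # of staple, vegetables and sauce; values are made up.
      meal = [(85, 35), (40, 10), (30, 5)]
      print(round(meal_gi(meal)))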

  9. Assessing the implications on performance when aligning customer lifetime value calculations with religious faith groups and afterlifetime values - a Socratic elenchus approach

    DEFF Research Database (Denmark)

    Hollensen, Svend; Wilson, Jonathan

    2013-01-01

    Customer Lifetime Value (CLV) is an established relationship marketing-centric approach to evaluating performance based upon the significance of a customer, and what resources should be allocated towards maintaining relations – beyond short-term transactional views. The conceptual argument … the branding strategy, based on maximising the sum of CLV and CALV (Customer AfterLifetime Value) …
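
    The standard CLV component of that sum is a retention-discounted margin series. A minimal sketch with illustrative numbers; the CALV extension proposed by the authors is not reproduced here.

      def clv(margin_per_year, retention, discount, horizon_years):
          """Textbook customer lifetime value: expected margin in year t is
          margin * retention**t, discounted back at rate d."""
          return sum(
              margin_per_year * retention ** t / (1 + discount) ** t
              for t in range(1, horizon_years + 1)
          )

      # Illustrative: 200/year margin, 85% retention, 10% discount, 20 years.
      print(round(clv(200.0, 0.85, 0.10, 20), 2))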

  10. Valuing Trial Designs from a Pharmaceutical Perspective Using Value-Based Pricing.

    Science.gov (United States)

    Breeze, Penny; Brennan, Alan

    2015-11-01

    Our aim was to adapt the traditional framework for expected net benefit of sampling (ENBS) to be more compatible with drug development trials from the pharmaceutical perspective. We modify the traditional framework for conducting ENBS and assume that the price of the drug is conditional on the trial outcomes. We use a value-based pricing (VBP) criterion to determine price conditional on trial data using Bayesian updating of cost-effectiveness (CE) model parameters. We assume that there is a threshold price below which the company would not market the new intervention. We present a case study in which a phase III trial sample size and trial duration are varied. For each trial design, we sampled 10,000 trial outcomes and estimated VBP using a CE model. The expected commercial net benefit is calculated as the expected profits minus the trial costs. A clinical trial with shorter follow-up, and larger sample size, generated the greatest expected commercial net benefit. Increasing the duration of follow-up had a modest impact on profit forecasts. Expected net benefit of sampling can be adapted to value clinical trials in the pharmaceutical industry to optimise the expected commercial net benefit. However, the analyses can be very time consuming for complex CE models. © 2014 The Authors. Health Economics published by John Wiley & Sons Ltd.

  11. New Products and Technologies, Based on Calculations Developed Areas

    Directory of Open Access Journals (Sweden)

    Gheorghe Vertan

    2013-09-01

    Full Text Available According to statistics, only countries that possess and intensively exploit large natural resources and/or massively produce and export products based on patented inventions are currently prosperous and have a high GDP per capita. Without great natural wealth and with the lowest GDP per capita in the EU, Romania will prosper only with such products. Starting from top experience in the country, some of it patented, new and competitive technologies and patentable, exportable products can be developed based on exact calculations of developed areas, such as double-shell welded assemblies and the plating of ships' propellers and of pump and hydraulic turbine blades.

  12. Plasma density calculation based on the HCN waveform data

    International Nuclear Information System (INIS)

    Chen Liaoyuan; Pan Li; Luo Cuiwen; Zhou Yan; Deng Zhongchao

    2004-01-01

    A method to improve the plasma density calculation is introduced, using the base voltage and the phase zero points obtained from the HCN interference waveform data. The method includes improving the signal quality by placing the signal control device and the analog-to-digital converters in the same location and powering them from the same supply; excluding the effect of noise according to the possible rate of change of the signal's phase; and making the base voltage more accurate by dynamic data processing. (authors)

  13. Impact of dietary fiber energy on the calculation of food total energy value in the Brazilian Food Composition Database.

    Science.gov (United States)

    Menezes, Elizabete Wenzel de; Grande, Fernanda; Giuntini, Eliana Bistriche; Lopes, Tássia do Vale Cardoso; Dan, Milana Cara Tanasov; Prado, Samira Bernardino Ramos do; Franco, Bernadette Dora Gombossy de Melo; Charrondière, U Ruth; Lajolo, Franco Maria

    2016-02-15

    Dietary fiber (DF) contributes to the energy value of foods, and including it in the calculation of total food energy has been recommended for food composition databases. The present study aimed to investigate the impact of including the energy provided by DF fermentation in the calculation of food energy. Total energy values of 1753 foods from the Brazilian Food Composition Database were calculated with or without the inclusion of DF energy. The energy values were compared, through the use of percentage difference (D%), for individual foods and for daily menus. An appreciable energy D% (⩾10) was observed in 321 foods, mainly in the group of vegetables, legumes and fruits. However, in typical Brazilian menus containing foods from all groups, only small D% values were observed, in contrast with the foods considered individually. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Modeling and Calculation of Dent Based on Pipeline Bending Strain

    Directory of Open Access Journals (Sweden)

    Qingshan Feng

    2016-01-01

    Full Text Available The bending strain of long-distance oil and gas pipelines can be calculated from data gathered by an in-line inspection tool that uses an inertial measurement unit (IMU). The bending strain is used to evaluate the strain and displacement of the pipeline. During bending strain inspection, dents in the pipeline can affect the bending strain data as well. This paper presents a novel method to model and calculate pipeline dents based on the bending strain. The technique takes inertial mapping data from in-line inspection and calculates the depth of a dent in the pipeline using Bayesian statistical theory and a neural network. To verify the accuracy of the proposed method, an in-line inspection tool was used to inspect a pipeline and gather data. The dent calculation shows that the method is accurate, with a mean relative error of 2.44%. The new method provides not only the strain at the pipeline dent but also the depth of the dent. This is of benefit to pipeline integrity management and pipeline safety.

  15. THE VALUE-BASED NATIONALISM OF PEGIDA

    OpenAIRE

    Malte Thran; Lukas Boehnke

    2015-01-01

    In the fall of 2014, the new German grassroots political protest movement “Pegida” emerged. The movement’s targets of criticism include the pronounced ‘failures’ of German asylum and immigration policy, refugees, and a so-called “Islamization.” This contribution is an analysis of Pegida's programmatic publications. Pegida claims to promote national values and already has a positive image of Germany in particular and the nation state in general. Germany is seen as an agency of ‘humanity’ that ...

  16. PV panel model based on datasheet values

    DEFF Research Database (Denmark)

    Sera, Dezso; Teodorescu, Remus; Rodriguez, Pedro

    2007-01-01

    This work presents the construction of a model for a PV panel using the single-diode five-parameter model, based exclusively on data-sheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell … Based on these equations, a PV panel model, which is able to predict the panel behavior in different temperature and irradiance conditions, is built and tested.
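
    The single-diode five-parameter model referred to is implicit in the current, so the I-V curve is usually obtained by root finding. A self-contained sketch with illustrative parameter values (not extracted from any particular datasheet):

      import numpy as np
      from scipy.optimize import brentq

      # Single-diode model: I solves
      # I = Iph - I0*(exp((V + I*Rs)/(Ns*n*Vt)) - 1) - (V + I*Rs)/Rsh.
      Iph, I0, Rs, Rsh, n = 8.2, 1e-9, 0.35, 300.0, 1.1
      Ns, Vt = 60, 0.02565      # cells in series, thermal voltage near 25 C

      def panel_current(v):
          f = lambda i: (Iph - I0 * np.expm1((v + i * Rs) / (Ns * n * Vt))
                         - (v + i * Rs) / Rsh - i)
          return brentq(f, 0.0, Iph + 1.0)   # current is bracketed here

      volts = np.linspace(0.0, 37.0, 200)
      power = [v * panel_current(v) for v in volts]
      i_mpp = int(np.argmax(power))
      print(f"MPP ~ {volts[i_mpp]:.1f} V, {max(power):.0f} W")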

  17. Value-Based Requirements Traceability: Lessons Learned

    Science.gov (United States)

    Egyed, Alexander; Grünbacher, Paul; Heindl, Matthias; Biffl, Stefan

    Traceability from requirements to code is mandated by numerous software development standards. These standards, however, are not explicit about the appropriate level of quality of trace links. From a technical perspective, trace quality should meet the needs of the intended trace utilizations. Unfortunately, long-term trace utilizations are typically unknown at the time of trace acquisition which represents a dilemma for many companies. This chapter suggests ways to balance the cost and benefits of requirements traceability. We present data from three case studies demonstrating that trace acquisition requires broad coverage but can tolerate imprecision. With this trade-off our lessons learned suggest a traceability strategy that (1) provides trace links more quickly, (2) refines trace links according to user-defined value considerations, and (3) supports the later refinement of trace links in case the initial value consideration has changed over time. The scope of our work considers the entire life cycle of traceability instead of just the creation of trace links.

  18. (based on the Concept of Value Chain

    Directory of Open Access Journals (Sweden)

    Firouzeh Asghari

    2016-09-01

    Full Text Available One of the most important objectives of academic research is to meet the scientific needs of society. The important factor that contributes to establishing trust between stakeholders and academic research and its results is the scientific quality of the research. PhD theses, carried out by students and requiring a lot of time and money, constitute a considerable percentage of the research conducted at universities. The present study aims to identify the challenges to the quality of PhD theses in Iran from the viewpoint of professors and students, the two major participants in doctoral research. This is a phenomenological research study, and the data collected through interviews with professors and students in doctoral programs are analyzed using the concept of the 'value chain'. The findings of the study indicate that the challenge of the quality of PhD theses involves a multiple, multilayered interconnection of elements (students, professors, managers and policy-makers, organizational structure, procedures, rules and other environmental actors). In addition, a powerful and fast-moving current has developed between these layers, which both affects them and is affected by them. There are instances of non-academic behaviors that have marred the scientific identity of universities and turned into a major challenge to the quality of doctoral theses in Iran.

  19. The neural bases for valuing social equality.

    Science.gov (United States)

    Aoki, Ryuta; Yomogida, Yukihito; Matsumoto, Kenji

    2015-01-01

    The neural basis of how humans value and pursue social equality has become a major topic in social neuroscience research. Although recent studies have identified a set of brain regions and possible mechanisms that are involved in the neural processing of equality of outcome between individuals, how the human brain processes equality of opportunity remains unknown. In this review article, first we describe the importance of the distinction between equality of outcome and equality of opportunity, which has been emphasized in philosophy and economics. Next, we discuss possible approaches for empirical characterization of human valuation of equality of opportunity vs. equality of outcome. Understanding how these two concepts are distinct and interact with each other may provide a better explanation of complex human behaviors concerning fairness and social equality. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  20. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various

  1. Value based health care real estate strategies

    NARCIS (Netherlands)

    van der Voordt, Theo; van der Voordt, DJM; Dijkstra, K

    2009-01-01

    Subject/Research problem
    The healthcare sector in the Netherlands is shifting from a governmentally steered domain towards regulated market forces and performance-based financing. Organizational changes, new ideas about care and cure, demographical developments and technological innovations play

  2. Maintenance & Repair Cost Calculation and Assessment of Resale Value for Different Alternative Commercial Vehicle Powertrain Technologies

    OpenAIRE

    Kleiner, Florian; Friedrich, Horst E.

    2017-01-01

    For detailed evaluation of the Total Cost of Ownership, expenditures for Maintenance & Repair as well as the resale value are important to consider and should not be neglected. However, information on Maintenance & Repair costs as well as residual values for commercial vehicles with alternative powertrains is missing and data on this issue is rare. There is a lack of information and consolidated knowledge. In order to enable a holistic cost assessment for commercial vehicles, a comprehensive ...

  3. A comparison of measured and calculated values of air kerma rates from 137Cs in soil

    Directory of Open Access Journals (Sweden)

    V. P. Ramzaev

    2015-01-01

    Full Text Available In 2010, a study was conducted to determine the air gamma dose rate from 137Cs deposited in soil. The gamma dose rate measurements and soil sampling were performed at 30 reference plots in the south-west districts of the Bryansk region (Russia) that had been heavily contaminated as a result of the Chernobyl accident. The 137Cs inventory in the top 20 cm of soil ranged from 260 kBq m–2 to 2800 kBq m–2. Vertical distributions of 137Cs in soil cores (6 samples per plot) were determined after sectioning them into ten horizontal layers of 2 cm thickness. The vertical distributions of 137Cs in soil were employed to calculate air kerma rates, K, using two independent methods proposed by Saito and Jacob [Radiat. Prot. Dosimetry, 1995, Vol. 58, P. 29–45] and Golikov et al. [Contaminated Forests – Recent Developments in Risk Identification and Future Perspective. Kluwer Academic Publishers, 1999. P. 333–341]. Very good agreement between the methods was observed (Spearman's rank coefficient of correlation = 0.952; P<0.01); on average, the difference between the kerma rates calculated with the two methods did not exceed 3%. The calculated air kerma rates agreed very well with the measured dose rates in air (Spearman's coefficient of correlation = 0.952; P<0.01). For large grassland plots (n=19), the measured dose rates were on average 6% less than the calculated kerma rates. The tested methods for calculating the air dose rate from 137Cs in soil can be recommended for practical studies in radiology and radioecology.

  4. Management of operational enterprise value in value-based management system

    OpenAIRE

    Protasova, Yelizaveta V.

    2014-01-01

    Economic criteria of effectiveness and efficiency for the value-based management system are proposed. A methodological approach to the management of operational enterprise value on the basis of defined limit values of the influence factors is also proposed.

  5. Neutron multiplication factors as a function of temperature: a comparison of calculated and measured values for lattices using 233UO2-ThO2 fuel in graphite

    International Nuclear Information System (INIS)

    Newman, D.F.; Gore, B.F.

    1978-01-01

    Neutron multiplication factors calculated as a function of temperature for three graphite-moderated 233UO2-ThO2-fueled lattices are correlated with the values measured for these lattices in the high-temperature lattice test reactor (HTLTR). The correlation analysis is accomplished by fitting calculated values of k∞(T) to the measured values using two least-squares-fitted correlation coefficients: (a) a normalization factor and (b) a temperature coefficient bias factor. These correlations indicate the existence of a negative (nonconservative) bias in temperature coefficients of reactivity calculated using ENDF/B-IV cross-section data. Use of an alternate cross-section data set for thorium, which has a smaller resonance integral than the ENDF/B-IV data, improved the agreement between calculated and measured temperature coefficients of reactivity for the three experimental lattices. The results of the correlations are used to estimate the bias in the temperature coefficient of reactivity calculated for a lattice typical of fresh 233U recycle fuel for a high-temperature gas-cooled reactor (HTGR). This extrapolation to a lattice having a heavier fissile loading than the experimental lattices is accomplished using a sensitivity analysis of the estimated bias to the alternate thorium cross-section data used in calculations of k∞(T). The envelope of uncertainty expected to contain the actual values of the temperature coefficient of reactivity for the 233U-fueled HTGR lattice studied remains negative at 1600 K (1327 °C). Although a broader base of experimental data with improved accuracy is always desirable, the existing data base provided by the HTLTR experiments is judged to be adequate for the verification of neutronic calculations for the HTGR containing 233U fuel at its current state of development.
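
    One plausible reading of the two-coefficient correlation described here is a linear least-squares fit of the measured k∞(T) to the calculated values plus a temperature-proportional bias term. A sketch with invented data, not the HTLTR measurements:

      import numpy as np

      # Hypothetical measured and calculated k_inf at several temperatures (K).
      T = np.array([300.0, 600.0, 900.0, 1200.0, 1500.0])
      k_calc = np.array([1.105, 1.082, 1.060, 1.041, 1.024])
      k_meas = np.array([1.100, 1.079, 1.060, 1.044, 1.030])

      # Fit k_meas ~ N * k_calc + beta * (T - T[0]): N is a normalization
      # factor, beta a temperature-coefficient bias.
      A = np.column_stack([k_calc, T - T[0]])
      (N, beta), *_ = np.linalg.lstsq(A, k_meas, rcond=None)
      print(f"normalization N = {N:.4f}, bias beta = {beta:.2e} per K")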

  6. Establishing values-based leadership and value systems in healthcare organizations.

    Science.gov (United States)

    Graber, David R; Kilpatrick, Anne Osborne

    2008-01-01

    The importance of values in organizations is often discussed in management literature. Possessing strong or inspiring values is increasingly considered to be a key quality of successful leaders. Another common theme is that organizational values contribute to the culture and ultimate success of organizations. These conceptions or expectations are clearly applicable to healthcare organizations in the United States. However, healthcare organizations have unique structures and are subject to societal expectations that must be accommodated within an organizational values system. This article describes theoretical literature on organizational values. Cultural and religious influences on Americans and how they may influence expectations from healthcare providers are discussed. Organizational cultures and the training and socialization of the numerous professional groups in healthcare also add to the considerable heterogeneity of value systems within healthcare organizations. These contribute to another challenge confronting healthcare managers--competing or conflicting values within a unit or the entire organization. Organizations often fail to reward members who uphold or enact the organization's values, which can lead to lack of motivation and commitment to the organization. Four key elements of values-based leadership are presented for healthcare managers who seek to develop as values-based leaders. 1) Recognize your personal and professional values, 2) Determine what you expect from the larger organization and what you can implement within your sphere of influence, 3) Understand and incorporate the values of internal stakeholders, and 4) Commit to values-based leadership.

  7. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphics processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. For the commercial release of the GPU dose engine, its accuracy has to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified against absolute point dose measurements with an ion chamber and against film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantoms and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.

  8. SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT

    International Nuclear Information System (INIS)

    Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K

    2014-01-01

    Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) using histograms of pixel values in the simulation CT (sim-CT) and the CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT and CBCT images immediately before the treatment of 10 prostate cancer patients were acquired. Because of insufficient calibration of the pixel values in the CBCT, it is difficult to use them directly for dose calculation. The pixel values in the CBCT images were converted using an in-house program. A 7-field treatment plan (original plan) created on the sim-CT images was applied to the CBCT images and the dose distributions were re-calculated with the same monitor units (MUs). These prescription doses were compared with those of the original plans. Results: In the pixel value conversion in the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose, muscle and right femur were −10.78±34.60, 11.78±41.06, 29.49±36.99 and 0.14±31.15, respectively. In the calculated doses, the mean differences of prescription doses for the 7 fields were 4.13±0.95%, 0.34±0.86%, −0.05±0.55%, 1.35±0.98%, 1.77±0.56%, 0.89±0.69% and 1.69±0.71%, respectively, and as a whole, the difference of prescription dose was 1.54±0.4%. Conclusion: The dose calculation on the CBCT images achieves an accuracy of <2% by using this pixel value conversion program. This may enable implementation of efficient adaptive radiotherapy.
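
    Histogram matching is one common way to carry out the kind of CBCT-to-sim-CT pixel value conversion described; whether the in-house program works exactly this way is not stated, so the sketch below is an assumption-laden illustration on synthetic images.

      import numpy as np

      def match_histogram(cbct, simct):
          """Map CBCT pixel values onto the sim-CT intensity scale by
          matching their cumulative histograms."""
          src, src_counts = np.unique(cbct.ravel(), return_counts=True)
          ref, ref_counts = np.unique(simct.ravel(), return_counts=True)
          src_cdf = np.cumsum(src_counts) / cbct.size
          ref_cdf = np.cumsum(ref_counts) / simct.size
          mapped = np.interp(src_cdf, ref_cdf, ref)  # CDF -> reference value
          return np.interp(cbct, src, mapped)

      rng = np.random.default_rng(1)
      simct = rng.normal(0, 120, (64, 64))                   # toy sim-CT
      cbct = 0.7 * simct + 40 + rng.normal(0, 5, (64, 64))   # scaled, shifted
      print(abs(match_histogram(cbct, simct).mean() - simct.mean()) < 5)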

  9. Evaluation of ΔGsub(f) values for unstable compounds: a Fortran program for the calculation of ternary phase equilibria

    International Nuclear Information System (INIS)

    Throop, G.J.; Rogl, P.; Rudy, E.

    1978-01-01

    A Fortran IV program was set up for the calculation of phase equilibria and tieline distributions in ternary systems of the type transition metal-transition metal-nonmetal (interstitial type of solid solutions). The method offers the possibility of determining the thermodynamic values for unstable compounds through their influence upon ternary phase equilibria. The variation of the free enthalpy of formation of ternary solid solutions is calculated as a function of nonmetal content, thus describing the actual curvature of the phase boundaries. The integral and partial molar free enthalpies of formation of binary nonstoichiometric compounds and of phase solutions are expressed as analytical functions of the nonmetal content within their homogeneity range. The coefficients of these analytical expressions are obtained by use either of the Wagner-Schottky vacancy model or of polynomials second order in composition (parabolic approach). The free energy of formation ΔGf has been calculated for the systems Ti-C, Zr-C, and Ta-C. Calculations of the ternary phase equilibria yielded ΔGf values for the unstable compounds Ti2C at 1500 °C and Zr2C at 1775 °C of −22.3 and −22.7 kcal/g-atom metal, respectively. These values were used for the calculation of isothermal sections within the ternary systems Ti-Ta-C (at 1500 °C) and Zr-Ta-C (at 1775 °C). The ideal case of ternary phase solutions is extended to regular solutions. (author)

  10. Metric for Calculation of System Complexity based on its Connections

    Directory of Open Access Journals (Sweden)

    João Ricardo Braga de Paiva

    2017-02-01

    Full Text Available This paper proposes a methodology based on system connections to calculate its complexity. Two case studies are proposed: the dining Chinese philosophers' problem and a distribution center. Both are modeled using the theory of Discrete Event Systems, and simulations in different contexts were performed in order to measure their complexities. The obtained results show (i) the static complexity as a limiting factor for the dynamic complexity, (ii) the lowest cost in terms of complexity for each unit of measure of the system performance, and (iii) the output sensitivity to the input parameters. The associated complexity and performance measures aggregate knowledge about the system.

  11. A method for valuing architecture-based business transformation and measuring the value of solutions architecture

    OpenAIRE

    Slot, R.G.

    2010-01-01

    Enterprise and Solution Architecture are key in today's business environment. It is surprising that the foundation and business case for these activities are nonexistent; the financial value of these activities for the business is largely undetermined. To determine the business value of enterprise and solution architecture, this thesis shows how to measure and quantify, in business terms, the value of enterprise architecture-based business transformation and the value of solution architecture.

  12. A Novel Approach to Calculation of Reproducing Kernel on Infinite Interval and Applications to Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Jing Niu

    2013-01-01

    The reproducing kernel on an infinite interval is obtained concisely in polynomial form for the first time. Furthermore, as a particularly effective application of this method, we give an explicit representation formula for the calculation of the reproducing kernel in a reproducing kernel space with boundary value conditions.

  13. Calculation of RABBIT and Simulator Worth in the HFIR Hydraulic Tube and Comparison with Measured Values

    Energy Technology Data Exchange (ETDEWEB)

    Slater, CO

    2005-09-08

    To aid in the determination of reactivity worths for target materials in a proposed High Flux Isotope Reactor (HFIR) target configuration containing two additional hydraulic tubes, the worths of cadmium rabbits within the current hydraulic tube were calculated using a reference model of the HFIR and the MCNP5 computer code. The worths were compared to measured worths for both static and ejection experiments. After accounting for uncertainties in the calculations and the measurements, excellent agreement between the two was obtained. Computational and measurement limitations indicate that accurate estimation of worth is only possible when the worth exceeds 10 cents. The results indicate that MCNP5 and the reactor model can be used to predict reactivity worths of various samples when the expected perturbations are greater than 10 cents. The level of agreement between calculation and experiment indicates that the accuracy of such predictions would depend solely on the quality of the nuclear data for the materials to be irradiated. Transients that are approximated by 'piecewise static' computational models should likewise have an accuracy that depends solely on the quality of the nuclear data.

  14. How much is information worth? Calculating the economic value of a library’s services

    Directory of Open Access Journals (Sweden)

    Maira Nani França

    2017-01-01

    Full Text Available The benefits of the services offered by an information unit are characterized by the desired outcome, such as an effective response to a user. These values only become concrete data if expressed in monetary terms, that is, by finding out the cost of providing such services. Assigning value to each of the information services offered by university libraries, and measuring it, is one of the most challenging and least applied managerial tasks in information environments. This paper aims to assign economic value to the services provided by a Brazilian university library, developing a tool that will assist managers in decision-making and in the evaluation of information units in order to guarantee the quality of the services provided.

  15. Design of software for calculation of shielding based on various standards radiodiagnostic calculation

    International Nuclear Information System (INIS)

    Falero, B.; Bueno, P.; Chaves, M. A.; Ordiales, J. M.; Villafana, O.; Gonzalez, M. J.

    2013-01-01

    The aim of this study was to develop a software application that performs shielding calculations for radiology rooms depending on the type of equipment. The calculation is done by the user selecting among the method proposed in Guide 5.11, the methods of Reports 144 and 147, and the methodology given by the Portuguese Health Ministry. (Author)

  16. Evaluation of RSG-GAS Core Management Based on Burnup Calculation

    International Nuclear Information System (INIS)

    Lily Suparlina; Jati Susilo

    2009-01-01

    Evaluation of RSG-GAS Core Management Based on Burnup Calculation. Presently, U3Si2-Al dispersion fuel is used in the RSG-GAS core, which has passed its 60th core. At the beginning of each cycle the 5/1 fuel reshuffling pattern is used. Since the 52nd core, operators have not used the core fuel management computer code provided by the vendor for this activity; instead, they perform the calculation manually using Excel software. To check the accuracy of this calculation, core calculations were carried out using two 2-dimensional diffusion codes, Batan-2DIFF and SRAC. The beginning-of-cycle burnup fraction data were calculated from the 51st to the 60th core using Batan-EQUIL and SRAC COREBN. The analysis results showed that there is a disparity in the reactivity values of the two calculation methods. The 60th core critical position resulting from the Batan-2DIFF calculation gives a positive reactivity reduction of 1.84% Δk/k, while the manual calculation gives a positive reactivity increase of 2.19% Δk/k. The minimum shutdown margins for the stuck rod condition from the manual and Batan-3DIFF calculations are −3.35% Δk/k and −1.13% Δk/k, respectively, meaning that both values meet the safety criterion, i.e. < −0.5% Δk/k. The Excel program can be used for burnup calculation, but a core management code is needed to reach higher accuracy. (author)

  17. A summary of the sources of input parameter values for the Waste Isolation Pilot Plant final porosity surface calculations

    International Nuclear Information System (INIS)

    Butcher, B.M.

    1997-08-01

    A summary of the input parameter values used in final predictions of closure and waste densification in the Waste Isolation Pilot Plant disposal room is presented, along with supporting references. These predictions are referred to as the final porosity surface data and will be used for WIPP performance calculations supporting the Compliance Certification Application to be submitted to the U.S. Environmental Protection Agency. The report includes tables that list all of the input parameter values, references citing their source, and in some cases references to more complete descriptions of considerations leading to the selection of values.

  19. Discussion on calculation of disease severity index values from scales with unequal intervals

    Science.gov (United States)

    When estimating severity of disease, a disease interval (or category) scale comprises a number of categories of known numeric values – with plant disease this is generally the percent area with symptoms (e.g., the Horsfall-Barratt (H-B) scale). Studies in plant pathology and plant breeding often use...
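
    A common way to obtain index values from such a scale is to score each specimen into a category and average the category midpoints; the category bounds below are illustrative, not the H-B scale itself.

      # Category ranges (% area symptomatic) for an unequal-interval scale;
      # bounds here are made up for illustration.
      categories = [(0, 0), (0, 3), (3, 6), (6, 12), (12, 25), (25, 50), (50, 100)]

      def severity_index(counts):
          """Mean percent severity estimated from category counts using
          range midpoints."""
          midpoints = [(lo + hi) / 2 for lo, hi in categories]
          n = sum(counts)
          return sum(m * c for m, c in zip(midpoints, counts)) / n

      # 30 plants scored, most falling in the low categories.
      print(round(severity_index([5, 10, 8, 4, 2, 1, 0]), 2))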

  20. Finding the 'sweet spot' in value-based contracts.

    Science.gov (United States)

    Eggbeer, Bill; Sears, Kevin; Homer, Ken

    2015-08-01

    Health systems pursuing value-based contracts should address six important considerations: The definition of value. Contracting goals. Cost of implementation. Risk exposure. Contract structure and design. Essential contractual protections.

  1. Jet identification based on probability calculations using Bayes' theorem

    International Nuclear Information System (INIS)

    Jacobsson, C.; Joensson, L.; Lindgren, G.; Nyberg-Werther, M.

    1994-11-01

    The problem of identifying jets at LEP and HERA has been studied. Identification using jet energies and fragmentation properties was treated separately in order to investigate the degree of quark-gluon separation that can be achieved by either of these approaches. In the case of the fragmentation-based identification, a neural network was used, and a test of the dependence on the jet production process and the fragmentation model was done. Instead of working with the separation variables directly, these have been used to calculate probabilities of having a specific type of jet, according to Bayes' theorem. This offers a direct interpretation of the performance of the jet identification and provides a simple means of combining the results of the energy- and fragmentation-based identifications. (orig.)
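
    The Bayes step described amounts to combining class-conditional likelihoods of the separation variables with prior jet fractions; a minimal two-class sketch (likelihood and prior values invented for illustration):

      def posterior_quark(likelihood_q, likelihood_g, prior_q=0.5):
          """Bayes' theorem for two jet classes:
          P(quark | x) = p(x|q) P(q) / (p(x|q) P(q) + p(x|g) P(g))."""
          prior_g = 1.0 - prior_q
          num = likelihood_q * prior_q
          return num / (num + likelihood_g * prior_g)

      # Illustrative likelihoods from, e.g., a fragmentation-based network.
      print(round(posterior_quark(0.12, 0.04, prior_q=0.6), 3))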

  2. Three calculations of free cortisol versus measured values in the critically ill.

    Science.gov (United States)

    Molenaar, Nienke; Groeneveld, A B Johan; de Jong, Margriet F C

    2015-11-01

    To investigate the agreement between calculated free cortisol levels, according to the widely applied Coolens and adjusted Södergård equations, and measured levels in the critically ill. A prospective study in a mixed intensive care unit. We consecutively included 103 patients with treatment-insensitive hypotension in whom an adrenocorticotropic hormone (ACTH) test (250 μg) was performed. Serum total and free cortisol (equilibrium dialysis), corticosteroid-binding globulin and albumin were assessed. Free cortisol was estimated by the Coolens method (C) and two adjusted Södergård (S1 and S2) equations. Bland-Altman plots were made. The bias for absolute (t = 0, 30 and 60 min after ACTH injection) cortisol levels was 38, −24 and 41 nmol/L when the C, S1 and S2 equations were used, with 95% limits of agreement between −65-142, −182-135, and −57-139 nmol/L and percentage errors of 66%, 85%, and 64%, respectively. The bias for delta (peak-baseline) cortisol was 14, −31 and 16 nmol/L, with 95% limits of agreement between −80-108, −157-95, and −74-105 nmol/L, and percentage errors of 107%, 114%, and 100% for the C, S1 and S2 equations, respectively. Calculated free cortisol levels have too high a bias and imprecision to allow for acceptable use in the critically ill. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
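
    The agreement statistics quoted (bias, 95% limits of agreement, percentage error) follow the Bland-Altman recipe; a minimal sketch with toy cortisol pairs, using one common definition of percentage error (1.96 SD of the differences over the mean measured value):

      import numpy as np

      def bland_altman(calculated, measured):
          """Bias, 95% limits of agreement (bias +/- 1.96 SD of the
          differences) and percentage error."""
          calculated, measured = np.asarray(calculated), np.asarray(measured)
          diff = calculated - measured
          bias = diff.mean()
          half_width = 1.96 * diff.std(ddof=1)
          pct_error = 100.0 * half_width / measured.mean()
          return bias, (bias - half_width, bias + half_width), pct_error

      # Toy free-cortisol pairs in nmol/L (not study data).
      calc = [120.0, 260.0, 95.0, 310.0, 180.0]
      meas = [100.0, 230.0, 90.0, 280.0, 160.0]
      print(bland_altman(calc, meas))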

  3. Absorbed doses behind bones with MR image-based dose calculations for radiotherapy treatment planning.

    Science.gov (United States)

    Korhonen, Juha; Kapanen, Mika; Keyrilainen, Jani; Seppala, Tiina; Tuomikoski, Laura; Tenhunen, Mikko

    2013-01-01

    Magnetic resonance (MR) images are used increasingly in external radiotherapy target delineation because of their superior soft tissue contrast compared to computed tomography (CT) images. Nevertheless, radiotherapy treatment planning has traditionally been based on the use of CT images, due to restrictive features of MR images such as the lack of electron density information. This research aimed to measure absorbed radiation doses in material behind different bone parts, and to evaluate dose calculation errors in two pseudo-CT images; first, by assuming a single electron density value for the bones, and second, by converting the electron density values inside bones from T1/T2*-weighted MR image intensity values. A dedicated phantom was constructed using fresh deer bones and gelatine. The effect of different bone parts on the absorbed dose behind them was investigated with a single open field at 6 and 15 MV, measuring clinically detectable dose deviations with an ionization chamber matrix. Dose calculation deviations in a conversion-based pseudo-CT image and in a bulk density pseudo-CT image, where the relative electron density to water for the bones was set as 1.3, were quantified by comparing the calculation results with those obtained in a standard CT image by superposition and Monte Carlo algorithms. The calculations revealed that the applied bulk density pseudo-CT image causes deviations of up to 2.7% (6 MV) and 2.0% (15 MV) in the dose behind the examined bones. The corresponding values in the conversion-based pseudo-CT image were 1.3% (6 MV) and 1.0% (15 MV). The examinations illustrated that representing the heterogeneous femoral bone (cortex denser compared to core) by a bulk density for the whole bone causes dose deviations of up to 2% both behind the bone edge and behind the middle part of the bone. This study indicates that the decrease in absorbed dose is not dependent on the bone diameter for all types of bones. Thus

  4. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    International Nuclear Information System (INIS)

    Maneru, F; Gracia, M; Gallardo, N; Olasolo, J; Fuentemilla, N; Bragado, L; Martin-Albina, M; Lozares, S; Pellejero, S; Miquelez, S; Rubio, A; Otal, A

    2015-01-01

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90 Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the previous simulation of the treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with the local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to its high dependence on contouring. Small lesion size, hot spots in healthy tissue and blurred limits can strongly affect the dose distribution in tumors. Extra work includes: export and import of images and other DICOM files, creating and calculating a dummy plan of external radiotherapy, the convolution calculation and evaluation of the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to carry it out the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other
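
    The core of the VSV method is a 3D convolution of the cumulated-activity map with a voxel S-value kernel. The sketch below shows that step with a made-up kernel standing in for published 90Y voxel S-values (such as the tables at www.medphys.it); it is an illustration of the operation, not the authors' software.

      import numpy as np
      from scipy.signal import fftconvolve

      # Toy uptake region in an activity map (arbitrary units).
      activity = np.zeros((32, 32, 32))
      activity[12:20, 12:20, 12:20] = 1.0

      # Placeholder 5x5x5 kernel decreasing with distance from the center;
      # a real calculation would load tabulated 90Y voxel S-values here.
      r = np.indices((5, 5, 5)) - 2
      dist2 = (r ** 2).sum(axis=0).astype(float)
      kernel = 1.0 / (1.0 + dist2)
      kernel /= kernel.sum()            # placeholder normalization

      dose = fftconvolve(activity, kernel, mode="same")
      print(dose.shape, float(dose.max()))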

  5. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    Energy Technology Data Exchange (ETDEWEB)

    Maneru, F; Gracia, M; Gallardo, N; Olasolo, J; Fuentemilla, N; Bragado, L; Martin-Albina, M; Lozares, S; Pellejero, S; Miquelez, S; Rubio, A [Complejo Hospitalario de Navarra, Pamplona, Navarra (Spain); Otal, A [Hospital Clinica Benidorm, Benidorm, Alicante (Spain)

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with {sup 90}Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the previous simulation of the treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to its high dependence on contouring: small lesion size, hot spots in healthy tissue and blurred lesion limits can strongly affect the dose distribution in tumors. The extra work includes exporting and importing images and other DICOM files, creating and calculating a dummy external radiotherapy plan, performing the convolution, and evaluating the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable procedure for the patient. The total process is short enough to carry it out on the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than

  6. A drainage data-based calculation method for coalbed permeability

    International Nuclear Information System (INIS)

    Lai, Feng-peng; Li, Zhi-ping; Fu, Ying-kun; Yang, Zhi-hao

    2013-01-01

    This paper establishes a drainage data-based calculation method for coalbed permeability. The method combines material balance and production equations. We use a material balance equation to derive the average pressure of the coalbed during production. The dimensionless water production index is introduced into the production equation for the water production stage. In the subsequent stage, in which both gas and water are produced, the gas-water production ratio is introduced to eliminate the effect of flush-flow radius, skin factor, and other uncertain factors in the calculation of coalbed methane permeability. By derivation, the relationship between permeability and surface cumulative liquid production can be described as a single-variable cubic equation. For ten wells in the southern Qinshui coalbed methane field, the calculated permeability initially declines and then increases. The results show an exponential relationship between permeability and cumulative water production. The relationship between permeability and cumulative gas production is represented by a linear curve, and that between permeability and surface cumulative liquid production by a cubic polynomial curve. The regression result for permeability and surface cumulative liquid production agrees with the theoretical mathematical relationship. (paper)
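
    The cubic relationship reported above is straightforward to reproduce by regression once permeability has been calculated from drainage data. The sketch below fits a single-variable cubic polynomial to permeability/production pairs; the numbers are invented for illustration only.

```python
import numpy as np

# Invented data: surface cumulative liquid production q (m^3) and
# calculated permeability k (mD) showing a decline-then-rise trend.
q = np.array([100.0, 300.0, 600.0, 1000.0, 1500.0, 2100.0])
k = np.array([2.1, 1.6, 1.3, 1.4, 1.8, 2.5])

# Fit k = a*q^3 + b*q^2 + c*q + d and report the goodness of fit.
coeffs = np.polyfit(q, k, deg=3)
k_fit = np.polyval(coeffs, q)
r2 = 1.0 - np.sum((k - k_fit) ** 2) / np.sum((k - k.mean()) ** 2)
print(coeffs, r2)
```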

  7. A modified earned value management using activity based costing

    Directory of Open Access Journals (Sweden)

    Vahid Aminian

    2016-12-01

    Full Text Available Earned Value Management (EVM) has been a well-known methodology since the 1960s, when the US Department of Defense proposed a standard method to measure project performance. This system relies on a set of often straightforward metrics to measure and evaluate the general health of a project. These metrics serve as early warning signals to detect project problems in time, or to exploit project opportunities. A key aspect of EVM is to estimate the completion cost of a project by considering both cost and schedule performance indices. However, good cost and schedule performance indices do not necessarily guarantee the cost effectiveness of the project, because in most project-based organizations overhead costs constitute a significant proportion of total costs, yet EVM indices are usually calculated without the so-called overhead costs. This paper first seeks to remedy this problem by proposing a practical procedure for allocating overhead costs in project-based organizations. The traditional EVM indices are then revised by considering the allocated overhead costs. Finally, a case study demonstrates the applicability of the proposed method for a real-life project.
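
    To make the revision concrete, the sketch below computes the standard EVM indices with actual cost augmented by an allocated overhead share. The function names, the single-driver allocation, and all figures are illustrative assumptions, not the paper's procedure.

```python
def allocate_overhead(total_overhead, project_driver, all_drivers_total):
    """Allocate a share of organizational overhead to one project in
    proportion to an activity driver (e.g. direct labour hours)."""
    return total_overhead * project_driver / all_drivers_total

def evm_indices(ev, ac_direct, pv, bac, overhead_share=0.0):
    """EVM indices with overhead-inclusive actual cost (AC)."""
    ac = ac_direct + overhead_share
    cpi = ev / ac    # cost performance index
    spi = ev / pv    # schedule performance index
    eac = bac / cpi  # estimate at completion
    return cpi, spi, eac

oh = allocate_overhead(total_overhead=200_000, project_driver=1_500,
                       all_drivers_total=12_000)  # = 25,000
print(evm_indices(ev=120_000, ac_direct=110_000, pv=130_000,
                  bac=500_000, overhead_share=oh))
```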

  8. QED Based Calculation of the Fine Structure Constant

    Energy Technology Data Exchange (ETDEWEB)

    Lestone, John Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-13

    Quantum electrodynamics is complex and its associated mathematics can appear overwhelming for those not trained in this field. Here, semi-classical approaches are used to obtain a more intuitive feel for what causes electrostatics and the anomalous magnetic moment of the electron. These intuitive arguments lead to a possible answer to the question of the nature of charge. Virtual photons, with a reduced wavelength of λ, are assumed to interact with isolated electrons with a cross section of πλ². This interaction is assumed to generate time-reversed virtual photons that are capable of seeking out and interacting with other electrons. This exchange of virtual photons between particles is assumed to generate and define the strength of electromagnetism. With the inclusion of near-field effects, the model presented here gives a fine structure constant of ~1/137 and an anomalous magnetic moment of the electron of ~0.00116. These calculations support the possibility that near-field corrections are the key to understanding the numerical value of the dimensionless fine structure constant.

  9. DASS: efficient discovery and p-value calculation of substructures in unordered data.

    Science.gov (United States)

    Hollunder, Jens; Friedel, Maik; Beyer, Andreas; Workman, Christopher T; Wilhelm, Thomas

    2007-01-01

    Pattern identification in biological sequence data is one of the main objectives of bioinformatics research. However, few methods are available for detecting patterns (substructures) in unordered datasets. Data mining algorithms mainly developed outside the realm of bioinformatics have been adapted for that purpose, but typically do not determine the statistical significance of the identified patterns. Moreover, these algorithms do not exploit the often modular structure of biological data. We present the algorithm DASS (Discovery of All Significant Substructures) that first identifies all substructures in unordered data (DASS(Sub)) in a manner that is especially efficient for modular data. In addition, DASS calculates the statistical significance of the identified substructures, for sets with at most one element of each type (DASS(P(set))), or for sets with multiple occurrence of elements (DASS(P(mset))). The power and versatility of DASS is demonstrated by four examples: combinations of protein domains in multi-domain proteins, combinations of proteins in protein complexes (protein subcomplexes), combinations of transcription factor target sites in promoter regions and evolutionarily conserved protein interaction subnetworks. The program code and additional data are available at http://www.fli-leibniz.de/tsb/DASS
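
    As a toy illustration of substructure discovery in unordered data (not the DASS algorithm itself, and without its p-value machinery), the sketch below enumerates every small subset of elements that co-occurs in at least a minimum number of input sets, e.g. candidate protein subcomplexes.

```python
from itertools import combinations
from collections import Counter

def shared_subsets(records, min_support, max_size=3):
    """Enumerate element subsets occurring in at least `min_support`
    of the given sets; a simple stand-in for substructure discovery."""
    counts = Counter()
    for rec in records:
        items = sorted(rec)
        for size in range(2, max_size + 1):
            for sub in combinations(items, size):
                counts[sub] += 1
    return {sub: c for sub, c in counts.items() if c >= min_support}

# Toy "protein complexes" sharing the subcomplex {A, B}.
complexes = [{"A", "B", "C"}, {"A", "B", "D"}, {"A", "B", "C", "E"}]
print(shared_subsets(complexes, min_support=2))
```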

  10. Data Qualification Report: Calculated Porosity and Porosity-Derived Values for Lithostratigraphic Units for use on the Yucca Mountain Project

    Energy Technology Data Exchange (ETDEWEB)

    P. Sanchez

    2001-05-30

    The qualification is being completed in accordance with the Data Qualification Plan DQP-NBS-GS-000006, Rev. 00 (CRWMS M&O 2001). The purpose of this data qualification activity is to evaluate for qualification the unqualified developed input and porosity output included in Data Tracking Number (DTN) M09910POROCALC.000. The main output of the analyses documented in DTN M09910POROCALC.000 is the calculated total porosity and effective porosity for 40 Yucca Mountain Project boreholes. The porosity data are used as input to Analysis Model Report (AMR) 10040, ''Rock Properties Model'' (MDL-NBS-GS-000004, Rev. 00), Interim Change Notice [ICN] 02 (CRWMS M&O 2000b). The output from the rock properties model is used as input to numerical physical-process modeling within the context of a relationship developed in the AMR between hydraulic conductivity, bound water and zeolitic zones for use in the unsaturated zone model. In accordance with procedure AP-3.15Q, the porosity output is not used in the direct calculation of Principal Factors for post-closure safety or disruptive events. The original source for DTN M09910POROCALC.000 is a Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) report, ''Combined Porosity from Geophysical Logs'' (CRWMS M&O 1999a and hereafter referred to as Rael 1999). That report recalculated porosity results for both the historical boreholes covered in Nelson (1996), and the modern boreholes reported in CRWMS M&O (1996a,b). The porosity computations in Rael (1999) are based on density-porosity mathematical relationships requiring various input parameters, including bulk density, matrix density and air and/or fluid density and volumetric water content. The main output is computed total porosity and effective porosity reported on a foot-by-foot basis for each borehole, although volumetric water content is derived from neutron data as an interim output. This qualification
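
    The density-porosity relationship referred to above is the standard log-analysis formula; a minimal sketch is shown below. The bound-water correction used to derive effective porosity is a simplified assumption here, and the input densities are invented.

```python
def total_porosity(rho_bulk, rho_matrix, rho_fluid):
    """Density porosity: phi_t = (rho_ma - rho_b) / (rho_ma - rho_f)."""
    return (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)

def effective_porosity(phi_total, bound_water_fraction):
    """Effective porosity as total porosity minus the bound-water
    fraction (a simplification of the report's workflow)."""
    return max(phi_total - bound_water_fraction, 0.0)

# One foot-by-foot sample: bulk 2.20, matrix 2.55, fluid 1.00 g/cm^3.
phi_t = total_porosity(2.20, 2.55, 1.00)       # ~0.226
print(phi_t, effective_porosity(phi_t, 0.05))  # ~0.226 ~0.176
```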

  11. Data Qualification Report: Calculated Porosity and Porosity-Derived Values for Lithostratigraphic Units for use on the Yucca Mountain Project

    International Nuclear Information System (INIS)

    P. Sanchez

    2001-01-01

    The qualification is being completed in accordance with the Data Qualification Plan DQP-NBS-GS-000006, Rev. 00 (CRWMS M and O 2001). The purpose of this data qualification activity is to evaluate for qualification the unqualified developed input and porosity output included in Data Tracking Number (DTN) M09910POROCALC.000. The main output of the analyses documented in DTN M09910POROCALC.000 is the calculated total porosity and effective porosity for 40 Yucca Mountain Project boreholes. The porosity data are used as input to Analysis Model Report (AMR) 10040, ''Rock Properties Model'' (MDL-NBS-GS-000004, Rev. 00), Interim Change Notice [ICN] 02 (CRWMS M and O 2000b). The output from the rock properties model is used as input to numerical physical-process modeling within the context of a relationship developed in the AMR between hydraulic conductivity, bound water and zeolitic zones for use in the unsaturated zone model. In accordance with procedure AP-3.15Q, the porosity output is not used in the direct calculation of Principal Factors for post-closure safety or disruptive events. The original source for DTN M09910POROCALC.000 is a Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) report, ''Combined Porosity from Geophysical Logs'' (CRWMS M and O 1999a and hereafter referred to as Rael 1999). That report recalculated porosity results for both the historical boreholes covered in Nelson (1996), and the modern boreholes reported in CRWMS M and O (1996a,b). The porosity computations in Rael (1999) are based on density-porosity mathematical relationships requiring various input parameters, including bulk density, matrix density and air and/or fluid density and volumetric water content. The main output is computed total porosity and effective porosity reported on a foot-by-foot basis for each borehole, although volumetric water content is derived from neutron data as an interim output. This qualification report uses

  12. Goal based mesh adaptivity for fixed source radiation transport calculations

    International Nuclear Information System (INIS)

    Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.S.; Goffin, M.A.; Merton, S.R.; Warner, P.

    2013-01-01

    Highlights: ► Derives an anisotropic goal based error measure for shielding problems. ► Reduces the error in the detector response by optimizing the finite element mesh. ► Anisotropic adaptivity captures material interfaces using fewer elements than AMR. ► A new residual based on the numerical scheme chosen forms the error measure. ► The error measure also combines the forward and adjoint metrics in a novel way. - Abstract: In this paper, the application of goal based error measures for anisotropic adaptivity applied to shielding problems in which a detector is present is explored. Goal based adaptivity is important when the response of a detector is required to ensure that dose limits are adhered to. To achieve this, a dual (adjoint) problem is solved, which expresses the neutron transport equation in terms of the response variable, in this case the detector response. The methods presented can be applied to general finite element solvers; however, the derivation of the residuals is dependent on the underlying finite element scheme, which is also discussed in this paper. Once error metrics for the forward and adjoint solutions have been formed, they are combined using a novel approach: the two metrics are combined by forming the minimum ellipsoid that covers both error metrics, rather than taking the maximum ellipsoid that is contained within the metrics. Another novel element of this paper is the construction of the residual, which is used to form the goal based error metrics and is calculated from the subgrid scale correction inherent in the underlying spatial discretisation employed

  13. Value-Based Communication Preservation for Mobile Robots

    National Research Council Canada - National Science Library

    Powers, Matthew; Balch, Tucker

    2006-01-01

    Value-Based Communication Preservation (VBCP) is a behavior-based, computationally efficient approach to maintaining line-of-sight radiofrequency communication between members of robot teams in the context of other tasks...

  14. Value and Vision-based Methodology in Integrated Design

    DEFF Research Database (Denmark)

    Tollestrup, Christian

    on empirical data from workshop where the Value and Vision-based methodology has been taught. The research approach chosen for this investigation is Action Research, where the researcher plays an active role in generating the data and gains a deeper understanding of the investigated phenomena. The result...... of this thesis is the value transformation from an explicit set of values to a product concept using a vision based concept development methodology based on the Pyramid Model (Lerdahl, 2001) in a design team context. The aim of this thesis is to examine how the process of value transformation is occurring within...... is divided in three; the systemic unfolding of the Value and Vision-based methodology, the structured presentation of practical implementation of the methodology and finally the analysis and conclusion regarding the value transformation, phenomena and learning aspects of the methodology....

  15. THE IMPLEMENTATION OF STRATEGIC MANAGEMENT ACCOUNTING BASED ON VALUE CHAIN ANALYSIS: VALUE CHAIN ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Mustafa KIRLI

    2011-01-01

    Full Text Available To compete successfully in today's highly competitive global environment, companies have made customer satisfaction an overriding priority. They have also adopted new management approaches, changed their manufacturing systems and invested in new technologies. Strategic management accounting examines the decision-making linked with business operations and the strategic work of financial administration that supports it. Strategic management accounting is a theory and practice of accounting that looks at an organization's cost position, cost advantages and product differentiation in order to make market decisions. The value chain is a systematic approach to examining the development of competitive advantage. The chain consists of a series of activities that create and build value. Value chain analysis refers to a structured method of analyzing the effects of all core activities on the cost and/or differentiation of the value chain. With the growing division of labour and the global dispersion of the production of components, systemic competitiveness, and thus value chain analysis, have become increasingly important. Value chain accounting is the combination of value chain analysis and accounting theory. It is an important part of value chain management and a further development of strategic management accounting: a new approach to accounting that combines the theories of value chain management, supply chain management, management accounting and information technology. From this analysis of value chain theory and strategic management accounting theory, the paper proposes an accounting management framework based on value chain analysis, called value chain accounting.

  16. Ethics education for health professionals: a values based approach.

    Science.gov (United States)

    Godbold, Rosemary; Lees, Amanda

    2013-11-01

    It is now widely accepted that ethics is an essential part of educating health professionals. Despite a clear mandate to educators, there are differing approaches, in particular, how and where ethics is positioned in training programmes, underpinning philosophies and optimal modes of assessment. This paper explores varying practices and argues for a values based approach to ethics education. It then explores the possibility of using a web-based technology, the Values Exchange, to facilitate a values based approach. It uses the findings of a small scale study to signal the potential of the Values Exchange for engaging, meaningful and applied ethics education. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Values based practice: a framework for thinking with.

    Science.gov (United States)

    Mohanna, Kay

    2017-07-01

    Values are those principles that govern behaviours, and values-based practice has been described as a theory and skills base for effective healthcare decision-making where different (and hence potentially conflicting) values are in play. The emphasis is on good process rather than pre-set right outcomes, aiming to achieve balanced decision-making. In this article we will consider the utility of this model by looking at leadership development, a current area of much interest and investment in healthcare. Copeland points out that 'values based leadership behaviors are styles with a moral, authentic and ethical dimension', important qualities in healthcare decision-making.

  18. Calculating acid-base and oxygenation status during COPD exacerbation using mathematically arterialised venous blood

    DEFF Research Database (Denmark)

    Rees, Stephen Edward; Rychwicka-Kielek, Beate A; Andersen, Bjarne F

    2012-01-01

    Background: Repeated arterial puncture is painful. A mathematical method exists for transforming peripheral venous pH, PCO2 and PO2 to arterial values, eliminating the need for arterial sampling. This study evaluates this method for monitoring acid-base and oxygenation status during admission...... for exacerbation of chronic obstructive pulmonary disease (COPD). Methods: Simultaneous arterial and peripheral venous blood samples were analysed. Venous values were used to calculate arterial pH, PCO2 and PO2, and these were compared to measured values using Bland-Altman analysis and scatter plots. Calculated values of PO2......H, PCO2 and PO2 were 7.432±0.047, 6.8±1.7 kPa and 9.2±1.5 kPa, respectively. Calculated and measured arterial pH and PCO2 agreed well, the differences having small bias and SD (0.000±0.022 pH, -0.06±0.50 kPa PCO2), significantly better than venous blood alone. Calculated PO2 obeyed the clinical rules...
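
    The agreement statistics quoted above (bias ± SD of the calculated-minus-measured differences) are what a Bland-Altman analysis reports. A minimal sketch, with invented pH pairs rather than the study's data:

```python
import numpy as np

def bland_altman(measured, calculated):
    """Bias and SD of the differences (calculated - measured), as used
    to compare calculated arterial values with arterial samples."""
    d = np.asarray(calculated, dtype=float) - np.asarray(measured, dtype=float)
    return d.mean(), d.std(ddof=1)

# Illustrative pH pairs only.
bias, sd = bland_altman([7.41, 7.38, 7.45], [7.41, 7.39, 7.44])
print(bias, sd)
```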

  19. Volume-based geometric modeling for radiation transport calculations

    International Nuclear Information System (INIS)

    Li, Z.; Williamson, J.F.

    1992-01-01

    Accurate theoretical characterization of radiation fields is a valuable tool in the design of complex systems, such as linac heads and intracavitary applicators, and for generation of basic dose calculation data that is inaccessible to experimental measurement. Both Monte Carlo and deterministic solutions to such problems require a system for accurately modeling complex 3-D geometries that supports ray tracing, point and segment classification, and 2-D graphical representation. Previous combinatorial approaches to solid modeling, which involve describing complex structures as set-theoretic combinations of simple objects, are limited in their ease of use and place unrealistic constraints on the geometric relations between objects, such as the exclusion of common boundaries. A new approach to volume-based solid modeling has been developed which is based upon topologically consistent definitions of the boundary, interior, and exterior of a region. From these definitions, FORTRAN union, intersection, and difference routines have been developed that allow involuted and deeply nested structures to be described as set-theoretic combinations of ellipsoids, elliptic cylinders, prisms, cones, and planes that accommodate shared boundaries. Line segments between adjacent intersections on a trajectory are assigned to the appropriate region by a novel sorting algorithm that generalizes upon Siddon's approach. Two 2-D graphic display tools were developed to help debug a given geometric model. In this paper, the mathematical basis of our system is described, it is contrasted with other approaches, and examples are discussed
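
    The set-theoretic combination of primitives described above can be sketched with a few classes supporting point classification. This toy (in Python rather than the paper's FORTRAN) treats regions as closed sets, which only gestures at the paper's topologically consistent boundary definitions.

```python
class Sphere:
    def __init__(self, cx, cy, cz, r):
        self.c, self.r = (cx, cy, cz), r
    def inside(self, p):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, self.c)) <= self.r ** 2

class Union:
    def __init__(self, a, b): self.a, self.b = a, b
    def inside(self, p): return self.a.inside(p) or self.b.inside(p)

class Intersection:
    def __init__(self, a, b): self.a, self.b = a, b
    def inside(self, p): return self.a.inside(p) and self.b.inside(p)

class Difference:
    def __init__(self, a, b): self.a, self.b = a, b
    def inside(self, p): return self.a.inside(p) and not self.b.inside(p)

# A shell: a sphere with a smaller concentric sphere removed.
shell = Difference(Sphere(0, 0, 0, 2.0), Sphere(0, 0, 0, 1.0))
print(shell.inside((1.5, 0, 0)), shell.inside((0.5, 0, 0)))  # True False
```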

  20. What is the value of Values Based Recruitment for nurse education programmes?

    Science.gov (United States)

    Groothuizen, Johanna E; Callwood, Alison; Gallagher, Ann

    2018-05-01

    A discussion of issues associated with Values Based Recruitment (VBR) for nurse education programmes. Values Based Recruitment is a mandatory element in selection processes of students for Higher Education healthcare courses in England, including all programmes across nursing. Students are selected on the basis that their individual values align with those presented in the Constitution of the National Health Service. However, there are issues associated with the use of values as selection criteria that have been insufficiently addressed. These are discussed. Discussion paper. This article is based on documents published on the website of the executive body responsible for the implementation of a policy regarding VBR in Higher Education Institutions up until June 2017 and our evaluation of the conceptualisation of VBR, underpinned by contemporary theory and literature. Values Based Recruitment influences who is accepted onto a nurse education programme, but there has been limited critical evaluation regarding the effectiveness of employing values as selection criteria. Values are subject to interpretation and evidence regarding whether or how VBR will improve practice and care is lacking. The issues discussed in this article show that Higher Education Institutions offering nursing courses, whether in England or in other countries, should be critical and reflective regarding the implementation of VBR methods. We call for a debate regarding the meaning and implications of VBR and further research regarding its validity and effectiveness. © 2017 John Wiley & Sons Ltd.

  1. NHS constitution values for values-based recruitment: a virtue ethics perspective.

    Science.gov (United States)

    Groothuizen, Johanna Elise; Callwood, Alison; Gallagher, Ann

    2018-05-17

    Values-based recruitment is used in England to select healthcare staff, trainees and students on the basis that their values align with those stated in the Constitution of the UK National Health Service (NHS). However, it is unclear whether the extensive body of existing literature within the field of moral philosophy was taken into account when developing these values. Although most values have a long historical tradition, a tendency to assume that they have just been invented, and to approach them uncritically, exists within the healthcare sector. Reflection is necessary. We are of the opinion that selected virtue ethics writings, which are underpinned by historical literature as well as practical analysis of the healthcare professions, provide a helpful framework for evaluation of the NHS Constitution values, to determine whether gaps exist and improvements can be made. Based on this evaluation, we argue that the definitions of certain NHS Constitution values are ambiguous. In addition to this, we argue that 'integrity' and 'practical wisdom', two important concepts in the virtue ethics literature, are not sufficiently represented within the NHS Constitution values. We believe that the NHS Constitution values could be strengthened by providing clearer definitions, and by integrating 'integrity' and 'practical wisdom'. This will benefit values-based recruitment strategies. Should healthcare policy-makers in other countries wish to develop a similar values-based recruitment framework, we advise that they proceed reflectively, and take previously published virtue ethics literature into consideration. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  2. Value-based resource management: a model for best value nursing care.

    Science.gov (United States)

    Caspers, Barbara A; Pickard, Beth

    2013-01-01

    With the health care environment shifting to a value-based payment system, Catholic Health Initiatives nursing leadership spearheaded an initiative with 14 hospitals to establish best nursing care at a lower cost. The implementation of technology-enabled business processes at point of care led to a new model for best value nursing care: Value-Based Resource Management. The new model integrates clinical patient data from the electronic medical record and embeds the new information in care team workflows for actionable real-time decision support and predictive forecasting. The participating hospitals reported increased patient satisfaction and cost savings in the reduction of overtime and improvement in length of stay management. New data generated by the initiative on nursing hours and cost by patient and by population (Medicare severity diagnosis-related groups), and patient health status outcomes across the acute care continuum expanded business intelligence for a value-based population health system.

  3. S values at voxels level for 188Re and 90Y calculated with the MCNP-4C code

    International Nuclear Information System (INIS)

    Coca, M.A.; Torres, L.A.; Cornejo, N.; Martin, G.

    2008-01-01

    Full text: The MIRD formalism at the voxel level has been suggested as an optional methodology for performing internal radiation dosimetry calculations in internal radiation therapy in Nuclear Medicine. Voxel S values for 90Y, 131I, 32P, 99mTc and 89Sr have been published for different voxel sizes. Recently, 188Re has been proposed as a promising radionuclide for therapy due to its physical features and availability from generators. The main objective of this work was to estimate the voxel S values for 188Re in cubical geometry, using the MCNP-4C code for the simulations of radiation transport and energy deposition. The mean absorbed dose to target voxels per radioactive decay in a source voxel was estimated and reported for 188Re and 90Y. A comparison of the voxel S values computed with the MCNP code against the data reported in MIRD Pamphlet 17 for 90Y was performed in order to evaluate our results. (author)

  4. An Approach for Calculating Student-Centered Value in Education - A Link between Quality, Efficiency, and the Learning Experience in the Health Professions.

    Directory of Open Access Journals (Sweden)

    Peter Nicklen

    Full Text Available Health professional education is experiencing a cultural shift towards student-centered education. Although we are now challenging our traditional training methods, our methods for evaluating the impact of the training on the learner remain largely unchanged. What is not typically measured is student-centered value: whether the education was 'worth' what the learner paid. The primary aim of this study was to apply a method of calculating student-centered value in the context of a change in teaching methods within a health professional program. This study took place over the first semester of the third year of the Bachelor of Physiotherapy at Monash University, Victoria, Australia, in 2014. The entire third year cohort (n = 78) was invited to participate. A survey-based design was used to collect the appropriate data. A blended learning model was implemented; consequently, students were only required to attend campus three days per week, with the remaining two days comprising online learning. This was compared to the previous year's format, a campus-based face-to-face approach in which students attended campus five days per week, with the primary outcome being value to student. Value to student incorporates user costs associated with transportation and equipment, the amount of time saved, the price paid and the perceived gross benefit. Of the 78 students invited to participate, 76 completed the post-unit survey (non-participation rate 2.6%). Based on value to student, the blended learning approach provided a $1,314.93 net benefit to students. Another significant finding was that the perceived gross benefit for the blended learning approach was $4,014.84, compared to $3,651.72 for the campus-based face-to-face approach, indicating that students would pay more for the blended learning approach. This paper successfully applied a novel method of calculating student-centered value. This is the first step in validating the value to student outcome. Measuring economic value

  5. An Approach for Calculating Student-Centered Value in Education - A Link between Quality, Efficiency, and the Learning Experience in the Health Professions.

    Science.gov (United States)

    Nicklen, Peter; Rivers, George; Ooi, Caryn; Ilic, Dragan; Reeves, Scott; Walsh, Kieran; Maloney, Stephen

    2016-01-01

    Health professional education is experiencing a cultural shift towards student-centered education. Although we are now challenging our traditional training methods, our methods for evaluating the impact of the training on the learner remain largely unchanged. What is not typically measured is student-centered value: whether the education was 'worth' what the learner paid. The primary aim of this study was to apply a method of calculating student-centered value in the context of a change in teaching methods within a health professional program. This study took place over the first semester of the third year of the Bachelor of Physiotherapy at Monash University, Victoria, Australia, in 2014. The entire third year cohort (n = 78) was invited to participate. A survey-based design was used to collect the appropriate data. A blended learning model was implemented; consequently, students were only required to attend campus three days per week, with the remaining two days comprising online learning. This was compared to the previous year's format, a campus-based face-to-face approach in which students attended campus five days per week, with the primary outcome being value to student. Value to student incorporates user costs associated with transportation and equipment, the amount of time saved, the price paid and the perceived gross benefit. Of the 78 students invited to participate, 76 completed the post-unit survey (non-participation rate 2.6%). Based on value to student, the blended learning approach provided a $1,314.93 net benefit to students. Another significant finding was that the perceived gross benefit for the blended learning approach was $4,014.84, compared to $3,651.72 for the campus-based face-to-face approach, indicating that students would pay more for the blended learning approach. This paper successfully applied a novel method of calculating student-centered value. This is the first step in validating the value to student outcome. Measuring economic value to the student may
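
    A minimal sketch of the value-to-student calculation as described above; the component names are taken from the abstract, but the combination shown (and all figures) are assumptions, and the paper's exact formula may weight terms differently.

```python
def value_to_student(gross_benefit, price_paid, user_costs, time_saved_value):
    """Net benefit = perceived gross benefit - price paid - user costs
    (transport, equipment) + monetised value of time saved."""
    return gross_benefit - price_paid - user_costs + time_saved_value

# Illustrative figures only (AUD), not the study's inputs.
print(value_to_student(gross_benefit=4000.00, price_paid=2500.00,
                       user_costs=400.00, time_saved_value=200.00))  # 1300.0
```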

  6. Calculation Scheme Based on a Weighted Primitive: Application to Image Processing Transforms

    Directory of Open Access Journals (Sweden)

    Gregorio de Miguel Casado

    2007-01-01

    Full Text Available This paper presents a method to improve the calculation of functions that demand especially large amounts of computing resources. The method is based on the choice of a weighted primitive which enables the calculation of function values within the scope of a recursive operation. At the design level, the method proves suitable for developing a processor which achieves a satisfying trade-off between time delay, area costs, and stability. The method is particularly suitable for the mathematical transforms used in signal processing applications. A generic calculation scheme is developed for the discrete fast Fourier transform (DFT) and then applied to other integral transforms such as the discrete Hartley transform (DHT), the discrete cosine transform (DCT), and the discrete sine transform (DST). Some comparisons with other well-known proposals are also provided.

  7. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
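
    A small sketch of the 'override ratio' comparison as we read it: the dose metric on each image is normalised by the same metric on a bulk-density-assigned version of that image, and the CBCT ratio is judged against the planning-CT ratio. The values below are invented.

```python
def override_ratio_error(d_cbct, d_cbct_bulk, d_ct, d_ct_bulk):
    """Fractional error of a CBCT-based dose metric under the
    assumption that D / D_bulk is constant for a given patient."""
    return (d_cbct / d_cbct_bulk) / (d_ct / d_ct_bulk) - 1.0

err = override_ratio_error(d_cbct=60.2, d_cbct_bulk=59.5,
                           d_ct=60.0, d_ct_bulk=59.6)
print(err)  # ~0.005, i.e. a 0.5% dose metric error
```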

  8. Multi-robot Cooperation Behavior Decision Based on Psychological Values

    Directory of Open Access Journals (Sweden)

    Jian JIANG

    2014-01-01

    Full Text Available Methods based on psychological concepts have proved to be successful tools for human-robot interaction, but related research in multi-robot cooperation has remained scarce until recent studies. To address this problem, a decision-making mechanism based on psychological values is presented as the basis of multi-robot cooperation. Robots derive psychological values from estimations of the environment, their teammates and themselves. The mapping relationship between psychological values and cooperation tendency threshold values is established with an artificial neural network. Robots can make decisions on the basis of these threshold values in cooperation scenes. Experiments show that the multi-robot cooperation method presented in the paper not only ensures the rationality of the robots' decision-making, but also its speed.

  9. Negotiation of values as driver in community-based PD

    DEFF Research Database (Denmark)

    Gronvall, Erik; Malmborg, Lone; Messeter, Jörn

    2016-01-01

    Community-based PD projects are often characterized by the meeting of conflicting values among stakeholder groups, but in research there is no uncontested account of the relation between design and conflicting values. Through analysis of three community-based PD cases in Denmark and South Africa......, this paper identifies and discusses challenges for community-based PD that exist in these settings based on the emergence of contrasting and often conflicting values among participants and stakeholders. Discussions of participation are shaped through two theoretical perspectives: the notion of thinging...... and design things; and different accounts of values in design. Inspired by the concept of design things, and as a consequence of the need for continuous negotiation of values observed in all three cases, we suggest the concept of thinging as fruitful for creating productive agonistic spaces with a stronger...

  10. Scenario based approach to structural damage detection and its value in a risk and reliability perspective

    DEFF Research Database (Denmark)

    Hovgaard, Mads Knude; Hansen, Jannick Balleby; Brincker, Rune

    2013-01-01

    A scenario- and vibration-based structural damage detection method is demonstrated through simulation. The method is Finite Element (FE) based. The value of the monitoring is calculated using structural reliability theory. A high cycle fatigue crack propagation model is assumed as the damage mecha......- and without monitoring. Monte Carlo Sampling (MCS) is used to estimate the probabilities, and the tower of an onshore NREL 5MW wind turbine is given as a calculation case......A scenario- and vibration-based structural damage detection method is demonstrated through simulation. The method is Finite Element (FE) based. The value of the monitoring is calculated using structural reliability theory. A high cycle fatigue crack propagation model is assumed as the damage

  11. HbA1c values calculated from blood glucose levels using truncated Fourier series and implementation in standard SQL database language.

    Science.gov (United States)

    Temsch, W; Luger, A; Riedl, M

    2008-01-01

    This article presents a mathematical model to calculate HbA1c values based on self-measured blood glucose and past HbA1c levels, thereby enabling patients to monitor diabetes therapy between scheduled checkups. This method could help physicians to make treatment decisions if implemented in a system where glucose data are transferred to a remote server. The method, however, cannot replace HbA1c measurements; past HbA1c values are needed to gauge the method. The mathematical model of HbA1c formation was developed based on biochemical principles. Unlike an existing HbA1c formula, the new model respects the decreasing contribution of older glucose levels to current HbA1c values. About 12 standard SQL statements embedded in a PHP program were used to perform the Fourier transform. Regression analysis was used to gauge the results against previous HbA1c values. The method can be readily implemented in any SQL database. The predicted HbA1c values thus obtained were in accordance with measured values. They also matched the results of the HbA1c formula in the elevated range. By contrast, the formula was too "optimistic" in the range of better glycemic control. Individual analysis of two subjects improved the accuracy of the values and reflected the bias introduced by different glucometers and individual measurement habits.
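
    The key idea above, that older glucose values contribute less to the current HbA1c, can be sketched with an exponentially decaying weight in place of the authors' truncated Fourier series. The decay constant below is an assumption, and the final linear mapping uses the published ADAG regression (Nathan et al.) rather than anything from this paper.

```python
import numpy as np

def estimated_hba1c(daily_glucose_mmol_l, half_life_days=35.0):
    """Weight daily mean glucose by an exponentially decaying factor
    (newest value last), then map the weighted mean to HbA1c (%) with
    the ADAG relation AG = 1.59 * A1C - 2.59."""
    g = np.asarray(daily_glucose_mmol_l, dtype=float)
    age = np.arange(len(g))[::-1]  # days since each measurement
    w = np.exp(-np.log(2) * age / half_life_days)
    mean_g = np.sum(w * g) / np.sum(w)
    return (mean_g + 2.59) / 1.59

print(estimated_hba1c([8.2, 7.9, 9.1, 8.5, 7.4]))
```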

  12. Fragment-based quantum mechanical calculation of protein-protein binding affinities.

    Science.gov (United States)

    Wang, Yaqian; Liu, Jinfeng; Li, Jinjin; He, Xiao

    2018-04-29

    The electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method has been successfully utilized for efficient linear-scaling quantum mechanical (QM) calculation of protein energies. In this work, we applied the EE-GMFCC method to the calculation of the binding affinity of the Endonuclease colicin-immunity protein complex. The binding free energy changes between the wild-type and mutants of the complex calculated by EE-GMFCC are in good agreement with experimental results. The correlation coefficient (R) between the predicted binding energy changes and experimental values is 0.906 at the B3LYP/6-31G*-D level, based on the snapshot whose binding affinity is closest to the average result from the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) calculation. The inclusion of QM effects is important for accurate prediction of protein-protein binding affinities. Moreover, self-consistent calculation of the PB solvation energy is required for accurate calculations of protein-protein binding free energies. This study demonstrates that the EE-GMFCC method is capable of providing reliable predictions of relative binding affinities for protein-protein complexes. © 2018 Wiley Periodicals, Inc.

  13. Generation of input parameters for OSPM calculations. Sensitivity analysis of a method based on a questionnaire

    Energy Technology Data Exchange (ETDEWEB)

    Vignati, E.; Hertel, O.; Berkowicz, R. [National Environmental Research Inst., Dept. of Atmospheric Enviroment (Denmark); Raaschou-Nielsen, O. [Danish Cancer Society, Division of Cancer Epidemiology (Denmark)

    1997-05-01

    The method for generating the input data for calculations with OSPM is presented in this report. The described method, which is based on information provided by a questionnaire, will be used for model calculations of long-term exposure for a large number of children in connection with an epidemiological study. A test of the calculation method has been performed at a few locations for which detailed measurements of air pollution, meteorological data and traffic were available. Comparisons between measured and calculated concentrations were made for hourly, monthly and yearly values. Besides the measured concentrations, the test results were compared to results obtained with the optimal street configuration data and measured traffic. The main conclusions drawn from this investigation are: (1) The calculation method works satisfactorily for long-term averages, whereas the uncertainties are high when short-term averages are considered. (2) The street width is one of the most crucial input parameters for the calculation of street pollution levels for both short- and long-term averages. Using H.C. Andersens Boulevard as an example, it was shown that estimating the street width from the traffic amount can lead to large overestimation of the concentration levels (in this case 50% for NO{sub x} and 30% for NO{sub 2}). (3) The street orientation and geometry are important for the prediction of short-term concentrations, but this importance diminishes for longer-term averages. (4) The uncertainties in diurnal traffic profiles can influence the accuracy of short-term averages, but are less important for long-term averages. The correlation between modelled and measured concentrations is good when the actual background concentrations are replaced with the generated values. Even though extreme situations are difficult to reproduce with this method, the agreement between the yearly averaged modelled and measured concentrations is very good. (LN) 20 refs.

  14. Method of characteristics - Based sensitivity calculations for international PWR benchmark

    International Nuclear Information System (INIS)

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-01-01

    A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method on the basis of the MOC code MCCG3D has been developed. Sensitivity calculations of the fission intensity for the international PWR benchmark are performed. (authors)

  15. ESR concept paper on value-based radiology.

    Science.gov (United States)

    2017-10-01

    The European Society of Radiology (ESR) established a Working Group on Value-Based Imaging (VBI WG) in August 2016 in response to developments in European healthcare systems in general, and the trend within radiology to move from volume- to value-based practice in particular. The value-based healthcare (VBH) concept defines "value" as health outcomes achieved for patients relative to the costs of achieving them. Within this framework, value measurements start at the beginning of therapy; the whole diagnostic process is disregarded, and is considered only if it is the cause of errors or complications. Making the case for a new, multidisciplinary organisation of healthcare delivery centred on the patient, this paper establishes the diagnosis of disease as a first outcome in the interrelated activities of the healthcare chain. Metrics are proposed for measuring the quality of radiologists' diagnoses and the various ways in which radiologists provide value to patients, other medical specialists and healthcare systems at large. The ESR strongly believes value-based radiology (VBR) is a necessary complement to existing VBH concepts. The Society is determined to establish a holistic VBR programme to help European radiologists deal with changes in the evolution from volume- to value-based evaluation of radiological activities. Main Messages • Value-based healthcare defines value as patient's outcome over costs. • The VBH framework disregards the diagnosis as an outcome. • VBH considers diagnosis only if wrong or a cause of complications. • A correct diagnosis is the first outcome that matters to patients. • Metrics to measure radiologists' impacts on patient outcomes are key. • The value provided by radiology is multifaceted, going beyond exam volumes.

  16. Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.

    Science.gov (United States)

    Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R

    2000-07-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.

  17. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    International Nuclear Information System (INIS)

    Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M

    2006-01-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error

  18. Adult echocardiographic nomograms: overview, critical review and creation of a software for automatic, fast and easy calculation of normal values.

    Science.gov (United States)

    Cantinotti, Massimiliano; Giordano, Raffaele; Paterni, Marco; Saura, Daniel; Scalese, Marco; Franchi, Eliana; Assanta, Nadia; Koestenberg, Martin; Dulgheru, Raluca; Sugimoto, Tadafumi; Bernard, Anne; Caballero, Luis; Lancellotti, Patrizio

    2017-12-01

    There is growing interest in normal adult echocardiographic values, and the introduction of new deformation imaging and 3D parameters poses the issue of normative data. A multitude of nomograms has recently been published; however, the data are often fragmentary and difficult to find, and their strengths/limitations have never been evaluated. Our aims were: (I) to provide a review of current echocardiographic nomograms; (II) to generate a tool for easy and fast access to these data. A literature search was conducted accessing the National Library of Medicine using the keywords: 2D/3D echocardiography, strain, left/right ventricle, atrial, mitral/tricuspid valve, aorta, reference values/nomograms/normal values. Adding the following keywords, the results were further refined: range/intervals, myocardial velocity, strain rate and speckle tracking. Forty-one published studies were included. Our study reveals that sufficient normative data exist for several 2D/3D parameters; however, a few limitations still persist. For some basic parameters (i.e., mitral/tricuspid/pulmonary valves, great vessels) and for 3D valves, data are scarce. There is a lack of studies evaluating ethnic differences. Data have generally been expressed as mean values normalised for gender and age, instead of computing models incorporating different variables (age/gender/body sizes) to calculate z scores. To summarize the results, a software tool (Echocardio-Normal Values) that automatically calculates ranges of normality for a broad range of echocardiographic measurements according to age/gender/weight/height has been generated. We provide an up-to-date and critical review of the strengths/limitations of current adult echocardiographic nomograms. Furthermore, we generated a software tool for automatic, easy and fast access to multiple echocardiographic normative data.
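
    The z-score computation that the paper advocates (and that such a software tool automates) reduces to the sketch below once a nomogram supplies the predicted mean and SD for a subject's age/gender/body size; the numbers are illustrative.

```python
def z_score(measured, predicted_mean, predicted_sd):
    """Z score of an echocardiographic measurement against a nomogram's
    predicted mean and SD for the subject's age/gender/body size."""
    return (measured - predicted_mean) / predicted_sd

# e.g. a measurement of 5.6 cm where the nomogram predicts 5.0 +/- 0.4 cm
print(z_score(5.6, 5.0, 0.4))  # 1.5 -> within +/-2 SD, i.e. normal range
```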

  19. UAV-based NDVI calculation over grassland: An alternative approach

    Science.gov (United States)

    Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc

    2016-04-01

    The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers, such as MODIS, with moderate ground resolution of at best 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small and light instruments are particularly well suited for mounting on airborne unmanned aerial vehicles (UAV) used for monitoring services, reaching ground sampling resolutions in the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. Therefore, we propose an alternative, substantially cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum by removal of the internal infrared filter; a mounted optical filter additionally blocks all wavelengths below 700 nm. (ii) A Ricoh GR in RGB configuration using two optical filters for blocking wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons: first, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola). All imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
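
    The index itself is simple once the two cameras' co-registered reflectance maps are available; a minimal sketch with an epsilon guard against division by zero (array inputs assumed, e.g. from the PIX4D reflectance maps):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel from
    co-registered NIR and red reflectance maps."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy reflectances: vegetation (high NIR) vs bare soil.
print(ndvi([0.45, 0.30], [0.08, 0.22]))  # ~[0.70, 0.15]
```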

  20. Health technology assessment, value-based decision making, and innovation.

    Science.gov (United States)

    Henshall, Chris; Schuller, Tara

    2013-10-01

    Identifying treatments that offer value and value for money is becoming increasingly important, with interest in how health technology assessment (HTA) and decision makers can take appropriate account of what is of value to patients and to society, and in the relationship between innovation and assessments of value. This study summarizes points from an Health Technology Assessment International (HTAi) Policy Forum discussion, drawing on presentations, discussions among attendees, and background papers. Various perspectives on value were considered; most place patient health at the core of value. Wider elements of value comprise other benefits for: patients; caregivers; the health and social care systems; and society. Most decision-making systems seek to take account of similar elements of value, although they are assessed and combined in different ways. Judgment in decisions remains important and cannot be replaced by mathematical approaches. There was discussion of the value of innovation and of the effects of value assessments on innovation. Discussion also included moving toward "progressive health system decision making," an ongoing process whereby evidence-based decisions on use would be made at various stages in the technology lifecycle. Five actions are identified: (i) development of a general framework for the definition and assessment of value; development by HTA/coverage bodies and regulators of (ii) disease-specific guidance and (iii) further joint scientific advice for industry on demonstrating value; (iv) development of a framework for progressive licensing, usage, and reimbursement; and (v) promoting work to better adapt HTA, coverage, and procurement approaches to medical devices.

  1. Redefining Health: Implication for Value-Based Healthcare Reform

    OpenAIRE

    Putera, Ikhwanuliman

    2017-01-01

    The definition of health comprises three domains, namely physical, mental, and social health, that should be prioritized in delivering healthcare. The emergence of chronic diseases in aging populations has been a barrier to the realization of a healthier society. The value-based healthcare concept seems in line with the true health objective: increasing value. Value is created from health outcomes which matter to patients relative to the cost of achieving those outcomes. The health outcomes should ...

  2. CT-based dose calculations and in vivo dosimetry for lung cancer treatment

    International Nuclear Information System (INIS)

    Essers, M.; Lanson, J.H.; Leunens, G.; Schnabel, T.; Mijnheer, B.J.

    1995-01-01

    Reliable CT-based dose calculations and dosimetric quality control are essential for the introduction of new conformal techniques for the treatment of lung cancer. The first aim of this study was therefore to check the accuracy of dose calculations based on CT-densities, using a simple inhomogeneity correction model, for lung cancer patients irradiated with an AP-PA treatment technique. Second, the use of diodes for absolute exit dose measurements and an Electronic Portal Imaging Device (EPID) for relative transmission dose verification was investigated for 22 and 12 patients, respectively. The measured dose values were compared with calculations performed using our 3-dimensional treatment planning system, using CT-densities or assuming the patient to be water-equivalent. Using water-equivalent calculations, the actual exit dose value under lung was, on average, underestimated by 30%, with an overall spread of 10% (1 SD). Using inhomogeneity corrections, the exit dose was, on average, overestimated by 4%, with an overall spread of 6% (1 SD). Only 2% of the average deviation was due to the inhomogeneity correction model. An uncertainty in exit dose calculation of 2.5% (1 SD) could be explained by organ motion, resulting from the ventilatory or cardiac cycle. The most important reason for the large overall spread was, however, the uncertainty involved in performing point measurements: about 4% (1 SD). This difference resulted from the systematic and random deviation in patient set-up and therefore in diode position with respect to patient anatomy. Transmission and exit dose values agreed with an average difference of 1.1%. Transmission dose profiles also showed good agreement with calculated exit dose profiles. Our study shows that, for this treatment technique, the dose in the thorax region is quite accurately predicted using CT-based dose calculations, even if a simple inhomogeneity correction model is used. Point detectors such as diodes are not suitable for exit

  3. Challenges for bio-based products in sustainable value chains

    NARCIS (Netherlands)

    Cardon, L.; Lin, J.W.; De Groote, M.; Ragaert, K.; Kopecka, J.A.; Koster, R.P.

    2011-01-01

    This work concerns studies related to strategic development of products in which bio-based plastics are or will be applied, referred to as bio-based products. The studies cover (1) current and potential benefits of bio-based products in extended value chains including activities after end-of-life of

  4. Multi-step Monte Carlo calculations applied to nuclear reactor instrumentation - source definition and renormalization to physical values

    Energy Technology Data Exchange (ETDEWEB)

    Radulovic, Vladimir; Barbot, Loic; Fourmentel, Damien; Villard, Jean-Francois [CEA, DEN, DER, Instrumentation Sensors and Dosimetry Laboratory, Cadarache, F-13108 St Paul-Lez-Durance, (France); Snoj, Luka; Zerovnik, Gasper [Jozef Stefan Institute, Reactor Physics Department, Jamova cesta 39, SI-1000 Ljubljana, (Slovenia); Trkov, Andrej [IAEA, Vienna International Centre, PO Box 100, A-1400 Vienna, (Austria)

    2015-07-01

    were recently irradiated in the Jozef Stefan Institute TRIGA Mark II reactor in Ljubljana, Slovenia, and provides recommendations on how they can be overcome. The paper concludes with a discussion on the renormalization of the results from the second step calculations, to obtain accurate physical values. (authors)

  5. Value-Based Delivery of Education: MOOCs as Messengers

    Science.gov (United States)

    Gilfoil, David M.; Focht, Jeffrey W.

    2015-01-01

    Value-based delivery of healthcare has been discussed in the literature for almost a decade. The concept focuses on the patient and defines value as the improvement of patient outcomes divided by healthcare costs. Further refinements, called the Triple Aim model, focus on improving patient outcomes, reducing treatment costs, and improving patient…

  6. Influence of magnification on the calculated value of left ventricular ejection fraction and volumes using quantitative gated perfusion SPECT

    International Nuclear Information System (INIS)

    Nunez, M.; Beretta, M.; Alonso, O.; Alvarez, B.; Canepa, J.; Mut, F.

    2002-01-01

    Aim: To compare left ventricular ejection fraction (LVEF), end-diastolic volumes (EDV) and end-systolic volumes (ESV) measured by quantitative gated SPECT (QGSPECT) in studies acquired with and without a magnification factor (zoom). Material and Methods: We studied 30 consecutive patients (17 men, ages 61±14 years) referred for myocardial perfusion evaluation with a 2-day protocol. Studies were performed after injection of 925 MBq (25 mCi) of 99mTc-MIBI in the resting state. Gated SPECT was first acquired using a x2 zoom factor and immediately repeated with x1 zoom (no magnification), using a 64x64 matrix and 8 frames/cardiac cycle. Patients with arrhythmia were not included in the investigation. According to the median EDV calculated with the x2 zoom acquisition, the population was further divided into two sub-groups based on the size of the LV cavity. Average LVEF, EDV, ESV and the difference between values (delta) were then calculated for the total population and for each sub-group (a and b). Results: For the total population, results are expressed. Pearson correlation showed r=0.954 between LVEF with and without zoom (p<0.0001), but linear regression analysis did not fit a specific model (p=0.18). Median EDV with zoom was 92.5 ml, allowing separation of 15 cases with EDV above (a) and 15 below that value (b). Results for both sub-groups are presented. Conclusion: Calculated LVEF is higher with no zoom, at the expense of decreasing both EDV and ESV. Although differences were very significant for all parameters, ESV changes were especially relevant with no zoom, particularly in patients with smaller hearts. Although good correlation was found between LVEF with and without zoom, no specific correction factor was found to convert one value into the other. The magnification factor should be kept constant in gated SPECT if calculated LVEF values from QGSPECT are expected to be reliable, and validation of the method using different zoom factors should be considered

  7. When do we need more data? A primer on calculating the value of information for applied ecologists

    Science.gov (United States)

    Canessa, Stefano; Guillera-Arroita, Gurutzeta; Lahoz-Monfort, José J.; Southwell, Darren M; Armstrong, Doug P.; Chadès, Iadine; Lacy, Robert C; Converse, Sarah J.

    2015-01-01

    Applied ecologists continually advocate further research, under the assumption that obtaining more information will lead to better decisions. Value of information (VoI) analysis can be used to quantify how additional information may improve management outcomes: despite its potential, this method is still underused in environmental decision-making. We provide a primer on how to calculate the VoI and assess whether reducing uncertainty will change a decision. Our aim is to facilitate the application of VoI by managers who are not familiar with decision-analytic principles and notation, by increasing the technical accessibility of the tool.
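
    In this framework, the expected value of perfect information (EVPI) measures how much management outcomes could improve if uncertainty were resolved before acting. A minimal sketch of the calculation (the action/hypothesis table below is invented for illustration, not taken from the paper):

```python
import numpy as np

# Rows: candidate management actions; columns: competing hypotheses about the system.
# Entries: expected management outcome (e.g., probability of population persistence).
outcomes = np.array([
    [0.70, 0.20],   # action A
    [0.40, 0.60],   # action B
])
p = np.array([0.5, 0.5])  # prior probability of each hypothesis

ev_current = (outcomes @ p).max()              # best action under uncertainty
ev_perfect = (outcomes.max(axis=0) * p).sum()  # learn the truth first, then act
evpi = ev_perfect - ev_current                 # worth of resolving uncertainty
print(f"EV now: {ev_current:.2f}, EV with perfect info: {ev_perfect:.2f}, EVPI: {evpi:.2f}")
```

    If the EVPI (here 0.15) is small relative to the cost of obtaining the data, collecting more information is unlikely to change the decision.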

  8. Validation of KENO-based criticality calculations at Rocky Flats

    International Nuclear Information System (INIS)

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P.

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG&G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k_eff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation

  9. Glass viscosity calculation based on a global statistical modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Fluegel, Alex

    2007-02-01

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often overestimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation; this applies, among others, to data by Lakatos et al. (1972) and the recently published "High temperature glass melt property database for process modeling" by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.

  10. Knowledge-based dynamic network safety calculations. Wissensbasierte dynamische Netzsicherheitsberechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Kulicke, B [Inst. fuer Hochspannungstechnik und Starkstromanlagen, Berlin (Germany); Schlegel, S [Inst. fuer Hochspannungstechnik und Starkstromanlagen, Berlin (Germany)

    1993-06-28

    An important part of network operation management is the estimation and maintenance of the security of supply. So far, control personnel have only been supported by static network analyses and safety calculations. The authors describe an expert system for dynamic network safety calculations, which is coupled to a transputer-based real-time simulation program. They also introduce the system concept and the most important functions of the expert system. (orig.)

  11. Hybrid Electric Vehicle Control Strategy Based on Power Loss Calculations

    OpenAIRE

    Boyd, Steven J

    2006-01-01

    Defining an operation strategy for a Split Parallel Architecture (SPA) Hybrid Electric Vehicle (HEV) is accomplished through calculating powertrain component losses. The results of these calculations define how the vehicle can decrease fuel consumption while maintaining low vehicle emissions. For a HEV, simply operating the vehicle's engine in its regions of high efficiency does not guarantee the most efficient vehicle operation. The results presented are meant only to define a literal str...

  12. Application of γ field theory based calculation method to the monitoring of mine nuclear radiation environment

    International Nuclear Information System (INIS)

    Du Yanjun; Liu Qingcheng; Liu Hongzhang; Qin Guoxiu

    2009-01-01

    In order to assess the feasibility of calculating mine radiation dose based on γ field theory, this paper calculates the γ radiation dose of a mine by means of a γ field theory-based calculation method. The results show that the calculated radiation dose has a small error and can be used to monitor the nuclear radiation environment of mines. (authors)

  13. Determination of plasma osmolality and agreement between measured and calculated values in healthy adult Hispaniolan Amazon parrots (Amazona ventralis).

    Science.gov (United States)

    Acierno, Mark J; Mitchell, Mark A; Freeman, Diana M; Schuster, Patricia J; Guzman, David Sanchez-Migallon; Tully, Thomas N

    2009-09-01

    To determine plasma osmolality in healthy adult Hispaniolan Amazon parrots (Amazona ventralis) and validate osmolality equations in these parrots. 20 healthy adult Hispaniolan Amazon parrots. A blood sample (0.5 mL) was collected from the right jugular vein of each parrot and placed into a lithium heparin microtainer tube. Samples were centrifuged, and plasma was harvested and frozen at -30 degrees C. Samples were thawed, and plasma osmolality was measured in duplicate with a freezing-point depression osmometer. The mean value was calculated for the 2 osmolality measurements. Plasma osmolality values were normally distributed, with a mean +/- SD of 326.0 +/- 6.878 mOsm/kg. The equations (2 x [Na(+) + K(+)]) + (glucose/18), which resulted in bias of 2.3333 mOsm/kg and limits of agreement of -7.0940 to 11.7606 mOsm/kg, and (2 x [Na(+) + K(+)]) + (uric acid concentration/16.8) + (glucose concentration/18), which resulted in bias of 5.8117 mOsm/kg and limits of agreement of -14.6640 to 3.0406 mOsm/kg, yielded calculated values that were in good agreement with the measured osmolality. IV administration of large amounts of hypotonic fluids can have catastrophic consequences. Osmolality of the plasma from parrots in this study was significantly higher than that of commercially available prepackaged fluids. Therefore, such fluids should be used with caution in Hispaniolan Amazon parrots as well as other psittacines. Additional studies are needed to determine whether the estimation of osmolality has the same clinical value in psittacines as it does in other animals.
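
    The two estimation equations quoted above are straightforward to apply. A minimal sketch, assuming Na+ and K+ are given in mmol/L and glucose and uric acid in mg/dL (the divisors 18 and 16.8 are the usual mg/dL-to-mmol/L conversions for those molecules); the sample inputs are hypothetical, not values from the study:

```python
def osmolality_basic(na: float, k: float, glucose: float) -> float:
    """(2 x [Na+ + K+]) + glucose/18, in mOsm/kg."""
    return 2.0 * (na + k) + glucose / 18.0

def osmolality_with_uric_acid(na: float, k: float, glucose: float, uric_acid: float) -> float:
    """(2 x [Na+ + K+]) + uric acid/16.8 + glucose/18, in mOsm/kg."""
    return 2.0 * (na + k) + uric_acid / 16.8 + glucose / 18.0

# Hypothetical avian plasma chemistry values:
print(osmolality_basic(na=155.0, k=3.0, glucose=300.0))                          # ~332.7
print(osmolality_with_uric_acid(na=155.0, k=3.0, glucose=300.0, uric_acid=5.0))  # ~333.0
```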

  14. Fundamentals of Value Based Management in practice of Quality management

    Directory of Open Access Journals (Sweden)

    Katarzyna Szczepańska

    2010-03-01

    The article discusses the practical aspects of using the theory of value management in quality management. It presents the essence of value based management (VBM) as a background for reflection on its links with quality management. The coherence of the concept in practice has been reviewed in the author's own studies. The finding that there is no sufficiently structured set of economic-financial metrics to measure the value of the quality management system points to a gap between theoretical and practical considerations in managing the value of the company's quality management system.

  15. Calculation of generalized Lorenz-Mie theory based on the localized beam models

    International Nuclear Information System (INIS)

    Jia, Xiaowei; Shen, Jianqi; Yu, Haitao

    2017-01-01

    It has been proved that localized approximation (LA) is the most efficient way to evaluate the beam shape coefficients (BSCs) in generalized Lorenz-Mie theory (GLMT). The numerical calculation of relevant physical quantities is a challenge for its practical applications due to the limits of computer resources. The study presents an improved algorithm for the GLMT calculation based on the localized beam models. The BSCs and the angular functions are calculated by multiplying them with pre-factors so as to keep their values in a reasonable range. The algorithm is primarily developed for the original localized approximation (OLA) and is further extended to the modified localized approximation (MLA). Numerical results show that the algorithm is efficient, reliable and robust. - Highlights: • In this work, we introduce the proper pre-factors to the Bessel functions, BSCs and the angular functions. With this improvement, all the quantities involved in the numerical calculation are scaled into a reasonable range of values so that the algorithm can be used for computing the physical quantities of the GLMT. • The algorithm is not only an improvement in numerical technique; it also implies that the set of basic functions involved in electromagnetic scattering (and sonic scattering) can be reasonably chosen. • The algorithms of the GLMT computations introduced in previous references suggested that the order of the n and m sums is interchanged. In this work, the sum of azimuth modes is performed for each partial wave. This offers the possibility to speed up the computation, since the sum of partial waves can be optimized according to the illumination conditions and the sum of azimuth modes can be truncated by selecting a criterion discussed in . • Numerical results show that the algorithm is efficient, reliable and robust, even in very exotic cases. The algorithm presented in this paper is based on the original localized approximation and it can also be used for the

  16. Method to Calculate the Financial Value of the Commercial Brands. Case “Cubita y diseño”, Brand Ownership of the CIMEX Group Companies

    Directory of Open Access Journals (Sweden)

    María Esperanza González–del Foyo

    2015-12-01

    Intangible assets constitute a source of income generation for companies, and among them the brand is one of the most important and has the greatest commercial impact. However, the formal, scientifically grounded determination of the value of this type of asset is not well known in Cuban companies, as is the case of the CIMEX Group of Companies, which holds a wide portfolio of brands. The paper presents the theoretical foundations of brands, with emphasis on their definition from a financial perspective, together with a diagnosis of the past and present of studies of this kind carried out in the Group of Companies. Several international models that follow this financial approach to calculating the value of a brand are presented, and finally a method is proposed that, adapted to the company's characteristics and the environment in which it operates, allows its brand value to be determined

  17. A value-based taxonomy of improvement approaches in healthcare.

    Science.gov (United States)

    Colldén, Christian; Gremyr, Ida; Hellström, Andreas; Sporraeus, Daniella

    2017-06-19

    Purpose The concept of value is becoming increasingly fashionable in healthcare and various improvement approaches (IAs) have been introduced with the aim of increasing value. The purpose of this paper is to construct a taxonomy that supports the management of parallel IAs in healthcare. Design/methodology/approach Based on previous research, this paper proposes a taxonomy that includes the dimensions of view on value and organizational focus; three contemporary IAs - lean, value-based healthcare, and patient-centered care - are related to the taxonomy. An illustrative qualitative case study in the context of psychiatric (psychosis) care is then presented that contains data from 23 interviews and focuses on the value concept, IAs, and the proposed taxonomy. Findings Respondents recognized the dimensions of the proposed taxonomy and indicated its usefulness as support for choosing and combining different IAs into a coherent management model, and for facilitating dialog about IAs. The findings also suggested that the view of value as "health outcomes" is widespread, but healthcare professionals are less likely than managers to also view value as a process. Originality/value The conceptual contribution of this paper is to delineate some important characteristics of IAs in relation to the emerging "value era". It also highlights the coexistence of different IAs in healthcare management practice. A taxonomy is proposed that can help managers choose, adapt, and combine IAs in local management models.

  18. Redefining Health: Implication for Value-Based Healthcare Reform.

    Science.gov (United States)

    Putera, Ikhwanuliman

    2017-03-02

    The definition of health comprises three domains, namely physical, mental, and social health, that should be prioritized in delivering healthcare. The emergence of chronic diseases in aging populations has been a barrier to the realization of a healthier society. The value-based healthcare concept seems in line with the true health objective: increasing value. Value is created from health outcomes which matter to patients relative to the cost of achieving those outcomes. The health outcomes should include all domains of health in a full cycle of care. To implement value-based healthcare, transformations need to be done by both health providers and patients: establishing true health outcomes, strengthening primary care, building integrated health systems, implementing appropriate health payment schemes that promote value and reduce moral hazards, enabling health information technology, and creating a policy that fits well with a community.

  19. An elastic elements calculation in the construction of electrical connectors based on flexible printed cables

    Directory of Open Access Journals (Sweden)

    Yefimenko A. A.

    2016-05-01

    connectors. We obtained an analytic dependence that can be used to find the Young's modulus for a known value of hardness on the Shore A scale. We gave examples of calculating the amount of compression in the elastomeric liner needed to provide reliable contact at specified values of the transition (contact) resistance, for removable and permanent connectors based on flexible printed cables.
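
    The record does not reproduce the paper's analytic dependence, but a widely cited empirical relation of this kind is Gent's formula for elastomers; the sketch below uses it purely as an illustration of converting Shore A hardness to Young's modulus, not as the authors' result:

```python
def youngs_modulus_from_shore_a(shore_a: float) -> float:
    """Gent's empirical relation: Young's modulus E in MPa from Shore A hardness.
    Reasonable for typical elastomers, roughly in the 20-80 Shore A range."""
    return 0.0981 * (56.0 + 7.62336 * shore_a) / (0.137505 * (254.0 - 2.54 * shore_a))

for hardness in (30, 50, 70):
    print(f"Shore A {hardness}: E ~ {youngs_modulus_from_shore_a(hardness):.2f} MPa")
```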

  20. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine

    International Nuclear Information System (INIS)

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

    Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6 MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, −2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3 mm criteria. The mean and standard deviation of pixels passing
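
    For readers unfamiliar with the 3%/3 mm criterion mentioned above, the gamma index combines a dose-difference tolerance with a distance-to-agreement tolerance; a point passes when gamma <= 1. The 1D sketch below is our illustration of the standard global-normalization formulation, not the authors' code:

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_eval, dose_crit=0.03, dta_mm=3.0):
    """Global gamma analysis in 1D: for each reference point, minimize the
    combined dose-difference / distance-to-agreement metric over all
    evaluated points. Dose tolerance is relative to the reference maximum."""
    d_norm = dose_crit * dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / d_norm   # dose-difference term
        dx = (x - xi) / dta_mm           # distance term
        gamma[i] = np.sqrt(dd**2 + dx**2).min()
    return gamma

x = np.linspace(0.0, 100.0, 101)                  # positions in mm
ref = 100.0 * np.exp(-((x - 50.0) / 20.0) ** 2)   # reference dose profile
ev = 100.0 * np.exp(-((x - 51.0) / 20.0) ** 2)    # evaluated profile, 1 mm shift
g = gamma_index_1d(x, ref, ev)
print(f"gamma pass rate (3%/3 mm): {100.0 * (g <= 1.0).mean():.1f}%")
```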

  1. Electric field calculations in brain stimulation based on finite elements

    DEFF Research Database (Denmark)

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-01-01

    The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation...... of accurate head models to the integration of the models in the numerical calculations. These problems substantially limit a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized...... the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh...

  2. A New Displacement-based Approach to Calculate Stress Intensity Factors With the Boundary Element Method

    Directory of Open Access Journals (Sweden)

    Marco Gonzalez

    The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs). The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages but the use of numerical methods has become very popular. Several schemes for numerical SIF calculations have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM) in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.

  3. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich theories of uncertainty modelling and analysis in geographic information science.
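
    As a concrete illustration of grid-based volume calculation against an analytic reference, the sketch below applies trapezoidal weights to a synthetic Gauss surface, for which the true volume is known in closed form. This is our own minimal example; the paper's TDR/SDR implementations and error analysis are more elaborate:

```python
import numpy as np
from math import erf, pi

def volume_trapezoid(z: np.ndarray, dx: float, dy: float) -> float:
    """Volume above the z = 0 datum of a regular-grid DEM via the trapezoidal
    double rule: interior nodes weigh 1, edges 1/2, corners 1/4."""
    w_x = np.ones(z.shape[1]); w_x[[0, -1]] = 0.5
    w_y = np.ones(z.shape[0]); w_y[[0, -1]] = 0.5
    return float(dx * dy * (w_y @ z @ w_x))

# Gauss synthetic surface on [-3, 3] x [-3, 3]:
x = np.linspace(-3.0, 3.0, 201)
X, Y = np.meshgrid(x, x)
Z = np.exp(-(X**2 + Y**2))

dx = x[1] - x[0]
true_volume = pi * erf(3.0) ** 2   # closed form of the double integral
estimate = volume_trapezoid(Z, dx, dx)
print(f"estimate {estimate:.6f}, true {true_volume:.6f}, error {estimate - true_volume:.2e}")
```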

  4. Application of CFD based wave loads in aeroelastic calculations

    DEFF Research Database (Denmark)

    Schløer, Signe; Paulsen, Bo Terp; Bredmose, Henrik

    2014-01-01

    Two fully nonlinear irregular wave realizations with different significant wave heights are considered. The wave realizations are both calculated in the potential flow solver OceanWave3D and in a coupled domain-decomposed potential-flow CFD solver. The surface elevations of the calculated wave...... domain-decomposed potential-flow CFD solver result in different dynamic forces in the tower and monopile, even though the static forces on a fixed monopile are similar. The changes are due to differences in the force profiles and wave steepness in the two solvers. The results indicate that an accurate

  5. Value-based medicine and interventions for macular degeneration.

    Science.gov (United States)

    Brown, Melissa M; Brown, Gary C; Brown, Heidi

    2007-05-01

    The aim of this article is to review the patient value conferred by interventions for neovascular macular degeneration. Value-based medicine is the practice of medicine based upon the patient value (improvement in quality of life and length of life) conferred by an intervention. For ophthalmologic interventions, in which length-of-life is generally unaffected, the value gain is equivalent to the improvement in quality of life. Photodynamic therapy delivers a value gain (improvement in quality of life) of 8.1% for the average person with classic subfoveal choroidal neovascularization, while laser photocoagulation for the same entity confers a 4.4% improvement in quality of life. Preliminary data suggest the value gain for the treatment of occult/minimally classic choroidal neovascularization with ranibizumab is greater than 15%. The average value gain for statins for the treatment of hyperlipidemia is 3.9%, while that for the use of bisphosphonates for the treatment of osteoporosis is 1.1% and that for drugs to treat benign prostatic hyperplasia is 1-2%. Interventions, especially ranibizumab therapy, for neovascular macular degeneration appear to deliver an extraordinary degree of value compared with many other interventions across healthcare.

  6. A matrix model for valuing anesthesia service with the resource-based relative value system.

    Science.gov (United States)

    Sinclair, David R; Lubarsky, David A; Vigoda, Michael M; Birnbach, David J; Harris, Eric A; Behrens, Vicente; Bazan, Richard E; Williams, Steve M; Arheart, Kristopher; Candiotti, Keith A

    2014-01-01

    The purpose of this study was to propose a new crosswalk using the resource-based relative value system (RBRVS) that preserves the time unit component of the anesthesia service and disaggregates anesthesia billing into component parts (preoperative evaluation, intraoperative management, and postoperative evaluation). The study was designed as an observational chart and billing data review of current and proposed payments, in the setting of a preoperative holding area, intraoperative suite, and post anesthesia care unit. In total, 1,195 charts of American Society of Anesthesiology (ASA) physical status 1 through 5 patients were reviewed. No direct patient interventions were undertaken. Spearman correlations between the proposed RBRVS billing matrix payments and the current ASA relative value guide methodology payments were strong (r=0.94-0.96, P<0.001) for training, test, and overall. The proposed RBRVS-based billing matrix yielded payments that were 3.0%±1.34% less than would have been expected from commercial insurers, using standard rates for commercial ASA relative value units and RBRVS relative value units. Compared with current Medicare reimbursement under the ASA relative value guide, reimbursement would almost double when converting to an RBRVS billing model. The greatest increases in Medicare reimbursement between the current system and proposed billing model occurred as anesthetic management complexity increased. The new crosswalk correlates with existing evaluation and management and intensive care medicine codes in an essentially revenue neutral manner when applied to the market-based rates of commercial insurers. The new system more highly values delivery of care to more complex patients undergoing more complex surgery and better represents the true value of anesthetic case management.

  7. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    Science.gov (United States)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
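
    For orientation, once a generalized Pareto distribution has been fitted to losses above a threshold u (however that threshold is chosen; in the paper it is wavelet-based), the VaR quantile follows from the standard peaks-over-threshold formula. A minimal sketch with hypothetical parameter values:

```python
def var_pot(u: float, sigma: float, xi: float, n: int, n_exceed: int, q: float = 0.99) -> float:
    """Value-at-risk at confidence q from a generalized Pareto fit to losses
    exceeding threshold u: VaR_q = u + (sigma/xi)*(((n/n_exceed)*(1-q))**(-xi) - 1).
    n is the total sample size; n_exceed the number of exceedances over u."""
    return u + (sigma / xi) * (((n / n_exceed) * (1.0 - q)) ** (-xi) - 1.0)

# Hypothetical GPD parameters for a daily loss series:
print(f"99% VaR = {var_pot(u=0.02, sigma=0.008, xi=0.15, n=2500, n_exceed=120):.4f}")
```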

  8. IT-based Value Creation in Serial Acquisitions

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Yetton, Philip

    2013-01-01

    The extant research on post-acquisition IT integration analyzes how acquirers realize IT-based value in individual acquisitions. However, serial acquirers make 60% of acquisitions. These acquisitions are not isolated events, but are components in growth-by-acquisition programs. To explain how serial acquirers realize IT-based value, we integrate and model the findings on individual acquisitions from the extant literature, and extend that model to explain the effects of sequential acquisitions in a growth-by-acquisition strategy. This extended model, drawing on the Resource-Based Theory......

  9. Method for calculating thermal properties of lightweight floor heating panels based on an experimental setup

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    , radiation and conduction of the heat transfer between pipe and surrounding materials. The European Standard for floor heating, EN1264, does not cover lightweight systems, while the supplemental Nordtest Method VVS127 is aimed at lightweight systems. The thermal properties can be found using tabulated values...... simulation model. It has been shown that the method is accurate with an error on the heat fluxes of less than 5% for different supply temperatures. An error of around 5% is also recorded when comparing measurements to calculated heat flows using the Nordtest VVS 127 method based on the experimental setup...

  10. Comprehensive calculation of the energy per ion pair or W values for five major planetary upper atmospheres

    Directory of Open Access Journals (Sweden)

    C. Simon Wedlund

    2011-01-01

    The mean energy W expended per ion pair in collisions of electrons with atmospheric gases is a useful parameter for fast aeronomy computations. Comparing this parameter as computed in kinetic transport models with experimental values can tell us more about the number of processes that have to be taken into account and the uncertainties of the models. We present here computations for several atmospheric gases of planetological interest (CO2, CO, N2, O2, O, CH4, H, He) using a family of multi-stream kinetic transport codes. Results for complete atmospheres for Venus, Earth, Mars, Jupiter and Titan are also shown for the first time. A simple method is derived to calculate W of gas mixtures from single-component gases and is conclusively checked against the W values of these planetary atmospheres. Discrepancies between experimental and theoretical values show where improvements can be made in the measurement of excitation and dissociation cross-sections of specific neutral species, such as CO2 and CO.
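
    The record does not spell out the mixture rule the authors derive, but a common first approximation treats ionizations as additive, weighting the reciprocals of the single-gas W values by the fraction of energy deposited in each constituent. A hedged sketch with illustrative numbers (not the paper's method or data):

```python
def w_mixture(energy_fractions, w_values):
    """Approximate W (eV per ion pair) of a gas mixture, assuming the number of
    ion pairs is additive: N = sum_i f_i * E / W_i, so 1/W_mix = sum_i f_i / W_i.
    energy_fractions: fraction of deposited energy per constituent (sums to 1)."""
    return 1.0 / sum(f / w for f, w in zip(energy_fractions, w_values))

# Rough CO2-dominated, Mars-like mixture with textbook-order W values (eV):
print(f"W_mix ~ {w_mixture([0.95, 0.03, 0.02], [33.0, 36.0, 26.0]):.1f} eV")
```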

  11. 31 CFR 351.13 - What do I need to know about the savings bond rate to understand redemption value calculations in...

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What do I need to know about the savings bond rate to understand redemption value calculations in this subpart? 351.13 Section 351.13 Money... What do I need to know about the savings bond rate to understand redemption value calculations in this...

  12. Value-Based Medicine and Integration of Tumor Biology.

    Science.gov (United States)

    Brooks, Gabriel A; Bosserman, Linda D; Mambetsariev, Isa; Salgia, Ravi

    2017-01-01

    Clinical oncology is in the midst of a genomic revolution, as molecular insights redefine our understanding of cancer biology. Greater awareness of the distinct aberrations that drive carcinogenesis is also contributing to a growing armamentarium of genomically targeted therapies. Although much work remains to better understand how to combine and sequence these therapies, improved outcomes for patients are becoming manifest. As we welcome this genomic revolution in cancer care, oncologists also must grapple with a number of practical problems. Costs of cancer care continue to grow, with targeted therapies responsible for an increasing proportion of spending. Rising costs are bringing the concept of value into sharper focus and challenging the oncology community with implementation of value-based cancer care. This article explores the ways that the genomic revolution is transforming cancer care, describes various frameworks for considering the value of genomically targeted therapies, and outlines key challenges for delivering on the promise of personalized cancer care. It highlights practical solutions for the implementation of value-based care, including investment in biomarker development and clinical trials to improve the efficacy of targeted therapy, the use of evidence-based clinical pathways, team-based care, computerized clinical decision support, and value-based payment approaches.

  13. Poster - 08: Preliminary Investigation into Collapsed-Cone based Dose Calculations for COMS Eye Plaques

    International Nuclear Information System (INIS)

    Morrison, Hali; Menon, Geetha; Sloboda, Ron

    2016-01-01

    Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6 simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.

  14. Poster - 08: Preliminary Investigation into Collapsed-Cone based Dose Calculations for COMS Eye Plaques

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, Hali; Menon, Geetha; Sloboda, Ron [Cross Cancer Institute, Edmonton, AB, and University of Alberta, Edmonton, AB, Cross Cancer Institute, Edmonton, AB, and University of Alberta, Edmonton, AB, Cross Cancer Institute, Edmonton, AB, and University of Alberta, Edmonton, AB (Canada)

    2016-08-15

    Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6 simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.

  15. Calculation laboratory: game based learning in exact discipline

    Directory of Open Access Journals (Sweden)

    André Felipe de Almeida Xavier

    2017-12-01

    The Calculation Laboratory arose from the need to give meaning to the learning of students entering Engineering courses in the Differential Calculus discipline, in semester 1/2016. After good results were obtained, the activity was also extended to the Analytical Geometry and Linear Algebra (GAAL) and Integral Calculus classes, so that these incoming students could continue the process. Historically, students have some difficulty with these contents, and it is necessary to give meaning to their learning. Against this background, the Calculation Laboratory aims to give meaning to the contents covered, giving students autonomy, with the teacher acting as tutor, an intermediary between the student and knowledge, and creating various practical, playful and innovative activities to assist in this process. In this article, we report on the activities created to facilitate the running of the Calculation Laboratory, and present the results obtained and measured after its application. Through these proposed activities, it can be seen that students gradually gain autonomy in their search for knowledge.

  16. Neutron spectra calculation and doses in a subcritical nuclear reactor based on thorium

    International Nuclear Information System (INIS)

    Medina C, D.; Hernandez A, P. L.; Hernandez D, V. M.; Vega C, H. R.; Sajo B, L.

    2015-10-01

    This paper describes a heterogeneous subcritical nuclear reactor with molten salts based on thorium, with graphite moderator and a 252Cf source, whose dose levels at the periphery allow its use in teaching and research activities. The design was done with the Monte Carlo method using the MCNP5 code, where the geometry, dimensions and fuel were varied in order to obtain the best design. The result is a cubic reactor of 110 cm side with graphite moderator and reflector. In the central part there are 9 ducts placed in the direction of the Y axis. The central duct contains the 252Cf source; of the other 8 ducts, two are irradiation ducts and the other six contain a molten salt (7LiF - BeF2 - ThF4 - UF4) as fuel. For the design, the k_eff, neutron spectra and ambient dose equivalent were calculated. The calculation was first performed for virgin fuel (case 1); then a percentage of 233U was used and the percentage of Th was decreased (case 2), with the purpose of comparing two different fuels working inside the reactor. In case 1 a k_eff of 0.13 was obtained and in case 2 a k_eff of 0.28, maintaining subcriticality in both cases. Regarding dose levels, the highest value occurs in case 2 along the Y axis, with a value of 3.31e-3 ± 1.6% pSv per source particle. With this we can calculate the exposure time of personnel working at the reactor. (Author)

  17. Calculation of the backscatter factor in water and comparison with the values in air; Calculo del factor de retrodispersion en agua y comparativa con los valores en aire

    Energy Technology Data Exchange (ETDEWEB)

    Minano Herrero, J. A.; Sarasa Rubio, A.; Roldan Arjona, J. M.

    2011-07-01

    The purpose of this paper is to calculate BSF values in water and compare them with BSF data in air found in the literature. To this end, simulations have been performed with the Monte Carlo method to calculate water kerma values in the presence of a phantom of this material and in its absence. The simulations were performed for monoenergetic beams in order to facilitate the calculation of the BSF for any spectral distribution of those found in the field of radiology.

  18. Optimal policy for value-based decision-making.

    Science.gov (United States)

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
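
    The paper's central claim, that the optimal policy takes the form of a diffusion-to-bound process whose decision boundaries collapse over time, is easy to visualize in simulation. A minimal sketch of a drift diffusion trial with symmetric, linearly collapsing bounds (our illustration; all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift: float, bound0: float, collapse_rate: float,
              sigma: float = 1.0, dt: float = 1e-3, t_max: float = 5.0):
    """One drift-diffusion trial with symmetric, linearly collapsing bounds.
    Returns (choice, reaction_time); choice is +1/-1 for the upper/lower bound."""
    x, t = 0.0, 0.0
    while t < t_max:
        bound = max(bound0 - collapse_rate * t, 0.0)
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return 0, t_max  # no decision within t_max

results = [ddm_trial(drift=0.8, bound0=1.5, collapse_rate=0.4) for _ in range(500)]
choices, rts = zip(*results)
print(f"P(upper) = {np.mean(np.array(choices) == 1):.2f}, mean RT = {np.mean(rts):.2f} s")
```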

  19. Desired emotions across cultures: A value-based account.

    Science.gov (United States)

    Tamir, Maya; Schwartz, Shalom H; Cieciuch, Jan; Riediger, Michaela; Torres, Claudio; Scollon, Christie; Dzokoto, Vivian; Zhou, Xiaolu; Vishkin, Allon

    2016-07-01

    Values reflect how people want to experience the world; emotions reflect how people actually experience the world. Therefore, we propose that across cultures people desire emotions that are consistent with their values. Whereas prior research focused on the desirability of specific affective states or 1 or 2 target emotions, we offer a broader account of desired emotions. After reporting initial evidence for the potential causal effects of values on desired emotions in a preliminary study (N = 200), we tested the predictions of our proposed model in 8 samples (N = 2,328) from distinct world cultural regions. Across cultural samples, we found that people who endorsed values of self-transcendence (e.g., benevolence) wanted to feel more empathy and compassion, people who endorsed values of self-enhancement (e.g., power) wanted to feel more anger and pride, people who endorsed values of openness to change (e.g., self-direction) wanted to feel more interest and excitement, and people who endorsed values of conservation (e.g., tradition) wanted to feel more calmness and less fear. These patterns were independent of differences in emotional experience. We discuss the implications of our value-based account of desired emotions for understanding emotion regulation, culture, and other individual differences. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Research on the Value Evaluation of Used Pure Electric Car Based on the Replacement Cost Method

    Science.gov (United States)

    Tan, Zhengping; Cai, Yun; Wang, Yidong; Mao, Pan

    2018-03-01

    In this paper, the value of used pure electric cars is evaluated with the replacement cost method, which fills a gap in the value evaluation of electric vehicles. Starting from the basic principle of the replacement cost method, combined with the actual costs of pure electric cars, a calculation method for the depreciation rate of second-hand electric cars is put forward, in which the AHP method is used to construct the weight matrix for a comprehensive adjustment coefficient of the related factors, improving the value evaluation system for second-hand cars
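
    As a small illustration of the AHP step mentioned above, priority weights can be derived from a pairwise-comparison matrix via its principal eigenvector. The factor choices and matrix entries below are hypothetical, not taken from the paper:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Priority weights from an AHP pairwise-comparison matrix,
    taken as the principal eigenvector normalized to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return w / w.sum()

# Hypothetical comparison of three depreciation factors for a used EV:
# battery condition vs. mileage vs. vehicle age.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A).round(3))  # e.g. weights ~ [0.65, 0.23, 0.12]
```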

  1. A matrix model for valuing anesthesia service with the resource-based relative value system

    Directory of Open Access Journals (Sweden)

    Sinclair DR

    2014-10-01

    David R Sinclair,1 David A Lubarsky,1 Michael M Vigoda,1 David J Birnbach,1 Eric A Harris,1 Vicente Behrens,1 Richard E Bazan,1 Steve M Williams,1 Kristopher Arheart,2 Keith A Candiotti1 1Department of Anesthesiology, Perioperative Medicine and Pain Management, 2Department of Public Health Sciences, Division of Biostatistics, University of Miami Miller School of Medicine, Miami, FL, USA Background: The purpose of this study was to propose a new crosswalk using the resource-based relative value system (RBRVS) that preserves the time unit component of the anesthesia service and disaggregates anesthesia billing into component parts (preoperative evaluation, intraoperative management, and postoperative evaluation). The study was designed as an observational chart and billing data review of current and proposed payments, in the setting of a preoperative holding area, intraoperative suite, and post anesthesia care unit. In total, 1,195 charts of American Society of Anesthesiology (ASA) physical status 1 through 5 patients were reviewed. No direct patient interventions were undertaken. Results: Spearman correlations between the proposed RBRVS billing matrix payments and the current ASA relative value guide methodology payments were strong (r=0.94–0.96, P<0.001) for training, test, and overall. The proposed RBRVS-based billing matrix yielded payments that were 3.0%±1.34% less than would have been expected from commercial insurers, using standard rates for commercial ASA relative value units and RBRVS relative value units. Compared with current Medicare reimbursement under the ASA relative value guide, reimbursement would almost double when converting to an RBRVS billing model. The greatest increases in Medicare reimbursement between the current system and proposed billing model occurred as anesthetic management complexity increased. Conclusion: The new crosswalk correlates with existing evaluation and management and intensive care medicine codes in an

  2. Calculation of crack stress density of cement base materials

    Directory of Open Access Journals (Sweden)

    Chun-e Sui

    2018-01-01

    In this paper, the fracture load of cement pastes with different water-cement ratios and different mineral admixtures, including fly ash, silica fume and slag, is obtained through experiments. The three-dimensional fracture surface is reconstructed and the three-dimensional effective area of the fracture surface is calculated, yielding the effective fracture stress density of the different cement pastes. The results show that a polynomial function can accurately describe the relationship between the three-dimensional total area and the tensile strength.

  3. Status of CINDER and ENDF/B-V based libraries for transmutation calculations

    International Nuclear Information System (INIS)

    Wilson, W.B.; England, T.R.; LaBauve, R.J.; Battat, M.E.; Wessol, D.E.; Perry, R.T.

    1980-01-01

    The CINDER codes and their data libraries are described, and their range of calculational capabilities is illustrated using documented applications. The importance of ENDF/B data and the features of the ENDF/B-IV and ENDF/B-V fission-product and actinide data files are emphasized. The actinide decay data of ENDF/B-V, augmented by additional data from available sources, are used to produce average decay energy values and neutron source values from spontaneous fission, (α,n) reactions and delayed neutron emission for 144 actinide nuclides that are formed in reactor fuel. The status and characteristics of the CINDER-2 code are described, along with a brief description of the more well known code versions; a review of the status of new ENDF/B-V based libraries for all versions is presented

  4. Evidence-based medicine: the value of vision screening.

    Science.gov (United States)

    Beauchamp, George R; Ellepola, Chalani; Beauchamp, Cynthia L

    2010-01-01

    To review the literature for evidence-based medicine (EBM), to assess the evidence for the effectiveness of vision screening, and to propose moving toward value-based medicine (VBM) as a preferred basis for comparative effectiveness research. Literature-based evidence is applied to five core questions concerning vision screening: (1) Is vision valuable (an inherent good)?; (2) Is screening effective (finding amblyopia)?; (3) What are the costs of screening?; (4) Is treatment effective?; and (5) Is amblyopia detection beneficial? Based on EBM literature and clinical experience, the answers to the five questions are: (1) yes; (2) based on the literature, not definitively so; (3) relatively inexpensive, although some claim benefits for more expensive options such as mandatory exams; (4) yes, for compliant care, although treatment processes may have negative aspects such as "bullying"; and (5) economic productive values are likely very high, with returns on investment on the order of 10:1, while human value returns need further elucidation. Additional evidence is required to ascertain the degree to which vision screening is effective. The processes of screening are multiple, sequential, and complicated. The disease is complex, and good visual outcomes require compliance. The value of outcomes is appropriately analyzed in clinical, human, and economic terms.

  5. Freeway travel speed calculation model based on ETC transaction data.

    Science.gov (United States)

    Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang

    2014-01-01

    The real-time traffic flow conditions of a freeway are gradually becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different levels of sample size. In order to ensure a sufficient sample size, ETC data from different enter-leave toll plaza pairs that span more than one road segment were used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speeds were introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated that the average relative error was about 6.5%, which means that the proposed model can estimate freeway travel speed accurately. The proposed model is helpful for raising the level of freeway operation monitoring and freeway management, as well as for providing useful information to freeway travelers.
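
    Schematically, each enter-leave transaction pair yields one speed sample for the road segment(s) between the plazas, which the model then corrects and weights. The sketch below reflects our simplified reading of that step, applying α as a multiplicative correction and θ as optional per-sample reliability weights; the records are hypothetical:

```python
from statistics import mean

def segment_speed(samples, alpha=0.95, theta=None):
    """Average travel speed (km/h) of one road segment from ETC enter/leave
    transaction pairs: (segment length km, enter time s, leave time s).
    alpha: reduction coefficient, e.g., compensating for time spent at plazas.
    theta: optional reliability weight per sample."""
    speeds = [alpha * dist_km / ((t_leave - t_enter) / 3600.0)
              for dist_km, t_enter, t_leave in samples]
    if theta is None:
        return mean(speeds)
    return sum(w * s for w, s in zip(theta, speeds)) / sum(theta)

records = [(42.0, 0, 1800), (42.0, 60, 1980), (42.0, 120, 1880)]
print(f"{segment_speed(records):.1f} km/h")
```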

  6. A thermodynamic data base for Tc to calculate equilibrium solubilities at temperatures up to 300 deg C

    International Nuclear Information System (INIS)

    Puigdomenech, I.; Bruno, J.

    1995-04-01

    Thermodynamic data have been selected for solids and aqueous species of technetium. Equilibrium constants have been calculated in the temperature range 0 to 300 deg C at a pressure of 1 bar, using estimated ΔrC°p,m values for mononuclear hydrolysis reactions. The formation constants for chloro complexes of Tc(V) and Tc(IV), whose existence is well established, have been estimated. The majority of entropy and heat capacity values in the data base have also been estimated, and therefore temperature extrapolations are largely based on estimations. The uncertainties derived from these calculations are described. Using the data base developed in this work, technetium solubilities have been calculated as a function of temperature for different chemical conditions. The implications for the mobility of Tc under nuclear repository conditions are discussed. 70 refs

  7. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  8. Building a values-based culture in nurse education.

    Science.gov (United States)

    Tetley, Josie; Dobson, Fiona; Jack, Kirsten; Pearson, Beryl; Walker, Elaine

    2016-01-01

    Nurse education has found itself challenged to select and educate nurses who, on completion of their programme, have excellent technical skills, an ability to critically analyse care, and the capacity to work compassionately in ways that support the values of care that are important to service users. Recent reports of care suggest that nursing still needs to develop the values base of its student selection and education processes. Against this backdrop, this paper presents two examples from pre-registration nurse education that illustrate how a values-based approach is used as part of the selection process in one university and used to inform the development of a reflective poetry initiative in another. Having presented the two examples, the authors debate some of the wider benefits and challenges linked to these ways of working. For example, the importance of connecting nurses' personal beliefs, attitudes and assumptions to service user values in recruitment is discussed. The use of poetry as a way of thinking about practice that moves beyond traditional models of reflection in nursing is also considered. However, the authors recognise that if developments in nurse education are to have a real impact on nursing practice and patient care, values-based initiatives need to be more directly connected to the delivery of healthcare. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Neural Signature of Value-Based Sensorimotor Prioritization in Humans.

    Science.gov (United States)

    Blangero, Annabelle; Kelly, Simon P

    2017-11-01

    In situations in which impending sensory events demand fast action choices, we must be ready to prioritize higher-value courses of action to avoid missed opportunities. When such a situation first presents itself, stimulus-action contingencies and their relative value must be encoded to establish a value-biased state of preparation for an impending sensorimotor decision. Here, we sought to identify neurophysiological signatures of such processes in the human brain (both female and male). We devised a task requiring fast action choices based on the discrimination of a simple visual cue in which the differently valued sensory alternatives were presented 750-800 ms before as peripheral "targets" that specified the stimulus-action mapping for the upcoming decision. In response to the targets, we identified a discrete, transient, spatially selective signal in the event-related potential (ERP), which scaled with relative value and strongly predicted the degree of behavioral bias in the upcoming decision both across and within subjects. This signal is not compatible with any hitherto known ERP signature of spatial selection and also bears novel distinctions with respect to characterizations of value-sensitive, spatially selective activity found in sensorimotor areas of nonhuman primates. Specifically, a series of follow-up experiments revealed that the signal was reliably invoked regardless of response laterality, response modality, sensory feature, and reward valence. It was absent, however, when the response deadline was relaxed and the strategic need for biasing removed. Therefore, more than passively representing value or salience, the signal appears to play a versatile and active role in adaptive sensorimotor prioritization. SIGNIFICANCE STATEMENT In many situations such as fast-moving sports, we must be ready to act fast in response to sensory events and, in our preparation, prioritize courses of action that lead to greater rewards. Although behavioral effects of

  10. Value of information-based inspection planning for offshore structures

    DEFF Research Database (Denmark)

    Irman, Arifian Agusta; Thöns, Sebastian; Leira, Bernt J.

    2017-01-01

    Asset integrity and management is an important part of the oil and gas industry, especially for existing offshore structures. With declining oil prices, the production rate is an important factor to be maintained, which makes the integrity of the structures one of the main concerns. A simplified and generic risk-based inspection planning utilizing pre-posterior Bayesian decision analysis has been proposed by Faber et al. [1] and Straub [2]. This paper provides considerations on the theoretical background of reliability-based inspection planning and a Value of Information analysis quantifying the value associated with each inspection strategy. The paper starts out with a review of the state-of-the-art RBI planning procedure based on Bayesian decision theory and its application in offshore structure integrity management. An example of the Value of Information approach is illustrated, and directions for further research are indicated.

  11. Green Net Value Added as a Sustainability Metric Based on ...

    Science.gov (United States)

    Sustainability measurement in economics involves evaluation of environmental and economic impact in an integrated manner. In this study, system-level economic data are combined with environmental impact from a life cycle assessment (LCA) of a common product. We are exploring a costing approach that captures traditional costs but also incorporates externality costs to provide a convenient, easily interpretable metric. Green Net Value Added (GNVA) is a type of full cost accounting that incorporates total revenue, the cost of materials and services, depreciation, and environmental externalities. Two, but not all, of the potential environmental impacts calculated by the standard LCIA method (TRACI) could be converted to externality cost values. We compute externality costs disaggregated by upstream sectors, full cost, and GNVA to evaluate the relative sustainability of Bounty® paper towels manufactured at two production facilities. We found that the longer-running, more established line had a higher GNVA than the newer line. The dominant factors contributing to externality costs are calculated to come from the stationary sources in the supply chain: electricity generation (27-35%), refineries (20-21%), pulp and paper making (15-23%). Health-related externalities from Particulate Matter (PM2.5) and Carbon Dioxide equivalent (CO2e) emissions appear largely driven by electricity usage and emissions by the facilities, followed by pulp processing and transport. Supply
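
    The bookkeeping behind the metric is straightforward; a toy example with made-up figures (not the study's data):

        # GNVA = total revenue - materials/services - depreciation - externalities
        revenue = 1_000_000.0           # total revenue, $
        materials_services = 620_000.0  # purchased materials and services, $
        depreciation = 80_000.0         # capital depreciation, $
        externalities = {               # monetized LCA externality costs, $
            "electricity_generation": 30_000.0,
            "refineries": 18_000.0,
            "pulp_and_paper": 15_000.0,
        }
        gnva = revenue - materials_services - depreciation - sum(externalities.values())
        print(f"GNVA = ${gnva:,.0f}")   # GNVA = $237,000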

  12. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    Science.gov (United States)

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
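
    The linear-algebra core of such a procedure can be sketched in a few lines; this is a simplified stand-in, not the published spreadsheet, and the binomial mixture over the three glycine subunits is the only modeling assumption:

        import numpy as np
        from math import comb

        def pattern(p):
            """Abundances of 0..3 labeled glycines at precursor enrichment p."""
            return np.array([comb(3, k) * p**k * (1 - p)**(3 - k) for k in range(4)])

        def fit(measured, grid=np.linspace(0.01, 0.6, 60)):
            """Scan candidate enrichments; at each, regress the measured pattern
            on [old, new(p)] by least squares and keep the best fit."""
            old = pattern(0.0)
            best = None
            for p in grid:
                B = np.column_stack([old, pattern(p)])
                coef, *_ = np.linalg.lstsq(B, measured, rcond=None)
                rss = np.sum((B @ coef - measured) ** 2)
                if best is None or rss < best[0]:
                    best = (rss, p, coef[1] / coef.sum())  # fractional synthesis
            return best[1], best[2]

        # Simulate 30% enrichment and 50% newly synthesized protein, then recover both:
        m = 0.5 * pattern(0.0) + 0.5 * pattern(0.30)
        print(fit(m))  # ≈ (0.30, 0.50)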

  13. Prediction of fission mass-yield distributions based on cross section calculations

    International Nuclear Information System (INIS)

    Hambsch, F.-J.; G.Vladuca; Tudora, Anabella; Oberstedt, S.; Ruskov, I.

    2005-01-01

    For the first time, fission mass-yield distributions have been predicted based on an extended statistical model for fission cross section calculations. In this model, the concept of the multi-modality of the fission process has been incorporated. The three most dominant fission modes, the two asymmetric standard I (S1) and standard II (S2) modes and the symmetric superlong (SL) mode, are taken into account. De-convoluted fission cross sections for the S1, S2 and SL modes for 235,238U(n,f) and 237Np(n,f), based on experimental branching ratios, were calculated for the first time in the incident neutron energy range from 0.01 to 5.5 MeV, providing good agreement with the experimental fission cross section data. The branching ratios obtained from the modal fission cross section calculations have been used to deduce the corresponding fission yield distributions, including mean values, also for incident neutron energies hitherto not accessible to experiment.

  14. Random Valued Impulse Noise Removal Using Region Based Detection Approach

    Directory of Open Access Journals (Sweden)

    S. Banerjee

    2017-12-01

    Removal of random-valued noisy pixels is extremely challenging when the noise density is above 50%. Existing filters are generally not capable of eliminating such noise when the density is above 70%. In this paper a region-wise, density-based detection algorithm for random-valued impulse noise is proposed. On the basis of their intensity values, the pixels of a particular window are sorted and then assigned to four regions. The highest-density region is used for stepwise detection of noisy pixels. As a result of this detection scheme a maximum of 75% of noisy pixels can be detected. For this purpose the paper proposes a unique noise removal algorithm. It was experimentally demonstrated that the proposed algorithm not only performs exceptionally well in visual qualitative judgment of standard images but also outperforms existing algorithms in terms of MSE, PSNR and SSIM, even up to a 70% noise density level.
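
    A minimal sketch of the detection idea follows; the split into four intensity-sorted regions comes from the abstract, while the decision rule and threshold are assumptions made only for illustration:

        import numpy as np

        def is_noisy(window, tol=40):
            """Flag the centre pixel of a 2-D window as a candidate impulse."""
            flat = np.sort(window.ravel())
            regions = np.array_split(flat, 4)      # four intensity-sorted regions
            centre = int(window[window.shape[0] // 2, window.shape[1] // 2])
            median = int(flat[flat.size // 2])
            in_extreme = centre <= regions[0][-1] or centre >= regions[3][0]
            return in_extreme and abs(centre - median) > tol

        patch = np.full((5, 5), 120, dtype=np.uint8)
        patch[2, 2] = 240                          # plant one impulse
        print(is_noisy(patch))                     # True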

  15. Value redefined for inflammatory bowel disease patients: a choice-based conjoint analysis of patients' preferences.

    Science.gov (United States)

    van Deen, Welmoed K; Nguyen, Dominic; Duran, Natalie E; Kane, Ellen; van Oijen, Martijn G H; Hommes, Daniel W

    2017-02-01

    Value-based healthcare is an upcoming field. The core idea is to evaluate care based on achieved outcomes divided by the costs. Unfortunately, the optimal way to evaluate outcomes is ill-defined. In this study, we aim to develop a single, preference-based outcome metric which can be used to quantify overall health value in inflammatory bowel disease (IBD). IBD patients filled out a choice-based conjoint (CBC) questionnaire in which patients chose preferable outcome scenarios with different levels of disease control (DC), quality of life (QoL), and productivity (Pr). A CBC analysis was performed to estimate the relative value of DC, QoL, and Pr. A patient-centered composite score was developed, weighted based on the stated preferences. We included 210 IBD patients. Large differences in stated preferences were observed. Increases from low to intermediate outcome levels were valued more than increases from intermediate to high outcome levels. Overall, QoL was more important to patients than DC or Pr. Individual outcome scores were calculated based on the stated preferences. In patients with active disease, this score was significantly different from a score not weighted by patient preferences. We showed the feasibility of creating a single outcome metric in IBD which incorporates patients' values using a CBC. Because this metric changes significantly when weighted according to patients' values, we propose that success in healthcare should be measured accordingly.
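
    Mechanically, such a preference-weighted metric reduces to a weighted sum; the weights below are invented for illustration (the study estimated them from the conjoint analysis):

        # Hypothetical preference weights over the three outcome domains:
        weights = {"quality_of_life": 0.5, "disease_control": 0.3, "productivity": 0.2}

        def composite(outcomes):
            """outcomes: dict mapping each domain to a 0-1 achievement level."""
            return sum(weights[k] * v for k, v in outcomes.items())

        print(composite({"quality_of_life": 0.8, "disease_control": 0.6,
                         "productivity": 0.9}))  # 0.76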

  16. Study of activity based costing implementation for palm oil production using value-added and non-value-added activity consideration in PT XYZ palm oil mill

    Science.gov (United States)

    Sembiring, M. T.; Wahyuni, D.; Sinaga, T. S.; Silaban, A.

    2018-02-01

    Cost allocation in the manufacturing industry, particularly in palm oil mills, is still widely practiced based on estimation, which leads to cost distortion. Besides, the processing times determined by the company are not in accordance with the actual processing times at the work stations. Hence, the purpose of this study is to eliminate non-value-added activities so that processing time can be shortened and production cost reduced. The Activity Based Costing method is used in this research to calculate production cost, taking value-added and non-value-added activities into consideration. The results of this study are processing time reductions of 35.75% at the Weighing Bridge Station, 29.77% at the Sorting Station, 5.05% at the Loading Ramp Station, and 0.79% at the Sterilizer Station. The cost of manufacturing for crude palm oil is IDR 5.236,81/kg calculated by the traditional method, IDR 4.583,37/kg calculated by the Activity Based Costing method before implementation of the activity improvement, and IDR 4.581,71/kg after implementation. Meanwhile, the cost of manufacturing for palm kernel is IDR 2.159,50/kg calculated by the traditional method, IDR 4.584,63/kg calculated by the Activity Based Costing method before implementation of the activity improvement, and IDR 4.582,97/kg after implementation.
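
    The core of activity-based costing is spreading each activity's cost pool over products in proportion to driver consumption; a toy allocation in which all pools, drivers and volumes are hypothetical (the per-kg conversion of the study is omitted):

        pools = {"weighing": 12_000.0, "sorting": 8_000.0, "sterilizing": 30_000.0}
        drivers = {  # driver units consumed per product
            "CPO":    {"weighing": 300, "sorting": 200, "sterilizing": 900},
            "kernel": {"weighing": 100, "sorting": 100, "sterilizing": 300},
        }
        for product, use in drivers.items():
            # each pool is allocated by the product's share of total driver volume
            cost = sum(pools[a] * use[a] / sum(d[a] for d in drivers.values())
                       for a in pools)
            print(product, round(cost, 2))  # CPO 36833.33, kernel 13166.67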

  17. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
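
    A much-simplified stand-in for the mapping step is a binned-mean regression of HU on MRI intensity learned from an artifact-free slice; the paper's slice-to-slice analysis is more comprehensive, and all data here are synthetic:

        import numpy as np

        def fit_mri_to_hu(mri_clean, ct_clean, n_bins=64):
            # bin MRI intensities on the artifact-free slice; mean HU per bin
            bins = np.linspace(mri_clean.min(), mri_clean.max(), n_bins + 1)
            idx = np.clip(np.digitize(mri_clean, bins) - 1, 0, n_bins - 1)
            table = np.array([ct_clean[idx == b].mean() for b in range(n_bins)])
            return bins, table

        def predict_hu(mri_corrupt, bins, table):
            # look up predicted HU for MRI intensities in the corrupted region
            idx = np.clip(np.digitize(mri_corrupt, bins) - 1, 0, len(table) - 1)
            return table[idx]

        rng = np.random.default_rng(0)
        mri = rng.uniform(0.0, 1.0, 1000)                   # synthetic MRI slice
        ct = 1000.0 * mri + rng.normal(0.0, 5.0, 1000)      # synthetic paired HU
        bins, table = fit_mri_to_hu(mri, ct)
        print(predict_hu(np.array([0.25, 0.75]), bins, table))  # ≈ [250. 750.]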

  18. Many-body calculations with deuteron based single-particle bases and their associated natural orbits

    Science.gov (United States)

    Puddu, G.

    2018-06-01

    We use the recently introduced single-particle states obtained from localized deuteron wave-functions as a basis for nuclear many-body calculations. We show that energies can be substantially lowered if the natural orbits (NOs) obtained from this basis are used. We use this modified basis for 10B, 16O and 24Mg, employing the bare NNLOopt nucleon–nucleon interaction. The lowering of the energies increases with the mass. Although in principle NOs require a full-scale preliminary many-body calculation, we found that an approximate preliminary many-body calculation, with a marginal increase in the computational cost, is sufficient. The use of natural orbits based on a harmonic oscillator basis leads to a much smaller lowering of the energies for a comparable computational cost.

  19. 26 CFR 1.664-4T - Calculation of the fair market value of the remainder interest in a charitable remainder unitrust...

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Calculation of the fair market value of the... of the fair market value of the remainder interest in a charitable remainder unitrust (temporary). (a...) through (c). (d) Valuation. The fair market value of a remainder interest in a charitable remainder...

  20. Mathematical Based Calculation of Drug Penetration Depth in Solid Tumors

    Directory of Open Access Journals (Sweden)

    Hamidreza Namazi

    2016-01-01

    Cancer is a class of diseases characterized by out-of-control cell growth, which damages the affected cells. Many treatment options for cancer exist. Chemotherapy, an important treatment option, is the use of drugs to treat cancer. The anticancer drug travels to the tumor and then diffuses into it through capillaries. The diffusion of drugs in a solid tumor is limited by the penetration depth, which differs between drugs and cancers. The computation of this depth is important as it helps physicians to plan the treatment of the infected tissue. Although many efforts have been made to study and measure drug penetration depth, less work has been done on computing this length from a mathematical point of view. In this paper, we first propose a phase-lagging model for the diffusion of the drug in the tumor. Then, using this model on one side and the classical diffusion model on the other, we compute the drug penetration depth in the solid tumor. The computed value of the drug penetration depth is corroborated by comparison with values measured in experiments.
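
    For orientation only, the classical (non-phase-lagging) diffusion picture already yields a characteristic depth: assuming diffusivity D and first-order drug consumption at rate k, the steady state of

        \frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}} - kC
        \quad\Longrightarrow\quad
        C(x) = C_{0}\, e^{-x/\ell}, \qquad \ell = \sqrt{D/k}

    decays exponentially with penetration depth \ell; the paper's phase-lagging model refines this classical estimate.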

  1. Episodic memories predict adaptive value-based decision-making

    Science.gov (United States)

    Murty, Vishnu; FeldmanHall, Oriel; Hunter, Lindsay E.; Phelps, Elizabeth A; Davachi, Lila

    2016-01-01

    Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role of episodic memory, specifically item versus associative memory, in supporting value-based choice. Participants completed a task where they first learned the value associated with trial-unique lotteries. After a short delay, they completed a decision-making task where they could choose to re-engage with previously encountered lotteries or new, never-before-seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to re-engage more often with lotteries that had resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encode experiences of social violations, such as being treated unfairly, suggesting a bias in how individuals form associative memories within social contexts. Together, these findings provide an important integration of the episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior. PMID:26999046

  2. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Yang, Shan; Tong, Xiangqian

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter-based distributed generation. The similarity of the equivalent model for inverter-based distributed generation during normal and fault conditions of the distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...

  3. Precision phase estimation based on weak-value amplification

    Science.gov (United States)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compete well with wide-spectrum light phase weak measurements and outperform the standard homodyne phase detection technique.

  4. The curvature calculation mechanism based on simple cell model.

    Science.gov (United States)

    Yu, Haiyang; Fan, Xingyu; Song, Aiqi

    2017-07-20

    A conclusion has not yet been reached on how exactly the human visual system detects curvature. This paper demonstrates how orientation-selective simple cells can be used to construct curvature-detecting neural units. Through fixed arrangements, multiple plurality cells were constructed to simulate curvature cells with an output proportional to their curvature. In addition, this paper offers a solution to the problem of narrow detection range under fixed resolution by selecting an output value across multiple resolutions. Curvature cells can be treated as concrete models of an end-stopped mechanism, and they can be used to further understand "curvature-selective" characteristics and to explain basic psychophysical findings and perceptual phenomena in current studies.

  5. AN EXAMPLE-BASED, DIAGNOSTIC INVESTIGATION OF VALUE CREATION AND VALUE DESTRUCTION BY CORPORATE ACTIVISTS

    Directory of Open Access Journals (Sweden)

    GABURICI Matei

    2014-06-01

    This paper investigates, through an example-based scenario, the extent to which corporate activists create or destroy shareholder value; five high-profile campaigns related to four major players are analyzed. The foundation of the analysis is a variant of the DCF model which examines the cash flows to equity. In 4 out of 5 cases the financial metrics are computed in order to assess the performance of the subject company ex-ante and ex-post the activists' involvement.
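
    The valuation machinery is the standard discounted-cash-flow-to-equity calculation; a minimal sketch with invented figures (not the paper's cases):

        def equity_value(fcfe, r, g):
            """fcfe: explicit-period cash flows to equity; r: cost of equity;
            g: perpetual growth rate used for the terminal value."""
            pv = sum(cf / (1 + r) ** t for t, cf in enumerate(fcfe, 1))
            terminal = fcfe[-1] * (1 + g) / (r - g) / (1 + r) ** len(fcfe)
            return pv + terminal

        print(round(equity_value([100, 110, 120], r=0.10, g=0.02), 1))  # ≈ 1421.5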

  6. Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes

    International Nuclear Information System (INIS)

    Hebert, Alain; Coste, Mireille

    2002-01-01

    As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented where the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to Rowland's benchmark and to three assembly production cases, are also presented.

  7. Grid-based electronic structure calculations: The tensor decomposition approach

    Energy Technology Data Exchange (ETDEWEB)

    Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)

    2016-05-01

    We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm depends linearly on the one-dimensional grid size. Linear complexity allows for the usage of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.

  8. Influence of OSEM and segmented attenuation correction in the calculation of standardised uptake values for [18F]FDG PET

    International Nuclear Information System (INIS)

    Visvikis, D.; Costa, D.C.; Bomanji, J.; Gacinovic, S.; Ell, P.J.; Cheze-LeRest, C.

    2001-01-01

    Standardised Uptake Values (SUVs) are widely used in positron emission tomography (PET) as a semi-quantitative index of fluorine-18 labelled fluorodeoxyglucose uptake. The objective of this study was to investigate any bias introduced in the calculation of SUVs as a result of employing ordered subsets-expectation maximisation (OSEM) image reconstruction and segmented attenuation correction (SAC). Variable emission and transmission time durations were investigated. Both a phantom and a clinical evaluation of the bias were carried out. The software implemented in the GE Advance PET scanner was used. Phantom studies simulating tumour imaging conditions were performed. Since a variable count rate may influence the results obtained using OSEM, similar acquisitions were performed at total count rates of 34 kcps and 12 kcps. Clinical data consisted of 100 patient studies. Emission datasets of 5 and 15 min duration were combined with 15-, 3-, 2- and 1-min transmission datasets for the reconstruction of both phantom and patient studies. Two SUVs were estimated using the average (SUVavg) and the maximum (SUVmax) count density from regions of interest placed well inside structures of interest. The percentage bias of these SUVs compared with the values obtained using a reference image was calculated. The reference image was considered to be the one produced by filtered backprojection (FBP) image reconstruction with measured attenuation correction using the 15-min emission and transmission datasets for each phantom and patient study. A bias of 5%-20% was found for the SUVavg and SUVmax in the case of FBP with SAC using variable transmission times. In the case of OSEM with SAC, the bias increased to 10%-30%. An overall increase of 5%-10% was observed with the use of SUVmax. The 5-min emission dataset led to an increase in the bias of 25%-100%, with the larger increase recorded for the SUVmax. The results suggest that OSEM and SAC with 3 and 2 min transmission may be
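
    For reference, the SUV arithmetic itself is a one-liner (standard definition; the numbers below are a made-up example, with decay correction assumed):

        def suv(activity_conc_bq_ml, injected_dose_bq, body_weight_g):
            """SUV = tissue activity concentration / (injected dose / body weight)."""
            return activity_conc_bq_ml / (injected_dose_bq / body_weight_g)

        # 12 kBq/ml in a lesion, 370 MBq injected, 70 kg patient:
        print(suv(12_000, 370e6, 70_000))  # ≈ 2.27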

  9. Research on Customer Value Based on Extension Data Mining

    Science.gov (United States)

    Chun-Yan, Yang; Wei-Hua, Li

    Extenics is a new discipline for dealing with contradiction problems using a formalized model. Extension data mining (EDM) is a product of combining Extenics with data mining. It explores the acquisition of knowledge based on extension transformations, called extension knowledge (EK), taking advantage of extension methods and data mining technology. EK includes extensible classification knowledge, conductive knowledge and so on. Extension data mining technology (EDMT) is a new data mining technology for mining EK in databases or data warehouses. Customer value (CV) weighs the importance of a customer relationship for an enterprise, with the enterprise as the subject that assesses value and the customers as the objects whose value is assessed. CV varies continually. Mining the changing knowledge of CV in databases using EDMT, including quantitative-change knowledge and qualitative-change knowledge, can provide a foundation for an enterprise's customer relationship management (CRM) strategy. It can also provide a new idea for studying CV.

  10. Calculation of regimes of electro-erosion polishing with roughness value Ra less than 0.2 μm

    Energy Technology Data Exchange (ETDEWEB)

    Zolotykh, B.N.; Zolotykh, V.B.

    1984-01-01

    A technique for calculating "polishing" regimes of electro-erosion treatment (EET) is considered, and calculation results for a number of metals are presented. It is shown that the calculation technique for EET polishing regimes provides results close to the experimental ones and can be used for the design of special pulse generators, as well as in systems for the automated design of EET technological processes.

  11. MARKET SIGNALS IN VALUE-BASED PRICING PREMIUMS AND DISCOUNTS

    OpenAIRE

    Feuz, Dillon M.

    1999-01-01

    There is concern in the beef industry that present marketing practices may be impeding the transmission of economic signals from consumers to producers. Presently, fed cattle may be sold on a show list, pen-by-pen, or on an individual head basis, and may be priced using live weight, dressed weight, or grid or formula pricing. Market signals are more likely to reach producers if cattle are priced individually. Current value-based pricing practices are discussed. Three grid pricing systems are evaluated...

  12. Default values

    International Nuclear Information System (INIS)

    1987-08-01

    In making calculations for the purposes of radiation protection, numerical values for parameters used in the calculations are selected. In some cases, data directly applicable to the set of conditions for which the calculations are to be made are unavailable. Therefore, the selection of the values for these parameters may be based on more general data available from the literature or other sources. These values may be referred to as 'default values', that is, values used in default of those based on directly applicable data. The following policy will be applied by Atomic Energy Control Board (AECB) staff in reviewing the radiation protection aspects of submissions associated with licensing, in participating with other organizations in the development of codes and standards, and in any other work which relies to some extent on using default values

  13. Value-based distributed generator placements for service quality improvements

    Energy Technology Data Exchange (ETDEWEB)

    Teng, Jen-Hao; Chen, Chi-Fa [Department of Electrical Engineering, I-Shou University, No. 1, Section 1, Syuecheng Road, Dashu Township, Kaohsiung Country 840 (Taiwan); Liu, Yi-Hwa [Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei (Taiwan); Chen, Chia-Yen [Department of Computer Science, The University of Auckland (New Zealand)

    2007-03-15

    Distributed generator (DG) resources are small, self-contained electric generating plants that can provide power to homes, businesses or industrial facilities in distribution feeders. They can be used to reduce power loss and improve service reliability. However, the value of DGs depends largely on their types, sizes and locations as installed in distribution feeders. A value-based method is proposed in this paper to enhance reliability and quantify the benefits of DG placement. The benefits of DG placement described in this paper include power cost saving, power loss reduction, and reliability enhancement. The costs of DG placement include the investment, maintenance and operating costs. The proposed value-based method tries to find the best tradeoff between the costs and benefits of DG placement and then find the optimal types of DGs and their corresponding locations and sizes in distribution feeders. The derived formulations are solved by a genetic algorithm based method. Test results show that with proper type, size and installation site selection, DG placement can be used to improve system reliability, reduce customer interruption costs and save power cost, enabling electric utilities to obtain the maximal economic benefits. (author)

  14. Dissemination of Cultural Norms and Values: Agent-Based Modeling

    Directory of Open Access Journals (Sweden)

    Denis Andreevich Degterev

    2016-12-01

    This article shows how agent-based modeling allows us to explore the mechanisms of the dissemination of cultural norms and values, both within one country and in the whole world. In recent years, this type of simulation has become particularly prevalent in the analysis of international relations, growing more popular than system dynamics and discrete event simulation. The use of agent-based modeling in the analysis of international relations is connected with the agent-structure problem in international relations: structure and agents act as interdependent and dynamically changing entities in the process of their interaction. Agent-structure interaction can be modeled by means of the theory of complex adaptive systems using agent-based modeling techniques. One of the first examples of the use of agent-based modeling in political science is T. Schelling's model of racial segregation. On the basis of this model, the author shows how changes in behavioral patterns at the micro level impact the macro level. Patterns change due to the dynamics of cultural norms and values formed by mass media and other social institutions. The author surveys the main areas of modern application of agent-based modeling in international studies, including the analysis of ethnic conflicts and the formation of international coalitions. Particular attention is paid to Robert Axelrod's approach, based on the use of genetic algorithms, to the spread of cultural norms and values. Agent-based modeling shows how to create conditions under which norms that originally are not shared by a significant part of the population eventually spread everywhere. The practical application of these algorithms is shown by the author using the example of the situation in Ukraine in 2015-2016. The article also reveals the mechanisms of the international spread of cultural norms and values. The main think-tanks using agent-based modeling in international studies are
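
    The classic Axelrod culture-dissemination model referred to above fits in a few dozen lines; a minimal sketch (grid size, feature and trait counts are arbitrary demo choices, not the article's parameters):

        import random

        random.seed(1)
        N, F, Q, STEPS = 10, 5, 10, 50_000     # grid size, features, traits, steps
        grid = [[[random.randrange(Q) for _ in range(F)] for _ in range(N)]
                for _ in range(N)]

        def neighbour(i, j):
            di, dj = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            return (i + di) % N, (j + dj) % N  # torus wrap-around

        for _ in range(STEPS):
            i, j = random.randrange(N), random.randrange(N)
            k, l = neighbour(i, j)
            a, b = grid[i][j], grid[k][l]
            shared = sum(x == y for x, y in zip(a, b))
            # interact with probability equal to cultural similarity
            if 0 < shared < F and random.random() < shared / F:
                f = random.choice([f for f in range(F) if a[f] != b[f]])
                a[f] = b[f]                    # adopt one differing feature

        cultures = {tuple(grid[i][j]) for i in range(N) for j in range(N)}
        print("distinct cultures remaining:", len(cultures))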

  15. Simulation and analysis of main steam control system based on heat transfer calculation

    Science.gov (United States)

    Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai

    2018-05-01

    In this paper, the boiler of a 300 MW thermal power plant unit was studied. MATLAB was used to write a calculation program for the heat transfer process between the main steam and the boiler flue gas, and the amount of spray water required to keep the main steam temperature at the target value was calculated. The heat transfer calculation program was then introduced into the Simulink simulation platform to build a control system based on multiple-model switching and heat transfer calculation. The results show that the multiple-model switching control system based on heat transfer calculation not only overcomes the large inertia and large hysteresis characteristics of the main steam temperature, but also adapts to changes in boiler load.

  16. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N A; Bin Zaheer, Kashif

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.

  17. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Directory of Open Access Journals (Sweden)

    Muhammad Imran Babar

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent, which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.

  18. StakeMeter: Value-Based Stakeholder Identification and Quantification Framework for Value-Based Software Systems

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N. A.; Zaheer, Kashif Bin

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called ‘StakeMeter’. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error. PMID:25799490

  19. The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.

    Science.gov (United States)

    Youn, Ahrim; Wang, Shuang

    2018-01-01

    Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/~sw2206/softwares.htm.

  20. An approach to value-based simulator selection: The creation and evaluation of the simulator value index tool.

    Science.gov (United States)

    Rooney, Deborah M; Hananel, David M; Covington, Benjamin J; Dionise, Patrick L; Nykamp, Michael T; Pederson, Melvin; Sahloul, Jamal M; Vasquez, Rachael; Seagull, F Jacob; Pinsky, Harold M; Sweier, Domenica G; Cooke, James M

    2018-04-01

    Currently there is no reliable, standardized mechanism to support health care professionals during the evaluation of and procurement processes for simulators. A tool founded on best practices could facilitate simulator purchase processes. In a 3-phase process, we identified top factors considered during the simulator purchase process through expert consensus (n = 127), created the Simulator Value Index (SVI) tool, evaluated targeted validity evidence, and evaluated the practical value of this SVI. A web-based survey was sent to simulation professionals. Participants (n = 79) used the SVI and provided feedback. We evaluated the practical value of 4 tool variations by calculating their sensitivity to predict a preferred simulator. Seventeen top factors were identified and ranked. The top 2 were technical stability/reliability of the simulator and customer service, with no practical differences in rank across institution or stakeholder role. Full SVI variations predicted successfully the preferred simulator with good (87%) sensitivity, whereas the sensitivity of variations in cost and customer service and cost and technical stability decreased (≤54%). The majority (73%) of participants agreed that the SVI was helpful at guiding simulator purchase decisions, and 88% agreed the SVI tool would help facilitate discussion with peers and leadership. Our findings indicate the SVI supports the process of simulator purchase using a standardized framework. Sensitivity of the tool improved when factors extend beyond traditionally targeted factors. We propose the tool will facilitate discussion amongst simulation professionals dealing with simulation, provide essential information for finance and procurement professionals, and improve the long-term value of simulation solutions. Limitations and application of the tool are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Methods of developing core collections based on the predicted genotypic value of rice (Oryza sativa L.).

    Science.gov (United States)

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

    The selection of an appropriate sampling strategy and a clustering method is important in the construction of core collections based on predicted genotypic values in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values for 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
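
    The distance-plus-clustering pipeline is easy to prototype; a sketch on simulated genotypic values (the study used AUP-predicted values for 992 varieties and 13 traits, and deviation sampling rather than the first-member shortcut below):

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 13))                # stand-in genotypic values
        VI = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance matrix
        D = pdist(X, metric="mahalanobis", VI=VI)     # pairwise Mahalanobis distances
        tree = linkage(D, method="average")           # unweighted pair-group average
        groups = fcluster(tree, t=20, criterion="maxclust")
        core = [int(np.where(groups == g)[0][0]) for g in np.unique(groups)]
        print(core)                                   # one accession per cluster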

  2. Statistical lamb wave localization based on extreme value theory

    Science.gov (United States)

    Harley, Joel B.

    2018-04-01

    Guided wave localization methods based on delay-and-sum imaging, matched field processing, and other techniques have been designed and researched to create images that locate and describe structural damage. The maximum value of these images typically represents an estimated damage location. Yet, it is often unclear if this maximum value, or any other value in the image, is a statistically significant indicator of damage. Furthermore, there are currently few, if any, approaches to assess the statistical significance of guided wave localization images. As a result, we present statistical delay-and-sum and statistical matched field processing localization methods to create statistically significant images of damage. Our framework uses constant rate of false alarm statistics and extreme value theory to detect damage with little prior information. We demonstrate our methods with in situ guided wave data from an aluminum plate to detect two 0.75 cm diameter holes. Our results show an expected improvement in statistical significance as the number of sensors increases. With seventeen sensors, both methods successfully detect damage with statistical significance.
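
    The delay-and-sum backbone that both statistical methods build on can be sketched compactly; the pulse-echo geometry and all data below are simplifying assumptions, and the extreme-value thresholding that is the paper's contribution is omitted:

        import numpy as np

        def das_image(signals, sensors, grid, c, fs):
            """signals: (n_sensors, n_samples); sensors, grid: (n, 2) coords in m;
            c: wave speed in m/s; fs: sampling rate in Hz."""
            img = np.zeros(len(grid))
            for p, xy in enumerate(grid):
                d = np.linalg.norm(sensors - xy, axis=1)          # sensor-pixel range
                idx = np.minimum((2.0 * d / c * fs).astype(int),  # round-trip delay
                                 signals.shape[1] - 1)
                img[p] = np.sum(signals[np.arange(len(sensors)), idx]) ** 2
            return img

        rng = np.random.default_rng(0)
        sig = rng.normal(size=(8, 2000))                          # synthetic waveforms
        sens = rng.uniform(0.0, 0.5, size=(8, 2))                 # 8 sensors on a plate
        pts = np.stack(np.meshgrid(np.linspace(0, 0.5, 20),
                                   np.linspace(0, 0.5, 20)), -1).reshape(-1, 2)
        print(das_image(sig, sens, pts, c=5000.0, fs=1e6).argmax())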

  3. GIS supported calculations of 137Cs deposition in Sweden based on precipitation data

    International Nuclear Information System (INIS)

    Almgren, Sara; Nilsson, Elisabeth; Erlandsson, Bengt; Isaksson, Mats

    2006-01-01

    It is of interest to know the spatial variation and the amount of 137Cs, e.g. in case of an accident with a radioactive discharge. In this study, the spatial distribution of the quarterly 137Cs deposition over Sweden due to nuclear weapons fallout (NWF) during the period 1962-1966 was determined by relating the measured deposition density at a reference site to the amount of precipitation. Measured quarterly values of 137Cs deposition density per unit precipitation at three reference sites and quarterly precipitation at 62 weather stations distributed over Sweden were used in the calculations. The reference sites were assumed to represent areas with different quarterly mean precipitation. The extent of these areas was determined from the distribution of the mean measured precipitation between 1961 and 1990 and varied according to seasonal variations in the mean precipitation pattern. Deposition maps were created by interpolation within a geographical information system (GIS). Both integrated (total) and cumulative (decay-corrected) deposition densities were calculated. The lowest levels of NWF 137Cs deposition density were noted in north-eastern and eastern parts of Sweden and the highest levels in the western parts of Sweden. Furthermore, the deposition density of 137Cs resulting from the Chernobyl accident was determined for an area in western Sweden based on precipitation data. The highest levels of Chernobyl 137Cs in western Sweden were found in the western parts of the area along the coast and the lowest in the east. The sum of the deposition densities from NWF and Chernobyl in western Sweden was then compared to the total activity measured in soil samples at 27 locations. The predicted values of this study show good agreement with the measured values and with other studies.

  4. What is a free customer worth? Armchair calculations of nonpaying customers' value can lead to flawed strategies.

    Science.gov (United States)

    Gupta, Sunil; Mela, Carl F

    2008-11-01

    Free customers who are subsidized by paying customers are essential to a vast array of businesses, such as media companies, employment services, and even IT providers. But because they generate revenue only indirectly, figuring out the true value of those customers--and how much attention to devote to them--has always been a challenge. Traditional customer-valuation models don't help; they focus exclusively on paying customers and largely ignore network effects, or how customers help draw other customers to a business. Now a new model, devised by professors Gupta, of Harvard Business School, and Mela, of Fuqua School of Business, takes into account not only direct network effects (where buyers attract more buyers or sellers more sellers) but also indirect network effects (where buyers attract more sellers or vice versa). The model calculates the precise long-term impact of each additional free customer on a company's profits, factoring in the degree to which he or she brings in other customers--whether free or paying--and the ripple effect of those customers. The model helped an online auction house make several critical decisions. The business made its money on fees charged to sellers but recognized that its free customers--its buyers--were valuable, too. As competition heated up, the company worried that it wasn't wooing enough buyers. Using the model, the business discovered that the network effects of buyers were indeed large and that those customers were worth over $1,000 each--much more than had been assumed. Armed with that information, the firm increased its research on buyers, invested more in targeting them with ads, and improved their experience. The model also helped the company identify the effects of various pricing strategies on sellers, showing that they became less price-sensitive over time. As a result, the company raised the fees it charged them as well.

  5. Lost in interpretation: should the highest VC value be used to calculate the FEV1/VC ratio?

    Directory of Open Access Journals (Sweden)

    Fortis S

    2016-09-01

    Spyridon Fortis, Department of Medicine, Division of Pulmonary, Critical Care and Occupational Medicine, University of Iowa, Iowa City, IA, USA. Airflow obstruction, or obstructive ventilatory defect (OVD), is defined as a low ratio of forced expiratory volume in 1 second (FEV1) to vital capacity (VC). VC can be measured in various ways, and the definition of a "low FEV1/VC ratio" varies. VC can be measured during forced expiration before bronchodilators (forced vital capacity [FVC]) and after bronchodilators (post-FVC), during slow expiration (slow vital capacity [SVC]), and during inspiration (inspiratory vital capacity [IVC]). Theoretically, in a healthy person, VC values should be the same regardless of the maneuver used. Nevertheless, SVC is usually larger than FVC except in patients with no OVD and body mass index <25 kg/m2.1 In obstructive lung diseases, FVC may be reduced, which may result in an increase of the FEV1/FVC ratio and misdiagnosis.2 For that reason, the American Thoracic Society–European Respiratory Society recommends using SVC or IVC to calculate the FEV1/VC ratio.2 Approximately 10% of smokers have FEV1% predicted <80% and FEV1/FVC >70%, a pattern known as preserved ratio impaired spirometry.3 Of all the subjects with FVC below the lower limit of normal (LLN) and FEV1/FVC > LLN, only 64% have restriction in lung volumes. The remaining 36% have a nonspecific pulmonary function test (PFT) pattern.4 Approximately 15% of patients with this nonspecific PFT pattern develop OVD in follow-up PFTs.4 It is possible that a portion of patients with obstructive lung disease remain underdiagnosed when FVC is used to compute the FEV1/FVC ratio. View the original paper by Torén and colleagues.

  6. Environment-based pin-power reconstruction method for homogeneous core calculations

    International Nuclear Information System (INIS)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-01-01

    Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)

  7. How to calculate median Pregnancy-Associated Plasma Protein-A values to predict preeclampsia? Do We Need a Newer Formula?

    Directory of Open Access Journals (Sweden)

    Burçin Karamustafaoğlu Balcı

    2016-12-01

    Full Text Available Objective: Preeclampsia is one of the major issues in maternal–fetal medicine. Early risk stratification may be beneficial, and it is the aim of several studies. Our goal was to investigate whether the PAPP-A MoM calculated for first-trimester Down's syndrome screening, or the MoM calculated according to Ong's formula, can be used to predict the risk of preeclampsia, or whether another method of calculating PAPP-A MoM derived from non-preeclamptic cases is needed. Study Design: For this retrospective study, data from 150 randomly selected singleton pregnant women who did not develop preeclampsia were used to create a formula for the median value of PAPP-A. PAPP-A values of this subgroup were plotted against gestational age, and curve-fit analysis was performed to determine the best-fitted regression line, yielding a formula for the median value of our cases. PAPP-A MoM values were then calculated for each subject according to Ong's formula and our formula; MoM values derived from first-trimester screening were already available. ROC curve and DeLong's pairwise comparison analyses were used to investigate which MoM value is more predictive of preeclampsia. Results: Although the area under the curve for the MoM values derived from this study was the highest, DeLong's pairwise comparison analysis showed no statistically significant difference between the three curves. Conclusion: A PAPP-A MoM calculation specific to preeclampsia does not seem to be necessary; the PAPP-A MoM obtained from the first-trimester aneuploidy scan can be used to predict preeclampsia.
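
    As a sketch of the general MoM computation the study relies on: a median curve is fitted to PAPP-A values of unaffected controls as a function of gestational age, and each observed value is divided by the fitted median at that age. The exponential form and the coefficients below are placeholders, not the study's fitted regression or Ong's formula.

        import math

        # Hypothetical median curve fitted to non-preeclamptic controls:
        # median PAPP-A as an exponential function of gestational age (days).
        A, B = 0.03, 0.035  # illustrative coefficients only

        def median_pappa(ga_days):
            return A * math.exp(B * ga_days)

        def pappa_mom(observed, ga_days):
            """Multiple of the median: observed value / expected median at that age."""
            return observed / median_pappa(ga_days)

        print(round(pappa_mom(observed=2.5, ga_days=80), 2))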

  8. Value-based recruitment in midwifery: do the values align with what women say is important to them?

    OpenAIRE

    Callwood, Alison; Cooke, Debbie; Allan, Helen T.

    2016-01-01

    Aim: To discuss theoretical conceptualisation and definition of values and values-based recruitment in the context of women’s views about what they would like from their midwife. Background: Values-based recruitment received headline status in the UK government’s response to pervasive deficiencies in compassionate care identified in the health service. Core values which aim to inform service user’s experience are defined in the National Health Service Constitution but clarity about wh...

  9. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm yielded a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
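
    The following is a minimal sketch of the bootstrap idea described above, checking only AUC stability; the published algorithm additionally tracks smoothed observed event probabilities and the estimated calibration index. The function name and the tolerance are assumptions for illustration.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        def enough_events(p_pred, y, n_events, n_boot=500, auc_tol=0.05):
            """Crude check: do bootstrap validation samples with `n_events`
            events (and as many non-events) reproduce the full-sample AUC
            within `auc_tol` in 95% of draws?"""
            full_auc = roc_auc_score(y, p_pred)
            idx_ev, idx_ne = np.where(y == 1)[0], np.where(y == 0)[0]
            dev = []
            for _ in range(n_boot):
                take = np.concatenate([rng.choice(idx_ev, n_events),
                                       rng.choice(idx_ne, n_events)])
                dev.append(abs(roc_auc_score(y[take], p_pred[take]) - full_auc))
            return np.quantile(dev, 0.95) <= auc_tol

    The smallest n_events for which such a check passes plays the role of the case study's 69-event result.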

  10. Assessment of the Swedish EQ-5D experience-based value sets in a total hip replacement population.

    Science.gov (United States)

    Nemes, Szilárd; Burström, Kristina; Zethraeus, Niklas; Eneqvist, Ted; Garellick, Göran; Rolfson, Ola

    2015-12-01

    All patients undergoing elective total hip replacement (THR) in Sweden are asked to complete a survey, including the EQ-5D. Thus far, EQ-5D values have been presented using the UK TTO value set based on hypothetical values. A shift to the use of the recently introduced Swedish experience-based value set, derived from a representative Swedish population, is an appealing alternative. To investigate how accurately the Swedish experience-based VAS value set predicts observed EQ VAS values and to compare correlations between Swedish and UK value sets, including two provisional value sets derived from the THR population. Pre- and one-year postoperative data from 56,062 THR patients from the Swedish Hip Arthroplasty Register were used. Agreement between the observed and the predicted EQ VAS values was assessed with correlation. Based on pre- and postoperative data, we constructed two provisional VAS value sets. Correlations between observed and calculated values using the Swedish VAS value set were moderate (r = 0.46) in preoperative data and high (r = 0.72) in postoperative data. Correlations between UK and register-based value sets were consistently lower than those for the Swedish value sets. Register-based values and Swedish values were highly correlated. The Swedish value sets are more accurate in terms of representation of the Swedish THR patients than the currently used UK TTO value set. We find it feasible to use the experience-based Swedish value sets for further presentation of EQ-5D values in the Swedish THR population.

  11. Consolidating duodenal and small bowel toxicity data via isoeffective dose calculations based on compiled clinical data.

    Science.gov (United States)

    Prior, Phillip; Tai, An; Erickson, Beth; Li, X Allen

    2014-01-01

    To consolidate duodenum and small bowel toxicity data from clinical studies with different dose fractionation schedules using the modified linear quadratic (MLQ) model. A methodology for adjusting the dose-volume (D,v) parameters to different levels of normal tissue complication probability (NTCP) was presented. A set of NTCP model parameters for duodenum toxicity was estimated by χ² fitting using literature-based tolerance dose and generalized equivalent uniform dose (gEUD) data. These model parameters were then used to convert (D,v) data into the isoeffective dose in 2 Gy per fraction, (D_MLQED2, v), and to convert these parameters to an isoeffective dose at another NTCP, (D_MLQED2', v). The literature search yielded 5 reports useful in making estimates of duodenum and small bowel toxicity. The NTCP model parameters were found to be TD50(1) = 60.9 ± 7.9 Gy, m = 0.21 ± 0.05, and δ = 0.09 ± 0.03 Gy⁻¹. Isoeffective dose calculations and toxicity rates associated with hypofractionated radiation therapy reports were found to be consistent with clinical data having different fractionation schedules. Values of (D_MLQED2', v) between different NTCP levels remain consistent over a range of 5%-20%. MLQ-based isoeffective calculations of dose-response data corresponding to grade ≥2 duodenum toxicity were found to be consistent with one another within the calculation uncertainty. The (D_MLQED2, v) data could be used to determine duodenum and small bowel dose-volume constraints for new dose escalation strategies. Copyright © 2014 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
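
    For orientation, the conversion underlying "isoeffective dose in 2 Gy per fraction" is sketched below with the classical LQ model; the paper itself uses a modified LQ (MLQ) model to better handle large doses per fraction, and the α/β value here is an assumed one.

        def eqd2(total_dose, dose_per_fraction, alpha_beta=3.0):
            """Isoeffective dose in 2 Gy fractions under the classical LQ model:
            EQD2 = D * (d + alpha/beta) / (2 + alpha/beta).
            alpha_beta = 3.0 Gy is an assumed value for late bowel toxicity."""
            return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

        # e.g. a hypofractionated 30 Gy in 5 fractions (6 Gy per fraction):
        print(round(eqd2(30.0, 6.0), 1))  # 54.0 Gy in 2 Gy equivalents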

  12. Research on Calculation of the IOL Tilt and Decentration Based on Surface Fitting

    Directory of Open Access Journals (Sweden)

    Lin Li

    2013-01-01

    Full Text Available The tilt and decentration of an intraocular lens (IOL) result in defocus, astigmatism, and wavefront aberration after the operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxated lenses after the operation, we fitted a spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. From the established relationship between IOL tilt (decentration) and the scanned angle at which a given AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject were given; the horizontal and vertical tilt was also obtained. Accordingly, the possible errors of IOL tilt and decentration in the method employed by the AS-OCT instrument were identified. Based on 6 to 12 AS-OCT images at different directions, the tilt angle and decentration values were shown, respectively. The method of fitting a surface to the IOL surfaces can accurately analyze the IOL’s location, and six AS-OCT images at three pairs of symmetrical directions are enough to obtain the tilt angle and decentration value of the IOL precisely.
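
    A least-squares sphere fit of the kind the authors describe can be written as a small linear problem; the sketch below is a generic implementation, not the authors' code. The tilt axis could then be estimated from the line joining the two fitted centres of the anterior and posterior surfaces.

        import numpy as np

        def fit_sphere(points):
            """Least-squares sphere fit to an (N, 3) array of surface points.
            Expands (x-a)^2 + (y-b)^2 + (z-c)^2 = r^2 into the linear system
            2ax + 2by + 2cz + (r^2 - a^2 - b^2 - c^2) = x^2 + y^2 + z^2."""
            A = np.c_[2.0 * points, np.ones(len(points))]
            f = (points ** 2).sum(axis=1)
            sol, *_ = np.linalg.lstsq(A, f, rcond=None)
            centre = sol[:3]
            radius = np.sqrt(sol[3] + centre @ centre)
            return centre, radius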

  13. Research on calculation of the IOL tilt and decentration based on surface fitting.

    Science.gov (United States)

    Li, Lin; Wang, Ke; Yan, Yan; Song, Xudong; Liu, Zhicheng

    2013-01-01

    The tilt and decentration of an intraocular lens (IOL) result in defocus, astigmatism, and wavefront aberration after the operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxated lenses after the operation, we fitted a spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. From the established relationship between IOL tilt (decentration) and the scanned angle at which a given AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject were given; the horizontal and vertical tilt was also obtained. Accordingly, the possible errors of IOL tilt and decentration in the method employed by the AS-OCT instrument were identified. Based on 6-12 AS-OCT images at different directions, the tilt angle and decentration values were shown, respectively. The method of fitting a surface to the IOL surfaces can accurately analyze the IOL's location, and six AS-OCT images at three pairs of symmetrical directions are enough to obtain the tilt angle and decentration value of the IOL precisely.

  14. The Risks and Rewards of Value-Based Reimbursement.

    Science.gov (United States)

    Henkel, Robert J; Maryland, Patricia A

    2015-01-01

    As healthcare systems across the country shift to value-based care, they face an enormous challenge. Not only must they reimagine how they identify, engage, and manage the care of patients, they also need to determine new ways of engaging and aligning physicians and other caregivers in creating better-coordinated care across the continuum. This article explores how healthcare systems making the transition from volume to value can maximize their reward while managing their risk. As the largest not-for-profit healthcare system in the United States and the largest Catholic healthcare system in the world, Ascension is committed to making its own transition, marked by broad-based innovation. We call this goal the Quadruple Aim: improving health outcomes, patient experiences, and provider experiences while lowering the overall cost of care. Healthcare systems and providers have many value-based models to choose from, including pay for performance (P4P), shared savings, bundled payments, shared risk, global capitation, and provider-sponsored health plans. Analysis of these options should include an evaluation of market readiness (i.e., the ability of a health system to align with the needs of employers or commercial insurers in a given market). Healthcare systems also must be prepared to invest in resources that facilitate effective transitions and continuity of care--for example, care management. In addition, they need to recognize that as they focus on wellness, inpatient volumes will decline, requiring cost-structure adjustments and added ancillary services to compensate for this decline. Some healthcare systems are even exploring the possibility of becoming their own payer, taking on more risk and responsibility for the health of patients and populations.

  15. Values for digestible indispensable amino acid scores (DIAAS) for some dairy and plant proteins may better describe protein quality than values calculated using the concept for protein digestibility-corrected amino acid scores (PDCAAS).

    Science.gov (United States)

    Mathai, John K; Liu, Yanhong; Stein, Hans H

    2017-02-01

    An experiment was conducted to compare values for digestible indispensable amino acid scores (DIAAS) for four animal proteins and four plant proteins with values calculated as recommended for protein digestibility-corrected amino acid scores (PDCAAS), but determined in pigs instead of in rats. Values for standardised total tract digestibility (STTD) of crude protein (CP) and standardised ileal digestibility (SID) of amino acids (AA) were calculated for whey protein isolate (WPI), whey protein concentrate (WPC), milk protein concentrate (MPC), skimmed milk powder (SMP), pea protein concentrate (PPC), soya protein isolate (SPI), soya flour and whole-grain wheat. The PDCAAS-like values were calculated using the STTD of CP to estimate AA digestibility and values for DIAAS were calculated from values for SID of AA. Results indicated that values for SID of most indispensable AA in WPI, WPC and MPC were greater (P<0·05) than for SMP, PPC, SPI, soya flour and wheat. With the exception of arginine and tryptophan, the SID of all indispensable AA in SPI was greater (P<0·05) than in soya flour, and with the exception of threonine, the SID of all indispensable AA in wheat was less (P<0·05) than in all other ingredients. If the same scoring pattern for children between 6 and 36 months was used to calculate PDCAAS-like values and DIAAS, PDCAAS-like values were greater (P<0·05) than DIAAS values for SMP, PPC, SPI, soya flour and wheat indicating that PDCAAS-like values estimated in pigs may overestimate the quality of these proteins.
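
    The computational difference between the two scores can be illustrated as follows: DIAAS applies a separate ileal digestibility to each amino acid, whereas the PDCAAS-like value applies one total-tract crude-protein digestibility to all of them. All numbers below are placeholders, not the study's data.

        # mg amino acid per g protein; reference pattern for 6-36 month children.
        reference = {"lysine": 57, "threonine": 31, "tryptophan": 8.5}       # assumed
        test_aa   = {"lysine": 62, "threonine": 35, "tryptophan": 9.0}       # assumed
        sid       = {"lysine": 0.88, "threonine": 0.84, "tryptophan": 0.90}  # per-AA SID
        sttd_cp   = 0.92    # single STTD of crude protein applied to every AA

        diaas  = 100 * min(test_aa[a] * sid[a]  / reference[a] for a in reference)
        pdcaas = 100 * min(test_aa[a] * sttd_cp / reference[a] for a in reference)
        print(round(diaas), round(pdcaas))  # the PDCAAS-like value comes out higher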

  16. A Weak Value Based QKD Protocol Robust Against Detector Attacks

    Science.gov (United States)

    Troupe, James

    2015-03-01

    We propose a variation of the BB84 quantum key distribution protocol that utilizes the properties of weak values to ensure the validity of the quantum bit error rate estimates used to detect an eavesdropper. The protocol is shown theoretically to be secure against recently demonstrated attacks utilizing detector blinding and control, and should also be robust against all detector-based hacking. Importantly, the new protocol promises to achieve this additional security without negatively impacting the secure key generation rate as compared to that originally promised by the standard BB84 scheme. Implementation of the weak measurements needed by the protocol should be very feasible using standard quantum optical techniques.

  17. Value-based insurance design: benefits beyond cost and utilization.

    Science.gov (United States)

    Gibson, Teresa B; Maclean, Ross J; Chernew, Michael E; Fendrick, A Mark; Baigel, Colin

    2015-01-01

    As value-based insurance design (VBID) programs proliferate, evidence is emerging on the impact of VBID. To date, studies have largely measured VBID impact on utilization, and a few studies have assessed its impact on quality, outcomes, and cost. In this commentary we discuss these domains, summarize evidence, and propose the extension of measurement of VBID impact into areas including workplace productivity and quality of life, employee and patient engagement, and talent attraction and retention. We contend that VBID evaluations should consider a broad variety of programmatic dividends on both humanistic and health-related outcomes.

  18. Interactive value-based curriculum: a pilot study.

    Science.gov (United States)

    Bowman Peterson, Jill M; Duffy, Briar; Duran, Alisa; Gladding, Sophia P

    2018-03-06

    Current health care costs are unsustainable, with a large percentage of waste attributed to doctor practices. Medical educators are developing curricula to address value-based care (VBC) in education. There is, however, a paucity of curricula and assessments addressing levels higher than 'knows' at the base of Miller's pyramid of assessment. Our objective was to: (1) teach residents the principles of VBC using active learning strategies; and (2) develop and pilot a tool to assess residents' ability to apply principles of VBC at the higher level of 'knows how' on Miller's pyramid. Residents in medicine, medicine-paediatrics and medicine-dermatology participated in a 5-week VBC morning report curriculum using active learning techniques. Early sessions targeted knowledge and later sessions emphasised the application of VBC principles. Thirty residents attended at least one session and completed both pre- and post-intervention tests, using a newly developed case-based assessment tool featuring a 'waste score' balanced with 'standard of care'. Residents, on average, reduced their waste score from pre-intervention to post-intervention [mean 8.8 (SD 6.3) versus mean 4.7 (SD 4.6), p = 0.001]. For those who reduced their waste score, most maintained or improved their standard of care. Our results suggest that residents may be able to decrease health care waste, with the majority maintaining or improving their management of care in a case-based assessment after participation in the curriculum. We are working to further incorporate VBC principles into more morning reports, and to develop further interventions and assessments to evaluate our residents at higher levels on Miller's pyramid of assessment. © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  19. Value-based health care in inflammatory bowel diseases : creating the value quotient

    NARCIS (Netherlands)

    Deen, Welmoed Kirsten van

    2016-01-01

    The essence of VBHC is to improve patients’ outcomes at lower costs. This thesis attempts to construct the value quotient (vQ) for IBD: a metric for value which incorporates patient value, defined as a combination of disease control, quality of life, and productivity, in the numerator, and divides it by the costs of care.

  20. Metrix Matrix: A Cloud-Based System for Tracking Non-Relative Value Unit Value-Added Work Metrics.

    Science.gov (United States)

    Kovacs, Mark D; Sheafor, Douglas H; Thacker, Paul G; Hardie, Andrew D; Costello, Philip

    2018-03-01

    In the era of value-based medicine, it will become increasingly important for radiologists to provide metrics that demonstrate their value beyond clinical productivity. In this article the authors describe their institution's development of an easy-to-use system for tracking value-added but non-relative value unit (RVU)-based activities. Metrix Matrix is an efficient cloud-based system for tracking value-added work. A password-protected home page contains links to web-based forms created using Google Forms, with collected data populating Google Sheets spreadsheets. Value-added work metrics selected for tracking included interdisciplinary conferences, hospital committee meetings, consulting on nonbilled outside studies, and practice-based quality improvement. Over a period of 4 months, value-added work data were collected for all clinical attending faculty members in a university-based radiology department (n = 39). Time required for data entry was analyzed for 2 faculty members over the same time period. Thirty-nine faculty members (equivalent to 36.4 full-time equivalents) reported a total of 1,223.5 hours of value-added work time (VAWT). A formula was used to calculate "value-added RVUs" (vRVUs) from VAWT. VAWT amounted to 5,793.6 vRVUs or 6.0% of total work performed (vRVUs plus work RVUs [wRVUs]). Were vRVUs considered equivalent to wRVUs for staffing purposes, this would require an additional 2.3 full-time equivalents, on the basis of average wRVU calculations. Mean data entry time was 56.1 seconds per day per faculty member. As health care reimbursement evolves with an emphasis on value-based medicine, it is imperative that radiologists demonstrate the value they add to patient care beyond wRVUs. This free and easy-to-use cloud-based system allows the efficient quantification of value-added work activities. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
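
    The abstract does not give the conversion formula, but its totals (1,223.5 hours of VAWT yielding 5,793.6 vRVUs) imply roughly 4.74 vRVUs per hour; the sketch below uses that inferred factor, which is an assumption rather than the authors' published formula.

        VRVU_PER_HOUR = 5793.6 / 1223.5   # inferred from the reported totals

        def vrvus(vawt_hours):
            """Convert value-added work time (hours) to value-added RVUs."""
            return vawt_hours * VRVU_PER_HOUR

        total_vrvu = vrvus(1223.5)
        # vRVUs were reported as 6.0% of total work (vRVUs + wRVUs):
        total_wrvu = total_vrvu / 0.06 - total_vrvu
        print(round(total_vrvu, 1), round(total_wrvu))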

  1. 30 CFR 206.105 - What records must I keep to support my calculations of value under this subpart?

    Science.gov (United States)

    2010-07-01

    ... requirements are found at part 207 of this chapter. (c) MMS may review and audit your data, and MMS will direct you to use a different value if it determines that the reported value is inconsistent with the...

  2. Value-based management: Theoretical base, shareholders' request and the concept

    Directory of Open Access Journals (Sweden)

    Kaličanin Đorđe M.

    2005-01-01

    Full Text Available The pressure of financial markets, which is a consequence of the shareholder revolution, directly affects the solution to the following dilemma: is the mission of corporations to maximize shareholders' wealth or to satisfy the interests of other stakeholders? The domination of shareholder theory has caused the appearance of the value-based management concept. Value-based management is a relevant concept and process of management in the modern environment. The importance of shareholder value requires transformation of the traditional enterprise into a value-driven enterprise. This paper addresses the theoretical base, the shareholder revolution and the main characteristics of value-based management.

  3. Diagnostic accuracy of calculated serum osmolarity to predict dehydration in older people: adding value to pathology laboratory reports.

    Science.gov (United States)

    Hooper, Lee; Abdelhamid, Asmaa; Ali, Adam; Bunn, Diane K; Jennings, Amy; John, W Garry; Kerry, Susan; Lindner, Gregor; Pfortmueller, Carmen A; Sjöstrand, Fredrik; Walsh, Neil P; Fairweather-Tait, Susan J; Potter, John F; Hunter, Paul R; Shepstone, Lee

    2015-10-21

    To assess which osmolarity equation best predicts directly measured serum/plasma osmolality and whether its use could add value to routine blood test results through screening for dehydration in older people. Diagnostic accuracy study. Older people (≥65 years) in 5 cohorts: Dietary Strategies for Healthy Ageing in Europe (NU-AGE, living in the community), Dehydration Recognition In our Elders (DRIE, living in residential care), Fortes (admitted to acute medical care), Sjöstrand (emergency room) or Pfortmueller cohorts (hospitalised with liver cirrhosis). Directly measured serum/plasma osmolality: current dehydration (serum osmolality>300 mOsm/kg), impending/current dehydration (≥295 mOsm/kg). 39 osmolarity equations calculated using serum indices from the same blood draw as directly measured osmolality. Across 5 cohorts 595 older people were included, of whom 19% were dehydrated (directly measured osmolality>300 mOsm/kg). Of 39 osmolarity equations, 5 showed reasonable agreement with directly measured osmolality and 3 had good predictive accuracy in subgroups with diabetes and poor renal function. Two equations were characterised by narrower limits of agreement, low levels of differential bias and good diagnostic accuracy in receiver operating characteristic plots (areas under the curve>0.8). The best equation was osmolarity = 1.86 × (Na⁺ + K⁺) + 1.15 × glucose + urea + 14 (all measured in mmol/L). It appeared useful in people aged ≥65 years with and without diabetes, poor renal function, dehydration, in men and women, with a range of ages, health, cognitive and functional status. Some commonly used osmolarity equations work poorly, and should not be used. Given costs and prevalence of dehydration in older people we suggest use of the best formula by pathology laboratories using a cutpoint of 295 mOsm/L (sensitivity 85%, specificity 59%), to report dehydration risk opportunistically when serum glucose, urea and electrolytes are measured for other reasons in
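
    Since the abstract states the winning equation and cutpoint in full, it translates directly into code; only the example inputs below are invented.

        def calculated_osmolarity(na, k, glucose, urea):
            """Best-performing equation from the study (all inputs in mmol/L):
            osmolarity = 1.86 x (Na+ + K+) + 1.15 x glucose + urea + 14."""
            return 1.86 * (na + k) + 1.15 * glucose + urea + 14

        def dehydration_risk(na, k, glucose, urea, cutpoint=295):
            """Flag impending/current dehydration at >= 295 mOsm/L
            (sensitivity 85%, specificity 59% in the pooled cohorts)."""
            return calculated_osmolarity(na, k, glucose, urea) >= cutpoint

        # Illustrative inputs: Na 138, K 4.0, glucose 5.0, urea 5.0 mmol/L
        print(round(calculated_osmolarity(138, 4.0, 5.0, 5.0), 1))  # 288.9
        print(dehydration_risk(138, 4.0, 5.0, 5.0))                 # False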

  4. Value-based assessment of robotic pancreas and liver surgery.

    Science.gov (United States)

    Patti, James C; Ore, Ana Sofia; Barrows, Courtney; Velanovich, Vic; Moser, A James

    2017-08-01

    Current healthcare economic evaluations are based only on the perspective of a single stakeholder in the healthcare delivery process. A true value-based decision incorporates all of the outcomes that could be impacted by a single episode of surgical care. We define the value proposition for robotic surgery using a stakeholder model incorporating the interests of all groups participating in the provision of healthcare services: patients, surgeons, hospitals and payers. One of the developing and expanding fields that could benefit the most from a complete value-based analysis is robotic hepatopancreaticobiliary (HPB) surgery. While initial robot purchasing costs are high, the benefits over laparoscopic surgery are considerable. Performing a literature search, we found a total of 18 economic evaluations for robotic HPB surgery. We found a lack of evaluations that were carried out from a perspective that incorporates all of the impacts of a single episode of surgical care and that included a comprehensive hospital cost assessment. For distal pancreatectomies, the two most thorough examinations came to conflicting results regarding total cost savings compared to laparoscopic approaches. The most thorough pancreaticoduodenectomy evaluation found non-significant savings for total hospital costs. Robotic hepatectomies showed no cost savings over laparoscopic and only modest savings over open techniques. Lastly, robotic cholecystectomies were found to be more expensive than the gold-standard laparoscopic approach. Existing cost accounting data associated with robotic HPB surgery are incomplete and unlikely to reflect the state of this field in the future. Current data combine the learning curves for new surgical procedures being undertaken by HPB surgeons with costs derived from a market dominated by a single supplier of robotic instruments. As a result, the value proposition for stakeholders in this process cannot be defined. In order to solve this problem, future studies

  5. Calculation of color difference and measurement of the spectrum of aerosol based on human visual system

    Science.gov (United States)

    Dai, Mengyan; Liu, Jianghai; Cui, Jianlin; Chen, Chunsheng; Jia, Peng

    2017-10-01

    In order to solve the problem of quantitatively testing the spectrum and color of an aerosol, a measurement method for the aerosol spectrum based on the human visual system was proposed. The spectral characteristics and color parameters of three different aerosols were tested, and the color differences were calculated according to the CIE1976 L*a*b* color difference formula. Three tested powders (No. 1#, No. 2#, and No. 3#) were dispersed in a plexiglass box and turned into aerosols. The powder sample was released by an injector with a different dosage in each experiment. The spectrum and color of the aerosol were measured by the PRO 6500 Fiber Optic Spectrometer. The experimental results showed that the extinction performance of the aerosol became stronger with increasing aerosol concentration. Because the chromaticity differences of the aerosols in the experiment were small, luminance was verified to be the main factor influencing human visual perception, contributing the most of the three factors in the color difference calculation. The extinction effect of the No. 3# aerosol was the strongest of all and caused the biggest changes in luminance and color difference, which would arouse the strongest human visual perception. According to the sensation levels of chromatic color for Chinese observers, a recognizable color difference was produced when the dosage of No. 1# powder was more than 0.10 gram, the dosage of No. 2# powder was more than 0.15 gram, and the dosage of No. 3# powder was more than 0.05 gram.
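
    The CIE1976 colour difference used in the study is simply the Euclidean distance in L*a*b* space; a minimal sketch with invented before/after readings:

        import math

        def delta_e_cie1976(lab1, lab2):
            """CIE1976 colour difference: Euclidean distance in L*a*b* space,
            dE = sqrt(dL*^2 + da*^2 + db*^2)."""
            return math.dist(lab1, lab2)

        # Hypothetical readings before and after an aerosol release; the
        # luminance (L*) term dominates, as observed in the experiment.
        before = (62.0, 1.2, 0.8)
        after  = (48.0, 1.6, 1.5)
        print(round(delta_e_cie1976(before, after), 2))  # ~14.02, driven by dL*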

  6. Can value-based insurance impose societal costs?

    Science.gov (United States)

    Koenig, Lane; Dall, Timothy M; Ruiz, David; Saavoss, Josh; Tongue, John

    2014-09-01

    Among policy alternatives considered to reduce health care costs and improve outcomes, value-based insurance design (VBID) has emerged as a promising option. Most applications of VBID, however, have not used higher cost sharing to discourage specific services. In April 2011, the state of Oregon introduced a policy for public employees that required additional cost sharing for high-cost procedures such as total knee arthroplasty (TKA). Our objectives were to estimate the societal impact of higher co-pays for TKA using Oregon as a case study, building on recent work demonstrating the effects of knee osteoarthritis and surgical treatment on employment and disability outcomes. We used a Markov model to estimate the societal impact of higher co-pays for TKA in terms of quality of life, direct costs, and indirect costs. We found that TKA for a working population can generate societal benefits that offset the direct medical costs of the procedure. Delay in receiving surgical care, because of higher co-payment or other reasons, reduced the societal savings from TKA. We conclude that payers moving toward value-based cost sharing should consider consequences beyond direct medical expenses. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of FIC because it is time-consuming. These attempts work by reducing the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio for the reconstructed image. This paper aims to present a method that improves the performance of the full search algorithm by combining FIC (lossy compression) with another lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block; the results of the full search algorithm and the proposed entropy-based algorithm are then compared to see which gives the best results, such as reduced encoding time with acceptable values of both compression quality parameters, C.R (compression ratio) and PSNR (image quality). The experimental results of the proposed algorithm prove that using the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality as good as possible.
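
    A minimal sketch of the pool-reduction idea: compute a Shannon entropy for every block and, for each range block, keep only domain blocks of similar entropy before running the usual matching. The tolerance value is an assumption.

        import numpy as np

        def block_entropy(block, bins=256):
            """Shannon entropy (bits) of a grey-level image block."""
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            p = hist[hist > 0] / hist.sum()
            return float(-(p * np.log2(p)).sum())

        def prune_domain_pool(range_block, domain_blocks, tol=0.5):
            """Keep only domain blocks whose entropy lies within `tol` bits of
            the range block's entropy, shrinking the full-search pool."""
            h_r = block_entropy(range_block)
            return [d for d in domain_blocks if abs(block_entropy(d) - h_r) <= tol]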

  8. An analytical method for calculating stresses and strains of ATF cladding based on thick walled theory

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Hyun; Kim, Hak Sung [Hanyang University, Seoul (Korea, Republic of); Kim, Hyo Chan; Yang, Yong Sik; In, Wang kee [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this paper, an analytical method based on thick-walled theory has been studied to calculate the stress and strain of ATF cladding. In order to prescribe boundary conditions for the analytical method, two algorithms were employed, the subroutines 'Cladf' and 'Couple' of FRACAS. To evaluate the developed method, an equivalent model using the finite element method was established, and the stress components of the method were compared with those of the equivalent FE model. One promising ATF concept is the coated cladding, which offers advantages such as a high melting point, a high neutron economy, and a low tritium permeation rate. To evaluate the mechanical behavior and performance of the coated cladding, we need to develop a dedicated model to simulate ATF behavior in the reactor. In particular, a model for the simulation of stress and strain for the coated cladding should be developed because the previous model, FRACAS, is a one-body model. The FRACAS module employs an analytical method based on thin-walled theory. According to thin-walled theory, the radial stress is defined as zero, but this assumption is not suitable for ATF cladding because the value of the radial stress is not negligible in that case. Recently, a structural model for multi-layered ceramic cylinders based on thick-walled theory was developed. Also, FE-based numerical simulations such as BISON have been developed to evaluate fuel performance. An analytical method that calculates the stress components of ATF cladding was developed in this study. Thick-walled theory was used to derive equations for calculating stress and strain. To solve these equations, boundary and loading conditions were obtained by the subroutines 'Cladf' and 'Couple' and applied to the analytical method. To evaluate the developed method, an equivalent FE model was established and its results were compared to those of the analytical model. Based on the
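
    The abstract does not reproduce the derived equations, but the classical thick-walled (Lame) solution conveys why the radial stress cannot be neglected; the sketch and the geometry/pressure numbers below are illustrative assumptions, not the paper's model.

        def lame_stresses(r, r_in, r_out, p_in, p_out):
            """Radial and hoop stress (MPa) at radius r in a thick-walled
            cylinder under internal pressure p_in and external pressure p_out:
            sigma_r = A - B/r^2, sigma_theta = A + B/r^2 (Lame solution).
            Unlike the thin-walled assumption, sigma_r is generally nonzero."""
            denom = r_out**2 - r_in**2
            a = (p_in * r_in**2 - p_out * r_out**2) / denom
            b = (p_in - p_out) * r_in**2 * r_out**2 / denom
            return a - b / r**2, a + b / r**2

        # Cladding-like geometry (mm) with coolant pressure outside (MPa):
        print(lame_stresses(r=4.3, r_in=4.18, r_out=4.75, p_in=0.1, p_out=15.5))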

  9. AMCP Partnership Forum: Advancing Value-Based Contracting.

    Science.gov (United States)

    2017-11-01

    During the past decade, payment models for the delivery of health care have undergone a dramatic shift from focusing on volume to focusing on value. This shift began with the Affordable Care Act and was reinforced by the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA), which increased the emphasis on payment for delivery of quality care. Today, value-based care is a primary strategy for improving patient care while managing costs. This shift in payment models is expanding beyond the delivery of health care services to encompass models of compensation between payers and biopharmaceutical manufacturers. Value-based contracts (VBCs) have emerged as a mechanism that payers may use to better align their contracting structures with broader changes in the health care system. While pharmaceuticals represent a small share of total health care spending, they are one of the fastest-growing segments of the health care marketplace, and the increasing costs of pharmaceuticals necessitate more flexibility to contract in new ways based on the value of these products. Although not all products or services are appropriate for these types of contracts, VBCs could be a part of the solution to address increasing drug prices and overall drug spending. VBCs encompass a variety of different contracting strategies for biopharmaceutical products that do not base payment rates on volume. These contracts instead may include payment on the achievement of specific goals in a predetermined patient population and offer innovative solutions for quantifying and rewarding positive outcomes or otherwise reducing payer risk associated with pharmaceutical costs. To engage national stakeholders in a discussion of current practices, barriers, and potential benefits of VBCs, the Academy of Managed Care Pharmacy (AMCP) convened a Partnership Forum on Advancing Value-Based Contracting in Arlington, Virginia, on June 20-21, 2017. The goals of the VBC forum were as follows: (a) agree to a definition

  10. Development of sustainable water treatment technology using scientifically based calculated indexes of source water quality indicators

    Directory of Open Access Journals (Sweden)

    А. С. Трякина

    2017-10-01

    Full Text Available The article describes the process of selecting a sustainable technological flow chart for water treatment, developed on the basis of scientifically based calculated values of the quality indicators of the water supplied to the treatment facilities. In accordance with the previously calculated values of the source water quality indicators, the main purification facilities are selected. A more sustainable flow chart for the current water quality of the Seversky Donets-Donbass channel is two-stage filtration with contact prefilters and high-rate filters. The article proposes a set of measures to reduce permanganate oxidizability as an indicator of water quality; the most suitable for this purpose is sorption purification using granular activated carbon for water filtering. Increased water hardness is also quite topical, and the method of ion exchange on sodium-cation filters was chosen to reduce it. We also evaluated reagents for the disinfection of water. As a result, sodium hypochlorite was selected for water treatment, as it has several advantages over chlorine and, unlike ozone, retains the necessary aftereffect. A technological flow chart with two-stage purification on contact prefilters and two-layer high-rate filters (granular activated carbon over quartz sand), with disinfection by sodium hypochlorite and softening of part of the water on sodium-cation exchange filters, is proposed. This technological flow chart is able, under any fluctuations in the quality of the source water, to provide purified water that meets the requirements of the current sanitary-hygienic standards. In accordance with the developed flow chart, guidelines and activities for the reconstruction of the existing Makeevka Filtering Station were identified. The recommended flow chart uses more compact and less costly facilities, as well as additional measures to reduce those water quality indicators, the values of which previously were in

  11. A proposed selection index for feedlot profitability based on estimated breeding values.

    Science.gov (United States)

    van der Westhuizen, R R; van der Westhuizen, J

    2009-04-22

    It is generally accepted that feed intake and growth (gain) are the most important economic components when calculating profitability in a growth test or feedlot. We developed a single post-weaning growth (feedlot) index based on the economic values of different components. Variance components, heritabilities and genetic correlations for and between initial weight (IW), final weight (FW), feed intake (FI), and shoulder height (SHD) were estimated by multitrait restricted maximum likelihood procedures. The estimated breeding values (EBVs) and the economic values for IW, FW and FI were used in a selection index to estimate a post-weaning or feedlot profitability value. Heritabilities for IW, FW, FI, and SHD were 0.41, 0.40, 0.33, and 0.51, respectively. The highest genetic correlations were 0.78 (between IW and FW) and 0.70 (between FI and FW). EBVs were used in a selection index to calculate a single economic value for each animal. This economic value is an indication of the gross profitability value or the gross test value (GTV) of the animal in a post-weaning growth test. GTVs varied between -R192.17 and R231.38 with an average of R9.31 and a standard deviation of R39.96. The Pearson correlations between EBVs (for production and efficiency traits) and GTV ranged from -0.51 to 0.68. The lowest correlation (closest to zero) was 0.26 between the Kleiber ratio and GTV. Correlations of 0.68 and -0.51 were estimated between average daily gain and GTV and feed conversion ratio and GTV, respectively. These results showed that it is possible to select for GTV. The selection index can benefit feedlotting by selecting offspring of bulls with high GTVs to maximize profitability.
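
    In form, such an index is a weighted sum of EBVs, with the weights being the economic values of the component traits; the weights below are placeholders (Rand per unit of EBV), not the published ones.

        # Illustrative economic weights (R per unit EBV), not the study's values.
        ECONOMIC_WEIGHTS = {"IW": -2.0, "FW": 4.5, "FI": -1.2}

        def gross_test_value(ebv):
            """GTV = sum over traits of economic weight x EBV, for initial
            weight (IW), final weight (FW) and feed intake (FI)."""
            return sum(w * ebv[t] for t, w in ECONOMIC_WEIGHTS.items())

        print(round(gross_test_value({"IW": 3.0, "FW": 28.0, "FI": 45.0}), 2))  # 66.0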

  12. Mobile application-based Seoul National University Prostate Cancer Risk Calculator: development, validation, and comparative analysis with two Western risk calculators in Korean men.

    Directory of Open Access Journals (Sweden)

    Chang Wook Jeong

    Full Text Available OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors for PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for the SNUPC-RC (0.811) than for the ERSPC-RC (0.768, p<0.001) and the PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has higher predictive accuracy and clinical benefit than the Western risk calculators. Furthermore, it is easy
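
    Structurally, such a risk calculator is a logistic regression over the listed predictors. The abstract does not report the fitted coefficients, so every number below is invented for illustration only.

        import math

        # Hypothetical coefficients; the published SNUPC-RC model's values
        # are not given in the abstract.
        COEF = {"intercept": -1.0, "age": 0.04, "log_psa": 0.9,
                "log_volume": -1.1, "abnormal_dre_trus": 0.8}

        def pc_risk(age, psa, volume_ml, abnormal_dre_trus):
            """Probability of prostate cancer at initial biopsy."""
            z = (COEF["intercept"] + COEF["age"] * age
                 + COEF["log_psa"] * math.log(psa)
                 + COEF["log_volume"] * math.log(volume_ml)
                 + COEF["abnormal_dre_trus"] * int(abnormal_dre_trus))
            return 1.0 / (1.0 + math.exp(-z))

        print(round(pc_risk(age=66, psa=6.5, volume_ml=40, abnormal_dre_trus=True), 2))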

  13. Cost Based Value Stream Mapping as a Sustainable Construction Tool for Underground Pipeline Construction Projects

    Directory of Open Access Journals (Sweden)

    Murat Gunduz

    2017-11-01

    Full Text Available This paper deals with the application of Value Stream Mapping (VSM) as a sustainable construction tool on a real construction project involving the installation of underground pipelines. VSM was adapted to reduce the high percentage of non-value-added activities and time wastes during each construction stage, and the paper searched for an effective way to account for cost in the underground pipeline construction studied. This paper is unique in that it adopts a cost-based implementation of VSM to improve productivity in underground pipeline projects. The data were observed and collected on site during construction, indicating the cycle time and the value-added and non-value-added time of each construction stage. The current state was built based on these details. This was an eye-opening exercise and a process management tool serving as a trigger for improvement. After the current-state assessment, a future state was attempted with the Value Stream Mapping tool, balancing the resources using a Line of Balance (LOB) technique. Moreover, a sustainable cost estimation model was developed for the current state and future state to calculate the cost of underground pipeline construction. The result shows a cost reduction of 20.8% between the current and future states. This reflects the importance of cost-based Value Stream Mapping in construction as a sustainable measurement tool. This new tool could be utilized in the construction industry to add sustainability and effective cost management.

  14. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    International Nuclear Information System (INIS)

    Lopes, Antonio Augusto; Miranda, Rogerio dos Anjos; Goncalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-01-01

    In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft Excel facilities, we constructed a matrix containing 5 models (equations) for the prediction of oxygen consumption and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P<.001) and between-method (P<.001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors that exhaustive manual calculations entail. (author)
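
    At the core of such a matrix is the indirect Fick principle, flow = VO2 / arteriovenous O2 content difference, evaluated once per predicted VO2. The sketch below shows replicate pulmonary blood flow estimates; all numbers are invented for illustration.

        def fick_flow(vo2, o2_out, o2_in):
            """Blood flow (L/min) by the Fick principle.
            vo2 in mL/min; O2 contents in mL O2 per litre of blood."""
            return vo2 / (o2_out - o2_in)

        # Pulmonary venous / pulmonary arterial O2 content (mL/L), illustrative:
        pv, pa = 190.0, 150.0
        for vo2 in (110.0, 125.0, 140.0):            # multiple predicted VO2 values
            print(round(fick_flow(vo2, pv, pa), 2))  # 2.75, 3.12, 3.5 L/min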

  15. The lower limiting values of collector properties based on core data

    Energy Technology Data Exchange (ETDEWEB)

    Mitrofanov, V P; Tul' bovich, B I

    1982-01-01

    There are numerous methods for determining the lower limiting values of collector properties; this is caused by the complexity of the objects studied, the use of different petrophysical parameters, and the characteristics of formation productiveness. Based on laboratory studies conducted at PermNIPIneft', two methods of determining the limiting values of collector properties were examined, with consideration of data from the existing literature: 1) from the critical water-saturation content K_b*; 2) using the phase permeability for kerosene or oil, K_prk. In the first case, the value of K_b* is determined from the presence of filtering of a two-phase flow with an oil fraction of not less than 2%. Knowing the value of K_b*, the limiting values of collector properties are evaluated by using the petrophysical relationships which reflect the connection between residual water saturation, permeability for a gas, porosity, the complex parameter √K_prg/K_p, and also the effective porosity. In the second case, determination of the phase permeability K_prk for collectors with low permeability allows one to establish the limiting values of permeability. The transition to the porosity limit is achieved through the relationship of gas permeability to open or effective porosity. The examined methods for determining the lower limiting values of collector properties are used in calculating the reserves of 9 deposits in the Perm region.

  16. A Value-based Approach to Unspelling Entrepreneurship

    Directory of Open Access Journals (Sweden)

    Baryshev Aleksey

    2016-01-01

    Full Text Available Entrepreneurship and the figure of the entrepreneur are social phenomena subject to constant mutation. Extant definitions of entrepreneurship are always based on the current characteristics of entrepreneurial activity and, as a rule, absolutize them as essential. The article analyzes the main mystifications of entrepreneurship, which arise because entrepreneurship is attributed the general properties of entrepreneurial activity as they are understood in a given historical period. The paper suggests a way out of this succession of mystifications by rejecting definitions of entrepreneurship framed in a subject-object interpretive scheme and by developing a concept of entrepreneurial activity as a kind of social activity. The theory of social action and the theory of the fetishization of social objects form the methodological foundation of the research. The results of the research are the following. First, interpretations of the entrepreneur and entrepreneurship on the basis of personal traits, functions, and ways of constructing the future are mystifications that ascribe to entrepreneurship a monopoly on the arising and developing characteristics of human action in different historical periods. Second, the cause of these mystifications is the transfer of the analysis of human action according to the scheme «subject-process-object» to the study of entrepreneurship. Third, attempts to overcome the mystifications while preserving the naturalistic approach result in the multiplication of entrepreneurial mystifications as new general characteristics of human action arise and are reflected. Fourth, a general approach is suggested that defines entrepreneurial action as a kind of social action whose sense lies in the production of value and which is capable of everlasting enrichment with new modes of action and created values. Perspectives for further research include the investigation of the synchronic and diachronic peculiarities of entrepreneurial action modes and the values produced.

  17. A method for valuing architecture-based business transformation and measuring the value of solutions architecture

    NARCIS (Netherlands)

    Slot, R.G.

    2010-01-01

    Enterprise and Solution Architecture are key in today’s business environment. It is surprising that the foundation and business case for these activities are nonexistent; the financial value for the business of these activities is largely undetermined. To determine business value of enterprise and

  18. Decision of National and Provincial Highway Asphalt Pavement Structure Based on Value Engineering

    Directory of Open Access Journals (Sweden)

    Yingwei Ren

    2014-01-01

    Full Text Available It is important that decisions on asphalt pavement structure take overall account of both performance and financial investment. To obtain an asphalt pavement structure with good reliability, the pavement structure decision was researched based on value engineering theory. According to investigation data for national and provincial highways in Shandong Province during the last decade, performance attenuation rules of asphalt pavement were developed for different traffic levels and asphalt layer thicknesses, and a road performance evaluation method was then presented. In addition, the initial investments, the costs of road maintenance, and middle-scale repair over a period were analyzed. For examples with light and medium traffic, the pavement performance and costs for thicknesses varying from 6 cm to 10 cm were calculated and compared using the value engineering method. It was concluded that value engineering is an effective method for deciding the asphalt pavement structure.

  19. Graph-cut based discrete-valued image reconstruction.

    Science.gov (United States)

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  20. Value-based benefit-cost of local DSM

    International Nuclear Information System (INIS)

    Stein, V.

    1995-01-01

    Value-based benefits and costs of demand-side management (DSM) were discussed in the context of local electricity resource planning in downtown Toronto. The analysis considered the effects of DSM on local customer interruptions and the deferral of local transmission and distribution upgrades. The life-cycle and cash-flow benefits and costs of DSM were discussed from the perspectives of the electric utility, the DSM-participating and non-participating customers, and society as a whole. Cash-flow and life-cycle analysis results were reconciled. The Toronto Integrated Electrical Service (TIES) study, the basis for this paper, was described. Two main conclusions were reached: (1) since the savings in the generating system as a whole were far greater than the local savings, the value of a specific DSM program would be similar across a utility's service area; and (2) while cash-flow analysis illustrates the short- and medium-term benefits and costs in a way most people intuitively understand, life-cycle cost estimates produce a clearer indicator of long-run economics

  1. Value-based recruitment in midwifery: do the values align with what women say is important to them?

    Science.gov (United States)

    Callwood, Alison; Cooke, Debbie; Allan, Helen

    2016-10-01

    The aim of this study was to discuss theoretical conceptualization and definition of values and value-based recruitment in the context of women's views about what they would like from their midwife. Value-based recruitment received headline status in the UK government's response to pervasive deficiencies in compassionate care identified in the health service. Core values which aim to inform service user's experience are defined in the National Health Service Constitution but clarity about whether these encompass all that women say is important to them is needed. Discussion paper. A literature search included published papers written in English relating to values, VBR and women's views of a 'good' midwife with no date limiters. Definitions of values and value-based recruitment are examined. Congruence is explored between what women say is important to them and key government and professional regulatory documentation. The importance of a 'sustainable emotional' dimension in the midwife-mother relationship is suggested. Inconsistencies are identified between government and professional documentation and what women say they want. An omission of any reference to emotions or emotionality in value-based recruitment policy, professional recruitment and selection guidance documentation is identified. A review of key professional documentation, in relation to selection for 'values', is proposed. We argue for clarity and revision so that values embedded in value-based recruitment are consistent with health service users' views. An enhancement of the 'values' in the value-based recruitment framework is recommended to include the emotionality that women state is a fundamental part of their relationship with their midwife. © 2016 John Wiley & Sons Ltd.

  2. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    KAERI is performing research to calculate coefficients for decommissioning work-unit productivity in order to estimate the time and cost of decommissioning work, based on the decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS) and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its decommissioning cost calculations on data organized as work breakdown structure (WBS) codes built from the KRR-2 decommissioning activity experience data; the defined WBS codes are used by each system to calculate decommissioning costs. In this paper, we developed a program that can calculate the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries through the mapping of similar target facilities between NPPs and KRR-2. This paper is organized as follows. Chapter 2 discusses the decommissioning work productivity calculation method, and the mapping method for the decommissioning target facility is described within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. Determining the cost of decommissioning an NPP facility is particularly difficult because of the number of variables involved, such as the material, size and radiological conditions of the target facility.

  3. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    International Nuclear Information System (INIS)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon

    2014-01-01

    KAERI is performing research to calculate coefficients for decommissioning work-unit productivity in order to estimate the time and cost of decommissioning work, based on the decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS) and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its decommissioning cost calculations on data organized as work breakdown structure (WBS) codes built from the KRR-2 decommissioning activity experience data; the defined WBS codes are used by each system to calculate decommissioning costs. In this paper, we developed a program that can calculate the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries through the mapping of similar target facilities between NPPs and KRR-2. This paper is organized as follows. Chapter 2 discusses the decommissioning work productivity calculation method, and the mapping method for the decommissioning target facility is described within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. Determining the cost of decommissioning an NPP facility is particularly difficult because of the number of variables involved, such as the material, size and radiological conditions of the target facility.

  4. THEXSYST - a knowledge based system for the control and analysis of technical simulation calculations

    International Nuclear Information System (INIS)

    Burger, B.

    1991-07-01

    This system (THEXSYST) is used for the control, analysis and presentation of thermal-hydraulic simulation calculations of light water reactors. THEXSYST is a modular system consisting of an expert shell with a user interface, a data base, and a simulation program, and uses techniques available in RSYST. A knowledge base, created to control the simulation calculations for pressurized water reactors, covers both the steady-state calculation and the transient calculation in the depressurization domain resulting from a small-break loss-of-coolant accident. The methods developed are tested using a simulation calculation with RELAP5/Mod2. It is shown that the application of knowledge-base techniques can be a helpful tool to support existing solutions, especially in graphical analysis. (orig./HP) [de]

  5. P-value based visualization of codon usage data

    Directory of Open Access Journals (Sweden)

    Fricke Wolfgang

    2006-06-01

    Full Text Available Abstract Two important and not yet solved problems in bacterial genome research are the identification of horizontally transferred genes and the prediction of gene expression levels. Both problems can be addressed by multivariate analysis of codon usage data. In particular, dimensionality reduction methods for visualization of multivariate data have been shown to be effective tools for codon usage analysis. We here propose a multidimensional scaling approach using a novel similarity measure for codon usage tables. Our probabilistic similarity measure is based on P-values derived from the well-known chi-square test for comparison of two distributions. Experimental results on four microbial genomes indicate that the new method is well-suited for the analysis of horizontal gene transfer and translational selection. As compared with the widely used correspondence analysis, our method did not suffer from outlier sensitivity and showed a better clustering of putative alien genes in most cases.
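    As a rough illustration of the approach described above, the sketch below builds a pairwise similarity matrix from chi-square P-values on codon-count tables and embeds it with multidimensional scaling. The toy 6-codon tables, the gene names and the use of scipy/scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Sketch: P-value based similarity of codon usage tables, embedded via MDS.
# Toy 4-gene, 6-codon count tables; real tables would cover all sense codons.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.manifold import MDS

genes = {
    "geneA": [12, 30, 5, 40, 8, 25],
    "geneB": [14, 28, 6, 38, 9, 22],
    "geneC": [40, 5, 30, 8, 25, 12],   # deviant usage, e.g. putative alien gene
    "geneD": [13, 29, 7, 37, 10, 24],
}
names = list(genes)
n = len(names)

# Similarity = P-value of a chi-square test comparing two codon-count tables;
# distance = 1 - P, so similar usage (high P) maps to a small distance.
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        table = np.array([genes[names[i]], genes[names[j]]])
        _, p, _, _ = chi2_contingency(table)
        dist[i, j] = dist[j, i] = 1.0 - p

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
for name, (x, y) in zip(names, coords):
    print(f"{name}: ({x:+.3f}, {y:+.3f})")
```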

  6. A Comparison of Vertical Stiffness Values Calculated from Different Measures of Center of Mass Displacement in Single-Leg Hopping.

    Science.gov (United States)

    Mudie, Kurt L; Gupta, Amitabh; Green, Simon; Hobara, Hiroaki; Clothier, Peter J

    2017-02-01

    This study assessed the agreement between Kvert calculated from 4 different methods of estimating vertical displacement of the center of mass (COM) during single-leg hopping. Healthy participants (N = 38) completed a 10-s single-leg hopping effort on a force plate, with 3D motion of the lower limb, pelvis, and trunk captured. Derived variables were calculated for a total of 753 hop cycles using 4 methods, including: double integration of the vertical ground reaction force, law of falling bodies, a marker cluster on the sacrum, and a segmental analysis method. Bland-Altman plots demonstrated that Kvert calculated using the segmental analysis and double integration methods has a relatively small bias (0.93 kN·m⁻¹) and 95% limits of agreement (-1.89 to 3.75 kN·m⁻¹). In contrast, a greater bias was revealed between sacral marker cluster and segmental analysis (-2.32 kN·m⁻¹), sacral marker cluster and double integration (-3.25 kN·m⁻¹), and the law of falling bodies compared with all methods (17.26-20.52 kN·m⁻¹). These findings suggest the segmental analysis and double integration methods can be used interchangeably for the calculation of Kvert during single-leg hopping. The authors propose the segmental analysis method to be considered the gold standard for the calculation of Kvert during single-leg, on-the-spot hopping.

  7. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Full Text Available Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter-based distributed generation. The similarity of the equivalent model for inverter-based distributed generation during normal and fault conditions of the distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. An integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method lets the inverter-based distributed generation be equivalent to an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low-voltage ride-through capability of inverter-based distributed generation can be considered as well. Finally, tests of power flow and short circuit current calculation are performed on a 33-bus distribution network. The calculated results from the proposed method are contrasted with those of the traditional method and the simulation method, and the comparison verifies the effectiveness of the integrated method suggested in this paper.

  8. Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network

    OpenAIRE

    Wan'e, Wu; Zuoming, Zhu

    2012-01-01

    A practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulation was put forward; a calculation model for primary combustion characteristics of boron-based fuel-rich propellant based on a backpropagation neural network was established, validated, and then used to predict primary combustion characteristics of boron-based fuel-rich propellant. The results show that the calculation error of burning rate is less than ±7.3%; in the formulation rang...
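    As a hedged sketch of the kind of backpropagation model this record describes, the snippet below fits a small feed-forward network mapping formulation parameters to a combustion characteristic. The feature names, data and network size are invented placeholders, not the paper's model.

```python
# Sketch: BP (backpropagation) neural network regression for primary combustion
# characteristics. Inputs, outputs and data are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical formulation parameters: [boron %, binder %, oxidizer %, catalyst %]
X = rng.uniform([20, 10, 30, 0], [35, 20, 50, 5], size=(60, 4))
# Hypothetical burning rate (mm/s): a smooth dependence plus noise
y = 2.0 + 0.05 * X[:, 0] + 0.02 * X[:, 2] - 0.03 * X[:, 1] + rng.normal(0, 0.05, 60)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

test = np.array([[30.0, 15.0, 45.0, 2.0]])          # one candidate formulation
pred = model.predict(scaler.transform(test))[0]
print(f"predicted burning rate: {pred:.2f} mm/s")
```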

  9. Discontinuous financing based on market values and the value of tax shields

    OpenAIRE

    Arnold, Sven; Lahmann, Alexander; Schwetzler, Bernhard

    2018-01-01

    The tax shield as present value of debt-related tax savings plays an important role in firm valuation. Driving the risk of future debt levels, the firm's strategy to adjust the absolute debt level to future changes of the firm value, labeled as (re-) financing policy, affects the value of tax shields. Standard discounted cash flow (DCF) models offer two simplified (re-) financing policies originally introduced by Modigliani and Miller (MM) as well as Miles and Ezzell (ME). In this paper, we i...
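    To make the two textbook (re-)financing policies concrete, here is a minimal sketch of the zero-growth perpetuity tax shield values under the Modigliani-Miller and Miles-Ezzell assumptions; the input numbers are illustrative and the formulas are the standard perpetuity results, not this paper's extension to discontinuous financing.

```python
# Sketch: perpetuity tax shield value under two standard (re-)financing policies.
# MM: debt level D is fixed forever -> VTS = tau * D.
# ME: leverage ratio is fixed -> savings are discounted at k_u except the first
#     period, giving VTS = tau * k_d * D / (1 + k_d) * (1 + k_u) / k_u.
tau, D, k_d, k_u = 0.30, 1000.0, 0.05, 0.09   # tax rate, debt, cost of debt, unlevered cost

vts_mm = tau * D
vts_me = tau * k_d * D / (1 + k_d) * (1 + k_u) / k_u

print(f"MM policy tax shield: {vts_mm:.1f}")
print(f"ME policy tax shield: {vts_me:.1f}")
```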

  10. Usefulness of percentage enhancement washout value calculated on unenhanced, contrast-enhanced, and delayed enhanced CT in adrenal masses: adenoma versus metastasis

    International Nuclear Information System (INIS)

    Sohn, Kyung Myung; Lee, Sung Yong; Lee, Keun Ho

    2003-01-01

    To determine the usefulness of the percentage enhancement washout value calculated on unenhanced, enhanced and delayed enhanced CT scans for the characterization of adrenal masses. Forty adrenal masses less than 5 cm in size were assessed using a protocol consisting of unenhanced CT, enhanced CT 60 seconds after intravenous administration of contrast material, and delayed enhanced CT at 10 minutes. The CT attenuation value of the adrenal tumors was estimated on each scan, and the percentage enhancement washout value was calculated as follows: [(attenuation value at enhanced CT - attenuation value at delayed CT)/(attenuation value at enhanced CT - attenuation value at unenhanced CT)] x 100. An adrenal mass was considered benign if its percentage enhancement washout value was equal to or higher than the threshold value, set at 60% or 50%. The accuracy of the procedure was determined by comparing its findings with the final clinical diagnosis. Twenty-nine masses were benign and 11 were malignant. The mean percentage enhancement washout value of the former was significantly higher than that of the latter (66.7% vs. 21.8%; p<0.01). All adenomas except one had a washout value of more than 50%. With a percentage washout threshold of 60%, 35 of 40 lesions were correctly characterized as benign or malignant [sensitivity 82.7% (24/29), specificity 100% (11/11), accuracy 87.5% (35/40)]; with a threshold of 50%, 39 of 40 lesions were correctly characterized [sensitivity 96.5% (28/29), specificity 100% (11/11), accuracy 97.5% (39/40)]. Percentage enhancement washout values are useful for characterizing an adrenal mass as benign or malignant. For characterization, a threshold value of 50% was more accurate than one of 60%.
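    The washout computation in this record reduces to a one-line formula; the sketch below applies it with the 50% threshold reported above, using made-up Hounsfield-unit values purely for illustration.

```python
# Sketch: percentage enhancement washout from unenhanced, enhanced (60 s) and
# delayed (10 min) CT attenuation values (Hounsfield units; example numbers).
def washout_percent(unenhanced: float, enhanced: float, delayed: float) -> float:
    return (enhanced - delayed) / (enhanced - unenhanced) * 100.0

hu_unenhanced, hu_enhanced, hu_delayed = 10.0, 90.0, 40.0  # hypothetical mass
w = washout_percent(hu_unenhanced, hu_enhanced, hu_delayed)
# A threshold of 50% gave the best accuracy (97.5%) in this series.
print(f"washout = {w:.1f}% -> {'adenoma (benign)' if w >= 50 else 'indeterminate/metastasis'}")
```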

  11. Making Value-Based Payment Work for Academic Health Centers.

    Science.gov (United States)

    Miller, Harold D

    2015-10-01

    Under fee-for-service payment systems, physicians and hospitals can be financially harmed by delivering higher-quality, more efficient care. The author describes how current "value-based purchasing" initiatives fail to address the underlying problems in fee-for-service payment and can be particularly problematic for academic health centers (AHCs). Bundled payments, warranties, and condition-based payments can correct the problems with fee-for-service payments and enable physicians and hospitals to redesign care delivery without causing financial problems for themselves. However, the author explains several specific actions that are needed to ensure that payment reforms can be a "win-win-win" for patients, purchasers, and AHCs: (1) disconnecting funding for teaching and research from payment for service delivery, (2) providing predictable payment for essential hospital services, (3) improving the quality and efficiency of care at AHCs, and (4) supporting collaborative relationships between AHCs and community providers by allowing each to focus on their unique strengths and by paying AHC specialists to assist community providers in diagnosis and treatment. With appropriate payment reforms and a commitment by AHCs to redesign care delivery, medical education, and research, AHCs could provide the leadership needed to improve care for patients, lower costs for health care purchasers, and maintain the financial viability of both AHCs and community providers.

  12. Value-based health care for inflammatory bowel diseases.

    Science.gov (United States)

    van Deen, Welmoed K; Esrailian, Eric; Hommes, Daniel W

    2015-05-01

    Increasing healthcare costs worldwide put the current healthcare systems under pressure. Although many efforts have aimed to contain costs in medicine, only a few have achieved substantial changes. Inflammatory bowel diseases rank among the most costly of chronic diseases, and physicians nowadays are increasingly engaged in health economics discussions. Value-based health care [VBHC] has gained a lot of attention recently, and is thought to be the way forward to contain costs while maintaining quality. The key concept behind VBHC is to improve achieved outcomes per encountered costs, and evaluate performance accordingly. Four main components need to be in place for the system to be effective: [1] accurate measurement of health outcomes and costs; [2] reporting of these outcomes and benchmarking against other providers; [3] identification of areas in need of improvement based on these data and adjusting the care delivery processes accordingly; and [4] rewarding high-performing participants. In this article we will explore the key components of VBHC, we will review available evidence focussing on inflammatory bowel diseases, and we will present our own experience as a guide for other providers. Copyright © 2015 European Crohn’s and Colitis Organisation (ECCO). Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  13. Identifying values and beliefs in an outcomes-based curriculum

    African Journals Online (AJOL)

    Erna Kinsey

    In an analysis of Curriculum 2005 and the National Curriculum Statement, value and belief systems ... directives for teachers about the identification of values within the ... Parks (Fowler et al., 1992:106) makes a distinction between religion, ...

  14. Employee knowledge of value-based insurance design benefits.

    Science.gov (United States)

    Henrikson, Nora B; Anderson, Melissa L; Hubbard, Rebecca A; Fishman, Paul; Grossman, David C

    2014-08-01

    Value-based insurance designs (VBD) incorporate evidence-based medicine into health benefit design. Consumer knowledge of new VBD benefits is important to assessing their impact on health care use. To assess knowledge of features of a VBD. The eligible study population was employees receiving healthcare benefits in an integrated care system in the U.S. Pacific Northwest. In 2010, participants completed a web-based survey 2 months after rollout of the plan, including three true/false questions about benefit design features including copays for preventive care visits and chronic disease medications and premium costs. Analysis was completed in 2012. Knowledgeable was defined as correct response to all three questions; self-reported knowledge was also assessed. A total of 3,463 people completed the survey (response rate=71.7%). The majority of respondents were female (80.1%) Caucasians (79.6%) aged 35-64 years (79.0%), reflecting the overall employee population. A total of 45.7% had at least a 4-year college education, and 69.1% were married. About three quarters of respondents correctly answered each individual question; half (52.1%) of respondents answered all three questions correctly. On multivariate analysis, knowledge was independently associated with female gender (OR=1.80, 95% CI=1.40, 2.31); Caucasian race (OR=1.72, 95% CI=1.28, 2.32); increasing household income (OR for ≥$100,000=1.86, 95% CI=1.29, 2.68); nonunion job status (OR compared to union status=1.63, 95% CI=1.17, 2.26); and high satisfaction with the health plan (OR compared to low satisfaction=1.26; 95% CI=1.00, 1.57). Incomplete knowledge of benefits is prevalent in an employee population soon after VBD rollout. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  15. Differences in 3D dose distributions due to calculation method of voxel S-values and the influence of image blurring in SPECT

    International Nuclear Information System (INIS)

    Pacilio, Massimiliano; Basile, Chiara; Amato, Ernesto; Lanconelli, Nico; Torres, Leonel Alberto; Perez, Marco Coca; Gil, Alex Vergara; Botta, Francesca; Ferrari, Mahila; Cremonesi, Marta; Diaz, Nestor Cornejo; Fernández, María; Lassmann, Michael

    2015-01-01

    This study compares 3D dose distributions obtained with voxel S values (VSVs) for soft tissue, calculated by several methods at their current state-of-the-art, varying the degree of image blurring. The methods were: 1) convolution of the Dose Point Kernel (DPK) for water, using a scaling factor method; 2) an analytical model (AM), fitting the deposited energy as a function of the source-target distance; 3) a rescaling method (RSM) based on a set of high-resolution VSVs for each isotope; 4) local energy deposition (LED). VSVs calculated by direct Monte Carlo simulations were assumed as reference. Dose distributions were calculated considering spheroidal clusters of various sizes (251, 1237 and 4139 voxels of 3 mm size), uniformly filled with ¹³¹I, ¹⁷⁷Lu, ¹⁸⁸Re or ⁹⁰Y. The activity distributions were blurred with Gaussian filters of various widths (6, 8 and 12 mm). Moreover, 3D dosimetry was performed for 10 treatments with ⁹⁰Y derivatives. Cumulative Dose Volume Histograms (cDVHs) were compared, studying the differences in D95%, D50% or Dmax (ΔD95%, ΔD50% and ΔDmax) and dose profiles. For unblurred spheroidal clusters, ΔD95%, ΔD50% and ΔDmax were mostly within a few percent, slightly higher for ¹⁷⁷Lu with DPK (8%) and RSM (12%) and considerably higher for LED (ΔD95% up to 59%). Increasing the blurring, differences decreased and also LED yielded very similar results, but D95% and D50% underestimations between 30-60% and 15-50%, respectively (with respect to 3D dosimetry with unblurred distributions), were evidenced. Also for clinical images (affected by blurring as well), cDVH differences for most methods were within a few percent, except for slightly higher differences with LED, and almost systematic for dose profiles with DPK (-1.2%), AM (-3.0%) and RSM (4.5%), whereas LED showed an oscillating trend. The major concern for 3D dosimetry on clinical SPECT images is more strongly represented by image blurring than by
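    For orientation, voxel S-value dosimetry amounts to a 3D convolution of the cumulated-activity map with an S-value kernel; the toy kernel and activity array below are placeholders, not the study's Monte Carlo kernels.

```python
# Sketch: 3D dose from a cumulated-activity map and a voxel S-value kernel,
# D = A_cum (*) S. Kernel values here are arbitrary placeholders; real VSV
# kernels come from Monte Carlo transport for a given isotope and voxel size.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
activity = np.zeros((32, 32, 32))
activity[12:20, 12:20, 12:20] = rng.uniform(0.5, 1.0, (8, 8, 8))  # Bq*s per voxel

# Hypothetical 5x5x5 S-value kernel (Gy per Bq*s), peaked at the source voxel.
z, y, x = np.mgrid[-2:3, -2:3, -2:3]
r2 = x**2 + y**2 + z**2
s_kernel = 1e-11 / (1.0 + r2)    # monotonically decreasing placeholder

dose = fftconvolve(activity, s_kernel, mode="same")
print(f"max voxel dose: {dose.max():.3e} Gy")
```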

  16. A thermodynamic data base for Tc to calculate equilibrium solubilities at temperatures up to 300 deg C

    Energy Technology Data Exchange (ETDEWEB)

    Puigdomenech, I [Studsvik AB, Nykoeping (Sweden); Bruno, J [Intera Information Technologies SL, Cerdanyola (Spain)

    1995-04-01

    Thermodynamic data have been selected for solids and aqueous species of technetium. Equilibrium constants have been calculated in the temperature range 0 to 300 °C at a pressure of 1 bar for T < 100 °C and at the steam-saturated pressure at higher temperatures. For aqueous species, the revised Helgeson-Kirkham-Flowers model is used for temperature extrapolations. The data base contains a large amount of estimated data, and the methods used for these estimations are described in detail. A new equation is presented that allows the estimation of ΔrC°p,m values for mononuclear hydrolysis reactions. The formation constants for chloro complexes of Tc(V) and Tc(IV), whose existence is well established, have been estimated. The majority of entropy and heat capacity values in the data base have also been estimated, and therefore temperature extrapolations are largely based on estimations. The uncertainties derived from these calculations are described. Using the data base developed in this work, technetium solubilities have been calculated as a function of temperature for different chemical conditions. The implications for the mobility of Tc under nuclear repository conditions are discussed. 70 refs.

  17. 40 CFR 600.206-93 - Calculation and use of fuel economy values for gasoline-fueled, diesel-fueled, electric, alcohol...

    Science.gov (United States)

    2010-07-01

    ... EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures... equivalent petroleum-based fuel economy value exists for an electric vehicle configuration, all values for... values for gasoline-fueled, diesel-fueled, electric, alcohol-fueled, natural gas-fueled, alcohol dual...

  18. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported ²¹⁰Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet and c) the organization and structure of the data base. (Author) 4 refs
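    As a hedged illustration of one such dating model, the snippet below applies the constant rate of supply (CRS) equation t = ln(A0/A)/λ to a toy profile of unsupported ²¹⁰Pb inventories; the report's actual spreadsheet macros and data layout are not described here, so the arrays are assumptions.

```python
# Sketch: CRS (constant rate of supply) 210Pb dating. A(x) is the cumulative
# unsupported 210Pb inventory below depth x; the age of depth x is
# t = ln(A0 / A(x)) / lambda. Inventories below are hypothetical (Bq/m^2).
import math

LAMBDA_PB210 = math.log(2) / 22.3   # decay constant, 1/yr (half-life 22.3 yr)

inventory_below = [450.0, 320.0, 210.0, 120.0, 60.0]  # at depths 0,5,10,15,20 cm
a0 = inventory_below[0]             # total unsupported inventory

for depth_cm, a in zip(range(0, 25, 5), inventory_below):
    age = math.log(a0 / a) / LAMBDA_PB210
    print(f"depth {depth_cm:2d} cm -> age {age:5.1f} yr")
```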

  19. Comparison of calculated and experimental values of the yields of xenon isotopes in reactions with high-energy protons

    International Nuclear Information System (INIS)

    Shukolyukov, A.Yu.; Katargin, N.V.; Baishev, I.S.

    1989-01-01

    Calculations of the cumulative yields of isotopes of Xe have been carried out on the basis of the semi-empirical formula of Silberberg and Tsao for Ba and Dy targets and bombarding proton energies in the range 100-1050 MeV. The results are compared with experimental data for the yields of Xe isotopes, and the domains of applicability of the semi-empirical formula are determined.

  20. How do migratory species add ecosystem service value to wilderness? Calculating the spatial subsidies provided by protected areas

    Science.gov (United States)

    Lopez-Hoffman, Laura; Semmens, Darius J.; Diffendorfer, Jay

    2013-01-01

    Species that migrate through protected and wilderness areas and utilize their resources, deliver ecosystem services to people in faraway locations. The mismatch between the areas that most support a species and those areas where the species provides most benefits to society can lead to underestimation of the true value of protected areas such as wilderness. We present a method to communicate the “off-site” value of wilderness and protected areas in providing habitat to migratory species that, in turn, provide benefits to people in distant locations. Using northern pintail ducks (Anas acuta) as an example, the article provides a method to estimate the amount of subsidy – the value of the ecosystem services provided by a migratory species in one area versus the cost to support the species and its habitat elsewhere.

  1. Comprehending the multiple 'values' of green infrastructure - Valuing nature-based solutions for urban water management from multiple perspectives.

    Science.gov (United States)

    Wild, T C; Henneberry, J; Gill, L

    2017-10-01

    The valuation of urban water management practices and associated nature-based solutions (NBS) is highly contested, and is becoming increasingly important to cities seeking to increase their resilience to climate change whilst at the same time facing budgetary pressures. Different conceptions of 'values' exist, each being accompanied by a set of potential measures ranging from calculative practices (closely linked to established market valuation techniques) - through to holistic assessments that seek to address wider concerns of sustainability. Each has the potential to offer important insights that often go well beyond questions of balancing the costs and benefits of the schemes concerned. However, the need to address - and go beyond - economic considerations presents policy-makers, practitioners and researchers with difficult methodological, ethical and practical challenges, especially when considered without the benefit of a broader theoretical framework or in the absence of well-established tools (as might apply within more traditional infrastructural planning contexts, such as the analysis of transport interventions). Drawing on empirical studies undertaken in Sheffield over a period of 10 years, and delivered in partnership with several other European cities and regions, we compare and examine different attempts to evaluate the benefits of urban greening options and future development scenarios. Comparing these different approaches to the valuation of nature-based solutions alongside other, more conventional forms of infrastructure - and indeed integrating both 'green and grey' interventions within a broader framework of infrastructures - throws up some surprising results and conclusions, as well as providing important sign-posts for future research in this rapidly emerging field. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Replacement Value - Representation of Fair Value in Accounting. Techniques and Modeling Suitable for the Income Based Approach

    OpenAIRE

    Manea Marinela – Daniela

    2011-01-01

    The term fair value is spread within the sphere of international standards without reference to any detailed guidance on how to apply it. However, for specialized tangible assets, which are rarely sold, the rule IAS 16 "Property, Plant and Equipment" makes it possible to estimate fair value using an income approach or a depreciated replacement cost approach. The following material is intended to identify potential modeling of fair value as an income-based approach, appealing to techniques used by professional evalu...

  3. First-principles calculations of bulk and interfacial thermodynamic properties for fcc-based Al-Sc alloys

    International Nuclear Information System (INIS)

    Asta, M.; Foiles, S.M.; Quong, A.A.

    1998-01-01

    The configurational thermodynamic properties of fcc-based Al-Sc alloys and coherent Al/Al₃Sc interphase-boundary interfaces have been calculated from first principles. The computational approach used in this study combines the results of pseudopotential total-energy calculations with a cluster-expansion description of the alloy energetics. Bulk and interface configurational-thermodynamic properties are computed using a low-temperature-expansion technique. Calculated values of the {100} and {111} Al/Al₃Sc interfacial energies at zero temperature are, respectively, 192 and 226 mJ/m². The temperature dependence of the calculated interfacial free energies is found to be very weak for {100} and more appreciable for {111} orientations; the primary effect of configurational disordering at finite temperature is to reduce the degree of crystallographic anisotropy associated with calculated interfacial free energies. The first-principles-computed solid-solubility limits for Sc in bulk fcc Al are found to be underestimated significantly in comparison with experimental measurements. It is argued that this discrepancy can be largely attributed to nonconfigurational contributions to the entropy which have been neglected in the present thermodynamic calculations. copyright 1998 The American Physical Society

  4. Characteristic-Based, Task-Based, and Results-Based: Three Value Systems for Assessing Professionally Produced Technical Communication Products.

    Science.gov (United States)

    Carliner, Saul

    2003-01-01

    Notes that technical communicators have developed different methodologies for evaluating the effectiveness of their work, such as editing, usability testing, and determining the value added. Explains that at least three broad value systems underlie the assessment practices: characteristic-based, task-based, and results-based. Concludes that the…

  5. S values at voxels level for 188Re and 90Y calculated with the MCNP-4C code

    International Nuclear Information System (INIS)

    Coca Perez, Marco Antonio; Torres Aroche, Leonel Alberto; Cornejo, Nestor; Martin Hernandez, Guido

    2003-01-01

    The main objective of this work was to estimate the voxel S values for ¹⁸⁸Re in cubical geometry, using the MCNP-4C code for the simulation of radiation transport and energy deposition. The mean absorbed dose to target voxels per radioactive decay in a source voxel was estimated and reported for ¹⁸⁸Re and ⁹⁰Y. A comparison of the voxel S values computed with the MCNP code against the data reported in MIRD Pamphlet 17 for ⁹⁰Y was performed in order to evaluate our results.

  6. Values-Based Leadership: College Leaders' Perceptions on Maintaining Values in Decision Making

    Science.gov (United States)

    Buckner, Ramona K.

    2013-01-01

    The purpose of this study was to explore college leaders' experiences negotiating conflicts between personal and organizational values. This qualitative study utilized symbolic interactionism and involved interviews with five college campus leaders from various institutions. Analysis of interviews, observations, field notes and artifacts revealed…

  7. 40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ...-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for...

  8. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results The method attained a nominal power value in simulation studies and was favourable in comparison to a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
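    A minimal sketch of the calculation this abstract describes, under the log-normal assumption: medians and a raw-scale variance are converted to log-scale parameters, then plugged into the usual two-sample formula. The specific medians, variance and error rates below are invented inputs, not the paper's example.

```python
# Sketch: two-group sample size for log-normal outcomes specified via medians
# and a raw-scale variance. For LN(mu, sigma^2): median = exp(mu) and
# var = (exp(sigma^2) - 1) * exp(2*mu + sigma^2), so sigma^2 solves a quadratic
# in x = exp(sigma^2): x^2 - x - var/median^2 = 0.
import math
from scipy.stats import norm

def log_scale_sigma2(median: float, raw_variance: float) -> float:
    x = (1 + math.sqrt(1 + 4 * raw_variance / median**2)) / 2  # x = exp(sigma^2)
    return math.log(x)

def n_per_group(median1, median2, raw_variance, alpha=0.05, power=0.9):
    sigma2 = log_scale_sigma2(median1, raw_variance)  # assume a common variance
    delta = math.log(median2) - math.log(median1)     # effect size on log scale
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * sigma2 * z**2 / delta**2)

# Hypothetical example: medians 40 vs 30, raw-scale variance 900
print(n_per_group(40.0, 30.0, 900.0))
```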

  9. British and German manufacturing productivity compared : A new benchmark for 1935/36 based on double deflated value added

    NARCIS (Netherlands)

    Fremdling, Rainer; de Jong, Herman; Timmer, Marcel P.

    We present a new estimate of Anglo-German manufacturing productivity levels for 1935/36. It is based on archival data on German manufacturing and published British census data. We calculate comparative levels of value added, correcting for differences in prices for outputs and inputs. This so-called

  10. Poster - 20: Detector selection for commissioning of a Monte Carlo based electron dose calculation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Anusionwu, Princess [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Alpuche Aviles, Jorge E. [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Pistorius, Stephen [Medical Physics, CancerCare Manitoba, Winnipeg Canada (Canada); Department of Physics & Astronomy, University of Manitoba, Winnipeg Canada (Canada); Department of Radiology, University of Manitoba, Winnipeg (Canada)

    2016-08-15

    Objective: Commissioning of a Monte Carlo based electron dose calculation algorithm requires percentage depth doses (PDDs) and beam profiles, which can be measured with multiple detectors. Electron dosimetry is commonly performed with cylindrical chambers, but parallel plate chambers and diodes can also be used. The purpose of this study was to determine the most appropriate detector to perform the commissioning measurements. Methods: PDDs and beam profiles were measured for beams with energies ranging from 6 MeV to 15 MeV and field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Detectors used included diodes, cylindrical and parallel plate ionization chambers. Beam profiles were measured in water (100 cm source to surface distance) and in air (95 cm source to detector distance). Results: PDDs for the cylindrical chambers were shallower (by 1.3 mm averaged over all energies and field sizes) than those measured with the parallel plate chambers and diodes. Surface doses measured with the diode and cylindrical chamber were on average larger by 1.6% and 3%, respectively, than those of the parallel plate chamber. Profiles measured with a diode resulted in penumbra values smaller than those measured with the cylindrical chamber by 2 mm. Conclusion: The diode was selected as the most appropriate detector since its PDDs agreed with those measured with parallel plate chambers (typically recommended for low energies) and it results in sharper profiles. Unlike ion chambers, no corrections are needed to measure PDDs, making it more convenient to use.

  11. Calculating the dermal flux of chemicals with OELs based on their molecular structure: An attempt to assign the skin notation.

    Science.gov (United States)

    Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir

    2010-09-01

    Our objectives included calculating the permeability coefficient and dermal penetration rates (flux values) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model, developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. There are many studies available where formulae for coefficients of permeability from saturated aqueous solutions (K(p)) have been related to physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database, were used to calculate permeability coefficients. The dermal penetration rate was estimated as the product of the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, the estimated dermal penetration rates were used to assign skin notations to the chemicals. Critical fluxes defined from the literature were recommended as reference values for the skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis in the calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of calculated K(p) values with data obtained earlier from other models showed that LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone. Both lipophilic and polar pathways of permeation exist across the stratum corneum. It is feasible to predict skin notation on the basis of the LFER and other published
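    The structure of the LFER calculation can be sketched in a few lines: an Abraham-type solvation equation gives log Kp, and the flux follows as Kp times the saturation concentration. The coefficients and descriptor values below are loudly hypothetical placeholders, not the published LFER fit used in this study.

```python
# Sketch: Abraham-type LFER for skin permeability and steady-state flux.
# log10(Kp) = c + e*E + s*S + a*A + b*B + v*V; the coefficients below are
# HYPOTHETICAL placeholders -- a published LFER fit must be substituted.
# Flux from a saturated aqueous solution: J = Kp * C_sat.
COEF = {"c": -2.7, "e": 0.1, "s": -0.5, "a": -0.6, "b": -3.0, "v": 2.0}  # assumed

def log_kp(E: float, S: float, A: float, B: float, V: float) -> float:
    return (COEF["c"] + COEF["e"] * E + COEF["s"] * S
            + COEF["a"] * A + COEF["b"] * B + COEF["v"] * V)

def flux(E, S, A, B, V, c_sat_mg_per_cm3):
    kp_cm_per_h = 10 ** log_kp(E, S, A, B, V)
    return kp_cm_per_h * c_sat_mg_per_cm3   # mg cm^-2 h^-1

# Hypothetical descriptor values and solubility for an illustrative solute
print(f"flux = {flux(0.8, 0.9, 0.3, 0.5, 1.0, 0.2):.2e} mg/cm^2/h")
```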

  12. Business Value of Information Technology Service Quality Based on Probabilistic Business-Driven Model

    Directory of Open Access Journals (Sweden)

    Jaka Sembiring

    2015-08-01

    Full Text Available The business value of information technology (IT) services is often difficult to assess, especially from the point of view of a non-IT manager. This condition can severely impact organizational IT strategic decisions. Various approaches have been proposed to quantify the business value, but some are trapped in technical complexity while others misguide managers into directly and subjectively judging some technical entities outside their domain of expertise. This paper describes a method for properly capturing both perspectives based on a probabilistic business-driven model. The proposed model presents a procedure to calculate the business value of IT services. The model also covers IT security services and their business value as an important aspect of IT services that is not covered in previously published research. The impact of changes in the quality of IT services on business value is also discussed. A simulation and a case illustration are provided to show the possible application of the proposed model for a simple business process in an enterprise.

  13. An Interval-Valued Intuitionistic Fuzzy TOPSIS Method Based on an Improved Score Function

    Directory of Open Access Journals (Sweden)

    Zhi-yong Bai

    2013-01-01

    Full Text Available This paper proposes an improved score function for the effective ranking order of interval-valued intuitionistic fuzzy sets (IVIFSs) and an interval-valued intuitionistic fuzzy TOPSIS method based on the score function to solve multicriteria decision-making problems in which all the preference information provided by decision-makers is expressed as interval-valued intuitionistic fuzzy decision matrices, where each of the elements is characterized by an IVIFS value, and the information about criterion weights is known. We apply the proposed score function to calculate the separation measures of each alternative from the positive and negative ideal solutions to determine the relative closeness coefficients. According to the values of the closeness coefficients, the alternatives can be ranked and the most desirable one(s) can be selected in the decision-making process. Finally, two illustrative examples of multicriteria fuzzy decision-making problems are used as a demonstration of the applications and the effectiveness of the proposed decision-making method.
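    A compact sketch of the pipeline this abstract outlines follows. Since the paper's improved score function is not reproduced in this record, the sketch substitutes the commonly used score s = (μ⁻ + μ⁺ − ν⁻ − ν⁺)/2; the decision matrix and weights are invented inputs.

```python
# Sketch: interval-valued intuitionistic fuzzy TOPSIS via a score function.
# Each entry is ([mu-, mu+], [nu-, nu+]). The score used here is the common
# s = (mu- + mu+ - nu- - nu+)/2, standing in for the paper's improved function.
import numpy as np

def score(ivifs):
    (ml, mu), (nl, nu) = ivifs
    return (ml + mu - nl - nu) / 2.0

# 3 alternatives x 2 criteria, with known criterion weights (assumed values)
matrix = [
    [((0.4, 0.5), (0.3, 0.4)), ((0.4, 0.6), (0.2, 0.4))],
    [((0.5, 0.6), (0.2, 0.3)), ((0.6, 0.7), (0.2, 0.3))],
    [((0.3, 0.5), (0.3, 0.5)), ((0.5, 0.6), (0.3, 0.4))],
]
weights = np.array([0.6, 0.4])

S = np.array([[score(cell) for cell in row] for row in matrix]) * weights
pos_ideal, neg_ideal = S.max(axis=0), S.min(axis=0)

d_pos = np.linalg.norm(S - pos_ideal, axis=1)   # separation from positive ideal
d_neg = np.linalg.norm(S - neg_ideal, axis=1)   # separation from negative ideal
closeness = d_neg / (d_pos + d_neg)

for i, c in enumerate(closeness, 1):
    print(f"alternative A{i}: closeness = {c:.3f}")
print("ranking (best first):", np.argsort(-closeness) + 1)
```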

  14. A soil radiological quality guideline value for wildlife-based protection in uranium mine rehabilitation

    International Nuclear Information System (INIS)

    Doering, Che; Bollhöfer, Andreas

    2016-01-01

    A soil guideline value for radiological protection of the environment was determined for the impending rehabilitation of Ranger uranium mine in the wet-dry tropics of northern Australia. The guideline value was 1000 Bq kg⁻¹ of ²²⁶Ra in the proposed waste rock substrate of the rehabilitated landform and corresponded to an above-baseline dose rate of 100 μGy h⁻¹ to the most highly exposed individuals of the limiting organism. The limiting organism was reptile, based on an assessment using site-specific concentration ratio data. - Highlights: • A soil guideline value for wildlife was derived for a mine rehabilitation situation. • The value was 1000 Bq kg⁻¹ of ²²⁶Ra in the rehabilitation substrate. • The value was back-calculated from a benchmark dose rate of 100 μGy h⁻¹. • Exposures from ²²²Rn and progeny were considered and included. • Use of site-specific concentration ratio data gave lower results than generic data.

  15. Value chain analysis on cassava and cassava based - products in ...

    African Journals Online (AJOL)

    This study examined the value chain analysis (production process and cost related to each element of the production chain to add value) on cassava and cassava products in Imo State, specifically to ascertain the farm size holdings of the respondents as well as the ownership of the land used for production. It also identified ...

  16. Identifying values and beliefs in an outcomes-based curriculum ...

    African Journals Online (AJOL)

    There is therefore a need for teachers to be sensitised to the different values embedded in each belief system and all cultural orientations. The prevalence of values and belief systems in the OBE curricula of C2005 and the NCS will have to be acknowledged, identified, and promoted. South African Journal of Education ...

  17. TO THE SOLUTION OF PROBLEMS ABOUT THE RAILWAYS CALCULATION FOR STRENGTH TAKING INTO ACCOUNT UNEQUAL ELASTICITY OF THE SUBRAIL BASE

    Directory of Open Access Journals (Sweden)

    D. M. Kurhan

    2014-11-01

    Full Text Available Purpose. The modulus of elasticity of the subrail base is one of the main characteristics for an assessment of the stress-strain state of a track. The need to account for the unequal elasticity of the subrail base in different situations has been considered repeatedly; however, the published results involve rather complex mathematical approaches, and the solutions obtained do not fit within the framework of the standard engineering calculation of a railway track for strength. The purpose of this work is therefore to obtain a solution within that framework. Methodology. It is proposed to model the rail as a beam carrying a distributed loading whose outline corresponds to the value of the modulus of elasticity, giving a deflection equivalent to free seating on bearing parts. Findings. A method was obtained for taking into account a gradual change of the modulus of elasticity of the subrail base by means of a correcting coefficient in the engineering calculation of a track for strength. The existing strength calculation of railways was extended to account for an abrupt change of the modulus of elasticity of the subrail base (for example, at the transition from a ballasted track structure onto a bridge). The characteristic of the change of the forces acting from the rail on the base, depending on the distance to the bridge on an approach section with a ballasted track structure, was obtained. The results of the redistribution of forces after a sudden change in the elastic modulus of the base under the rail explain the formation of vertical irregularities before the bridge. Originality. The technique of the engineering calculation of railway strength was improved to perform calculations taking into account the unequal elasticity of the subrail base. Practical value. The obtained results allow carrying out engineering calculations for an assessment of the strength of a railway in places of unequal elasticity caused by the condition of the track or features of its design. The solution of the return task on

  18. Machine learning assisted first-principles calculation of multicomponent solid solutions: estimation of interface energy in Ni-based superalloys

    Science.gov (United States)

    Chandran, Mahesh; Lee, S. C.; Shim, Jae-Hyeok

    2018-02-01

    A disordered configuration of atoms in a multicomponent solid solution presents a computational challenge for first-principles calculations using density functional theory (DFT). The challenge is in identifying the few probable (low energy) configurations from a large configurational space before DFT calculation can be performed. The search for these probable configurations is possible if the configurational energy E(σ) can be calculated accurately and rapidly (with a negligibly small computational cost). In this paper, we demonstrate such a possibility by constructing a machine learning (ML) model for E(σ) trained with DFT-calculated energies. The feature vector for the ML model is formed by concatenating histograms of pair and triplet (only equilateral triangle) correlation functions, g⁽²⁾(r) and g⁽³⁾(r, r, r), respectively. These functions are a quantitative 'fingerprint' of the spatial arrangement of atoms, familiar in the field of amorphous materials and liquids. The ML model is used to generate an accurate distribution P(E(σ)) by rapidly spanning a large number of configurations. The P(E) contains full configurational information of the solid solution and can be selectively sampled to choose a few configurations for targeted DFT calculations. This new framework is employed to estimate the (100) interface energy σ_IE between γ and γ′ at 700 °C in Alloy 617, a Ni-based superalloy, with composition reduced to five components. The estimated σ_IE ≈ 25.95 mJ m⁻² is in good agreement with the value inferred by the precipitation model fit to experimental data. The proposed new ML-based ab initio framework can be applied to calculate the parameters and properties of alloys with any number of components, thus widening the reach of first-principles calculation to realistic compositions of industrially relevant materials and alloys.
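    A stripped-down sketch of the surrogate idea: featurize each configuration by a pair-distance histogram (a crude g⁽²⁾) and regress DFT-like energies on it. The random "configurations", synthetic energies and ridge regressor are stand-ins for the paper's DFT data and ML model.

```python
# Sketch: learn a configurational energy E(sigma) from pair-correlation features.
# Configurations, energies and the regressor are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
BINS = np.linspace(0.0, 1.8, 13)    # pair-distance histogram bins (box units)

def features(positions: np.ndarray) -> np.ndarray:
    """Histogram of interatomic distances: a crude g(2) fingerprint."""
    hist, _ = np.histogram(pdist(positions), bins=BINS)
    return hist / len(positions)

# Synthetic 'configurations': 20 atoms in a unit box; synthetic target energies
configs = [rng.random((20, 3)) for _ in range(200)]
X = np.array([features(c) for c in configs])
w_true = rng.normal(0, 1, X.shape[1])
y = X @ w_true + rng.normal(0, 0.01, len(X))   # pretend these are DFT energies

model = Ridge(alpha=1e-3).fit(X[:150], y[:150])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))

# Rapidly span many new configurations to build P(E) and pick low-energy ones
E_new = model.predict(np.array([features(rng.random((20, 3))) for _ in range(1000)]))
print("lowest predicted energies:", np.sort(E_new)[:3])
```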

  19. Association between value-based purchasing score and hospital characteristics

    Directory of Open Access Journals (Sweden)

    Borah Bijan J

    2012-12-01

    Full Text Available Abstract Background The Medicare hospital value-based purchasing (VBP) program, which links Medicare payments to quality of care, will become effective from 2013. It is unclear whether specific hospital characteristics are associated with a hospital's VBP score, and consequently incentive payments. The objective of the study was to assess the association of hospital characteristics with (i) the mean VBP score, and (ii) specific percentiles of the VBP score distribution. The secondary objective was to quantify the associations of hospital characteristics with the VBP score components: the clinical process of care (CPC) score and the patient satisfaction score. Methods Observational analysis that used data from three sources: Medicare Hospital Compare Database, American Hospital Association 2010 Annual Survey and Medicare Impact File. The final study sample included 2,491 U.S. acute care hospitals eligible for the VBP program. The association of hospital characteristics with the mean VBP score and specific VBP score percentiles were assessed by ordinary least squares (OLS) regression and quantile regression (QR), respectively. Results The VBP score had substantial variations, with mean scores of 30 and 60 in the first and fourth quartiles of the VBP score distribution. For-profit status (vs. non-profit), smaller bed size (vs. 100-199 beds), East South Central region (vs. New England region) and the report of specific CPC measures (discharge instructions, timely provision of antibiotics and beta blockers, and serum glucose controls in cardiac surgery patients) were positively associated with mean VBP scores (p ...). Conclusions Although hospitals serving the poor and the elderly are more likely to score lower under the VBP program, the correlation appears small. Profit status, geographic regions, number and type of CPC measures reported explain the most variation among scores.

  20. Applications of thermodynamic calculations to Mg alloy design: Mg-Sn based alloy development

    International Nuclear Information System (INIS)

    Jung, In-Ho; Park, Woo-Jin; Ahn, Sang Ho; Kang, Dae Hoon; Kim, Nack J.

    2007-01-01

    Recently an Mg-Sn based alloy system has been investigated actively in order to develop new magnesium alloys which have a stable structure and good mechanical properties at high temperatures. Thermodynamic modeling of the Mg-Al-Mn-Sb-Si-Sn-Zn system was performed based on available thermodynamic, phase equilibria and phase diagram data. Using the optimized database, the phase relationships of the Mg-Sn-Al-Zn alloys with additions of Si and Sb were calculated and compared with their experimental microstructures. It is shown that the calculated results are in good agreement with experimental microstructures, which proves the applicability of thermodynamic calculations for new Mg alloy design. All calculations were performed using FactSage thermochemical software. (orig.)

  1. Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team

    2017-11-01

    Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, the noise, error, or uncertainties in the PIV measurements will eventually propagate to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between physical properties of the flow and numerical errors from the reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.

  2. Calculating evidence-based renal replacement therapy - Introducing an excel-based calculator to improve prescribing and delivery in renal replacement therapy - A before and after study.

    Science.gov (United States)

    Cottle, Daniel; Mousdale, Stephen; Waqar-Uddin, Haroon; Tully, Redmond; Taylor, Benjamin

    2016-02-01

    Transferring the theoretical aspect of continuous renal replacement therapy to the bedside and delivering a given "dose" can be difficult. In research, the "dose" of renal replacement therapy is given as an effluent flow rate in ml kg⁻¹ h⁻¹. Unfortunately, most machines require other information when they are initiating therapy, including blood flow rate, pre-blood pump flow rate, dialysate flow rate, etc. This can lead to confusion, resulting in patients receiving inappropriate doses of renal replacement therapy. Our aim was to design an Excel calculator which would personalise patients' treatment and deliver an effective, evidence-based dose of renal replacement therapy without large variations in practice, while prolonging filter life. Our calculator prescribes a haemodiafiltration dose of 25 ml kg⁻¹ h⁻¹ whilst limiting the filtration fraction to 15%. We compared the episodes of renal replacement therapy received by a historical group of patients, by retrieving their data stored on the haemofiltration machines, to a group where the calculator was used. In the second group, the data were gathered prospectively. The median delivered dose reduced from 41.0 ml kg⁻¹ h⁻¹ to 26.8 ml kg⁻¹ h⁻¹ with reduced variability that was significantly closer to the aim of 25 ml kg⁻¹ h⁻¹ (p < 0.0001). The median treatment time increased from 8.5 h to 22.2 h (p = 0.00001). Our calculator significantly reduces variation in prescriptions of continuous veno-venous haemodiafiltration and provides an evidence-based dose. It is easy to use and provides personal care for patients whilst optimizing continuous veno-venous haemodiafiltration delivery and treatment times.
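    A minimal sketch of the arithmetic such a calculator performs: a target effluent dose of 25 ml/kg/h with the filtration fraction capped at 15%. The split between dialysate and replacement fluid and the haematocrit handling are simplified assumptions, not the authors' spreadsheet.

```python
# Sketch: CVVHDF prescription from weight and blood flow, dosing effluent at
# 25 ml/kg/h and capping the filtration fraction (UF rate / plasma flow) at 15%.
def prescribe(weight_kg: float, blood_flow_ml_min: float, hct: float = 0.30,
              dose_ml_kg_h: float = 25.0, ff_max: float = 0.15) -> dict:
    effluent_ml_h = dose_ml_kg_h * weight_kg
    plasma_flow_ml_h = blood_flow_ml_min * 60.0 * (1.0 - hct)
    uf_max_ml_h = ff_max * plasma_flow_ml_h          # convective (filtration) cap
    replacement_ml_h = min(effluent_ml_h, uf_max_ml_h)
    dialysate_ml_h = effluent_ml_h - replacement_ml_h  # remainder is diffusive
    return {"effluent_ml_h": effluent_ml_h,
            "replacement_ml_h": replacement_ml_h,
            "dialysate_ml_h": dialysate_ml_h,
            "filtration_fraction": replacement_ml_h / plasma_flow_ml_h}

print(prescribe(weight_kg=80.0, blood_flow_ml_min=200.0))
```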

  3. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Directory of Open Access Journals (Sweden)

    Pei-Yuan Li

    2015-05-01

    Full Text Available This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low pressure stage centrifugal compressor in a MW level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in the low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned. The outlet diameter of the vaneless diffuser has been reduced, and the original single stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  4. Calculation of parameters of radial-piston reducer based on the use of functional semantic networks

    Directory of Open Access Journals (Sweden)

    Pashkevich V.M.

    2016-12-01

    Full Text Available The calculation of the parameters of a radial-piston reducer is considered in this article, using an approach based on the technology of functional semantic networks. The possibility of applying functional semantic networks to the calculation of the parameters of a radial-piston reducer is examined, and semantic networks for calculating the mass of the radial-piston reducer are given.

  5. Three-Phase Short-Circuit Current Calculation of Power Systems with High Penetration of VSC-Based Renewable Energy

    Directory of Open Access Journals (Sweden)

    Niancheng Zhou

    2018-03-01

    Full Text Available The short-circuit current level of a power grid will be increased with high penetration of VSC-based renewable energy, and the strong coupling between the transient fault process and the control strategy will change the fault features. The full current expression of VSC-based renewable energy was obtained according to the transient characteristics of the short-circuit current. Further, by analyzing the closed-loop transfer function model of the controller and the current-source characteristics presented in the steady state during a fault, equivalent circuits of VSC-based renewable energy for the fault transient state and steady state were proposed, respectively. The correctness of the theory was then verified by experimental tests. In addition, for a power grid with VSC-based renewable energy, the superposition theorem was used to calculate the AC component and DC component of the short-circuit current, respectively, and the peak value of the short-circuit current was then evaluated effectively. The calculated results can be used for grid planning and design, short-circuit current management, as well as adjustment of relay protection. Based on comparing the calculation and simulation results of the 6-node 500 kV Huainan power grid and the 35-node 220 kV Huaisu power grid, the effectiveness of the proposed method was verified.

  6. Value of apparent diffusion coefficient calculation before and after gustatory stimulation in the diagnosis of acute or chronic parotitis

    Energy Technology Data Exchange (ETDEWEB)

    Ries, T.; Arndt, C.; Regier, M.; Cramer, M.C.; Adam, G.; Habermann, C.R. [Diagnostic Center, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Graessner, J. [Medical Solutions Hamburg, Siemens AG, Hamburg (Germany); Reitmeier, F.; Jaehne, M. [University Medical Center Hamburg-Eppendorf, Center of Head Care and Dermatology, Department of Otorhinolaryngology, Hamburg (Germany)

    2008-10-15

    The purpose of the study was to investigate the value of diffusion-weighted (DW) echo-planar imaging (EPI) for quantifying physiological changes of the parotid gland before and after gustatory stimulation in patients suffering from acute or chronic recurrent inflammation in comparison with healthy volunteers. Using a DW-EPI sequence at 1.5 T, the parotid glands of 19 consecutive patients with acute (n = 14) and chronic (n = 5) inflammation of the parotid glands and 52 healthy volunteers were examined. Magnetic-resonance (MR) images were obtained before and after gustatory stimulation with 5 cc of lemon juice. In volunteers, mean ADC values of 1.14 x 10{sup -3} mm{sup 2}/s before and 1.2 x 10{sup -3} mm{sup 2}/s after gustatory stimulation were observed. In acute inflammation, ADC values were higher before [1.22 x 10{sup -3} mm{sup 2}/s (p = 0.006)] and after stimulation [1.32 x 10{sup -3} mm{sup 2}/s (p < 0.001)]. Before stimulation, ADC differences between chronic inflammation (1.05 x 10{sup -3} mm{sup 2}/s) and healthy volunteers (p = 0.04), as well as between acute and chronic inflammation (p = 0.005), were statistically significant. No differences were detected after stimulation between chronic inflammation (1.2 x 10{sup -3} mm{sup 2}/s) and healthy volunteers (p = 0.94) or between acute and chronic inflammation (p = 0.15), respectively. DW-EPI seems to display the physiological changes of the parotid gland in patients suffering from acute or chronic inflammation and might be useful for discriminating healthy from affected glands. (orig.)

  7. The Estimation of Compaction Parameter Values Based on Soil Properties Values Stabilized with Portland Cement

    Science.gov (United States)

    Lubis, A. S.; Muis, Z. A.; Pasaribu, M. I.

    2017-03-01

    The strength and durability of pavement construction are highly dependent on the properties and bearing capacity of the subgrade. This led to the idea of selecting methods to estimate soil density that are proper to implement, fast, and economical. This study aims to estimate the compaction parameter values, namely the maximum dry unit weight (γd max) and optimum moisture content (wopt), from the soil properties values of soil stabilized with Portland Cement. Tests were conducted in the soil mechanics laboratory to determine the index properties (fines and liquid limit) and the Standard Compaction Test. Soil samples with a Plasticity Index (PI) between 0-15% were mixed with Portland Cement (PC) at variations of 2%, 4%, 6%, 8% and 10%, with 10 samples each. The results showed that the maximum dry unit weight (γd max) and wopt have a significant relationship with percent fines, liquid limit and the percentage of cement. The equation for the estimated maximum dry unit weight is γd max = 1.782 - 0.011*LL + 0.000*F + 0.006*PS with R² = 0.915, and the estimated optimum moisture content is wopt = 3.441 + 0.594*LL + 0.025*F + 0.024*PS with R² = 0.726.
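
    Reading LL as the liquid limit (%), F as percent fines and PS as the percentage of Portland Cement (our interpretation of the abstract's symbols), the reported regressions can be evaluated directly; a minimal sketch:

        def gamma_d_max(LL, F, PS):
            """Estimated maximum dry unit weight (R^2 = 0.915 in the study)."""
            return 1.782 - 0.011 * LL + 0.000 * F + 0.006 * PS

        def w_opt(LL, F, PS):
            """Estimated optimum moisture content (R^2 = 0.726 in the study)."""
            return 3.441 + 0.594 * LL + 0.025 * F + 0.024 * PS

        # Example: LL = 30% liquid limit, 60% fines, 6% Portland Cement.
        print(gamma_d_max(30, 60, 6), w_opt(30, 60, 6))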

  8. Software testing and source code for the calculation of clearance values. Final report; Erprobung von Software und Quellcode zur Berechnung von Freigabewerten. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Artmann, Andreas; Meyering, Henrich

    2016-11-15

    The GRS research project was aimed to the test the appropriateness of the software package ''residual radioactivity'' (RESRAD) for the calculation of clearance values according to German and European regulations. Comparative evaluations were performed with RESRAD-OFFSITE and the code SiWa-PRO DSS used by GRS and the GRS program code ARTM. It is recommended to use RESRAD-OFFSITE for comparative calculations. The dose relevant air-path dispersion of radionuclides should not be modeled using RESRAD-OFFSITE, the use of ARTM is recommended. The sensitivity analysis integrated into RESRAD-OFFSITE allows a fast identification of crucial parameters.

  9. Competence-Based Approach in Value Chain Processes

    Science.gov (United States)

    Azevedo, Rodrigo Cambiaghi; D'Amours, Sophie; Rönnqvist, Mikael

    There is a gap between competence theory and value chain processes frameworks. While individually considered as core elements in contemporary management thinking, the integration of the two concepts is still lacking. We claim that this integration would allow for the development of more robust business models by structuring value chain activities around aspects such as capabilities and skills, as well as individual and organizational knowledge. In this context, the objective of this article is to reduce this gap and consequently open a field for further improvements of value chain processes frameworks.

  10. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    Science.gov (United States)

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values at mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift
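
    For readers unfamiliar with the models named above, the sketch below shows textbook forms of two of them with made-up inputs; it is our illustration, not the authors' analysis code, and all parameter values are arbitrary.

        import math

        def lift_kutta_joukowski(rho, U, Gamma, b):
            # L = rho * U * Gamma * b for circulation Gamma over an effective span b
            return rho * U * Gamma * b

        def lift_actuator_disk(rho, A, w):
            # momentum theory: weight support = 2 * rho * A * w**2 for induced
            # downwash velocity w through a disk of area A
            return 2.0 * rho * A * w**2

        print(lift_kutta_joukowski(rho=1.2, U=3.0, Gamma=0.05, b=0.15))  # ~0.027 N
        print(lift_actuator_disk(rho=1.2, A=math.pi * 0.08**2, w=1.0))   # ~0.048 N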

  11. Calculations of atomic magnetic nuclear shielding constants based on the two-component normalized elimination of the small component method

    Science.gov (United States)

    Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter

    2017-04-01

    A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed, where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σ_iso is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σ_iso values of 13 atoms with a closed-shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling as (2M)³ (M: number of basis functions).

  12. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    Science.gov (United States)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine, and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank, in units of cGy/Monitor Unit, and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree with measurements within 2.3% of the global maximum dose or 1 mm distance to agreement for all except the smallest field size. Comparing the film measurement to the calculated dose, 99.9% of all voxels pass gamma analysis; comparing dose calculated by the IDC framework to the TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. IDC-calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.

  13. Quantum-mechanical calculation of H on Ni(001) using a model potential based on first-principles calculations

    DEFF Research Database (Denmark)

    Mattsson, T.R.; Wahnström, G.; Bengtsson, L.

    1997-01-01

    First-principles density-functional calculations of hydrogen adsorption on the Ni (001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance...

  14. A method of self-pursued boundary value on a body and the Magnus effect calculated with this method

    Science.gov (United States)

    Yoshino, Fumio; Hayashi, Tatsuo; Waka, Ryoji

    1991-03-01

    A computational method, designated 'SPB', is proposed for the automatic determination of the stream function Phi on an arbitrarily profiled body without recourse to empirical factors. The method is applied to the case of a rotating, circular cross-section cylinder in a uniform shear flow, and the results obtained are compared with those of both the method in which the value of Phi is fixed on the body and the conventional empirical method; this comparison establishes that the SPB method is very efficient and applicable to both steady and unsteady flows. The SPB method, in addition to yielding the aerodynamic forces acting on the cylinder, shows that the Magnus-effect lift force decreases as the velocity gradient of the shear flow increases while the cylinder's rotational speed is kept constant.

  15. The calculated reference value of the tubular extraction rate in infants and children. An attempt to use a new regression equation

    International Nuclear Information System (INIS)

    Watanabe, Nami; Sugai, Yukio; Komatani, Akio; Yamaguchi, Koichi; Takahashi, Kazuei

    1999-01-01

    This study was designed to investigate the empirical tubular extraction rate (TER) of normal renal function in childhood and then propose a new equation to obtain TER theoretically. The empirical TER was calculated using Russell's method for determination of single-sample plasma clearance and 99mTc-MAG3 in 40 patients with renal disease younger than 10 years of age who were classified as having normal renal function using diagnostic criteria defined by the Paediatric Task Group of the EANM. First, we investigated the relationships of the empirical value of absolute TER to age, body weight, body surface area (BSA) and distribution volume. Next, we investigated the relationships of the empirical value of BSA-corrected TER to age, body weight, BSA and distribution volume. A linear relationship was indicated between the absolute TER and each of the body dimensional factors; for BSA in particular, the correlation coefficient was 0.90 (p value). The BSA-corrected TER showed a logarithmic relationship with BSA, but linear regression did not show any significant correlation. Therefore, the normal value of TER can be calculated theoretically using the body surface area, and here we propose the following linear regression equation: Theoretical TER (ml/min/1.73 m²) = (-39.8 + 257.2 × BSA)/BSA/1.73. The theoretical TER could be one of the reference values of renal function in the period of renal maturation. (author)
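
    Taken literally, the proposed regression gives a reference TER from body surface area alone. A quick sketch, assuming BSA is the body surface area in m² (our reading of the abstract):

        def theoretical_ter(bsa_m2):
            """Reference TER in ml/min/1.73 m^2 from the proposed regression."""
            return (-39.8 + 257.2 * bsa_m2) / bsa_m2 / 1.73

        # Example: an infant with a body surface area of 0.45 m^2.
        print(round(theoretical_ter(0.45), 1))   # ~97.5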

  16. Feasibility of CBCT-based dose calculation: Comparative analysis of HU adjustment techniques

    International Nuclear Information System (INIS)

    Fotina, Irina; Hopfgartner, Johannes; Stock, Markus; Steininger, Thomas; Lütgendorf-Caucig, Carola; Georg, Dietmar

    2012-01-01

    Background and purpose: The aim of this work was to compare the accuracy of different HU adjustments for CBCT-based dose calculation. Methods and materials: Dose calculation was performed on CBCT images of 30 patients. In the first two approaches, phantom-based (Pha-CC) and population-based (Pop-CC) conversion curves were used. The third method (WAB) represents override of the structures with standard densities for water, air and bone. In the ROI mapping approach, all structures were overridden with average HUs from the planning CT. All techniques were benchmarked against the Pop-CC and CT-based plans by DVH comparison and γ-index analysis. Results: For prostate plans, WAB and ROI mapping compared to Pop-CC showed differences in PTV D_median below 2%. The WAB and Pha-CC methods underestimated the bladder dose in IMRT plans. In lung cases, PTV coverage was underestimated by the Pha-CC method by 2.3% and slightly overestimated by the WAB and ROI techniques. The use of the Pha-CC method for head–neck IMRT plans resulted in differences in PTV coverage of up to 5%. Dose calculation with the WAB and ROI techniques showed better agreement with pCT than the conversion curve-based approaches. Conclusions: Density override techniques provide an accurate alternative to the conversion curve-based methods for dose calculation on CBCT images.

  17. Calculating the Fee-Based Services of Library Institutions: Theoretical Foundations and Practical Challenges

    Directory of Open Access Journals (Sweden)

    Sysiuk Svitlana V.

    2017-05-01

    Full Text Available The article is aimed at highlighting the features of the provision of fee-based services by library institutions, identifying problems related to the legal and regulatory framework for their calculation, and the methods for implementing it. The objective of the study is to develop recommendations to improve the calculation of fee-based library services. The theoretical foundations have been systematized, and the need to develop a Provision on the procedure for fee-based services by library institutions has been substantiated. Such a Provision would protect the library institution from errors in fixing the fee for a paid service and would be an informational source of its explicability. The appropriateness of applying the market pricing law based on demand and supply has been substantiated. The development and improvement of accounting and calculation, taking into consideration both industry-specific and market-based conditions, would optimize the costs and revenues generated by the provision of fee-based services. In addition, the complex combination of calculation leverages with the development of a system of internal accounting, together with the use of its methodology, provides another equally efficient way of improving the efficiency of library institutions' activity.

  18. GIS supported calculations of 137Cs deposition in Sweden based on precipitation data

    International Nuclear Information System (INIS)

    Almgren, S.; Nilsson, E.; Isaksson, M.; Erlandsson, B.

    2005-01-01

    137Cs deposition maps were made using Kriging interpolation in a Geographical Information System (GIS). Quarterly values of 137Cs deposition density per unit precipitation (Bq/m²/mm) at three reference sites and quarterly precipitation at 62 weather stations distributed over Sweden were used in the calculations of Nuclear Weapons Fallout (NWF). The deposition density of 137Cs resulting from the Chernobyl accident was calculated for western Sweden using precipitation data from 46 stations. The lowest levels of NWF 137Cs deposition density were noted in northeastern and eastern Sweden, and the highest levels in the western parts of Sweden. The Chernobyl 137Cs deposition density is highest along the coast of the selected area and lowest in the southeastern part and along the middle. The sum of the calculated deposition density from NWF and Chernobyl in western Sweden was compared to accumulated activities in soil samples at 27 locations. The predicted values of this study show good agreement with the measured values.
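
    Deposition mapping of this kind is straightforward to reproduce with open tools. The sketch below uses the third-party pykrige package as a stand-in for the authors' GIS workflow; station coordinates and deposition values are invented.

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        # Station easting/northing (km) and deposition density (Bq/m^2): invented.
        east = np.array([10.0, 45.0, 80.0, 120.0, 160.0])
        north = np.array([20.0, 70.0, 40.0, 110.0, 60.0])
        dep = np.array([1200.0, 2500.0, 1800.0, 3100.0, 2200.0])

        ok = OrdinaryKriging(east, north, dep, variogram_model="spherical")
        grid_e = np.arange(0.0, 180.0, 10.0)
        grid_n = np.arange(0.0, 130.0, 10.0)
        z, ss = ok.execute("grid", grid_e, grid_n)   # interpolated field and variance
        print(z.shape)                               # (len(grid_n), len(grid_e))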

  19. Calculation of marine propeller static strength based on coupled BEM/FEM

    Directory of Open Access Journals (Sweden)

    YE Liyu

    2017-10-01

    Full Text Available [Objectives] The reliability of propeller stress has a great influence on the safe navigation of a ship. To predict propeller stress quickly and accurately, [Methods] a new numerical prediction model is developed by coupling the Boundary Element Method (BEM) with the Finite Element Method (FEM). The low-order BEM is used to calculate the hydrodynamic load on the blades, and the Prandtl-Schlichting plate friction resistance formula is used to calculate the viscous load. Next, the calculated hydrodynamic load and viscous correction load are transmitted to the Finite Element calculation as surface loads. Considering the particularity of propeller geometry, a continuous contact detection algorithm is developed; an automatic method for generating the finite element mesh is developed for the propeller blade; a code based on the FEM is compiled for predicting blade stress and deformation; the DTRC 4119 propeller model is applied to validate the reliability of the method; and mesh independence is confirmed by comparing the calculated results with different sizes and types of mesh. [Results] The results show that the calculated blade stress and displacement distribution are reliable. This method avoids the process of manual modeling and finite element mesh generation, and has the advantages of simple program implementation and high calculation efficiency. [Conclusions] The code can be embedded into the code of theoretical and optimized propeller designs, thereby helping to ensure the strength of designed propellers and improve the efficiency of propeller design.

  20. Development and validation of a criticality calculation scheme based on French deterministic transport codes

    International Nuclear Information System (INIS)

    Santamarina, A.

    1991-01-01

    A criticality-safety calculational scheme using the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and always looked consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps spacing the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)

  1. Holistic oil field value management: using system dynamics for 'intermediate level' and 'value-based' modelling in the oil industry

    International Nuclear Information System (INIS)

    Corben, D.; Stevenson, R.; Wolstenholme, E.F.

    1999-01-01

    System dynamics has been seen primarily as a strategic tool, most effectively used at the highest level of strategy to identify robust policy interventions under a wide range of scenarios. However, an alternative, complementary and powerful role is emerging. This is at an 'intermediate level' in organisations to coordinate and integrate policies across the value chain. It is at this level where business value, as defined by the discounted value of future free cash flow, is both created and destroyed. This paper introduces the need for 'intermediate-level' and 'value-based' modelling and emphasises the natural role of system dynamics in supporting a methodology to fulfil the need. It describes the development of an approach and its application in the oil industry to coordinate the response of people and tools within operational, financial and commercial functions across the value chain to address a variety of problems and issues. (author)

  2. Minimum critical values of uranyl and plutonium nitrate solutions calculated by various routes of the french criticality codes system CRISTAL using the new isopiestic nitrate density law

    International Nuclear Information System (INIS)

    Anno, Jacques; Rouyer, Veronique; Leclaire, Nicolas

    2003-01-01

    This paper provides, for various cases of 235U enrichment or Pu isotopic vectors and different reflectors, new minimum critical values of uranyl nitrate and plutonium nitrate solutions (H⁺ = 0), obtained by the standard IRSN calculation route and the new isopiestic density laws. Comparisons are also made with other more accurate routes, showing that the standard route's results are most often conservative and usable for criticality safety assessments. (author)

  3. Value Chain and Innovation at the Base of the Pyramid

    DEFF Research Database (Denmark)

    Esko, Siim; Zeromskis, Mindaugas; Hsuan, Juliana

    2013-01-01

    A framework is introduced for analysing business readiness at the BoP: organisation, value chain and strategy. Four diverse cases were analysed: GE's reverse innovation project, GrameenPhone, Essilor, and P&G's PuR. Findings – A BoP project should be a top-down supported separate entity with its own strategic processes and financial measurements. Working in the value chain requires diverse thinking in terms of interactivity, partners, setup, and governance. Involving customers and consumers in the innovation process is crucial. The venture also needs to make its offerings accessible, affordable, acceptable, available, and valuable to the customers. A step-by-step scale-up must be followed. Originality/value – The BoP framework can be used as a practical roadmap for companies to analyse the readiness of the business venture and strategy development.

  4. Coal Calorific Value Prediction Based on Projection Pursuit Principle

    OpenAIRE

    QI Minfang; FU Zhongguang; JING Yuan

    2012-01-01

    The calorific value of coal is an important factor for the economic operation of coal-fired power plants. However, the calorific value differs tremendously between different coals, and even within coal from the same mine. Restricted by the coal market, most coal-fired power plants in China cannot burn their design coal at present. The properties of coal as received change so frequently that pulverized coal firing often operates under unexpected conditions. Therefore, the researches on the...

  5. A service and value based approach to estimating environmental flows

    DEFF Research Database (Denmark)

    Korsgaard, Louise; Jensen, R.A.; Jønch-Clausen, Torkil

    2008-01-01

    Environmental flows are not only a matter of sustaining ecosystems but also a matter of supporting humankind/livelihoods. One reason for the marginalisation of environmental flows is the lack of operational methods to demonstrate the inherently multi-disciplinary link between environmental flows, ecosystem services and economic value. This paper aims at filling that gap by presenting a new environmental flows assessment approach that explicitly links environmental flows to (socio-)economic values by focusing on ecosystem services. This Service Provision Index (SPI) approach is a novel contribution to the existing field of environmental flows assessment...

  6. A scheme to calculate higher-order homogenization as applied to micro-acoustic boundary value problems

    Science.gov (United States)

    Vagh, Hardik A.; Baghai-Wadji, Alireza

    2008-12-01

    Current technological challenges in materials science and the high-tech device industry require the solution of boundary value problems (BVPs) involving regions of various scales, e.g. multiple thin layers, fibre-reinforced composites, and nano/micro pores. In most cases a straightforward application of standard variational techniques to BVPs of practical relevance necessarily leads to unsatisfactorily ill-conditioned analytical and/or numerical results. To remedy the computational challenges associated with sub-sectional heterogeneities, various sophisticated homogenization techniques need to be employed. Homogenization refers to the systematic process of smoothing out the sub-structural heterogeneities, leading to the determination of effective constitutive coefficients. Ordinarily, homogenization involves sophisticated averaging and asymptotic order analysis to obtain solutions. In the majority of cases only zero-order terms are constructed, due to the complexity of the processes involved. In this paper we propose a constructive scheme for obtaining homogenized solutions involving higher-order terms, thus guaranteeing higher accuracy and greater robustness of the numerical results. We present

  7. Development of Calculation Module for Intake Retention Functions based on Occupational Intakes of Radionuclides

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki [Hanyang Univ., Seoul (Korea, Republic of); Lee, Jong-Il; Kim, Jang-Lyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay samples such as whole body counter, lung counter, and urine samples. Even though the ICRP (International Commission on Radiological Protection) provides the functions in some ICRP publications, the functions still need to be calculated, because the published values cover only a very limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and estimate intake activity. OIR (Occupational Intakes of Radionuclides) will be published soon by the ICRP, totally replacing the existing internal dosimetry models and relevant data, including intake retention and excretion functions. Thus, a calculation tool for the functions based on OIR is needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR using the C++ programming language with the Intel Math Kernel Library.

  8. Development of Calculation Module for Intake Retention Functions based on Occupational Intakes of Radionuclides

    International Nuclear Information System (INIS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki; Lee, Jong-Il; Kim, Jang-Lyul

    2014-01-01

    In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay samples such as whole body counter, lung counter, and urine samples. Even though the ICRP (International Commission on Radiological Protection) provides the functions in some ICRP publications, the functions still need to be calculated, because the published values cover only a very limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and estimate intake activity. OIR (Occupational Intakes of Radionuclides) will be published soon by the ICRP, totally replacing the existing internal dosimetry models and relevant data, including intake retention and excretion functions. Thus, a calculation tool for the functions based on OIR is needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR using the C++ programming language with the Intel Math Kernel Library

  9. Code accuracy evaluation of ISP 35 calculations based on NUPEC M-7-1 test

    International Nuclear Information System (INIS)

    Auria, F.D.; Oriolo, F.; Leonardi, M.; Paci, S.

    1995-01-01

    Quantitative evaluation of code uncertainties is a necessary step in the code assessment process, above all if best-estimate codes are utilised for licensing purposes. Aiming at quantifying code accuracy, an integral methodology based on the Fast Fourier Transform (FFT) has been developed at the University of Pisa (DCMN) and has already been applied to several calculations related to primary system test analyses. This paper deals with the first application of the FFT-based methodology to containment code calculations, based on a hydrogen mixing and distribution test performed in the NUPEC (Nuclear Power Engineering Corporation) facility. It refers to pre-test and post-test calculations submitted for International Standard Problem (ISP) No. 35, a blind exercise simulating the effects of steam injection and spray behaviour on gas distribution and mixing. The results of the application of this methodology to nineteen selected variables calculated by ten participants are summarized here, and the comparison (where possible) of the accuracy evaluated for the pre-test and post-test calculations of the same user is also presented. (author)

  10. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

    Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, the costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. The cost price from the ABC method significantly differs from the tariff method. In addition, the high amount of indirect costs in the hospital indicates that the capacities of resources are not used properly. The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method: ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on a fixed price.
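
    The allocation step generalizes to a few lines. A toy sketch with invented numbers (not the study's data): administrative costs are allocated to two departments in proportion to a cost driver, and a unit cost per service is computed.

        # Allocate an administrative cost pool to service departments in proportion
        # to a cost driver (staff hours here), then compute cost per service unit.
        admin_pool = 500_000.0
        driver = {"diagnostic": 3_000.0, "hospitalized": 7_000.0}    # staff hours
        direct = {"diagnostic": 800_000.0, "hospitalized": 2_400_000.0}
        units = {"diagnostic": 40_000, "hospitalized": 25_000}       # service units

        total_driver = sum(driver.values())
        for dept in driver:
            allocated = admin_pool * driver[dept] / total_driver
            unit_cost = (direct[dept] + allocated) / units[dept]
            print(dept, round(unit_cost, 2))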

  11. ACCEPTABILITY EVALUATION FOR USING ICRP TISSUE WEIGHTING FACTORS TO CALCULATE EFFECTIVE DOSE VALUE FOR SEPARATE GENDER-AGE GROUPS OF RUSSIAN FEDERATION

    Directory of Open Access Journals (Sweden)

    L. V. Repin

    2013-01-01

    Full Text Available The article describes radiation risk factors for several gender-age population groups according to Russian statistical and medical-demographic data, and evaluates the lethality rate for separate nosologic forms of malignant neoplasms based on Russian cancer registries according to the method of the International Agency for Research on Cancer. Relative damage factors are calculated for the gender-age groups under consideration. The tissue weighting factors recommended by the ICRP to calculate effective doses are compared with relative damage factors calculated by the ICRP for the nominal population and with similar factors calculated in this work for separate population cohorts in the Russian Federation. The significance of differences and the feasibility of using tissue weighting factors adapted for the Russian population in assessing population risks in cohorts of different gender-age compositions have been assessed.

  12. Consumer value of context aware and location based mobile services

    NARCIS (Netherlands)

    De Vos, Henny; Haaker, Timber; Teerling, Marije; Kleijnen, M.H.P.

    2008-01-01

    Context aware services have the ability to utilize information about the user's context to adapt services to the user's current situation and needs. In this paper we consider users' perceptions of the added value of location awareness and presence information in mobile services. We use an

  13. Nutritive value and sensory acceptability of corn- and kocho- based ...

    African Journals Online (AJOL)

    The pumpkin in CBP and KBP provided 54 µg RAE per 100 kcal, increasing the vitamin A value of the mixes by 25- and 180-fold, respectively. Sensory evaluation of CBP by 30 mother and child pairs, and of KBP by 28 pairs, indicated high acceptability (4.7 - 4.9 on a 5-point Hedonic scale) of the complementary foods.

  14. Value added based on educational positions in Dutch secondary education

    NARCIS (Netherlands)

    Timmermans, Anneke C.; Bosker, Roel J.; de Wolf, Inge F.; Doolaard, Simone; van der Werf, Margaretha P.C.

    2014-01-01

    Estimating added value as an indicator of school effectiveness in the context of educational accountability often occurs using test or examination scores of students. This study investigates the possibilities of using scores for educational positions as an alternative indicator. A number of

  15. Radiocarbon Analysis to Calculate New End-Member Values for Biomass Burning Source Samples Specific to the Bay Area

    Science.gov (United States)

    Yoon, S.; Kirchstetter, T.; Fairley, D.; Sheesley, R. J.; Tang, X.

    2017-12-01

    Elemental carbon (EC), also known as black carbon or soot, is an important particulate air pollutant that contributes to climate forcing through absorption of solar radiation and to adverse human health impacts through inhalation. Both fossil fuel combustion and biomass burning, via residential firewood burning, agricultural burning, wild fires, and controlled burns, are significant sources of EC. Our ability to successfully control ambient EC concentrations requires understanding the contribution of these different emission sources. Radiocarbon (14C) analysis has been increasingly used as an apportionment tool to distinguish between EC from fossil fuel and biomass combustion sources. However, there are uncertainties associated with this method, including: 1) uncertainty associated with the isolation of EC to be used for radiocarbon analysis (e.g., inclusion of organic carbon, blank contamination, recovery of EC, etc.); 2) uncertainty associated with the radiocarbon signature of the end member. The objective of this research project is to utilize laboratory experiments to evaluate some of these uncertainties, particularly for EC sources that significantly impact the San Francisco Bay Area. Source samples of EC only and of a mix of EC and organic carbon (OC) were produced for this study to represent known emission sources and to approximate the mixing of EC and OC that would be present in the atmosphere. These samples include a combination of methane flame soot, various wood smoke samples (i.e. cedar, oak, sugar pine, pine at various ages, etc.), meat cooking, and smoldering cellulose smoke. EC fractions were isolated using a Sunset Laboratory thermal optical transmittance carbon analyzer. For 14C analysis, samples were sent to the Woods Hole Oceanographic Institution for isotope analysis using accelerator mass spectrometry. End-member values and uncertainties for the EC isolation utilizing this method will be reported.

  16. Effect-based trigger values for in vitro bioassays: Reading across from existing water quality guideline values.

    Science.gov (United States)

    Escher, Beate I; Neale, Peta A; Leusch, Frederic D L

    2015-09-15

    Cell-based bioassays are becoming increasingly popular in water quality assessment. The new generations of reporter-gene assays are very sensitive and effects are often detected in very clean water types such as drinking water and recycled water. For monitoring applications it is therefore imperative to derive trigger values that differentiate between acceptable and unacceptable effect levels. In this proof-of-concept paper, we propose a statistical method to read directly across from chemical guideline values to trigger values without the need to perform in vitro to in vivo extrapolations. The derivation is based on matching effect concentrations with existing chemical guideline values and filtering out appropriate chemicals that are responsive in the given bioassays at concentrations in the range of the guideline values. To account for the mixture effects of many chemicals acting together in a complex water sample, we propose bioanalytical equivalents that integrate the effects of groups of chemicals with the same mode of action that act in a concentration-additive manner. Statistical distribution methods are proposed to derive a specific effect-based trigger bioanalytical equivalent concentration (EBT-BEQ) for each bioassay of environmental interest that targets receptor-mediated toxicity. Even bioassays that are indicative of the same mode of action have slightly different numeric trigger values due to differences in their inherent sensitivity. The algorithm was applied to 18 cell-based bioassays and 11 provisional effect-based trigger bioanalytical equivalents were derived as an illustrative example using the 349 chemical guideline values protective for human health of the Australian Guidelines for Water Recycling. We illustrate the applicability using the example of a diverse set of water samples including recycled water. Most recycled water samples were compliant with the proposed triggers while wastewater effluent would not have been compliant with a few

  17. Performance Analyses of Counter-Flow Closed Wet Cooling Towers Based on a Simplified Calculation Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wei

    2017-02-01

    Full Text Available As one of the most widely used units in water cooling systems, closed wet cooling towers (CWCTs) have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction vertically (parallel) or horizontally (cross), respectively. This study aims to present a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT) and the cross counter-flow CWCT (CCFCWCT). A simplified cooling capacity model that just includes two characteristic parameters is developed. The Levenberg–Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with the measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of thermal performance are similar to each other under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and the superiority would be amplified when the scale of water
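
    The two-parameter fitting step is easy to reproduce. Below is a sketch using scipy.optimize.curve_fit, whose default solver is Levenberg-Marquardt; the power-law capacity form and the data are our assumptions, not the paper's exact model.

        import numpy as np
        from scipy.optimize import curve_fit

        def capacity(x, c, n):
            ma, mw = x                        # air and water mass flow rates (kg/s)
            return c * ma**n * mw**(1.0 - n)  # assumed two-parameter capacity form

        ma = np.array([2.0, 2.5, 3.0, 3.5, 4.0])       # invented
        mw = np.array([4.0, 4.0, 5.0, 5.0, 6.0])       # invented
        q = np.array([55.0, 62.0, 74.0, 79.0, 90.0])   # cooling capacity, kW; invented

        popt, pcov = curve_fit(capacity, (ma, mw), q, p0=[10.0, 0.5])  # LM by default
        print(popt)   # fitted (c, n)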

  18. Reward-based training of recurrent neural networks for cognitive and value-based tasks.

    Science.gov (United States)

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-13

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal's internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.

  19. Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wu Wan'e

    2012-01-01

    Full Text Available A practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulation was put forward; a calculation model for primary combustion characteristics of boron-based fuel-rich propellant based on a backpropagation neural network was established, validated, and then used to predict primary combustion characteristics of boron-based fuel-rich propellant. The results show that the calculation error of burning rate is less than ±7.3%; in the formulation range (hydroxyl-terminated polybutadiene 28%–32%, ammonium perchlorate 30%–35%, magnalium alloy 4%–8%, catocene 0%–5%, and boron 30%), the variation of the calculation data is consistent with the experimental results.
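
    A backpropagation network of this kind takes only a few lines with modern libraries. The sketch below uses scikit-learn's MLPRegressor rather than the authors' original implementation, and the formulation and burning-rate data are invented.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Features: HTPB %, AP %, magnalium %, catocene % (boron fixed at 30%).
        X = np.array([[28, 35, 8, 0], [30, 32, 6, 2], [32, 30, 4, 5],
                      [29, 34, 5, 3], [31, 31, 7, 1], [30, 33, 6, 0]], dtype=float)
        y = np.array([6.1, 6.8, 7.9, 7.2, 6.5, 6.3])   # burning rate, mm/s; invented

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict([[30.0, 33.0, 5.0, 2.0]]))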

  20. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was used as the importance sampling function. A Kriging metamodel was constructed in greater detail in the vicinity of the limit state. The failure probability was calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
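
    The essence of the estimator is a weighted Monte Carlo average. A generic sketch (ours) with a toy limit state function standing in for the Kriging metamodel and a shifted Gaussian standing in for the kernel-density importance function:

        import numpy as np
        from scipy import stats

        def g(x):                       # toy limit state: failure when g(x) <= 0
            return 5.0 - x[:, 0] - x[:, 1]

        f = stats.multivariate_normal(mean=[0.0, 0.0])   # true input density
        h = stats.multivariate_normal(mean=[2.5, 2.5])   # importance density
        x = h.rvs(size=100_000, random_state=0)

        w = f.pdf(x) / h.pdf(x)                  # likelihood ratios f/h
        pf = np.mean((g(x) <= 0.0) * w)          # weighted failure indicator
        print(pf)    # exact value here is Phi(-5/sqrt(2)) ~ 2.0e-4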

  1. a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    Science.gov (United States)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely used for the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions is always the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and tortuosity fractal dimension of porous media is derived based on the fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are compared with prediction results from the analytical expression. In addition, the proposed fractal dimension method is also tested on Micro-CT images of three sandstone cores and compared with fractal dimensions from a box-counting algorithm. The test results also prove a self-similar fractal range in sandstone when excluding smaller pores.
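
    In the fractal capillary picture, the cumulative number of pores larger than r scales as N(>r) ∝ r^(−Df), so a measured pore size distribution yields Df as a log-log slope. A generic sketch with invented data (the paper's own analytical expression may differ):

        import numpy as np

        r = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])                 # pore radii, um
        n_cum = np.array([9500.0, 3400.0, 1150.0, 410.0, 140.0, 50.0]) # N(>r); invented

        slope, intercept = np.polyfit(np.log(r), np.log(n_cum), 1)
        Df = -slope                     # fractal dimension from the log-log slope
        print(round(Df, 2))             # ~1.5 for these made-up counts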

  2. The Resource-Based View and Value: The Customer-Based View of the Firm

    Science.gov (United States)

    Clulow, Val; Barry, Carol; Gerstman, Julie

    2007-01-01

    Purpose: The resource-based view (RBV) explores the role of key resources, identified as intangible assets and capabilities, in creating competitive advantage and superior performance. To a great extent the conceptual analysis and empirical research within the RBV has focused on the firm's perspective of key resources and the value to the firm of…

  3. A boundary-value inverse model and its application to the calculation of tidal oscillation systems in the Western South Atlantic Ocean

    International Nuclear Information System (INIS)

    Miranda-Alonso, S.

    1991-01-01

    A Cauchy-Riemann problem is solved for the case of the linearized equations for long waves. The initial values are amplitudes and phases measured at the coast. No boundary values are made use of. This inverse problem is solved by starting the calculations at the coast and continuing outwards to the open ocean, in a rectangular area with one side at the coast and the other three at the open ocean. The initial values were expanded into the complex plane to get a platform for performing the calculations. This non-well-posed problem was solved by means of two different mathematical techniques for comparison. The results produced with the inverse model were compared with those produced with a 'classical' model initialized at the three open boundaries with the results of the inverse model. The oscillating systems produced by both models were quite similar, giving validity to this inverse modeling approach, which should be a useful technique for solving problems when only initial values are known. (orig.)

  4. Housing Value Forecasting Based on Machine Learning Methods

    OpenAIRE

    Mu, Jingyi; Wu, Fang; Zhang, Aihua

    2014-01-01

    In the era of big data, many urgent issues in all walks of life can be tackled with big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in the area of architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can make decisions about whether developing...
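
    As an illustration of the kind of model compared in such studies, a minimal scikit-learn regression sketch (ours): recent scikit-learn releases no longer ship the Boston dataset, so the California housing data is used as a stand-in.

        from sklearn.datasets import fetch_california_housing
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        X, y = fetch_california_housing(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X_tr, y_tr)
        print(r2_score(y_te, model.predict(X_te)))   # ~0.8 on this split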

  5. A simple method for calculating power based on a prior trial.

    NARCIS (Netherlands)

    Borm, G.F.; Bloem, B.R.; Munneke, M.; Teerenstra, S.

    2010-01-01

    OBJECTIVE: When an investigator wants to base the power of a planned clinical trial on the outcome of another trial, the latter study may not have been reported in sufficient detail to allow this. For example, when the outcome is a change from baseline, the power calculation requires the standard
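
    For the common two-sample case, this power and sample-size arithmetic is a single call in statsmodels; the following generic illustration is ours, not the paper's proposed method.

        from statsmodels.stats.power import TTestIndPower

        # Cohen's d estimated from a prior trial, e.g. d = 0.5 (illustrative).
        analysis = TTestIndPower()
        n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
        print(round(n_per_group))   # ~64 participants per group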

  6. Real Estate Value Tax Based on the Latvian Experience

    Directory of Open Access Journals (Sweden)

    Hełdak Maria

    2015-02-01

    Full Text Available The article deals with the subject of planned real estate tax changes in Poland as viewed in relation to the solutions adopted in Latvia. The current basis for real estate tax is a set fee per 1 m² of the estate's area, established in a town council resolution, taking into account the maximum fees established by the Minister of Finance. Currently, the owners of real estates with identical areas often pay the same tax regardless of the location, condition and function of the real estate formulated in the plan. The cadastral tax currently in preparation addresses these and other features which influence the value of real estate. A set cadastral value approximating the market value will serve as the basis for determining the cadastral tax. The principles of real estate tax retrieval in Poland are not clearly established, which is why it might prove useful to draw on the experience of other countries undergoing similar governmental changes. The article makes references to tax solutions recognized in Latvia in the domain of tax fees, valuation principles and problems accompanying real estate tax retrieval.

  7. Slope excavation quality assessment and excavated volume calculation in hydraulic projects based on laser scanning technology

    Directory of Open Access Journals (Sweden)

    Chao Hu

    2015-04-01

    Full Text Available Slope excavation is one of the most crucial steps in the construction of a hydraulic project. Excavation project quality assessment and excavated volume calculation are critical in construction management. The positioning of excavation projects using traditional instruments is inefficient and may cause error. To improve the efficiency and precision of calculation and assessment, three-dimensional laser scanning technology was used for slope excavation quality assessment. An efficient data acquisition, processing, and management workflow was presented in this study. Based on the quality control indices, including the average gradient, slope toe elevation, and overbreak and underbreak, cross-sectional quality assessment and holistic quality assessment methods were proposed to assess the slope excavation quality with laser-scanned data. An algorithm was also presented to calculate the excavated volume with laser-scanned data. A field application and a laboratory experiment were carried out to verify the feasibility of these methods for excavation quality assessment and excavated volume calculation. The results show that the quality assessment indices can be obtained rapidly and accurately with design parameters and scanned data, and the results of holistic quality assessment are consistent with those of cross-sectional quality assessment. In addition, the time consumption in excavation quality assessment with the laser scanning technology can be reduced by 70%–90%, as compared with the traditional method. The excavated volume calculated with the scanned data only slightly differs from measured data, demonstrating the applicability of the excavated volume calculation method presented in this study.

  8. Radial electromagnetic force calculation of induction motor based on multi-loop theory

    Directory of Open Access Journals (Sweden)

    HE Haibo

    2017-12-01

    Full Text Available [Objectives] In order to study the vibration and noise of induction motors, a method of calculating the radial electromagnetic force is established on the basis of the multi-loop model. [Methods] Based on the method of calculating the air-gap magnetomotive force from the stator and rotor fundamental wave currents, analytic formulas are deduced for the air-gap magnetomotive force and radial electromagnetic force generated by any stator winding and rotor conducting bar current. The multi-loop theory and the calculation method for the electromagnetic parameters of a motor are introduced, and a dynamic simulation model of an induction motor is built to obtain the currents of the stator winding and rotor conducting bars, leading to the calculation formula of the radial electromagnetic force. The radial electromagnetic force and vibration are then estimated. [Results] The calculated vibration acceleration frequency and amplitude of the motor are consistent with the experimental results. [Conclusions] The results and calculation method can support the low-noise design of converters.

  9. Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene

    Science.gov (United States)

    Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.

    2012-02-01

    We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.

  10. Estimation of ΔR/R values by benchmark study of the Mössbauer Isomer shifts for Ru, Os complexes using relativistic DFT calculations

    Energy Technology Data Exchange (ETDEWEB)

    Kaneko, Masashi [Japan Atomic Energy Agency, Nuclear Science and Engineering Center (Japan); Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru, E-mail: snaka@hiroshima-u.ac.jp [Hiroshima University, Graduate School of Science (Japan)

    2017-11-15

    The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against the experimental Mössbauer isomer shifts for the {sup 99}Ru and {sup 189}Os nuclides. Geometry optimizations at the BP86 level of theory locate the structure in a local minimum. We calculate the contact density from the wavefunction obtained by a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both {sup 99}Ru and {sup 189}Os; the B3LYP functional in particular gives a stronger correlation than the BP86 and B2PLYP functionals. The comparison of contact density between SARC and the well-tempered basis set (WTBS) indicated that numerical convergence of the contact density cannot be obtained, but the reproducibility is less sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the {sup 99}Ru and {sup 189}Os nuclides using the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for {sup 99}Ru and {sup 189}Os. We obtain computationally the ΔR/R values of {sup 99}Ru and {sup 189}Os (36.2 keV) as 2.35×10{sup −4} and −0.20×10{sup −4}, respectively, at the B3LYP level with the SARC basis set.

  11. Theoretical calculations of L alpha one x-ray emission intensity ratios for uranium in various matrices: a comparison with experimental values

    International Nuclear Information System (INIS)

    Anderson, L.D.

    1976-01-01

    The U Lα1 x-ray emission intensity ratios (I{sub Lα1}/I{sub Lα1, 100% UO2}) in various matrices were calculated using the fundamental-parameters formula of Criss and Birks and mass absorption coefficients calculated from a formula developed by Dewey. The use of the intensity ratio made it unnecessary to know the fluorescence yield for the U L{sub III} level, the probability of emission of the U Lα1 line, and the jump ratios for the three absorption edges of uranium. Also, since an intensity ratio was used, the results are independent of the x-ray tube current and the spectral distribution of the x-ray tube. A method is presented to calculate the intensity ratios for x-ray tube voltages other than the value (45 kV) used in the calculations. The theoretical results are calculated and compared with the experimental results obtained for 141 matrices. Difficulties due to oxidation of some of the metal powders used in the sample preparation, to small concentrations of uranium, and to an excessively large number of elements present in some of the samples invalidated the experimental results for 91 of the matrices. For the remaining 50 matrices, the theoretical and experimental values agreed to within ±5 percent relative error for 36 matrices; within ±5 to ±10 percent for 7 matrices; within ±10 to ±20 percent for 6 matrices; and differed by more than ±20 percent for 1 matrix

  12. Medication calculation: the potential role of digital game-based learning in nurse education.

    Science.gov (United States)

    Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle

    2013-12-01

    Medication dose calculation is one of several medication-related activities that nurses conduct daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution for the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve students' attitudes toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education and to highlight the potential role of digital game-based learning in this area.

  13. Calculation of proper values {lambda}{sub mn}(c) of the spheroidal equation; Calcul des valeurs propres {lambda}{sub mn}(c) de l'equation spheroidale

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, J M [CEA Limeil Valenton, 94 - Villeneuve-Saint-Georges (France)

    1969-05-01

    The aim of this report is to find, with good accuracy, a proper value {lambda}{sub mn}(c) of the spheroidal differential equation: d/dz[(1-z{sup 2})du/dz] + [{lambda} - c{sup 2}z{sup 2} - m{sup 2}/(1-z{sup 2})]u = 0, obtained by separation of the three variables of the wave equation {delta}{sup 2}u + k{sup 2}u = 0 in prolate or oblate spheroidal coordinates. The program drawn up calculates {lambda}{sub mn}(c) for any set (m, n, c) chosen in the range 0 {<=} |m| {<=} 10, m a whole number; |m| {<=} n {<=} 20, n a whole number; 0 {<=} |c| {<=} 30; previous work covered a smaller range of values. The function to be solved by a Newton-Raphson-type approximation method, and the initial value, are chosen so as to converge towards the required solution. (author)
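
    For orientation, SciPy exposes the prolate and oblate characteristic values directly (scipy.special.pro_cv / obl_cv), which makes a convenient reference over the (m, n, c) ranges covered by the report. The sketch below shows a generic Newton-Raphson iteration of the kind described, seeded with the c → 0 limit λ_mn(0) = n(n+1); the residual function g is only a stand-in for the report's recurrence-based characteristic equation.

        import numpy as np
        from scipy.special import pro_cv  # prolate spheroidal characteristic value

        def newton_raphson(g, lam0, tol=1e-10, max_iter=50):
            """Generic Newton-Raphson with a central-difference derivative."""
            lam = lam0
            for _ in range(max_iter):
                h = 1e-6 * max(1.0, abs(lam))
                dg = (g(lam + h) - g(lam - h)) / (2.0 * h)
                lam_new = lam - g(lam) / dg
                if abs(lam_new - lam) < tol:
                    return lam_new
                lam = lam_new
            return lam

        m, n, c = 2, 5, 10.0
        g = lambda lam: lam - pro_cv(m, n, c)   # stand-in residual function
        print(newton_raphson(g, n * (n + 1)))   # seeded with lambda_mn(0) = n(n+1)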

  14. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    Science.gov (United States)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET lys). The measured data were compared with ET ref calculations. Daily values differed slightly over the year: ET ref was generally overestimated at small values, whereas it was rather underestimated when ET was large, which is supported by other studies as well. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET ref data in the region and in similar environments, and improve knowledge of the dynamics of the influencing factors causing the deviations.
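
    For reference, the daily ASCE-EWRI standardized equation (short reference, i.e. grass) can be sketched as below. The constants Cn = 900 and Cd = 0.34 are the daily short-reference values; the simplification of using the daily mean temperature for the saturation vapour pressure, and the example inputs, are assumptions of this sketch.

        import math

        def et_ref_daily(T, u2, Rn, G, ea, P, Cn=900.0, Cd=0.34):
            """ASCE-EWRI standardized reference ET (mm/day), short reference.
            T: mean air temperature (degC), u2: wind speed at 2 m (m/s),
            Rn, G: net radiation and soil heat flux (MJ m-2 d-1),
            ea: actual vapour pressure (kPa), P: air pressure (kPa)."""
            es = 0.6108 * math.exp(17.27 * T / (T + 237.3))   # saturation vp (kPa)
            delta = 4098.0 * es / (T + 237.3) ** 2            # slope of vp curve
            gamma = 0.000665 * P                              # psychrometric constant
            num = 0.408 * delta * (Rn - G) \
                  + gamma * (Cn / (T + 273.0)) * u2 * (es - ea)
            return num / (delta + gamma * (1.0 + Cd * u2))

        # Invented example day: 18 degC, 2.1 m/s wind, Rn = 14 MJ/m2/d, G ~ 0.
        print(f"ET_ref = {et_ref_daily(18.0, 2.1, 14.0, 0.0, 1.4, 100.2):.2f} mm/day")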

  15. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K [Department of Radiation Oncology, NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
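
    The voxel-wise assignment step lends itself to a compact sketch: given per-class CBCT/CT centroid pairs from training, each CBCT voxel receives fuzzy c-means memberships and a probability-weighted CT number. The centroids below and the fuzzifier m = 2 are illustrative assumptions, not the study's fitted values.

        import numpy as np

        def synthetic_ct(cbct, cbct_centroids, ct_centroids, m=2.0, eps=1e-9):
            """Probability-weighted CT assignment from fuzzy memberships."""
            hu = np.asarray(cbct, dtype=float)[..., None]        # shape (..., 1)
            d = np.abs(hu - np.asarray(cbct_centroids)) + eps    # shape (..., K)
            u = d ** (-2.0 / (m - 1.0))                          # fuzzy membership
            u /= u.sum(axis=-1, keepdims=True)                   # normalise over classes
            return (u * np.asarray(ct_centroids)).sum(axis=-1)   # weighted CT HU

        # Invented centroids for five "tissue" classes.
        cbct_c = [-950.0, -80.0, 10.0, 60.0, 700.0]
        ct_c = [-1000.0, -100.0, 0.0, 45.0, 900.0]
        print(synthetic_ct([-900.0, 20.0, 650.0], cbct_c, ct_c))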

  16. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    International Nuclear Information System (INIS)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K

    2016-01-01

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.

  17. Value-based insurance design: consumers' views on paying more for high-cost, low-value care.

    Science.gov (United States)

    Ginsburg, Marjorie

    2010-11-01

    Value-based insurance designs frequently lower consumers' cost sharing to motivate healthy behavior, such as adhering to medication regimens. Few health care purchasers have followed the more controversial approach of using increased cost sharing to temper demand for high-cost, low-value medical care. Yet there is evidence that when health care's affordability is at stake, the public may be willing to compromise on coverage of certain medical problems and less effective treatments. Businesses should engage employees in discussions about if and how this type of value-based insurance design should apply to their own insurance coverage. A similar process could also be used for Medicare and other public-sector programs.

  18. A program for monitor unit calculation for high energy photon beams in isocentric condition based on measured data

    International Nuclear Information System (INIS)

    Gesheva-Atanasova, N.

    2008-01-01

    The aim of this study is: 1) to propose a procedure and a program for monitor unit calculation for radiation therapy with high energy photon beams, based on data measured by the author; 2) to compare these data with published data; and 3) to evaluate the precision of the monitor unit calculation program. From this study it can be concluded that we reproduced the published data with good agreement, except for the TPR values at depths up to 5 cm. The measured relative weight of the upper and lower jaws (parameter A) differed dramatically from the published data, but perfectly described the collimator exchange effect for our treatment machine. No difference was found between the head scatter ratios measured in a mini phantom and those measured with a proper brass buildup cap. Our monitor unit calculation program was found to be reliable, and it can be applied for checking patients' plans for irradiation with high energy photon beams and for some fast calculations. Because of the identity in construction, design and characteristics of Siemens accelerators, and the agreement with the published data for the same beam qualities, we hope that most of our experimental data and this program can be used, after verification, in other hospitals
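
    A typical isocentric monitor-unit relation of the kind such a program implements divides the prescribed dose by the dose per MU at the calculation point; the factor names below (Sc collimator scatter, Sp phantom scatter, TPR tissue-phantom ratio) follow common convention, and the numbers are purely illustrative, not the author's measured data.

        def monitor_units(dose_cGy, k_cGy_per_MU, Sc, Sp, TPR, wedge=1.0, tray=1.0):
            """Isocentric MU: prescribed dose over dose per MU at the point."""
            return dose_cGy / (k_cGy_per_MU * Sc * Sp * TPR * wedge * tray)

        # Illustrative: 1 cGy/MU calibration, 10x10 cm open field, depth 10 cm.
        print(f"MU = {monitor_units(200.0, 1.0, Sc=1.0, Sp=1.0, TPR=0.785):.1f}")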

  19. 76 FR 39006 - Medicare Program; Hospital Inpatient Value-Based Purchasing Program; Correction

    Science.gov (United States)

    2011-07-05

    ... and 480 [CMS-3239-CN] RIN 0938-AQ55 Medicare Program; Hospital Inpatient Value-Based Purchasing... Value-Based Purchasing Program.'' DATES: Effective Date: These corrections are effective on July 1, 2011... for the hospital value-based purchasing program. Therefore, in section III. 6. and 7. of this notice...

  20. SARP: a value-based approach to hospice admissions triage.

    Science.gov (United States)

    MacDonald, D

    1995-01-01

    As hospices become established and case referrals increase, many programs are faced with the necessity of instituting waiting lists. Prioritizing cases for order of admission requires a triage method that is rational, fair, and consistent. This article describes the SARP method of hospice admissions triage, which evaluates prospective cases according to seniority, acuity, risk, and political significance. SARP's essential features, operative assumptions, advantages, and limitations are discussed, as well as the core hospice values which underlie its use. The article concludes with a call for trial and evaluation of SARP in other hospice settings.

  1. The Value of Negotiating Cost-Based Transfer Prices

    Directory of Open Access Journals (Sweden)

    Anne Chwolka

    2010-10-01

    Full Text Available This paper analyzes the potential of one-step transfer prices based on either variable or full costs for coordinating decentralized production and quality-improving investment decisions. Transfer prices based on variable costs fail to induce investments on the upstream stage. In contrast, transfer prices based on full costs provide strong investment incentives for the upstream divisions. However, they fail to coordinate the investment decisions. We show that negotiations prevent such coordination failure. In particular, we find that the firm benefits from a higher degree of decentralization so that total profit increases in the number of parameters being subject to negotiations.

  2. Examining the Perceived Value of Integration of Earned Value Management with Risk Management-Based Performance Measurement Baseline

    Science.gov (United States)

    Shah, Akhtar H.

    2014-01-01

    Many projects fail despite the use of evidence-based project management practices such as Performance Measurement Baseline (PMB), Earned Value Management (EVM) and Risk Management (RM). Although previous researchers have found that integrated project management techniques could be more valuable than the same techniques used by themselves, these…

  3. Two-dimensional core calculation research for fuel management optimization based on CPACT code

    International Nuclear Information System (INIS)

    Chen Xiaosong; Peng Lianghui; Gang Zhi

    2013-01-01

    The fuel management optimization process requires rapid assessment of candidate core loading patterns; commonly used methods include the two-dimensional diffusion nodal method, the perturbation method, the neural network method, etc. A two-dimensional loading-pattern evaluation code was developed based on the three-dimensional LWR diffusion calculation program CPACT. An axial buckling term, introduced to simulate axial leakage, was searched in sub-burnup sections to correct the two-dimensional core diffusion calculation results. Meanwhile, in order to achieve better accuracy, the weight equivalent volume method for the control rod assembly cross-section was improved. (authors)

  4. Calculation and Simulation Study on Transient Stability of Power System Based on Matlab/Simulink

    Directory of Open Access Journals (Sweden)

    Shi Xiu Feng

    2016-01-01

    Full Text Available When the stability of a power system is lost, large numbers of users suffer outages and the whole system may even collapse, with extremely serious consequences. Taking a single-machine infinite-bus system as an example, this paper analyzes the case in which a two-phase ground fault occurs at point f and the circuit breakers on both sides of the faulted line trip simultaneously to clear the fault. The transient stability of the system is analyzed by two methods, calculation and simulation; their conclusions are consistent, and the simulation analysis proves superior to the analytical calculation.

  5. Core physics design calculation of mini-type fast reactor based on Monte Carlo method

    International Nuclear Information System (INIS)

    He Keyu; Han Weishi

    2007-01-01

    An accurate physics calculation model has been set up for the mini-type sodium-cooled fast reactor (MFR) based on the MCNP-4C code, and a detailed calculation of its critical physics characteristics, neutron flux distribution, power distribution and reactivity control has been carried out. The results indicate that the basic physics characteristics of the MFR can satisfy the requirements and objectives of the core design. The power density and neutron flux distributions are symmetrical and reasonable. The control system is able to maintain a reliable reactivity balance efficiently and meets the requirements for long-term operation. (authors)

  6. 'What the patient wants': an investigation of the methods of ascertaining patient values in evidence-based medicine and values-based practice.

    Science.gov (United States)

    Wieten, Sarah

    2018-02-01

    Evidence-Based Medicine (EBM), Values-Based Practice (VBP) and Person-Centered Healthcare (PCH) are all concerned with the values in play in the clinical encounter. However, these recent movements are not in agreement about how to discover these relevant values. In some parts of EBM textbooks, the prescribed method for discovering values is through social science research on the average values in a particular population. VBP by contrast always investigates the individually held values of the different stakeholders in the particular clinical encounter, although the account has some other difficulties. I argue that although average values for populations might be very useful in informing questions of resource distribution and policy making, their use cannot replace the individual solicitation of patient (and other stakeholder) values in the clinical encounter. Because of the inconsistency of the EBM stance on values, the incompatibility of some versions of the EBM treatment of values with PCH, and EBM's attempt to transplant research methods from science into the realm of values, I must recommend the use of the VBP account of values discovery. © 2015 John Wiley & Sons, Ltd.

  7. Efficient p-value evaluation for resampling-based tests

    KAUST Repository

    Yu, K.; Liang, F.; Ciampa, J.; Chatterjee, N.

    2011-01-01

    The resampling-based test, which often relies on permutation or bootstrap procedures, has been widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. It requires repeated
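
    A minimal sketch of such a resampling-based test, here a two-sided permutation test for a difference in means, makes the computational burden evident: every p-value costs thousands of reshuffles.

        import numpy as np

        def permutation_pvalue(x, y, n_perm=10000, seed=0):
            """Two-sided permutation p-value for a mean difference."""
            rng = np.random.default_rng(seed)
            pooled = np.concatenate([x, y])
            obs = abs(x.mean() - y.mean())
            hits = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                hits += abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= obs
            return (hits + 1) / (n_perm + 1)    # add-one avoids p = 0

        x = np.random.default_rng(1).normal(0.0, 1.0, 30)
        y = np.random.default_rng(2).normal(0.5, 1.0, 30)
        print(permutation_pvalue(x, y))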

  8. Information integration in perceptual and value-based decisions

    OpenAIRE

    Tsetsos, K.

    2012-01-01

    Research on the psychology and neuroscience of simple, evidence-based choices has led to an impressive progress in capturing the underlying mental processes as optimal mechanisms that make the fastest decision for a specified accuracy. The idea that decision-making is an optimal process stands in contrast with findings in more complex, motivation-based decisions, focussed on multiple goals with trade-offs. Here, a number of paradoxical and puzzling choice behaviours have been r...

  9. Housing Value Forecasting Based on Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Jingyi Mu

    2014-01-01

    Full Text Available In the era of big data, many urgent issues in all walks of life can be addressed with big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can decide whether or not to develop real estate in the corresponding regions. In this paper, support vector machine (SVM), least squares support vector machine (LSSVM), and partial least squares (PLS) methods are used to forecast the home values, and these algorithms are compared according to the predicted results. Experiments show that although the data set exhibits strong nonlinearity, the SVM and LSSVM methods are superior to PLS in dealing with it. Because SVM solves a quadratic programming problem, the global optimum can be found and the best forecasting performance achieved. Finally, the computational efficiencies of the algorithms are compared in terms of their computing times.
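
    The SVM forecasting step can be sketched with scikit-learn's SVR; synthetic features stand in for the Boston-suburb data here, and the kernel and hyperparameters are illustrative rather than the paper's tuned values.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 5))                  # stand-in house features
        y = (20 + 3 * X[:, 0] - 2 * X[:, 1] ** 2       # nonlinear price response
             + np.sin(X[:, 2]) + rng.normal(0.0, 0.5, 300))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X_tr, y_tr)
        print(f"test R^2 = {model.score(X_te, y_te):.3f}")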

  10. Singular value decomposition based feature extraction technique for physiological signal analysis.

    Science.gov (United States)

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques to calculate and describe the complexity of the physiological signal. Many studies use this approach to detect changes in the physiological conditions in the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted to replace MSE to extract the features of physiological signals, and adopt the support vector machine (SVM) to classify the different physiological states. A test data set based on the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy rate of 89.157%, which is higher than that using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in diagnosis of congestive heart failure (CHF) disease.
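
    One common way to realise SVD-based feature extraction, sketched below under the assumption of a delay-embedded trajectory matrix, is to take the normalised leading singular values of the embedding as the feature vector and feed them to the SVM classifier; the embedding dimension and the number of retained values are illustrative choices, not the paper's configuration.

        import numpy as np

        def svd_features(signal, dim=10, n_keep=5):
            """Normalised leading singular values of a delay-embedded signal."""
            traj = np.lib.stride_tricks.sliding_window_view(signal, dim)
            s = np.linalg.svd(traj, compute_uv=False)
            return s[:n_keep] / s.sum()

        rng = np.random.default_rng(0)
        clean = np.sin(np.linspace(0, 40 * np.pi, 2000))
        noisy = clean + 0.5 * rng.normal(size=2000)
        print(svd_features(clean))   # energy concentrated in two singular values
        print(svd_features(noisy))   # spectrum flattened by noise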

  11. Specification of materials Data for Fire Safety Calculations based on ENV 1992-1-2

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl

    1997-01-01

    The part 1-2 of the Eurocode on Concrete deals with Structural Fire Design. In chapter 3, which is partly written by the author of this paper, some data are given for the development of a few material parameters at high temperatures. These data are intended to represent the worst possible concrete according to experience from tests on structural specimens based on German siliceous concrete subjected to Standard fire exposure until the time of maximum gas temperature. Chapter 4.3, which is written by the author of this paper, provides a simplified calculation method by means of which the load bearing capacity of constructions of any concrete exposed to any time of any fire exposure can be calculated. Chapter 4.4 provides information on what should be observed if more general calculation methods are used. Annex A, which is not a part of the code, provides some additional information on materials data.

  12. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Science.gov (United States)

    Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.

    2016-10-01

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.

  13. Value-based management : an application in North West regional pharmacies / L. Nel.

    OpenAIRE

    Nel, Lindi

    2012-01-01

    Value based management is a process that can be used to determine a business’s value drivers. It attempts to determine how the drivers link to value creation, and then break down the value drivers into achievable activities that can be pursued by employees. Due to strict medicine pricing regulations in the country, it is becoming increasingly difficult for pharmacy businesses to stay profitable. This study set out to develop a value based management framework that could be used by pharmac...

  14. Promoting networks between evidence-based medicine and values-based medicine in continuing medical education.

    Science.gov (United States)

    Altamirano-Bustamante, Myriam M; Altamirano-Bustamante, Nelly F; Lifshitz, Alberto; Mora-Magaña, Ignacio; de Hoyos, Adalberto; Avila-Osorio, María Teresa; Quintana-Vargas, Silvia; Aguirre, Jorge A; Méndez, Jorge; Murata, Chiharu; Nava-Diosdado, Rodrigo; Martínez-González, Oscar; Calleja, Elisa; Vargas, Raúl; Mejía-Arangure, Juan Manuel; Cortez-Domínguez, Araceli; Vedrenne-Gutiérrez, Fernand; Sueiras, Perla; Garduño, Juan; Islas-Andrade, Sergio; Salamanca, Fabio; Kumate-Rodríguez, Jesús; Reyes-Fuentes, Alejandro

    2013-02-15

    In recent years, medical practice has followed two different paradigms: evidence-based medicine (EBM) and values-based medicine (VBM). There is an urgent need to promote medical education that strengthens the relationship between these two paradigms. This work is designed to establish the foundations for a continuing medical education (CME) program aimed at encouraging the dialogue between EBM and VBM by determining the values relevant to everyday medical activities. A quasi-experimental, observational, comparative, prospective and qualitative study was conducted by analyzing, through a concurrent triangulation strategy, the correlation between the healthcare personnel-patient relationship, healthcare personnel's life history, and ethical judgments regarding dilemmas that arise in daily clinical practice. In 2009, healthcare personnel working in Mexico were invited to participate in a free, online clinical ethics course. Each participant responded to a set of online survey instruments before and after the CME program. Face-to-face semi-structured interviews were conducted with healthcare personnel, focusing on their views and representations of clinical practice. The healthcare personnel's core values were honesty and respect. There were significant differences in the clinical practice axiology before and after the course. Regarding ethical discernment, the CME program had an impact on autonomy (P ≤0.0001), and utilitarian autonomy was reinforced in the participants (P ≤0.0001). Regarding work values, significant differences due to the CME intervention were found in openness to change (OC). The values that stood out in ethical discernment and in the healthcare personnel-patient relationship were beneficence, respect and compassion, respectively. The healthcare personnel participating in the CME intervention in clinical ethics improved the higher-order values openness to change (OC) and self-transcendence (ST), which are essential to fulfilling the healing ends of medicine. The CME intervention strengthened the role of these values.

  15. Advances in audio watermarking based on singular value decomposition

    CERN Document Server

    Dhar, Pranab Kumar

    2015-01-01

    This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptibility in the watermarked sounds. Then, an audio watermarking method in the fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload, and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications.
    • Features new methods of audio watermarking for copyright protection and ownership protection
    • Outl...

  16. The Resource-Based View and The Concept of Value: The Role of Emergence in Value Creation

    Directory of Open Access Journals (Sweden)

    Luis Armando Luján Salazar

    2017-02-01

    Full Text Available This theoretical paper deals with the concept of value. It asserts that value is the only necessary condition in the resource-based view (RBV). It also argues that no resource or strategy is valuable per se: value is relative to a configuration of resources, routines, and embedded assets. For example, concerning the RBV attribute of imitation, we can ask to what extent a valuable resource is independent of the rest of the resources, and by extension, to what extent a configuration of resources is rare by itself. This paper discusses the emergence of value and its embeddedness in a configuration of resources. Revising the concept of value could challenge the other main conditions in the RBV: rarity, costliness of imitation, and impossibility of replacement by strategic substitutes. If the relations of these attributes with the rest of the resources are taken into account, we might better understand how value emerges and how a firm's resources and capabilities are related to the creation of value.

  17. On the validity of microscopic calculations of double-quantum-dot spin qubits based on Fock-Darwin states

    Science.gov (United States)

    Chan, GuoXuan; Wang, Xin

    2018-04-01

    We consider two typical approximations that are used in microscopic calculations of double-quantum-dot spin qubits, namely the Heitler-London (HL) and the Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution of a one-dimensional Schrödinger equation is exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, which is the key parameter that controls quantum dot spin qubits. This underestimation originates from the lack of tunneling in Fock-Darwin states, which are accurate only in the case of a single potential well. Our results suggest that the accuracy of current two-dimensional molecular-orbital-theoretical calculations based on Fock-Darwin states should be revisited, since the underestimation could only worsen in dimensions higher than one.

  18. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    Science.gov (United States)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, which shows obvious optimization effects for data sequences with non-stationary and volatile characteristics. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the research values lying in each state, reflecting the preference degrees for the different states in an objective way. On the other hand, background value optimization is applied in the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
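
    A plain GM(1,1) sketch (without the paper's background-value optimization or the Markov state correction) shows where the background value enters: the fixed 0.5 weights on the accumulated series are precisely what background-value optimization replaces. The grain series below is invented.

        import numpy as np

        def gm11_forecast(x0, horizon=3):
            """Plain GM(1,1): fit on positive series x0, forecast `horizon` steps."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                        # accumulated (1-AGO) series
            z = 0.5 * (x1[:-1] + x1[1:])              # background values (0.5 weights)
            B = np.column_stack([-z, np.ones_like(z)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(1, len(x0) + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(np.concatenate([[x0[0]], x1_hat]))  # fitted + forecast

        grain = [450.2, 467.9, 489.1, 503.6, 522.4]   # invented yearly outputs
        print(gm11_forecast(grain))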

  19. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, was developed. This paper describes the model and method, how to use them, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both a track length estimator and a direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is quite unique in providing a probabilistic treatment of a geometry with a great number of randomly distributed spherical fuels. With future speed-up by vector or parallel computation, it is expected to be widely used in the calculation of nuclear reactor cores, especially HTGR cores. (author).

  20. An independent dose calculation algorithm for MLC-based stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Lorenz, Friedlieb; Killoran, Joseph H.; Wenz, Frederik; Zygmanski, Piotr

    2007-01-01

    We have developed an algorithm to calculate dose in a homogeneous phantom for radiotherapy fields defined by a multi-leaf collimator (MLC), for both static and dynamic MLC delivery. The algorithm was developed to supplement the dose algorithms of commercial treatment planning systems (TPS). The motivation for this work is to provide an independent dose calculation, primarily for quality assurance (QA) and secondarily for the development of static-MLC-field based inverse planning. The dose calculation utilizes a pencil-beam kernel. However, an explicit analytical integration results in a closed form for rectangular beamlets defined by single leaf pairs. This approach reduces spatial integration to summation and leads to a simple method of determining the model parameters. The total dose for any static or dynamic MLC field is obtained by summing over all individual rectangles from each segment, which offers faster calculation of two-dimensional dose distributions at any depth in the phantom. Standard beam data used in the commissioning of the TPS served as input data for the algorithm. The calculated results were compared with the TPS and with measurements for static and dynamic MLC. The agreement was very good (<2.5%) for all tested cases except for very small static MLC sizes of 0.6 cm × 0.6 cm (<6%) and some ion chamber measurements in a high gradient region (<4.4%). This finding enables us to use the algorithm for routine QA as well as for research developments
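
    With a Gaussian pencil-beam kernel, the explicit integration over one leaf-pair rectangle collapses into a product of error-function differences, which is the essence of the closed form described above; the kernel width and geometry below are illustrative assumptions, not the commissioned model.

        import numpy as np
        from scipy.special import erf

        def rect_beamlet_dose(x, y, x1, x2, y1, y2, sigma=0.4):
            """Relative dose at (x, y) from a uniform rectangle [x1,x2]x[y1,y2]
            convolved with a Gaussian pencil kernel of width sigma (cm)."""
            s = sigma * np.sqrt(2.0)
            fx = 0.5 * (erf((x2 - x) / s) - erf((x1 - x) / s))
            fy = 0.5 * (erf((y2 - y) / s) - erf((y1 - y) / s))
            return fx * fy        # 2-D integration reduced to a closed form

        # A full MLC field is the sum of such rectangles over leaf pairs/segments.
        print([round(rect_beamlet_dose(x, 0.0, -1.0, 1.0, -2.0, 2.0), 3)
               for x in np.linspace(-3, 3, 7)])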

  1. Programs and subroutines for calculating cadmium body burdens based on a one-compartment model

    International Nuclear Information System (INIS)

    Robinson, C.V.; Novak, K.M.

    1980-08-01

    A pair of FORTRAN programs for calculating the body burden of cadmium as a function of age is presented, together with a discussion of the assumptions which serve to specify the underlying, one-compartment model. Account is taken of the contributions to the body burden from food, from ambient air, from smoking, and from occupational inhalation. The output is a set of values for ages from birth to 90 years which is either longitudinal (for a given year of birth) or cross-sectional (for a given calendar year), depending on the choice of input parameters
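
    The bookkeeping of such a one-compartment model is compact: first-order elimination plus the year's intake. The retention expression below assumes uptake spread continuously over each year, and the half-life and uptake figures are placeholders rather than the report's parameter values.

        import numpy as np

        def body_burden(yearly_uptake_ug, half_life_yr=20.0):
            """One-compartment burden (ug) at the end of each year of life."""
            k = np.log(2.0) / half_life_yr            # elimination rate (1/yr)
            burden, history = 0.0, []
            for u in yearly_uptake_ug:
                burden = burden * np.exp(-k) + u * (1.0 - np.exp(-k)) / k
                history.append(burden)
            return np.array(history)

        # Placeholder: constant 15 ug/yr absorbed from food, air and smoking.
        print(body_burden([15.0] * 90)[[0, 29, 59, 89]])   # ages 1, 30, 60, 90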

  2. An ill-conditioning conformal radiotherapy analysis based on singular values decomposition

    International Nuclear Information System (INIS)

    Lefkopoulos, D.; Grandjean, P.; Bendada, S.; Dominique, C.; Platoni, K.; Schlienger, M.

    1995-01-01

    Clinical experience in stereotactic radiotherapy of irregular complex lesions has shown that optimization algorithms are necessary to improve the dose distribution. We have developed a general optimization procedure which can be applied to different conformal irradiation techniques. In this presentation the procedure is tested on stereotactic radiotherapy of complex cerebral lesions treated with a multi-isocentric technique based on the 'associated targets methodology'. In this inverse procedure we use singular value decomposition (SVD) analysis, which proposes several optimal solutions for the narrow-beam weights of each isocentre. The SVD analysis quantifies the ill-conditioning of the dosimetric calculation of the stereotactic irradiation using the condition number, the ratio of the largest to the smallest singular value. Our dose distribution optimization approach consists of studying the influence of the irradiation parameters on the inverse problem of stereotactic radiotherapy. The adjustment of the different irradiation parameters in the 'SVD optimizer' procedure is carried out taking into account the trade-off between reconstruction quality and calculation time; this will permit a more efficient use of the 'SVD optimizer' in clinical applications for real 3D lesions. The evaluation criteria for the choice of satisfactory solutions are based on dose-volume histograms and clinical considerations. We will present the efficiency of the 'SVD optimizer' in analyzing and predicting the ill-conditioning in stereotactic radiotherapy and in recognizing the topography of the different beams in order to create an optimal reconstructed weighting vector. The planning of stereotactic treatments using the 'SVD optimizer' is examined for mono-isocentrically and complex dual-isocentrically treated lesions. The application of the SVD optimization technique provides conformal dose distributions for complex intracranial lesions. It is a general optimization procedure
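
    The ill-conditioning measure itself is a one-liner once the dose matrix (dose points by beam weights) is assembled, and a truncated-SVD reconstruction of the weight vector is a standard remedy; the random matrix below is only a stand-in for a real dosimetric operator.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((200, 30))             # stand-in: dose points x beam weights
        s = np.linalg.svd(A, compute_uv=False)
        print(f"condition number = {s[0] / s[-1]:.1f}")

        # Truncated-SVD solution for a prescription d, keeping r dominant triplets.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        d = A @ rng.random(30)                # a realisable prescription, for the demo
        r = 20
        w = Vt[:r].T @ ((U[:, :r].T @ d) / s[:r])
        print(f"residual with r = {r}: {np.linalg.norm(A @ w - d):.2e}")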

  3. Mineral Retention Values for Blends of Cereal Based ...

    African Journals Online (AJOL)

    Prof. Ogunji

    household diets can be manipulated to enhance the micronutrient content or alter the levels of ... (2007) reported that fermentation for 72 h increased most of the minerals in cereal-based ... Production of Amylase Rich Flour (ARF): amylase-rich flour was produced by the method of Odumodu ... Practical chemistry (3rd Ed).

  4. Value Creation in the Knowledge-Based Economy

    Science.gov (United States)

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  5. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations

    Science.gov (United States)

    Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.

    2014-01-01

    The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔS(Total) = 1.4 kcal·mol⁻¹ per residue at 300 K, with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (Ω(U)/Ω(N)). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔS(helix−sheet) = 0.5 kcal·mol⁻¹), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044

  6. CALCULATION METHOD OF ELECTRIC POWER LINES MAGNETIC FIELD STRENGTH BASED ON CYLINDRICAL SPATIAL HARMONICS

    Directory of Open Access Journals (Sweden)

    A.V. Erisov

    2016-05-01

    Full Text Available Purpose. To simplify the design relations used to determine the magnetic field strength of electric power lines and to assess their environmental safety. Methodology. The magnetic field of transmission lines is described using spatial harmonic analysis in a cylindrical coordinate system. Results. For engineering calculations of the magnetic field of electric power lines, the first spatial harmonic describes the field with sufficient accuracy. Originality. The influence of the transmission line tower design on the value of its magnetic field and on the width of the right-of-way is determined in a substantially simplified way. Practical value. Environmentally safe design of electric power lines with respect to the magnetic field level.
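
    Under the usual long-straight-conductor idealisation behind such engineering estimates, the field at a point is the phasor superposition of H = I/(2πr) contributions from each phase. The flat-line geometry and current below are invented for illustration.

        import numpy as np

        def h_field(point, conductors, currents):
            """RMS magnetic field strength (A/m) from long parallel conductors.
            conductors: (x, y) positions in m; currents: complex RMS phasors in A."""
            hx = hy = 0j
            for (cx, cy), i in zip(conductors, currents):
                dx, dy = point[0] - cx, point[1] - cy
                r2 = dx * dx + dy * dy
                hx += -i * dy / (2 * np.pi * r2)   # tangential field, H = I/(2*pi*r)
                hy += i * dx / (2 * np.pi * r2)
            return np.sqrt(abs(hx) ** 2 + abs(hy) ** 2)

        I = 300.0                                              # phase current (A)
        phases = [I * np.exp(2j * np.pi * k / 3) for k in range(3)]
        line = [(-4.0, 10.0), (0.0, 10.0), (4.0, 10.0)]        # flat line, 10 m high
        print(f"H at ground under the centre: {h_field((0.0, 0.0), line, phases):.3f} A/m")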

  7. Simulation-based power calculations for planning a two-stage individual participant data meta-analysis.

    Science.gov (United States)

    Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D

    2018-05-18

    Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes, where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies the intervention effect on weight gain). Using our simulation-based approach, we estimated the power of this two-stage IPD meta-analysis to detect the interaction and judged whether the planned meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away the IPD from ten trials. Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
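
    The four steps translate almost line-for-line into code. The sketch below simulates a two-stage analysis of a treatment-covariate interaction on a continuous outcome with invented parameter values (14 trials, centred BMI, fixed-effect pooling); it is a schematic of the framework, not the authors' software.

        import numpy as np
        from scipy import stats

        def ipd_power(n_trials=14, n_per_trial=85, gamma=-0.1, sd=4.0,
                      n_sim=500, alpha=0.05, seed=0):
            """Power of a two-stage IPD meta-analysis of an interaction effect."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_sim):
                est, var = [], []
                for _ in range(n_trials):                      # stage 1: trial fits
                    bmi = rng.normal(28, 5, n_per_trial) - 28  # centred covariate
                    trt = rng.integers(0, 2, n_per_trial)
                    y = (-1.0 * trt + gamma * trt * bmi
                         + rng.normal(0, sd, n_per_trial))
                    X = np.column_stack([np.ones_like(y), trt, bmi, trt * bmi])
                    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
                    cov = res[0] / (len(y) - 4) * np.linalg.inv(X.T @ X)
                    est.append(beta[3]); var.append(cov[3, 3])
                w = 1.0 / np.asarray(var)                      # stage 2: fixed effect
                z = np.sum(w * est) / w.sum() / np.sqrt(1.0 / w.sum())
                hits += 2 * stats.norm.sf(abs(z)) < alpha
            return hits / n_sim

        print(f"estimated power = {ipd_power():.2f}")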

  8. Calculation of acoustic field based on laser-measured vibration velocities on ultrasonic transducer surface

    Science.gov (United States)

    Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin

    2018-05-01

    Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, including providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points followed by a data-fitting computation. In addition, to ensure the calculation accuracy for the whole field, even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further validated experimentally with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physics coupling computations where the effect of the acoustic field should be taken into account.

  9. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    International Nuclear Information System (INIS)

    Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-01-01

    Highlights: • We propose three parallel orbital-updating based plane-wave basis methods for electronic structure calculations. • These new methods avoid the generation of large-scale eigenvalue problems and thereby reduce the computational cost. • These new methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. • Numerical experiments show that these new methods are reliable and efficient for large-scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real-space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.

  10. Calculation of the Instream Ecological Flow of the Wei River Based on Hydrological Variation

    Directory of Open Access Journals (Sweden)

    Shengzhi Huang

    2014-01-01

    Full Text Available It is of great significance for watershed management departments to reasonably allocate water resources and ensure the sustainable development of river ecosystems; the crucial issue is to calculate the instream ecological flow accurately. In order to do so, flow variation is taken into account in this study. The heuristic segmentation algorithm, which is suitable for detecting the mutation points of a flow series, is employed to identify the change points. Based on the law of tolerance and ecological adaptation theory, the maximum instream ecological flow is calculated as the highest-frequency monthly flow based on the GEV distribution, which is very suitable for the healthy development of river ecosystems. Furthermore, in order to guarantee the sustainable development of river ecosystems even under adverse circumstances, the minimum instream ecological flow is calculated by a modified Tennant method, improved by replacing the average flow with the highest-frequency flow. Since the modified Tennant method better reflects the flow regime, it has physical significance, and the calculation results are more reasonable.
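
    The 'highest-frequency' flow is the mode of the fitted GEV density; a robust way to obtain it is to fit with SciPy and locate the maximum of the pdf numerically. The synthetic flow series below stands in for the Wei River data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        flows = stats.genextreme.rvs(c=-0.1, loc=55.0, scale=12.0,
                                     size=600, random_state=rng)  # monthly flows, m3/s

        c, loc, scale = stats.genextreme.fit(flows)
        grid = np.linspace(flows.min(), flows.max(), 20000)
        mode = grid[np.argmax(stats.genextreme.pdf(grid, c, loc=loc, scale=scale))]
        print(f"highest-frequency flow (GEV mode) = {mode:.1f} m3/s")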

  11. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, M.J., E-mail: mark.j.jackson@awe.co.uk; Britton, R.; Davies, A.V.; McLarty, J.L.; Goodwin, M.

    2016-10-21

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ–γ, γ–X, γ–511 and γ–e{sup −} coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted. - Highlights: • Versatile method to calculate coincidence summing factors for gamma-spectrometry analysis. • Based solely on ENSDF format nuclear data and detector efficiency characterisations. • Enables generation of a CSF library for any detector, geometry and radionuclide. • Improves measurement accuracy and reduces acquisition times required to meet MDA.

  12. Heavy Ion SEU Cross Section Calculation Based on Proton Experimental Data, and Vice Versa

    CERN Document Server

    Wrobel, F; Pouget, V; Dilillo, L; Ecoffet, R; Lorfèvre, E; Bezerra, F; Brugger, M; Saigné, F

    2014-01-01

    The aim of this work is to provide a method to calculate single event upset (SEU) cross sections by using experimental data. Valuable tools such as PROFIT and SIMPA already focus on the calculation of the proton cross section from heavy-ion cross-section experiments. However, there is no available tool that calculates heavy-ion cross sections based on measured proton cross sections without knowledge of the technology. We based our approach on the diffusion-collection model, with the aim of analyzing the characteristics of the transient currents that trigger SEUs. We show that experimental cross sections can be used to characterize the pulses that trigger an SEU. Experimental results allow us to define an empirical rule to identify the transient currents that are responsible for an SEU. Then, the SEU cross section can be calculated for any kind of particle at any energy with no need to know the Spice model of the cell. We applied our method to several technologies (250 nm, 90 nm and 65 nm bulk SRAMs) and we sho...

  13. Calculation of the acid-base equilibrium constants at the alumina/electrolyte interface from the ph dependence of the adsorption of singly charged ions (Na+, Cl-)

    Science.gov (United States)

    Gololobova, E. G.; Gorichev, I. G.; Lainer, Yu. A.; Skvortsova, I. V.

    2011-05-01

    A procedure was proposed for the calculation of the acid-base equilibrium constants at an alumina/electrolyte interface from experimental data on the adsorption of singly charged ions (Na+, Cl-) at various pH values. The calculated constants (pK₁⁰ = 4.1, pK₂⁰ = 11.9, pK₃⁰ = 8.3, and pK₄⁰ = 7.7) are shown to agree with the values obtained from an experimental pH dependence of the electrokinetic potential and the results of potentiometric titration of Al2O3 suspensions.

  14. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    Science.gov (United States)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes besides attributes with target values. Such techniques are considered comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. The values of the decision matrix and the target-based attributes can be provided as intervals in some such problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce the degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding the selection of biomaterials for hip and knee prostheses are discussed. Preference degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resulting rankings for the problem are compared with the outcomes of other target-based models in the literature.

  15. The Role of Liposomal Bupivacaine in Value-Based Care.

    Science.gov (United States)

    Iorio, Richard

    Multimodal pain control strategies are crucial in reducing opioid use and delivering effective pain management to facilitate improved surgical outcomes. The utility of liposomal bupivacaine in enabling effective pain control in multimodal strategies has been demonstrated in several studies, but others have found the value of liposomal bupivacaine in such approaches to be insignificant. At New York University Langone Medical Center, liposomal bupivacaine injection and femoral nerve block were compared in their delivery of efficacious and cost-effective multimodal analgesia among patients undergoing total joint arthroplasty (TJA). Retrospective analysis revealed that including liposomal bupivacaine in a multimodal pain control protocol for TJA resulted in improved quality and efficiency metrics, decreased narcotic use, and faster mobilization, all relative to femoral nerve block, and without a significant increase in admission costs. In addition, liposomal bupivacaine use was associated with elimination of the need for patient-controlled analgesia in TJA. Thus, at Langone Medical Center, the introduction of liposomal bupivacaine to TJA has been instrumental in achieving adequate pain control, delivering high-level quality of care, and controlling costs.

  16. Section 3: Quality and Value-Based Requirements

    Science.gov (United States)

    Mylopoulos, John

    Traditionally, research and practice in software engineering have focused on specific software qualities, such as functionality and performance. According to this perspective, a system is deemed to be of good quality if it delivers all required functionality (“fitness-for-purpose”) and its performance is above required thresholds. Increasingly, primarily in research but also in practice, other qualities are attracting attention. To facilitate evolution, maintainability and adaptability are gaining popularity. Usability, universal accessibility, innovativeness, and enjoyability are being studied as novel types of non-functional requirements that we do not yet know how to define, let alone accommodate, but which we realize are critical under some contingencies. The growing importance of the business context in the design of software-intensive systems has also thrust economic value, legal compliance, and potential social and ethical implications into the forefront of requirements topics. A focus on the broader user environment and experience, as well as on the organizational and societal implications of system use, has thus become more central to the requirements discourse. This section includes three contributions to this broad and increasingly important topic.

  17. A trait based approach to defining valued mentoring qualities

    Science.gov (United States)

    Pendall, E.

    2012-12-01

    Graduate training in the sciences requires strong personal interactions among faculty, senior lab members, and more junior members. Within the lab-group setting we learn to frame problems, to conduct research, and to communicate findings. The result is that individual scientists are partly shaped by a few influential mentors. We have all been influenced by special relationships with mentors, and on reflection we may find that certain qualities have been especially influential in our career choices. In this presentation I will discuss favorable mentoring traits as determined from an informal survey of scientists at varying career stages and from diverse backgrounds. Respondents addressed questions about traits they value in their mentors in several categories: 1) personal qualities, such as approachability, humor, and encouragement; 2) background, including gender, ethnicity, and family status; 3) scientific qualities, including discipline or specialization, perceived stature in the discipline, seniority, breadth of perspective, and level of expectations; and 4) community-oriented qualities promoted by mentors, such as encouraging service contributions and peer-mentoring within the lab group. The results will be compared among respondents by gender, ethnicity, stage of career, type of work, and subdiscipline within the broadly defined Biogeoscience community. We hope to contribute to the growing discussion on building a diverse and balanced scientific workforce.

  18. Foraging Value, Risk Avoidance, and Multiple Control Signals: How the Anterior Cingulate Cortex Controls Value-based Decision-making.

    Science.gov (United States)

    Brown, Joshua W; Alexander, William H

    2017-10-01

    Recent work on the role of the ACC in cognition has focused on choice difficulty, action value, risk avoidance, conflict resolution, and the value of exerting control, among other factors. A main underlying question is what the output signals of the ACC are and, relatedly, how they affect downstream cognitive processes. Here we propose a model of how the ACC influences cognitive processing in other brain regions that choose actions. The model builds on the earlier Predicted Response Outcome model and suggests that the ACC learns to represent specifically the states in which the potential costs or risks of an action are high, on both short and long timescales. It then uses those cost signals as a basis to bias decisions, minimizing losses while maximizing gains. The model simulates both proactive and reactive control signals and accounts for a variety of empirical findings regarding value-based decision-making.
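
    The following is not the Predicted Response Outcome model itself, only a toy sketch (my own construction) of the core idea the abstract describes: a separately learned cost signal, standing in for ACC output, biases downstream action selection toward loss minimization.

```python
import numpy as np

n_states, n_actions = 4, 2
gain = np.zeros((n_states, n_actions))   # learned expected gains
cost = np.zeros((n_states, n_actions))   # learned expected costs ("ACC" signal)
alpha, risk_aversion = 0.1, 2.0          # learning rate, cost weighting

def choose(state: int) -> int:
    # The cost signal biases the decision toward loss minimization.
    biased_value = gain[state] - risk_aversion * cost[state]
    return int(np.argmax(biased_value))

def update(state: int, action: int, reward: float, loss: float) -> None:
    # Gains and costs are learned separately, echoing the proposal that
    # ACC specifically tracks states where potential costs are high.
    gain[state, action] += alpha * (reward - gain[state, action])
    cost[state, action] += alpha * (loss - cost[state, action])

# One illustrative trial: in state 0, action 1 pays off but is risky.
update(0, 1, reward=1.0, loss=0.8)
print(choose(0))  # the cost term overturns the purely reward-based choice
```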

  19. Calculation of passive earth pressure of cohesive soil based on Culmann's method

    Directory of Open Access Journals (Sweden)

    Hai-feng Lu

    2011-03-01

    Based on the sliding plane hypothesis of Coulomb earth pressure theory, a new method for calculating the passive earth pressure of cohesive soil was constructed using Culmann's graphical construction. The influences of the cohesive force, the adhesive force, and the form of the fill surface were considered in this method. In order to obtain the passive earth pressure and the sliding plane angle, a program based on the sliding surface assumption was developed in the VB.NET programming language. The results calculated with this method were essentially the same as those from the Rankine and Coulomb theory formulas. The method is conceptually clear, and the corresponding formulas given in this paper are simple and convenient for application when the form of the fill surface is complex.
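
    The paper's program is written in VB.NET and accounts for adhesion and arbitrary fill surfaces. As a rough illustration of the underlying trial-wedge search, here is a deliberately simplified sketch (smooth vertical wall, horizontal backfill, no wall adhesion; these simplifications and the Python rendering are mine, not the paper's). For the passive case, the governing value is the minimum of the wedge resistance over candidate sliding-plane angles:

```python
import numpy as np

def passive_resistance(theta, H, gamma, c, phi):
    """Trial-wedge (Culmann-style) passive resistance for a smooth
    vertical wall and horizontal backfill, with a planar slip surface
    inclined at angle theta; cohesion c acts along the slip plane."""
    W = 0.5 * gamma * H**2 / np.tan(theta)   # wedge weight
    L = H / np.sin(theta)                    # slip-plane length
    C = c * L                                # total cohesive force
    return (W + C * np.sin(theta)) * np.tan(theta + phi) + C * np.cos(theta)

# Illustrative inputs: 5 m wall, gamma = 18 kN/m3, c = 10 kPa, phi = 30 deg.
H, gamma, c, phi = 5.0, 18.0, 10.0, np.radians(30.0)
thetas = np.radians(np.linspace(5.0, 55.0, 5001))
P = passive_resistance(thetas, H, gamma, c, phi)
i = np.argmin(P)                             # passive case: minimum governs
print(f"theta_cr = {np.degrees(thetas[i]):.1f} deg, Pp = {P[i]:.1f} kN/m")
```

    For these inputs the search reproduces the Rankine result (critical angle 45° - phi/2 = 30°, Pp = 0.5*gamma*H²*Kp + 2cH*sqrt(Kp) ≈ 848 kN/m), which is the kind of agreement with Rankine and Coulomb formulas the paper reports.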

  20. World Wide Web-based system for the calculation of substituent parameters and substituent similarity searches.

    Science.gov (United States)

    Ertl, P

    1998-02-01

    Easy-to-use, interactive, and platform-independent WWW-based tools are ideal for the development of chemical applications. By using newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of the synthetic organic chemist. At Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for the interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches that enables the identification of bioisosteric functional groups. Various possible applications of the calculated substituent parameters are also discussed, including the automatic design of molecules with desired properties and the creation of targeted virtual combinatorial libraries.
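
    The abstract does not specify the similarity metric behind the substituent similarity searches. One common approach, sketched below, places each substituent in a space of tabulated parameters (hydrophobic pi, Hammett sigma-m/sigma-p, molar refractivity) and ranks neighbours by Euclidean distance after normalization; the small parameter table and the metric are illustrative assumptions, not the system's actual database or algorithm.

```python
import numpy as np

# Illustrative substituent parameters (pi, sigma_m, sigma_p, MR);
# placeholder values, not the program's database.
params = {
    "H":    ( 0.00,  0.00,  0.00, 1.03),
    "CH3":  ( 0.56, -0.07, -0.17, 5.65),
    "Cl":   ( 0.71,  0.37,  0.23, 6.03),
    "CF3":  ( 0.88,  0.43,  0.54, 5.02),
    "OCH3": (-0.02,  0.12, -0.27, 7.87),
    "CN":   (-0.57,  0.56,  0.66, 6.33),
}

names = list(params)
X = np.array([params[n] for n in names])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # normalize each property column

def most_similar(query: str, k: int = 3):
    """Rank substituents by Euclidean distance in normalized property
    space; near neighbours suggest bioisosteric replacements."""
    q = X[names.index(query)]
    d = np.linalg.norm(X - q, axis=1)
    order = np.argsort(d)
    return [(names[i], round(float(d[i]), 2))
            for i in order if names[i] != query][:k]

print(most_similar("Cl"))   # nearest property-space neighbours of Cl
```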