WorldWideScience

Sample records for parameter case standardized

  1. Oncoplastic round block technique has comparable operative parameters as standard wide local excision: a matched case-control study.

    Science.gov (United States)

    Lim, Geok-Hoon; Allen, John Carson; Ng, Ruey Pyng

    2017-08-01

    Although oncoplastic breast surgery is used to resect larger tumors with lower re-excision rates compared to standard wide local excision (sWLE), criticisms of oncoplastic surgery include a longer (albeit well-concealed) scar, longer operating time and hospital stay, and increased risk of complications. The round block technique has been reported to be very suitable for patients with relatively small breasts and minimal ptosis. We aimed to determine whether the round block technique yields operative parameters comparable with sWLE. Breast cancer patients who underwent a round block procedure from 1st May 2014 to 31st January 2016 were included in the study. These patients were then matched for the type of axillary procedure, on a one-to-one basis, with breast cancer patients who had undergone sWLE from 1st August 2011 to 31st January 2016. The operative parameters of the 2 groups were compared. Twenty-two patients were included in the study. Patient demographics and histologic parameters were similar in the 2 groups. No complications were reported in either group. The mean operating time was 122 and 114 minutes in the round block and sWLE groups, respectively (P=0.64). Length of stay was similar in the 2 groups (P=0.11). Round block patients had better cosmesis and lower re-excision rates. A higher rate of recurrence was observed in the sWLE group. The round block technique has operative parameters comparable to sWLE with no evidence of increased complications. The lower re-excision rate and better cosmesis observed in the round block patients suggest that the round block technique is not only comparable to sWLE but may have advantages over it in selected cases.

  2. Determination of service standard time for liquid waste parameter in certification institution

    Science.gov (United States)

    Sembiring, M. T.; Kusumawaty, D.

    2018-02-01

    Baristand Industry Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of its most frequently used services is liquid waste testing. The company set a service standard of 9 working days for testing services. In 2015, 89.66% of liquid waste testing services did not meet this standard. The purpose of this research is to specify the standard time for each parameter in liquid waste testing services. The method used is the stopwatch time study. There are 45 test parameters in the liquid waste laboratory. Time was measured on 4 samples per test parameter using a stopwatch. From the measurement results, the minimum standard service time for liquid waste testing was found to be 13 working days when E. coli testing is included.
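In a stopwatch time study like the one above, observed cycle times are typically converted to a standard time via a performance rating and an allowance. A minimal sketch; the rating and allowance values below are illustrative assumptions, not figures from the study:

```python
def standard_time(observed_times, rating=1.0, allowance=0.15):
    """Standard time from stopwatch observations.

    normal time   = mean observed time x performance rating
    standard time = normal time x (1 + allowance fraction)
    """
    mean_observed = sum(observed_times) / len(observed_times)
    normal = mean_observed * rating
    return normal * (1 + allowance)

# Four timed samples (minutes) for one hypothetical test parameter,
# mirroring the 4-samples-per-parameter design described above
print(round(standard_time([30, 32, 28, 30], rating=1.05, allowance=0.15), 2))
```

Summing the standard times along the longest chain of test parameters is one way such a study arrives at an overall service standard in working days.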

  3. Standard Test Method for Determining the Linearity of a Photovoltaic Device Parameter with Respect To a Test Parameter

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method determines the degree of linearity of a photovoltaic device parameter with respect to a test parameter, for example, short-circuit current with respect to irradiance. 1.2 The linearity determined by this test method applies only at the time of testing, and implies no past or future performance level. 1.3 This test method applies only to non-concentrator terrestrial photovoltaic devices. 1.4 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
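The standard itself prescribes the exact procedure; as a hedged illustration of the idea, one generic way to quantify the "degree of linearity" of a device parameter (e.g., short-circuit current) against a test parameter (e.g., irradiance) is a least-squares fit plus the largest percent deviation from the fitted line. The numbers below are invented:

```python
def max_percent_deviation(x, y):
    """Fit y = a + b*x by least squares and return the largest percent
    deviation of any point from the fitted line, one simple linearity metric."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return max(abs(yi - (a + b * xi)) / abs(a + b * xi) * 100
               for xi, yi in zip(x, y))

# Short-circuit current (A) vs irradiance (W/m^2), illustrative data only
irr = [200, 400, 600, 800, 1000]
isc = [1.02, 2.01, 3.00, 3.98, 5.01]
print(round(max_percent_deviation(irr, isc), 2))
```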

  4. The PolyMAX Frequency-Domain Method: A New Standard for Modal Parameter Estimation?

    Directory of Open Access Journals (Sweden)

    Bart Peeters

    2004-01-01

    Recently, a new non-iterative frequency-domain parameter estimation method was proposed. It is based on a (weighted) least-squares approach and uses multiple-input multiple-output frequency response functions as primary data. This so-called “PolyMAX” or polyreference least-squares complex frequency-domain method can be implemented in a very similar way to the industry-standard polyreference (time-domain) least-squares complex exponential method: in a first step a stabilisation diagram is constructed containing frequency, damping and participation information. Next, the mode shapes are found in a second least-squares step, based on the user selection of stable poles. One of the specific advantages of the technique lies in the very stable identification of the system poles and participation factors as a function of the specified system order, leading to easy-to-interpret stabilisation diagrams. This implies a potential for automating the method and for applying it to “difficult” estimation cases such as high-order and/or highly damped systems with large modal overlap. Some real-life automotive and aerospace case studies are discussed. PolyMAX is compared with classical methods concerning stability, accuracy of the estimated modal parameters and quality of the frequency response function synthesis.

  5. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    Science.gov (United States)

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte Carlo methods.

  6. American National Standard: guidelines for evaluating site-related geotechnical parameters at nuclear power sites

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    This standard presents guidelines for evaluating site-related geotechnical parameters for nuclear power sites. Aspects considered include geology, ground water, foundation engineering, and earthwork engineering. These guidelines identify the basic geotechnical parameters to be considered in site evaluation, and in the design, construction, and performance of foundations and earthwork aspects for nuclear power plants. Also included are tabulations of typical field and laboratory investigative methods useful in identifying geotechnical parameters. Those areas where interrelationships with other standards may exist are indicated.

  7. A reliable parameter to standardize the scoring of stem cell spheres.

    Directory of Open Access Journals (Sweden)

    Xiaochen Zhou

    Sphere formation assay is widely used in the selection and enrichment of normal stem cells or cancer stem cells (CSCs), also known as tumor-initiating cells (TICs), based on their ability to grow in serum-free suspension culture for clonal proliferation. However, there is no standardized parameter to accurately score the spheres, which should be reflected by both the number and size of the spheres. Here we define a novel parameter, designated the Standardized Sphere Score (SSS), which is expressed as the total volume of selected spheres divided by the number of cells initially plated. The SSS was validated in the quantification of both tumor spheres from cancer cell lines and embryoid bodies (EBs) from mouse embryonic stem cells with high sensitivity and reproducibility.
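The SSS as defined above is simply total sphere volume divided by the number of cells plated. A minimal sketch; computing each sphere's volume from a measured diameter is our assumption about how "total volume" is obtained, and the numbers are hypothetical:

```python
import math

def standardized_sphere_score(diameters_um, cells_plated):
    """SSS = total volume of counted spheres / number of cells initially plated.
    Sphere volumes are taken as (4/3)*pi*r^3 from measured diameters (um)."""
    total_volume = sum((4.0 / 3.0) * math.pi * (d / 2.0) ** 3
                       for d in diameters_um)
    return total_volume / cells_plated

# Hypothetical well: three spheres of 100 um diameter from 1000 plated cells
print(round(standardized_sphere_score([100, 100, 100], 1000), 1))
```

Because the score combines size and count in one number, two wells with the same sphere count but different sphere sizes get different scores, which is the point of the parameter.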

  8. Standard model parameters and the search for new physics

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1988-04-01

    In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTs). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs

  9. Enhancing the power of genetic association studies through the use of silver standard cases derived from electronic medical records.

    Directory of Open Access Journals (Sweden)

    Andrew McDavid

    The feasibility of using imperfectly phenotyped "silver standard" samples identified from electronic medical record diagnoses is considered in genetic association studies where these samples might be combined with an existing set of samples phenotyped with a gold standard technique. An analytic expression is derived for the power of a chi-square test of independence using either research-quality case/control samples alone or augmented with silver standard data. The subset of the parameter space where inclusion of silver standard samples increases statistical power is identified. A case study of dementia subjects identified from electronic medical records from the Electronic Medical Records and Genomics (eMERGE) network, combined with subjects from two studies specifically targeting dementia, verifies these results.
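The record above derives an analytic power expression for the chi-square test; as a hedged stand-in, the familiar normal-approximation power for comparing two proportions illustrates the trade-off at work: silver-standard cases add sample size, but their imperfect positive predictive value (ppv) attenuates the case allele frequency. The ppv dilution model and all numbers below are illustrative assumptions, not the paper's formula:

```python
import math
from statistics import NormalDist

def power_two_prop(p1, p0, n1, n0, alpha=0.05):
    """Approximate power of a two-sided 1-df chi-square (two-proportion z) test."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    pbar = (p1 * n1 + p0 * n0) / (n1 + n0)
    se0 = math.sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n0))  # SE under H0
    se1 = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)  # SE under H1
    return 1 - nd.cdf((z_a * se0 - abs(p1 - p0)) / se1)

p_case, p_ctrl, ppv = 0.30, 0.20, 0.8
n_gold, n_silver, n_ctrl = 500, 500, 1000
# Combined case group: silver cases with (1 - ppv) contamination by controls
p_mixed = ppv * p_case + (1 - ppv) * p_ctrl
print(round(power_two_prop(p_case, p_ctrl, n_gold, n_ctrl), 3))            # gold only
print(round(power_two_prop(p_mixed, p_ctrl, n_gold + n_silver, n_ctrl), 3))  # augmented
```

Whether the augmented design wins depends on where (ppv, n_silver) sits in the parameter space, which is exactly the region the paper characterizes analytically.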

  10. The case for regime-based water quality standards

    Science.gov (United States)

    G.C. Poole; J.B. Dunham; D.M. Keenan; S.T. Sauter; D.A. McCullough; C. Mebane; J.C. Lockwood; D.A. Essig; M.P. Hicks; D.J. Sturdevant; E.J. Materna; S.A. Spalding; J. Risley; M. Deppman

    2004-01-01

    Conventional water quality standards have been successful in reducing the concentration of toxic substances in US waters. However, conventional standards are based on simple thresholds and are therefore poorly structured to address human-caused imbalances in dynamic, natural water quality parameters, such as nutrients, sediment, and temperature. A more applicable type...

  11. JPL Thermal Design Modeling Philosophy and NASA-STD-7009 Standard for Models and Simulations - A Case Study

    Science.gov (United States)

    Avila, Arturo

    2011-01-01

    Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperature. Thus, these simulations represent the upper and lower bounds. This, effectively, represents the JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study to determine whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
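The worst-case stacking described above amounts to pushing each uncertain parameter to whichever extreme biases the prediction hot or cold. A toy sketch under a linear-sensitivity assumption (parameter names, sensitivities, and ranges are invented for illustration, not MER values):

```python
def worst_case_temperature(nominal_temp, sensitivities):
    """Stack uncertain parameters at their extremes to bound a predicted temperature.

    sensitivities: list of (dT_per_unit, delta_lo, delta_hi) for each uncertain
    parameter (e.g. blanket performance, interface conductance, optical property).
    Each parameter is pushed to whichever extreme biases the case hot or cold.
    """
    hot = nominal_temp + sum(max(s * lo, s * hi) for s, lo, hi in sensitivities)
    cold = nominal_temp + sum(min(s * lo, s * hi) for s, lo, hi in sensitivities)
    return cold, hot

# Illustrative only: nominal 20 C with three stacked parameter uncertainties
params = [(2.0, -1.0, 1.0),   # optical property: +/-1 unit at 2 C per unit
          (-1.5, -2.0, 2.0),  # interface conductance (negative sensitivity)
          (0.5, -4.0, 4.0)]   # blanket effective emissivity
print(worst_case_temperature(20.0, params))  # (cold bound, hot bound)
```

Real thermal models are nonlinear, so in practice the extremes are stacked inside the simulation itself rather than through fixed sensitivities; the sketch only shows the margin logic.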

  12. Autopsy standards of body parameters and fresh organ weights in nonmacerated and macerated human fetuses

    DEFF Research Database (Denmark)

    Maroun, Lisa Leth; Graem, Niels

    2005-01-01

    Standards for body parameters and organ weights are important tools in fetal and perinatal pathology. Previously there has been only a weak emphasis on the effect of maceration on dimensions and weights. This study provides autopsy standards for body weight, body dimensions, and fresh organ weights. … increased slightly with maceration, whereas body weight and head circumference were unaffected. User-friendly charts and tables of mean values and standard deviations for nonmacerated and macerated fetuses are provided.

  13. On the Consistency of Bootstrap Testing for a Parameter on the Boundary of the Parameter Space

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Heino Bohn; Rahbek, Anders

    2017-01-01

    It is well known that with a parameter on the boundary of the parameter space, such as in the classic cases of testing for a zero location parameter or no autoregressive conditional heteroskedasticity (ARCH) effects, the classic nonparametric bootstrap – based on unrestricted parameter estimates – leads to inconsistent testing. In contrast, we show here that for the two aforementioned cases, a nonparametric bootstrap test based on parameter estimates obtained under the null – referred to as the ‘restricted bootstrap’ – is indeed consistent. While the restricted bootstrap is simple to implement in practice, novel theoretical arguments are required in order to establish consistency. In particular, since the bootstrap is analysed both under the null hypothesis and under the alternative, non-standard asymptotic expansions are required to deal with parameters on the boundary. Detailed proofs…
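For the zero-location case mentioned above, a restricted bootstrap can be sketched in a few lines: the statistic respects the boundary constraint μ ≥ 0, and resampling is done from data recentred so the null holds in the bootstrap world. This is a toy illustration of the scheme's mechanics, not the paper's formal setup:

```python
import random
import statistics

def restricted_bootstrap_pvalue(x, n_boot=2000, seed=7):
    """Bootstrap test of H0: mu = 0 against mu > 0 when the parameter space
    is mu >= 0 (a boundary case).  The 'restricted' scheme resamples
    observations recentred so that the null holds in the bootstrap world."""
    rng = random.Random(seed)
    xbar = statistics.fmean(x)
    stat = max(xbar, 0.0)               # estimator restricted to the boundary
    centred = [xi - xbar for xi in x]   # impose mu = 0 on the resampling scheme
    count = 0
    for _ in range(n_boot):
        bs = [rng.choice(centred) for _ in x]
        if max(statistics.fmean(bs), 0.0) >= stat:
            count += 1
    return count / n_boot

random.seed(0)
sample = [random.gauss(1.0, 1.0) for _ in range(50)]  # true mu = 1 > 0
print(restricted_bootstrap_pvalue(sample))
```

An unrestricted scheme would instead resample around the unrestricted estimate, which at the boundary fails to mimic the null distribution, the inconsistency the paper starts from.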

  14. Shifts of neutrino oscillation parameters in reactor antineutrino experiments with non-standard interactions

    Directory of Open Access Journals (Sweden)

    Yu-Feng Li

    2014-11-01

    We discuss reactor antineutrino oscillations with non-standard interactions (NSIs) at the neutrino production and detection processes. The neutrino oscillation probability is calculated with a parametrization of the NSI parameters, splitting them into the averages and differences of the production and detection processes, respectively. The average parts induce constant shifts of the neutrino mixing angles from their true values, and the difference parts can generate energy- and baseline-dependent corrections to the initial mass-squared differences. We stress that only the shifts of mass-squared differences are measurable in reactor antineutrino experiments. Taking the Jiangmen Underground Neutrino Observatory (JUNO) as an example, we analyze how NSIs influence the standard neutrino measurements and to what extent we can constrain the NSI parameters.
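The constant mixing-angle shift induced by the average NSI parts can be illustrated with the standard two-flavour survival probability (the paper's analysis is three-flavour; the shift ε below and the additive way it enters are purely illustrative):

```python
import math

def survival_probability(theta, dm2, L_m, E_MeV):
    """Two-flavour electron-antineutrino survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[m] / E[MeV])."""
    return 1 - math.sin(2 * theta) ** 2 * \
        math.sin(1.267 * dm2 * L_m / E_MeV) ** 2

theta12 = math.asin(math.sqrt(0.307))  # sin^2(theta12) ~ 0.307
eps = 0.01                             # hypothetical constant NSI-induced shift
# JUNO-like baseline (53 km) and a typical reactor antineutrino energy (4 MeV)
p_true = survival_probability(theta12, 7.5e-5, 53000, 4.0)
p_nsi = survival_probability(theta12 + eps, 7.5e-5, 53000, 4.0)
print(round(p_true, 4), round(p_nsi, 4))
```

Because the shift is constant in energy and baseline, an experiment fitting this formula simply measures the shifted angle, which is why the abstract stresses that only the mass-squared-difference shifts are separately observable.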

  15. Comparison of DVH parameters and loading patterns of standard loading, manual and inverse optimization for intracavitary brachytherapy on a subset of tandem/ovoid cases

    International Nuclear Information System (INIS)

    Jamema, Swamidas V.; Kirisits, Christian; Mahantshetty, Umesh; Trnkova, Petra; Deshpande, Deepak D.; Shrivastava, Shyam K.; Poetter, Richard

    2010-01-01

    Purpose: Comparison of inverse planning with the standard clinical plan and with the manually optimized plan based on dose-volume parameters and loading patterns. Materials and methods: Twenty-eight patients who underwent MRI-based HDR brachytherapy for cervix cancer were selected for this study. Three plans were calculated for each patient: (1) standard loading, (2) manual optimized, and (3) inverse optimized. Dosimetric outcomes from these plans were compared based on dose-volume parameters. The ratio of Total Reference Air Kerma of ovoid to tandem (TRAK(O/T)) was used to compare the loading patterns. Results: The volume of HR CTV ranged from 9-68 cc with a mean of 41(±16.2) cc. The difference in mean V100 between the standard, manual optimized and inverse plans was not significant (p = 0.35, 0.38, 0.4). Dose to bladder (7.8 ± 1.6 Gy) and sigmoid (5.6 ± 1.4 Gy) was high for standard plans; manual optimization reduced the dose to bladder (7.1 ± 1.7 Gy, p = 0.006) and sigmoid (4.5 ± 1.0 Gy, p = 0.005) without compromising the HR CTV coverage. The inverse plan resulted in a significant reduction of bladder dose (6.5 ± 1.4 Gy, p = 0.002). TRAK was found to be 0.49(±0.02), 0.44(±0.04) and 0.40(±0.04) cGy m^-2 for the standard loading, manual optimized and inverse plans, respectively. TRAK(O/T) was 0.82(±0.05), 1.7(±1.04) and 1.41(±0.93) for standard loading, manual optimized and inverse plans, respectively, while this ratio is 1 for the traditional loading pattern. Conclusions: Inverse planning offers good sparing of critical structures without compromising the target coverage. The average loading pattern of the whole patient cohort deviates from the standard Fletcher loading pattern.
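TRAK for a channel is, in essence, the source air-kerma strength multiplied by its total dwell time, so the ovoid-to-tandem ratio used above can be sketched directly. The source strength and dwell times below are invented numbers, not values from the study:

```python
def trak(source_strength_cGy_m2_per_h, dwell_times_s):
    """Total Reference Air Kerma contributed by one applicator channel:
    air-kerma strength (cGy m^2 h^-1) x total dwell time (converted to hours)."""
    return source_strength_cGy_m2_per_h * sum(dwell_times_s) / 3600.0

# Hypothetical plan: dwell times (s) in the tandem and ovoid channels
strength = 4.0  # cGy m^2 / h, illustrative HDR source strength
trak_tandem = trak(strength, [12.0, 15.0, 14.0, 10.0])
trak_ovoid = trak(strength, [20.0, 22.0])
print(round(trak_ovoid / trak_tandem, 3))  # TRAK(O/T)
```

A ratio below 1 means the tandem carries more of the loading, which is how the study compares optimized plans against the traditional Fletcher pattern (ratio 1).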

  16. Comparison of DVH parameters and loading patterns of standard loading, manual and inverse optimization for intracavitary brachytherapy on a subset of tandem/ovoid cases.

    Science.gov (United States)

    Jamema, Swamidas V; Kirisits, Christian; Mahantshetty, Umesh; Trnkova, Petra; Deshpande, Deepak D; Shrivastava, Shyam K; Pötter, Richard

    2010-12-01

    Comparison of inverse planning with the standard clinical plan and with the manually optimized plan based on dose-volume parameters and loading patterns. Twenty-eight patients who underwent MRI-based HDR brachytherapy for cervix cancer were selected for this study. Three plans were calculated for each patient: (1) standard loading, (2) manual optimized, and (3) inverse optimized. Dosimetric outcomes from these plans were compared based on dose-volume parameters. The ratio of Total Reference Air Kerma of ovoid to tandem (TRAK(O/T)) was used to compare the loading patterns. The volume of HR CTV ranged from 9-68 cc with a mean of 41(±16.2) cc. The difference in mean V100 between the standard, manual optimized and inverse plans was not significant (p=0.35, 0.38, 0.4). Dose to bladder (7.8±1.6 Gy) and sigmoid (5.6±1.4 Gy) was high for standard plans; manual optimization reduced the dose to bladder (7.1±1.7 Gy, p=0.006) and sigmoid (4.5±1.0 Gy, p=0.005) without compromising the HR CTV coverage. The inverse plan resulted in a significant reduction of bladder dose (6.5±1.4 Gy, p=0.002). TRAK was found to be 0.49(±0.02), 0.44(±0.04) and 0.40(±0.04) cGy m(-2) for the standard loading, manual optimized and inverse plans, respectively. TRAK(O/T) was 0.82(±0.05), 1.7(±1.04) and 1.41(±0.93) for standard loading, manual optimized and inverse plans, respectively, while this ratio is 1 for the traditional loading pattern. Inverse planning offers good sparing of critical structures without compromising the target coverage. The average loading pattern of the whole patient cohort deviates from the standard Fletcher loading pattern. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  17. Sustainable development induction in organizations: a convergence analysis of ISO standards management tools' parameters.

    Science.gov (United States)

    Merlin, Fabrício Kurman; Pereira, Vera Lúcia Duarte do Valle; Pacheco, Waldemar

    2012-01-01

    Organizations are part of an environment in which they are pressured to meet society's demands and to act in a sustainable way. In an attempt to meet such demands, organizations make use of various management tools, among them ISO standards. Although there is evidence of the contributions provided by these standards, it is questionable whether their parameters converge toward a possible induction of sustainable development in organizations. This work presents a theoretical study, designed from a structuralist worldview with a descriptive and deductive method, which aims to analyze the convergence of management-tool parameters in ISO standards. In order to support the analysis, a generic framework for possible convergence was developed, based on a systems approach, linking five ISO standards (ISO 9001, ISO 14001, OHSAS 18001, ISO 31000 and ISO 26000) with sustainable development and positioning them according to organization levels (strategic, tactical and operational). The structure was designed based on the Brundtland report concept. The analysis was performed by exploring the generic framework for possible convergence based on the Nadler and Tushman model. The results indicate that the standards can contribute to a possible induction of sustainable development in organizations, as long as they meet certain minimum conditions related to strategic alignment.

  18. The "covariation method" for estimating the parameters of the standard Dynamic Energy Budget model II: Properties and preliminary patterns

    Science.gov (United States)

    Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.

    2011-11-01

    The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ are reduced when there is information for both growth and reproduction, those for the energy conductance are reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different…
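The role of down-weighted pseudo-data in a WLS fit can be shown with a deliberately tiny example: a single parameter estimated as a weighted mean, where the prior guess (pseudo-datum) carries a small weight. This is a toy illustration of the weighting principle only, not the DEB covariation procedure itself:

```python
def weighted_least_squares_mean(values, weights):
    """WLS estimate of a single parameter (here just a weighted mean):
    minimizes sum_i w_i * (v_i - m)^2, giving m = sum(w*v) / sum(w)."""
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Real uni-variate observations of some parameter, plus one pseudo-datum
real_data = [0.82, 0.79, 0.85]
pseudo = 0.95                     # prior guess, e.g. for an allocation fraction
values = real_data + [pseudo]
weights = [1.0, 1.0, 1.0, 0.1]    # pseudo-data are down-weighted
print(round(weighted_least_squares_mean(values, weights), 4))
```

With informative real data the estimate sits close to the data mean; only when real data are scarce does the pseudo-value pull the estimate noticeably, mirroring the elasticity findings above.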

  19. Identification of Water Quality Significant Parameter with Two Transformation/Standardization Methods on Principal Component Analysis and Scilab Software

    Directory of Open Access Journals (Sweden)

    Jovan Putranda

    2016-09-01

    Water quality monitoring is prone to error in its recording or measuring process. Monitoring of river water quality aims not only to recognize water quality dynamics, but also to evaluate the data in order to create river management and water pollution policy, so as to maintain human health and sanitation requirements and to preserve biodiversity. Evaluation of water quality monitoring needs to start by identifying the significant water quality parameters. This research aimed to identify the significant parameters by using two transformation/standardization methods on water quality data: the river Water Quality Index (WQI; Indeks Kualitas Air Sungai, IKAs) method and standardization to mean 0 and variance 1, so that the variability of the water quality parameters could be aggregated with one another. Both methods were applied to water quality monitoring data whose validity and reliability had been tested. Principal Component Analysis (PCA; Analisa Komponen Utama, AKU), with the help of Scilab software, was used to process secondary data on water quality parameters of the Gadjah Wong river in 2004-2013. The Scilab result was cross-examined with the result from the Excel-based Biplot Add-In software. The results showed that only 18 of the total 35 water quality parameters had passable data quality. The two transformation/standardization methods gave different types and numbers of significant parameters. With standardization to mean 0 and variance 1, the significant water quality parameters, relative to the mean concentration of each parameter, were TDS, SO4, EC, TSS, NO3N, COD, BOD5, grease/oil and NH3N. With the river WQI transformation/standardization, the significant water quality parameters showed the level of…
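The mean-0/variance-1 standardization used above is an ordinary z-score transform, which puts parameters with very different units (mg/L, µS/cm, ...) on a common scale before PCA. A minimal sketch with invented readings:

```python
import statistics

def standardize(series):
    """Transform a water-quality series to mean 0 and variance 1 (z-scores),
    so parameters with different units can be aggregated in a PCA."""
    mu = statistics.fmean(series)
    sigma = statistics.pstdev(series)  # population SD, so variance becomes exactly 1
    return [(x - mu) / sigma for x in series]

bod5 = [12.0, 15.0, 11.0, 18.0, 14.0]  # illustrative BOD5 readings, mg/L
z = standardize(bod5)
print([round(v, 3) for v in z])
print(round(statistics.pstdev(z), 6))
```

PCA on z-scored data is equivalent to PCA on the correlation matrix, whereas an index transform like the WQI weights parameters differently, which is one reason the two methods select different significant parameters.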

  20. Large parameter cases of the Gauss hypergeometric function

    NARCIS (Netherlands)

    N.M. Temme (Nico)

    2002-01-01

    We consider the asymptotic behaviour of the Gauss hypergeometric function when several of the parameters a, b, c are large. We indicate which cases are of interest for orthogonal polynomials (Jacobi, but also Meixner, Krawtchouk, etc.), which results are already available, and…

  1. Technology transfer through a network of standard methods and recommended practices - The case of petrochemicals

    Science.gov (United States)

    Batzias, Dimitris F.; Karvounis, Sotirios

    2012-12-01

    Technology transfer may take place in parallel with cooperative action between companies participating in the same organizational scheme or using one another as subcontractors (outsourcing). In this case, cooperation should be realized by means of Standard Methods and Recommended Practices (SRPs) to achieve (i) quality of intermediate/final products according to specifications and (ii) the industrial process control required to guarantee such quality with minimum deviation (corresponding to maximum reliability) from preset mean values of representative quality parameters. This work deals with the design of the network of SRPs needed in each case for successful cooperation, implying also the corresponding technology transfer, effectuated through a methodological framework developed in the form of an algorithmic procedure with 20 activity stages and 8 decision nodes. The functionality of this methodology is proved by presenting the path leading from (and relating) a standard test method for toluene, as a petrochemical feedstock in toluene diisocyanate production, to the performance evaluation of industrial process control systems six generations upstream (i.e., from ASTM D5606 to BS EN 61003-1:2004 in the SRP network).

  2. A relation between the Barbero-Immirzi parameter and the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Broda, Boguslaw, E-mail: bobroda@uni.lodz.pl [Department of Theoretical Physics, University of Lodz, Pomorska 149/153, PL-90-236 Lodz (Poland)]; Szanecki, Michal, E-mail: michalszanecki@wp.pl [Department of Theoretical Physics, University of Lodz, Pomorska 149/153, PL-90-236 Lodz (Poland)]

    2010-06-07

    It has been shown that the Barbero-Immirzi parameter γ, induced à la Sakharov from the fields entering the standard model, assumes, in the framework of the Euclidean formalism, the UV cutoff-independent value 1/9. The calculation uses Schwinger's proper-time formalism and the Seeley-DeWitt heat-kernel expansion, and it is akin to the derivation of the ABJ chiral anomaly in space-time with torsion.

  3. Modern standardization case studies at the crossroads of technology, economics, and politics

    CERN Document Server

    Schneiderman, R

    2015-01-01

    Modern Standardization: Case Studies at the Crossroads of Technology, Economics, and Politics covers the development of new technical standards, how these standards are typically triggered, and how they are submitted to standards development organizations (SDOs) for review and evaluation. It addresses the shortage of reference material on the development of real-world standards. The increasing pace of innovation in technology has accelerated the competitive nature of standardization, particularly in emerging markets. Modern Standardization addresses these and other issues through a series of case studies in a format designed for academics and their engineering, business, and law school students.

  4. Intellectual property rights and standardization. The case of GSM

    NARCIS (Netherlands)

    Bekkers, R.N.A.; Verspagen, B.; Smits, J.M.

    2002-01-01

    This paper investigates the role of intellectual property rights (IPRs) in the process of standardization in the telecommunications industry. We take the global system for mobile communications (GSM) case as a highly relevant example, being part of a high-tech industry in which standards play a

  5. The selection of the parameters of high pressure syringe in performing interventional angiography: a retrospective analysis of 692 cases

    International Nuclear Information System (INIS)

    Shi Dehai; Luo Laishu; Yang Zhihong; Liu Yong

    2011-01-01

    Objective: To explore the optimal parameters of the high pressure injector in performing interventional angiography and therapy on different parts of the body, in order to improve image quality. Methods: From July 2009 to September 2010, interventional angiography or therapy on different parts of the body with the help of a high pressure injector was performed in 692 patients, including 538 males and 154 females with a mean age of (53.6±2.5) years. The clinical data were retrospectively analyzed. The angiographic regions included vessels (n=341), cerebral vessels (n=71), large thoracic vessels (n=19) and the vessels of the arms and legs (n=203). The technical parameters and image quality were evaluated and analyzed. Results: The angiographic images were evaluated based on the degree of contrast filling, the presence or absence of contrast reflux, the imaging resolution and the degree to which diagnostic requirements were met. Image quality was up to standard in 615 cases (88.7%). Unsatisfactory contrast filling with no contrast reflux was seen in 62 cases (9.0%), and poor vascular opacification with contrast reflux was found in 9 cases (1.3%). Blurring of the images caused by body movement during exposure was seen in 6 cases (0.8%). No accidents occurred in any procedure. Conclusion: The use of an appropriate catheter and equipment and reasonable injection parameters, matched to the characteristics of the target lesions, is the key to providing physicians with reliable angiographic images. (authors)

  6. IAEA nuclear data for applications: Cross section standards and the reference input parameter library (RIPL)

    International Nuclear Information System (INIS)

    Capote Noy, Roberto; Nichols, Alan L.; Pronyaev, Vladimir G.

    2003-01-01

An integral part of the activities of the IAEA Nuclear Data Section involves the development of nuclear data for a wide range of user applications. When considering low-energy nuclear reactions induced by neutrons, photons and charged particles, a detailed knowledge is required of the production cross sections over a wide energy range, of the spectra of emitted particles, and of their angular distributions. Two highly relevant IAEA data development projects are considered in this paper. Neutron reaction cross-section standards represent the basic quantities needed in nuclear reaction cross-section measurements and evaluations. These standards and the covariance matrices of their uncertainties were previously evaluated and released in 1987. However, the derived uncertainties were subsequently considered to be unrealistically low due to the effect of the low uncertainties obtained in fitting the light element standards to the R-matrix model; as a result, evaluators were forced to scale up the uncertainties to 'expected values'. An IAEA Coordinated Research Project (CRP) entitled 'Improvement of the Standard Cross Sections for Light Elements' was initiated in 2002 to improve the evaluation methodology for the covariance matrix of uncertainty in the R-matrix model fits, and to produce R-matrix evaluations of the important light element standards. The scope of this CRP has been substantially extended to include the preparation of a full set of evaluated standard reactions and covariance matrices of their uncertainties. While almost all requests for nuclear data were originally addressed through measurement programmes, our theoretical understanding of nuclear phenomena has reached a reasonable degree of reliability, and nuclear modeling has become standard practice in nuclear data evaluations (with measurements remaining crucial for data testing and benchmarking). Since nuclear model codes require a considerable amount of numerical input, the IAEA has instigated extensive efforts to

  7. Computing tools for implementing standards for single-case designs.

    Science.gov (United States)

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook-the WWC standards. These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated their accuracy and treatment of missing data, using two published data sets. All tools were found to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.
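The kind of effect-size computation these tools automate can be illustrated with a small sketch. The following is a minimal, hypothetical example (not taken from any of the reviewed tools) that computes the nonoverlap of all pairs (NAP) statistic, a common SCD effect size, and handles missing observations by dropping them, one of the strategies the reviewed tools adopt.

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs (NAP): the share of all baseline/treatment
    pairs in which the treatment observation exceeds the baseline one,
    counting ties as half. Missing observations (None) are dropped."""
    a = [x for x in baseline if x is not None]
    b = [y for y in treatment if y is not None]
    if not a or not b:
        raise ValueError("no complete observations in one of the phases")
    wins = sum(1.0 if y > x else 0.5 if y == x else 0.0
               for x in a for y in b)
    return wins / (len(a) * len(b))

# Toy single-case data with one missing baseline observation.
baseline = [3, 4, 3, None, 5]
treatment = [7, 8, 6, 9]
print(nap(baseline, treatment))  # 1.0: every treatment point exceeds every baseline point
```

A NAP of 1.0 indicates complete nonoverlap between phases; values near 0.5 indicate chance-level separation.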

  8. Standardization and Green Economic Change - the Case of Energy Efficiency in Buildings

    DEFF Research Database (Denmark)

    Andersen, Maj Munch; Faria, Lourenco

    2015-01-01

    This paper investigates the role of standardization for green economic change using energy efficiency in buildings as a case. Innovation research on standards tends to focus on the competition between competing emerging standards as well as the economic impacts of these. The idea pursued here...... energy efficiency becomes an issue in standardization work using buildings as a case. The paper seeks more specifically to investigate the rise of building related standards generally over time as well as in different technical areas and geographic regions. The hypothesis pursued in this paper...... is that the rise of the green economy can only take place accompanied by considerable institution formation in the form of standards. In this sense, the presence of standards may be seen as an important indicator on the maturity of the greening of the economy. The paper presents early empirical work...

  9. mr. A C++ library for the matching and running of the Standard Model parameters

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Veretin, Oleg L.; Pikelner, Andrey F.; Joint Institute for Nuclear Research, Dubna

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.

  10. mr. A C++ library for the matching and running of the Standard Model parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kniehl, Bernd A.; Veretin, Oleg L. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Pikelner, Andrey F. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Joint Institute for Nuclear Research, Dubna (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2016-01-15

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
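The running that mr implements to three loops can be illustrated at one loop. The sketch below is illustrative only (it is not part of the mr library, which works at far higher precision and couples all Standard Model parameters): it evolves the strong coupling from the Z-boson mass to a higher scale using the one-loop QCD renormalization group equation.

```python
import math

def alpha_s_one_loop(q, alpha_mz=0.118, mz=91.1876, nf=5):
    """One-loop running of the strong coupling from the scale MZ to q (GeV).

    d(alpha)/d(ln q^2) = -b0 * alpha^2, with b0 = (33 - 2*nf) / (12*pi),
    integrates to alpha(q) = alpha(MZ) / (1 + b0*alpha(MZ)*ln(q^2/MZ^2)).
    """
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_mz / (1.0 + b0 * alpha_mz * math.log(q ** 2 / mz ** 2))

# The coupling decreases with energy (asymptotic freedom).
print(alpha_s_one_loop(1000.0))  # ~0.088 at 1 TeV
```

The matching step performed by mr plays the same role as the initial condition alpha(MZ) here: it fixes the running parameters from low-energy observables before the evolution begins.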

  11. Software measurement standards for areal surface texture parameters: part 2—comparison of software

    International Nuclear Information System (INIS)

    Harris, P M; Smith, I M; Giusca, C; Leach, R K; Wang, C

    2012-01-01

    A companion paper in this issue describes reference software for the evaluation of areal surface texture parameters, focusing on the definitions of the parameters and giving details of the numerical algorithms employed in the software to implement those definitions. The reference software is used as a benchmark against which software in a measuring instrument can be compared. A data set is used as input to both the software under test and the reference software, and the results delivered by the software under test are compared with those provided by the reference software. This paper presents a comparison of the results returned by the reference software with those reported by proprietary software for surface texture measurement. Differences between the results can be used to identify where algorithms and software for evaluating the parameters differ. They might also be helpful in identifying where parameters are not sufficiently well-defined in standards. (paper)
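The benchmarking methodology can be sketched for a single parameter. The example below is a simplified illustration (not the actual reference software): it evaluates the areal arithmetic-mean height Sa with a 'reference' implementation and a second, independently written 'software under test' implementation, then compares the two results on the same data set, mimicking the comparison described above.

```python
def sa_reference(heights):
    """Reference evaluation of Sa: the arithmetic mean of the absolute
    deviations of the surface heights from their mean value."""
    z = [h for row in heights for h in row]
    mean = sum(z) / len(z)
    return sum(abs(h - mean) for h in z) / len(z)

def sa_under_test(heights):
    """Independently written implementation playing the role of the
    'software under test' in the comparison."""
    z = [h for row in heights for h in row]
    n = len(z)
    mean = sum(z) / n
    total = 0.0
    for h in z:
        total += abs(h - mean)
    return total / n

# A small synthetic height map standing in for a measured surface data set.
surface = [[0.0, 1.0, 0.5], [1.5, 2.0, 1.0], [0.5, 1.0, 0.0]]
diff = abs(sa_reference(surface) - sa_under_test(surface))
print(diff)  # 0.0 on this data set: the implementations agree
```

A nonzero difference in such a comparison flags a divergence in algorithm or implementation, exactly the signal the reference-software benchmark is designed to expose.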

  12. Correlation pattern recognition: optimal parameters for quality standards control of chocolate marshmallow candy

    Science.gov (United States)

    Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina

    2006-08-01

This paper describes an in situ image recognition system designed to inspect the quality standards of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed with the objective of measuring the quality standards. Therefore, we designed the correlation filter and defined a set of features from the correlation plane. The desired values for these parameters are obtained by exploiting information about objects to be rejected, in order to find the optimal discrimination capability of the system. Based on this set of features, each pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images, containing 3 different types of possible defects.
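The correlation-based localization step can be sketched in miniature. The code below is an illustrative stand-in for the paper's correlation-filter module (the image, template, and threshold-free peak search are all hypothetical): it slides a small defect template over a grayscale image and reports the offset with the highest normalized cross-correlation score.

```python
def correlate_peak(image, template):
    """Slide `template` over `image` and return the (row, col) offset with the
    highest normalized cross-correlation score, plus the score itself."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    t_flat = [v for row in template for v in row]
    t_mean = sum(t_flat) / len(t_flat)
    t_dev = [v - t_mean for v in t_flat]
    t_norm = sum(d * d for d in t_dev) ** 0.5
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            p_mean = sum(patch) / len(patch)
            p_dev = [v - p_mean for v in patch]
            p_norm = sum(d * d for d in p_dev) ** 0.5
            if p_norm == 0 or t_norm == 0:
                continue  # flat patch or template: correlation undefined
            score = sum(a * b for a, b in zip(p_dev, t_dev)) / (p_norm * t_norm)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# A 5x5 "image" with a planted defect pattern at rows 2-3, cols 1-2.
image = [[0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 9, 9, 0, 0],
         [0, 9, 0, 0, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 9],
            [9, 0]]
pos, score = correlate_peak(image, template)
print(pos)  # (2, 1): the location of the planted defect
```

In a production system the peak score would be compared against a threshold derived from the rejection features, which is the classification step the abstract describes.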

  13. TGLF Recalibration for ITER Standard Case Parameters FY2015: Theory and Simulation Performance Target Final Report

    International Nuclear Information System (INIS)

    Candy, J.

    2015-01-01

This work was motivated by the observation, as early as 2008, that GYRO simulations of some ITER operating scenarios exhibited nonlinear zonal-flow generation large enough to effectively quench turbulence inside r/a ~ 0.5. This observation of flow-dominated, low-transport states persisted even as more accurate and comprehensive predictions of ITER profiles were made using the state-of-the-art TGLF transport model. This core stabilization is in stark contrast to GYRO-TGLF comparisons for modern-day tokamaks, for which GYRO and TGLF are typically in very close agreement. So, we began to suspect that TGLF needed to be generalized to include the effect of zonal-flow stabilization in order to be more accurate for the conditions of reactor simulations. While the precise cause of the GYRO-TGLF discrepancy for ITER parameters was not known, it was speculated that closeness to threshold in the absence of driven rotation, as well as electromagnetic stabilization, created conditions more sensitive to self-generated zonal-flow stabilization than those in modern tokamaks. Need for nonlinear zonal-flow stabilization: To explore the inclusion of a zonal-flow stabilization mechanism in TGLF, we started with a nominal ITER profile predicted by TGLF, and then performed linear and nonlinear GYRO simulations to characterize the behavior at and slightly above the nominal temperature gradients for finite levels of energy transport. Then, we ran TGLF on these cases to see where the discrepancies were largest. The predicted ITER profiles were indeed near to the TGLF threshold over most of the plasma core in the hybrid discharge studied (weak magnetic shear, q > 1). Scanning temperature gradients above the TGLF power balance values also showed that TGLF overpredicted the electron energy transport in the low-collisionality ITER plasma. At first (in Q3), a model of only the zonal-flow stabilization (Dimits shift) was attempted. Although we were able to construct an ad hoc model of the zonal

  14. TGLF Recalibration for ITER Standard Case Parameters FY2015: Theory and Simulation Performance Target Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. [General Atomics, San Diego, CA (United States)

    2015-12-01

This work was motivated by the observation, as early as 2008, that GYRO simulations of some ITER operating scenarios exhibited nonlinear zonal-flow generation large enough to effectively quench turbulence inside r/a ~ 0.5. This observation of flow-dominated, low-transport states persisted even as more accurate and comprehensive predictions of ITER profiles were made using the state-of-the-art TGLF transport model. This core stabilization is in stark contrast to GYRO-TGLF comparisons for modern-day tokamaks, for which GYRO and TGLF are typically in very close agreement. So, we began to suspect that TGLF needed to be generalized to include the effect of zonal-flow stabilization in order to be more accurate for the conditions of reactor simulations. While the precise cause of the GYRO-TGLF discrepancy for ITER parameters was not known, it was speculated that closeness to threshold in the absence of driven rotation, as well as electromagnetic stabilization, created conditions more sensitive to self-generated zonal-flow stabilization than those in modern tokamaks. Need for nonlinear zonal-flow stabilization: To explore the inclusion of a zonal-flow stabilization mechanism in TGLF, we started with a nominal ITER profile predicted by TGLF, and then performed linear and nonlinear GYRO simulations to characterize the behavior at and slightly above the nominal temperature gradients for finite levels of energy transport. Then, we ran TGLF on these cases to see where the discrepancies were largest. The predicted ITER profiles were indeed near to the TGLF threshold over most of the plasma core in the hybrid discharge studied (weak magnetic shear, q > 1). Scanning temperature gradients above the TGLF power balance values also showed that TGLF overpredicted the electron energy transport in the low-collisionality ITER plasma. At first (in Q3), a model of only the zonal-flow stabilization (Dimits shift) was attempted. Although we were able to construct an ad hoc model of the zonal

  15. Standard Test Method for Determination of the Spectral Mismatch Parameter Between a Photovoltaic Device and a Photovoltaic Reference Cell

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers a procedure for the determination of a spectral mismatch parameter used in performance testing of photovoltaic devices. 1.2 The spectral mismatch parameter is a measure of the error, introduced in the testing of a photovoltaic device, caused by mismatch between the spectral responses of the photovoltaic device and the photovoltaic reference cell, as well as mismatch between the test light source and the reference spectral irradiance distribution to which the photovoltaic reference cell was calibrated. Examples of reference spectral irradiance distributions are Tables E490 or G173. 1.3 The spectral mismatch parameter can be used to correct photovoltaic performance data for spectral mismatch error. 1.4 This test method is intended for use with linear photovoltaic devices. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all of the safety concerns, if any, a...
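The spectral mismatch parameter has a standard closed form: a ratio of four spectral integrals combining the reference spectral irradiance E_ref, the test-source irradiance E_src, and the spectral responses of the reference cell (S_ref) and of the device under test (S_test). The sketch below is illustrative only; the tabulated spectra are made up (not taken from Tables E490 or G173), and the formula follows one common convention for M.

```python
def trapz(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def mismatch(wl, e_ref, e_src, s_ref, s_test):
    """Spectral mismatch parameter (one common convention):
    M = [int(E_ref*S_test)/int(E_ref*S_ref)] * [int(E_src*S_ref)/int(E_src*S_test)].
    M = 1 when device and reference cell share the same spectral response,
    or when the test source reproduces the reference spectrum."""
    def integral(e, s):
        return trapz([a * b for a, b in zip(e, s)], wl)
    return (integral(e_ref, s_test) / integral(e_ref, s_ref)) * \
           (integral(e_src, s_ref) / integral(e_src, s_test))

# Hypothetical tabulated spectra on a coarse wavelength grid (nm), arb. units.
wl     = [400, 500, 600, 700, 800, 900, 1000]
e_ref  = [1.0, 1.4, 1.5, 1.4, 1.1, 0.9, 0.7]   # reference spectral irradiance
e_src  = [0.8, 1.3, 1.6, 1.5, 1.2, 0.8, 0.5]   # simulator irradiance
s_ref  = [0.2, 0.5, 0.7, 0.8, 0.8, 0.5, 0.1]   # reference cell response
s_test = [0.1, 0.4, 0.7, 0.9, 0.9, 0.6, 0.2]   # device-under-test response

print(mismatch(wl, e_ref, e_src, s_ref, s_test))   # close to, but not exactly, 1
print(mismatch(wl, e_ref, e_src, s_ref, s_ref))    # exactly 1.0: identical responses
```

Dividing a measured short-circuit current by M is the spectral-mismatch correction that clause 1.3 refers to.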

  16. TU-AB-BRA-04: Quantitative Radiomics: Sensitivity of PET Textural Features to Image Acquisition and Reconstruction Parameters Implies the Need for Standards

    International Nuclear Information System (INIS)

    Nyflot, MJ; Yang, F; Byrd, D; Bowen, SR; Sandison, GA; Kinahan, PE

    2015-01-01

    Purpose: Despite increased use of heterogeneity metrics for PET imaging, standards for metrics such as textural features have yet to be developed. We evaluated the quantitative variability caused by image acquisition and reconstruction parameters on PET textural features. Methods: PET images of the NEMA IQ phantom were simulated with realistic image acquisition noise. 35 features based on intensity histograms (IH), co-occurrence matrices (COM), neighborhood-difference matrices (NDM), and zone-size matrices (ZSM) were evaluated within lesions (13, 17, 22, 28, 33 mm diameter). Variability in metrics across 50 independent images was evaluated as percent difference from mean for three phantom girths (850, 1030, 1200 mm) and two OSEM reconstructions (2 iterations, 28 subsets, 5 mm FWHM filtration vs 6 iterations, 28 subsets, 8.6 mm FWHM filtration). Also, patient sample size to detect a clinical effect of 30% with Bonferroni-corrected α=0.001 and 95% power was estimated. Results: As a class, NDM features demonstrated greatest sensitivity in means (5–50% difference for medium girth and reconstruction comparisons and 10–100% for large girth comparisons). Some IH features (standard deviation, energy, entropy) had variability below 10% for all sensitivity studies, while others (kurtosis, skewness) had variability above 30%. COM and ZSM features had complex sensitivities; correlation, energy, entropy (COM) and zone percentage, short-zone emphasis, zone-size non-uniformity (ZSM) had variability less than 5% while other metrics had differences up to 30%. Trends were similar for sample size estimation; for example, coarseness, contrast, and strength required 12, 38, and 52 patients to detect a 30% effect for the small girth case but 38, 88, and 128 patients in the large girth case. Conclusion: The sensitivity of PET textural features to image acquisition and reconstruction parameters is large and feature-dependent. Standards are needed to ensure that prospective trials

  17. TU-AB-BRA-04: Quantitative Radiomics: Sensitivity of PET Textural Features to Image Acquisition and Reconstruction Parameters Implies the Need for Standards

    Energy Technology Data Exchange (ETDEWEB)

    Nyflot, MJ; Yang, F; Byrd, D; Bowen, SR; Sandison, GA; Kinahan, PE [University of Washington, Seattle, WA (United States)

    2015-06-15

    Purpose: Despite increased use of heterogeneity metrics for PET imaging, standards for metrics such as textural features have yet to be developed. We evaluated the quantitative variability caused by image acquisition and reconstruction parameters on PET textural features. Methods: PET images of the NEMA IQ phantom were simulated with realistic image acquisition noise. 35 features based on intensity histograms (IH), co-occurrence matrices (COM), neighborhood-difference matrices (NDM), and zone-size matrices (ZSM) were evaluated within lesions (13, 17, 22, 28, 33 mm diameter). Variability in metrics across 50 independent images was evaluated as percent difference from mean for three phantom girths (850, 1030, 1200 mm) and two OSEM reconstructions (2 iterations, 28 subsets, 5 mm FWHM filtration vs 6 iterations, 28 subsets, 8.6 mm FWHM filtration). Also, patient sample size to detect a clinical effect of 30% with Bonferroni-corrected α=0.001 and 95% power was estimated. Results: As a class, NDM features demonstrated greatest sensitivity in means (5–50% difference for medium girth and reconstruction comparisons and 10–100% for large girth comparisons). Some IH features (standard deviation, energy, entropy) had variability below 10% for all sensitivity studies, while others (kurtosis, skewness) had variability above 30%. COM and ZSM features had complex sensitivities; correlation, energy, entropy (COM) and zone percentage, short-zone emphasis, zone-size non-uniformity (ZSM) had variability less than 5% while other metrics had differences up to 30%. Trends were similar for sample size estimation; for example, coarseness, contrast, and strength required 12, 38, and 52 patients to detect a 30% effect for the small girth case but 38, 88, and 128 patients in the large girth case. Conclusion: The sensitivity of PET textural features to image acquisition and reconstruction parameters is large and feature-dependent. Standards are needed to ensure that prospective trials
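The patient counts quoted above follow from a standard two-sample sample-size formula. The sketch below is an illustrative reconstruction (not the authors' code): it estimates the patients per arm needed to detect a 30% effect given a feature's observed variability, at the Bonferroni-corrected alpha = 0.001 and 95% power used in the abstract.

```python
import math
from statistics import NormalDist

def patients_needed(cv_percent, effect_percent=30.0, alpha=0.001, power=0.95):
    """Two-sample sample size per group for a difference in means:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2,
    with sigma (variability) and delta (effect) both in percent of the mean."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_power = NormalDist().inv_cdf(power)
    n = 2.0 * ((z_alpha + z_power) * cv_percent / effect_percent) ** 2
    return math.ceil(n)

# More variable features need more patients to detect the same 30% effect,
# which is why the large-girth case in the abstract demands larger cohorts.
for cv in (10.0, 20.0, 40.0):
    print(cv, patients_needed(cv))
```

Because required n grows with the square of a feature's variability, the feature-dependent sensitivities reported above translate directly into feature-dependent trial sizes.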

  18. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample volume

  19. The Elusive Standard of Care.

    Science.gov (United States)

    Cooke, Brian K; Worsham, Elizabeth; Reisfield, Gary M

    2017-09-01

    In medical negligence cases, the forensic expert must explain to a trier of fact what a defendant physician should have done, or not done, in a specific set of circumstances and whether the physician's conduct constitutes a breach of duty. The parameters of the duty are delineated by the standard of care. Many facets of the standard of care have been well explored in the literature, but gaps remain in a complete understanding of this concept. We examine the standard of care, its origins, and who determines the prevailing standard, beginning with an overview of the historical roots of the standard of care and, using case law, tracing its evolution from the 19th century through the early 21st century. We then analyze the locality rule and consider local, state, and national standards of care. The locality rule requires a defendant physician to provide the same degree of skill and care that is required of a physician practicing in the same or similar community. This rule remains alive in some jurisdictions in the United States. Last, we address the relationship between the standard of care and clinical practice guidelines. © 2017 American Academy of Psychiatry and the Law.

  20. Punica granatum L. Hydrogel for Wound Care Treatment: From Case Study to Phytomedicine Standardization.

    Science.gov (United States)

    Fleck, Aline; Cabral, Patrik F G; Vieira, Felipe F M; Pinheiro, Deo A; Pereira, Carlos R; Santos, Wilson C; Machado, Thelma B

    2016-08-22

    The pharmacological activities of many Punica granatum L. components suggest a wide range of clinical applications for the prevention and treatment of diseases where chronic inflammation is believed to play an essential etiologic role. The current work reports a case study analyzing the effect produced by a magistral formulation of ethanolic extracts of Punica granatum peels on a non-healing chronic ulcer. The complete closure of the chronic ulcer that was initially not responsive to standard medical care was observed. A 2% (w/w) P. granatum peels ethanolic extract hydrogel-based formulation (PGHF) was standardized and subjected to physicochemical studies to establish the quality control parameters using, among others, assessment criteria such as optimum appearance, pH range, viscosity and hydrogel disintegration. The stability and quantitative chromatographic data was assessed in storage for six months under two temperature regimes. An efficient HPLC-DAD method was established distinguishing the biomarkers punicalin and punicalagin simultaneously in a single 8 min run. PGHF presented suitable sensorial and physicochemical performance, showing that punicalagin was not significantly affected by storage (p > 0.05). Formulations containing extracts with not less than 0.49% (w/w) total punicalagin might find good use in wound healing therapy.

  1. Maintenance Research in SOA Towards a Standard Case Study

    NARCIS (Netherlands)

    Espinha, T.; Chen, C.; Zaidman, A.E.; Gross, H.G.

    2012-01-01

    Preprint of paper published in: 16th European Conference on Software Maintenance and Reengineering (CSMR), 27-30 March 2012; doi:10.1109/CSMR.2012.49 Maintenance research in the context of Service Oriented Architecture (SOA) is currently lacking a suitable standard case study that can be used by

  2. Application of extreme value distribution function in the determination of standard meteorological parameters for nuclear power plants

    International Nuclear Information System (INIS)

    Jiang Haimei; Liu Xinjian; Qiu Lin; Li Fengju

    2014-01-01

Based on the meteorological data from weather stations around several domestic nuclear power plants, the statistical results for extreme minimum temperatures, minimum central pressures of tropical cyclones and some other parameters are calculated using the extreme value type I distribution function (EV-I), the generalized extreme value distribution function (GEV) and the generalized Pareto distribution function (GP), respectively. The influence of different distribution functions and parameter estimation methods on the statistical results of extreme values is investigated. Results indicate that the generalized extreme value distribution function has better applicability than the other two distribution functions in the determination of standard meteorological parameters for nuclear power plants. (authors)
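The extreme value type I (EV-I, or Gumbel) fit mentioned above can be sketched with a method-of-moments estimator. The example below is illustrative only: the annual minimum temperatures are synthetic, not the study's station data, and minima are handled by the standard trick of fitting the maxima form of EV-I to the negated sample.

```python
import math
from statistics import mean, stdev

EULER_GAMMA = 0.5772156649015329

def gumbel_fit_moments(sample):
    """Method-of-moments fit of the EV-I (Gumbel) distribution for maxima:
    scale = s*sqrt(6)/pi, location = mean - gamma*scale."""
    scale = stdev(sample) * math.sqrt(6.0) / math.pi
    loc = mean(sample) - EULER_GAMMA * scale
    return loc, scale

def return_level(loc, scale, T):
    """Level exceeded on average once every T years (return period T)."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))

# Synthetic annual minimum temperatures (deg C): fit the negated minima,
# then negate the result back to obtain the extreme minimum.
annual_minima = [-8.2, -11.5, -9.7, -14.1, -10.3, -12.8, -9.1, -13.5, -10.9, -11.8]
loc, scale = gumbel_fit_moments([-t for t in annual_minima])
t50 = -return_level(loc, scale, 50)  # 50-year extreme minimum temperature
print(round(t50, 1))
```

In practice the study compares such EV-I estimates against GEV and GP fits, whose shape parameters allow heavier or bounded tails that the Gumbel form cannot represent.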

  3. Study on Adaptive Parameter Determination of Cluster Analysis in Urban Management Cases

    Science.gov (United States)

    Fu, J. Y.; Jing, C. F.; Du, M. Y.; Fu, Y. L.; Dai, P. P.

    2017-09-01

The fine-grained management of cities is an important way to realize the smart city. Data mining based on spatial clustering analysis of urban management cases can be used in the evaluation of urban public facilities deployment, can support policy decisions, and provides technical support for fine-grained city management. To address the problem that the density-based DBSCAN algorithm cannot determine its parameters adaptively, this paper proposes an optimization method for adaptive parameter determination based on spatial analysis. First, Ripley's K function is analysed for the data set to determine the global parameter Eps adaptively, taking the maximum aggregation scale as the range of data clustering. Then, each point's most frequent neighbour count within Eps is calculated using a K-D tree and taken as the clustering density, determining the global parameter MinPts adaptively. Finally, the R language was used to implement the above process and accomplish precise clustering of typical urban management cases. Experimental results based on typical urban management cases in the XiCheng district of Beijing show that the new DBSCAN clustering algorithm presented in this paper takes full account of the spatial and statistical characteristics of data with an obvious clustering structure, and has better applicability and high quality. The results of the study are helpful not only for the formulation of urban management policies and the allocation of urban management supervisors in the XiCheng District of Beijing, but also for other cities and related fields.

  4. STUDY ON ADAPTIVE PARAMETER DETERMINATION OF CLUSTER ANALYSIS IN URBAN MANAGEMENT CASES

    Directory of Open Access Journals (Sweden)

    J. Y. Fu

    2017-09-01

Full Text Available The fine-grained management of cities is an important way to realize the smart city. Data mining based on spatial clustering analysis of urban management cases can be used in the evaluation of urban public facilities deployment, can support policy decisions, and provides technical support for fine-grained city management. To address the problem that the density-based DBSCAN algorithm cannot determine its parameters adaptively, this paper proposes an optimization method for adaptive parameter determination based on spatial analysis. First, Ripley's K function is analysed for the data set to determine the global parameter Eps adaptively, taking the maximum aggregation scale as the range of data clustering. Then, each point's most frequent neighbour count within Eps is calculated using a K-D tree and taken as the clustering density, determining the global parameter MinPts adaptively. Finally, the R language was used to implement the above process and accomplish precise clustering of typical urban management cases. Experimental results based on typical urban management cases in the XiCheng district of Beijing show that the new DBSCAN clustering algorithm presented in this paper takes full account of the spatial and statistical characteristics of data with an obvious clustering structure, and has better applicability and high quality. The results of the study are helpful not only for the formulation of urban management policies and the allocation of urban management supervisors in the XiCheng District of Beijing, but also for other cities and related fields.

  5. THE STATUS OF STUDENTS OF THE FACULTY OF PHYSICAL EDUCATION AND SPORTS IN COMPARISON WITH STANDARD PARAMETERS OF THE ILLINOIS AGILITY TEST

    Directory of Open Access Journals (Sweden)

    Malsor Gjonbalaj

    2015-05-01

Full Text Available The aim of this study is to assess the current agility status of students of the Faculty of Physical Education and Sports of the University of Prishtina, and to compare the students' results with the international norms of the standard agility test. The study included 92 students of the FPES. Agility was tested with the Illinois Agility Test as a standard test. The data obtained from the research were analysed with standard methods: basic statistical parameters and comparative methods. The basic statistical parameters showed a homogeneous distribution of results, ranging from 15.15 to 20.16 s, with an average of 16.54 s and a standard deviation of 0.92. The skewness of the distribution is 1.43. Compared to the international norms of the standard agility test (16.2-18.1 s), the students of the Faculty of Physical Education and Sports show a satisfactory level and belong to the category of average results with 16.54 s. Compared to other groups of students, our sample showed almost the same results as the group of students tested by Mehmet Kutlu, Hakan Yapıcı, Oğuzhan Yoncalık and Serkan Çelik (2012), whose students achieved 16.54 ± 0.41, although that testing took place on a synthetic carpet.

  6. Deficiency of Standard Effective-Medium Approximation for Ellipsometry of Layers of Nanoparticles

    Directory of Open Access Journals (Sweden)

    E. G. Bortchagovsky

    2015-01-01

Full Text Available The correct description of the optical properties of layers of disordered interacting nanoparticles remains a problem. In contrast to volumes of nanocomposites, where standard models of effective-medium approximation (EMA) work well, the two-dimensional case of layers has an intrinsic anisotropy, which influences interparticle interactions. The deficiency of the standard Maxwell-Garnett model in application to the ellipsometry of layers of gold nanoparticles is demonstrated. This demands a modification of EMA models, and one such modification is considered in this paper. In contrast to existing 2D models with phenomenological parameters, the proposed Green function approach uses the same number of parameters as standard 3D EMA models for explicit calculations of the effective parameters of layers of disordered nanoparticles.

  7. Better temperature predictions in geothermal modelling by improved quality of input parameters: a regional case study from the Danish-German border region

    Science.gov (United States)

    Fuchs, Sven; Bording, Thue S.; Balling, Niels

    2015-04-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties is a prerequisite for the parameterisation of boundary conditions and layer properties. In contrast to hydrogeological ground-water models, where parameterization of the major rock property (i.e. hydraulic conductivity) generally considers lateral variations within geological layers, parameterization of thermal models (in particular regarding thermal conductivity, but also radiogenic heat production and specific heat capacity) is in most cases conducted with constant parameters for each modelled layer. Moreover, the initial values for such constant thermal parameters are normally obtained from rare core measurements and/or literature values, which raises questions about their representativeness. A few studies have considered lithological composition or well-log information, but still kept the layer values constant. In the present thermal-modelling scenario analysis, we demonstrate how the use of different parameter input types (from literature, well logs and lithology) and parameter input styles (constant or laterally varying layer values) affects the temperature prediction in sedimentary basins. For this purpose, rock thermal properties are deduced from standard petrophysical well logs and lithological descriptions for several wells in the project area. Statistical values of the thermal properties (mean, standard deviation, moments, etc.) are calculated at each borehole location for each geological formation and, moreover, for the entire dataset. Our case study is located in the Danish-German border region (model dimension: 135 x 115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (i

  8. SITE SPECIFIC REFERENCE PERSON PARAMETERS AND DERIVED CONCENTRATION STANDARDS FOR THE SAVANNAH RIVER SITE

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, T.

    2013-03-14

    The purpose of this report is twofold. The first is to develop a set of behavioral parameters for a reference person specific for the Savannah River Site (SRS) such that the parameters can be used to determine dose to members of the public in compliance with Department of Energy (DOE) Order 458.1 “Radiation Protection of the Public and the Environment.” A reference person is a hypothetical, gender and age aggregation of human physical and physiological characteristics arrived at by international consensus for the purpose of standardizing radiation dose calculations. DOE O 458.1 states that compliance with the annual dose limit of 100 mrem (1 mSv) to a member of the public may be demonstrated by calculating the dose to the maximally exposed individual (MEI) or to a representative person. Historically, for dose compliance, SRS has used the MEI concept, which uses adult dose coefficients and adult male usage parameters. Beginning with the 2012 annual site environmental report, SRS will be using the representative person concept for dose compliance. The dose to a representative person will be based on 1) the SRS-specific reference person usage parameters at the 95th percentile of appropriate national or regional data, which are documented in this report, 2) the reference person (gender and age averaged) ingestion and inhalation dose coefficients provided in DOE Derived Concentration Technical Standard (DOE-STD-1196-2011), and 3) the external dose coefficients provided in the DC_PAK3 toolbox. The second purpose of this report is to develop SRS-specific derived concentration standards (DCSs) for all applicable food ingestion pathways, ground shine, and water submersion. The DCS is the concentration of a particular radionuclide in water, in air, or on the ground that results in a member of the public receiving 100 mrem (1 mSv) effective dose following continuous exposure for one year. In DOE-STD-1196-2011, DCSs were developed for the ingestion of water, inhalation of
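
The arithmetic behind a DCS is a single division: the concentration at which one year of continuous intake delivers the 100 mrem (1 mSv) limit. A minimal sketch with placeholder numbers, not SRS or DOE-STD-1196-2011 values:

```python
def derived_concentration_standard(dose_limit, dose_coeff, annual_intake):
    """
    Concentration (e.g. Bq per litre of water) that delivers `dose_limit`
    over one year of continuous exposure.

    dose_limit    -- annual effective dose limit (100 mrem under DOE O 458.1)
    dose_coeff    -- dose coefficient for the pathway (mrem per Bq ingested)
    annual_intake -- intake over one year (e.g. litres of drinking water)
    """
    return dose_limit / (dose_coeff * annual_intake)

# Placeholder values: 100 mrem/yr limit, 2e-3 mrem/Bq, 730 L/yr of water
dcs = derived_concentration_standard(100.0, 2e-3, 730.0)
print(round(dcs, 3))  # Bq/L
```

The same relation applies per pathway (ingestion, inhalation, ground shine), with the dose coefficient and intake quantity swapped accordingly.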

  9. Critical parameters affecting the design of high frequency transmission lines in standard CMOS technology

    KAUST Repository

    Al Attar, Talal; Alshehri, Abdullah; Almansouri, Abdullah Saud Mohammed; Al-Turki, Abdullah Turki

    2017-01-01

    Different structures of transmission lines were designed and fabricated in standard CMOS technology to estimate some critical parameters, including the RMS value of the surface roughness and the loss tangent. The input impedances for frequencies up to 50 GHz were modeled and compared with measurements. The results demonstrated a strong correlation between the model with the proposed coefficients and the measured results, attesting to the robustness of the model and the reliability of the incorporated coefficient values.

  10. Critical parameters affecting the design of high frequency transmission lines in standard CMOS technology

    KAUST Repository

    Al Attar, Talal

    2017-05-13

    Different structures of transmission lines were designed and fabricated in standard CMOS technology to estimate some critical parameters, including the RMS value of the surface roughness and the loss tangent. The input impedances for frequencies up to 50 GHz were modeled and compared with measurements. The results demonstrated a strong correlation between the model with the proposed coefficients and the measured results, attesting to the robustness of the model and the reliability of the incorporated coefficient values.

  11. Tropical Fruit Pulps: Processing, Product Standardization and Main Control Parameters for Quality Assurance

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo de Farias Silva

    2017-05-01

    Full Text Available ABSTRACT Fruit pulp is the most basic food product obtained from fresh fruit processing. Fruit pulps can be cold-stored for long periods of time, but they can also be used to make juices, ice creams, sweets, jellies and yogurts. The exploitation of tropical fruits has leveraged the entire Brazilian fruit pulp sector, due mainly to the high acceptance of their organoleptic properties and remarkable nutritional facts. However, several works published in the last decades have pointed out unfavorable conditions regarding the consumption of tropical fruit pulps. This negative scenario has been associated with unsatisfactory physico-chemical and microbiological parameters of fruit pulps, an outcome of little knowledge and improper management within the fruit pulp industry. There are protocols for delineating specific identity and quality standards (IQSs) and standardized good manufacturing practices (GMP) for fruit pulps, which also embrace standard operating procedures (SOPs) and hazard analysis and critical control points (HACCP), although the latter is not considered mandatory by Brazilian legislation. Unfortunately, the lack of skilled labor, along with failures to comply with established protocols, has impaired the quality of fruit pulps. It has therefore been necessary to collect all available information with the aim of identifying the most important hazards within fruit pulp processing lines. Standardizing methods and practices within the Brazilian fruit pulp industry would assure the high quality of tropical fruit pulps and the commercial growth of this product towards international markets.

  12. Vertically Differentiating Environmental Standards: The Case of the Marine Stewardship Council

    Directory of Open Access Journals (Sweden)

    Simon R. Bush

    2015-02-01

    Full Text Available This paper explores the externally-led vertical differentiation of third-party certification standards using the case of the Marine Stewardship Council (MSC. We analyze this process in two dimensions. First, fisheries employ strategies to capture further market value from fishing practices that go beyond their initial conditions for certification and seek additional recognition for these activities through co-labelling with, amongst others, international NGOs. Second, fisheries not yet able to meet the requirements of MSC standards are being enrolled in NGO and private sector sponsored Fisheries Improvement Projects (FIPs, providing an alternative route to global markets. In both cases the credibility and authority of the MSC is challenged by new coalitions of market actors opening up new strategies for capturing market value and/or improving the conditions of international market access. Through the lens of global value chains, the results offer new insights on how such standards not only influence trade and markets, but are also starting to change their internal governance in response to threats to their credibility by actors and modes of coordination in global value chains.

  13. 29 CFR 1904.9 - Recording criteria for cases involving medical removal under OSHA standards.

    Science.gov (United States)

    2010-07-01

    ... surveillance requirements of an OSHA standard, you must record the case on the OSHA 300 Log. (b) Implementation—(1) How do I classify medical removal cases on the OSHA 300 Log? You must enter each medical removal case on the OSHA 300 Log as either a case involving days away from work or a case involving restricted...

  14. A case of standardization?

    DEFF Research Database (Denmark)

    Rod, Morten Hulvej; Høybye, Mette Terp

    2016-01-01

    the ones envisioned by the makers of standards. In 2012, the Danish National Health Authorities introduced a set of health promotion guidelines that were meant to guide the decision making and priority setting of Denmark's 98 local governments. The guidelines provided recommendations for health promotion...... and standardization. It remains an open question whether or not the guidelines lead to more standardized policies and interventions, but we suggest that the guidelines promote a risk factor-oriented approach as the dominant frame for knowledge, reasoning, decision making and priority setting in health promotion. We...

  15. On distributed parameter control systems in the abnormal case and in the case of nonoperator equality constraints

    Directory of Open Access Journals (Sweden)

    Urszula Ledzewicz

    1993-01-01

    Full Text Available In this paper, a general distributed parameter control problem in Banach spaces with integral cost functional and with given initial and terminal data is considered. An extension of the Dubovitskii-Milyutin method to the case of nonregular operator equality constraints, based on Avakov's generalization of the Lusternik theorem, is presented. This result is applied to obtain an extension of the Extremum Principle for the case of abnormal optimal control problems. Then a version of this problem with nonoperator equality constraints is discussed and the Extremum Principle for this problem is presented.

  16. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    International Nuclear Information System (INIS)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K.

    2013-01-01

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
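
The core idea, mapping the likelihood grid cell-by-cell in order of decreasing likelihood and stopping at a threshold, can be sketched as a best-first flood fill from the peak. Everything below (the names, the 2D Gaussian toy log-likelihood, the flat log-likelihood threshold) is our illustrative assumption, not the Snake implementation itself.

```python
import heapq

def snake_explore(loglike, start, threshold):
    """Visit grid cells in order of decreasing log-likelihood, starting from
    an assumed peak cell, until the best remaining frontier cell falls more
    than `threshold` below the peak. Returns the set of visited cells
    (including the just-below-threshold boundary cells that stopped it)."""
    peak = loglike(start)
    visited = {start}
    frontier = [(-peak, start)]  # max-heap via negation
    while frontier:
        neg_ll, cell = heapq.heappop(frontier)
        if -neg_ll < peak - threshold:
            break  # all remaining cells have negligible likelihood
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + dx, cell[1] + dy)
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier, (-loglike(nb), nb))
    return visited

# Toy 2D Gaussian log-likelihood on a grid with spacing 0.5
step = 0.5
ll = lambda c: -0.5 * ((c[0] * step) ** 2 + (c[1] * step) ** 2)
cells = snake_explore(ll, (0, 0), threshold=3.0)
print(len(cells))
```

Because the toy likelihood is unimodal, every cell within the threshold of the peak is reached before the loop stops; the saving over a brute-force grid grows rapidly with dimension.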

  17. Correlation between muscle electrical impedance data and standard neurophysiologic parameters after experimental neurogenic injury

    International Nuclear Information System (INIS)

    Ahad, M; Rutkove, S B

    2010-01-01

    Previous work has shown that electrical impedance measurements of muscle can assist in quantifying the degree of muscle atrophy resulting from neuronal injury, with impedance values correlating strongly with standard clinical parameters. However, the relationship between such data and neurophysiologic measurements is unexplored. In this study, 24 Wistar rats underwent sciatic crush, with measurement of the 2–1000 kHz impedance spectrum and of standard electrophysiological measures, including nerve conduction studies, needle electromyography, and motor unit number estimation (MUNE), before sciatic crush and weekly for 4 weeks post-injury. All electrical impedance values, including a group of 'collapsed' variables in which the spectral characteristics were reduced to single values, showed reductions as high as 47.2% after sciatic crush, paralleling and correlating with changes in compound motor action potential amplitude and conduction velocity, and most closely with MUNE, but not with the presence of fibrillation potentials observed on needle electromyography. These results support the concept that localized impedance measurements can serve as surrogate markers of nerve injury; these measurements may be especially useful in assessing nerve injury impacting proximal or axial muscles, where standard quantitative neurophysiologic methods such as nerve conduction or MUNE cannot be readily performed.

  18. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    Science.gov (United States)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between matter density parameter and normalization of matter density fluctuations is reproduced for several cases, and the capabilities of breaking this degeneracy by weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
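
The Gaussianization step can be illustrated with the one-parameter Box-Cox transform. In this sketch λ is chosen on a coarse grid by minimizing the sample skewness of the transformed values, a much cruder criterion than the likelihood evaluation used in the paper, and all names are ours.

```python
import math

def boxcox(y, lam):
    """One-parameter Box-Cox transform of positive data (lam = 0 is log)."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

def skewness(x):
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in x) / n)
    return sum((v - m) ** 3 for v in x) / (n * sd ** 3)

# Strongly right-skewed sample: exponentials of a symmetric set of values
raw = [math.exp(v) for v in (-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0)]
# Choose the lambda on a coarse grid that makes the sample most symmetric
grid = [i / 10 for i in range(-10, 11)]
best = min(grid, key=lambda lam: abs(skewness(boxcox(raw, lam))))
print(best, round(skewness(boxcox(raw, best)), 6))
```

For data that are exactly an exponential of a symmetric sample, the search recovers λ ≈ 0, i.e. the log transform, as expected; the Fisher matrix would then be evaluated in the transformed, approximately Gaussian parameter space.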

  19. Airside HVAC BESTEST. Adaptation of ASHRAE RP 865 Airside HVAC Equipment Modeling Test Cases for ASHRAE Standard 140. Volume 1, Cases AE101-AE445

    Energy Technology Data Exchange (ETDEWEB)

    Neymark, J. [J. Neymark & Associates, Golden, CO (United States); Kennedy, M. [Mike D. Kennedy, Inc., Townsend, WA (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gall, J. [AAON, Inc., Tulsa, OK (United States); Knebel, D. [AAON, Inc., Tulsa, OK (United States); Henninger, R. [GARD Analytics, Inc., Arlington Heights, IL (United States); Witte, M. [GARD Analytics, Inc., Arlington Heights, IL (United States); Hong, T. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McDowell, T. [Thermal Energy System Specialists, Madison, WI (United States); Yan, D. [Tsinghua Univ., Beijing (China); Zhou, X. [Tsinghua Univ., Beijing (China)

    2016-03-01

    This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.

  20. Shielding standards: a case history

    International Nuclear Information System (INIS)

    Battat, M.E.

    1977-01-01

    A discussion of the sequence of events leading from the inception of a standard to its transmittal to the ANSI Board of Standards Review will be given. The standard referred to is entitled "Neutron and Gamma-Ray Flux-to-Dose-Rate Factors" (ANSI N666). Certain questions raised during the review process and the compatibility of this standard with the USA Code of Federal Regulations will be discussed.

  1. Study of vector boson decay and determination of the Standard Model parameters at hadronic colliders

    International Nuclear Information System (INIS)

    Amidei, D.

    1991-01-01

    The power of the detectors and the datasets at hadronic colliders begins to allow measurement of the electroweak parameters with a precision that confronts the perturbative corrections to the theory. Recent measurements of M_Z, M_W, and sin θ_W by CDF and UA2 are reviewed, with some emphasis on how experimental precision is achieved, and some discussion of the import for the specifications of the Standard Model. 14 refs., 10 figs., 4 tabs

  2. Double standards and the international trade of pesticides: the Brazilian case.

    Science.gov (United States)

    Porto, Marcelo Firpo; Milanez, Bruno; Soares, Wagner Lopes; Meyer, Armando

    2010-01-01

    Despite bans on certain pesticides and their replacement by others considered less hazardous, the widespread use of these substances in agriculture continues to threaten the environment and the health of millions of people. This article discusses the current double standard in the international trade of pesticides and focuses on Brazil, one of the main users of pesticides in the world, analyzing the trends in foreign trade (imports and exports) of selected pesticides as a function of changes in legislation in the United States, the European Union, and Brazil from 1989 to 2006. We applied time line analysis to eight organochlorines already banned in Brazil and conducted a case-by-case qualitative and quantitative analysis of nine other pesticides. The results indicate the existence of double standards, as demonstrated by the continued exports to Brazil of some pesticides banned in the United States and Europe.

  3. EMBEDding the CEFR in Academic Writing Assessment : A case study in training and standardization

    NARCIS (Netherlands)

    Haines, Kevin; Lowie, Wander; Jansma, Petra; Schmidt, Nicole

    2013-01-01

    The CEFR is increasingly being used as the framework of choice for the assessment of language proficiency at universities across Europe. However, to attain consistent assessment, familiarization and standardization are essential. In this paper we report a case study of embedding a standardization

  4. Bounds on the Higgs mass in the standard model and the minimal supersymmetric standard model

    CERN Document Server

    Quiros, M.

    1995-01-01

    Depending on the Higgs-boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model can develop a non-standard minimum for values of the field much larger than the weak scale. In those cases the standard minimum becomes metastable and the possibility of decay to the non-standard one arises. Comparison of the decay rate to the non-standard minimum at finite (and zero) temperature with the corresponding expansion rate of the Universe allows one to identify the region, in the (M_H, M_t) plane, where the Higgs field is sitting at the standard electroweak minimum. In the Minimal Supersymmetric Standard Model, approximate analytical expressions for the Higgs mass spectrum and couplings are worked out, providing an excellent approximation to the numerical results which include all next-to-leading-log corrections. An appropriate treatment of squark decoupling allows one to consider large values of the stop and/or sbottom mixing parameters and thus fix a reliable upper bound on the mass o...

  5. Fulfillment of GMP standard, halal standard, and applying HACCP for production process of beef floss (Case study: Ksatria enterprise)

    Science.gov (United States)

    A'diat, Arkan Addien Al; Liquiddanu, Eko; Laksono, Pringgo Widyo; Sutopo, Wahyudi; Suletra, I. Wayan

    2018-02-01

    Along with the increasing number of modern retail businesses in Indonesia comes an opportunity for small and medium enterprises (SMEs) to sell their products through modern retailers. SMEs face several obstacles, one of which concerns product standards: the product standards that SMEs must hold are the GMP standard and the halal standard. This research was conducted to assess how a beef floss enterprise in Jagalan fulfils the GMP and halal standards. In addition, a Hazard Analysis and Critical Control Points (HACCP) system was applied to analyze the process. The HACCP used in this research was based on the seven principles in SNI (Indonesian National Standard) 01-4852-1998: hazard analysis, critical control point (CCP) determination, critical limit establishment, CCP monitoring system establishment, corrective action establishment, verification, and documentation establishment, all of which must be applied in preparing a HACCP plan. Based on this case study, it is concluded that there were 5 CCPs: the boiling process, roasting process, frying process, beef floss draining process, and packaging process.

  6. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo (Norway)

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.

  7. Safety standards for near surface disposal and the safety case and supporting safety assessment for demonstrating compliance with the standards

    International Nuclear Information System (INIS)

    Metcalf, P.

    2003-01-01

    The report presents the safety standards for near surface disposal (ICRP guidance and IAEA standards) and the safety case and supporting safety assessment for demonstrating compliance with the standards. Special attention is paid to the recommendations for disposal of long-lived solid radioactive waste. The requirements are based on the principle that future individuals receive the same level of protection as the current generation. Two types of exposure are considered, human intrusion and natural processes, and protection measures are discussed. Safety requirements for near surface disposal are discussed, including requirements for protection of human health and the environment, requirements for safety assessments, waste acceptance requirements, etc.

  8. Impact of operator on determining functional parameters of nuclear medicine procedures.

    Science.gov (United States)

    Mohammed, A M; Naddaf, S Y; Mahdi, F S; Al-Mutawa, Q I; Al-Dossary, H A; Elgazzar, A H

    2006-01-01

    The study was designed to assess the significance of interoperator variability in the estimation of functional parameters for four nuclear medicine procedures. Three nuclear medicine technologists with varying years of experience each processed 20 randomly selected cases with diverse function levels from each of the following study types: renography, renal cortical scans, myocardial perfusion gated single-photon emission computed tomography (MP-GSPECT) and gated blood pool ventriculography (GBPV). The technologists used the same standard processing routines and were blinded to each other's results. The means of the values and the means of the differences calculated case by case were statistically analyzed by one-way ANOVA. The values were further analyzed using Pearson correlation. The ranges of the mean values and standard deviations of relative renal function obtained by the three technologists were 50.65 +/- 3.9 to 50.92 +/- 4.4% for renography, 51.43 +/- 8.4 to 51.55 +/- 8.8% for renal cortical scans, 57.40 +/- 14.3 to 58.30 +/- 14.9% for left ventricular ejection fraction from MP-GSPECT and 54.80 +/- 12.8 to 55.10 +/- 13.1% for GBPV. The difference was not statistically significant, p > 0.9. The values showed a high correlation of more than 0.95. Calculated case by case, the mean of differences +/- SD ranged from 0.42 +/- 0.36% in renal cortical scans to 1.35 +/- 0.87% in MP-GSPECT, with a maximum difference of 4.00%. The difference was not statistically significant, p > 0.19. The estimated functional parameters were reproducible and operator-independent as long as the standard processing instructions were followed. Copyright 2006 S. Karger AG, Basel.
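
The case-by-case comparison described above reduces to paired absolute differences and a Pearson correlation between operators. A sketch: the split-function values below are invented for illustration, not data from the study.

```python
import math

def mean_sd(xs):
    """Mean and sample standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return m, sd

def pearson(a, b):
    """Pearson correlation coefficient between two paired samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Invented relative renal function values (%) for the same five cases,
# processed independently by two operators
op1 = [48.0, 52.5, 55.0, 47.5, 51.0]
op2 = [48.4, 52.1, 55.3, 47.9, 50.6]
diffs = [abs(x - y) for x, y in zip(op1, op2)]
m, sd = mean_sd(diffs)
print(round(m, 2), round(sd, 2), round(pearson(op1, op2), 3))
```

A mean case-by-case difference well under 1% together with a correlation above 0.95, as reported in the study, indicates that the parameters are effectively operator-independent.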

  9. Predictive value of EndTidalCO2, lung mechanics and other standard parameters for weaning neurological patients from mechanical ventilation

    Directory of Open Access Journals (Sweden)

    Hala A. Mohammad

    2016-01-01

    Conclusion: We concluded that measurements of RSBI, MIP (maximum inspiratory pressure), EndTidalCO2 and dynamic compliance were more accurate predictors of extubation failure in patients with neurological insults than other standard weaning parameters.

  10. Dynamic Parameter Identification of Subject-Specific Body Segment Parameters Using Robotics Formalism: Case Study Head Complex.

    Science.gov (United States)

    Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente

    2016-05-01

    Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analysis based on biomechanical models, which is of paramount importance in fields such as sport activities or impact crash tests. Early approaches for BSIP identification rely on experiments conducted on cadavers or on imaging techniques applied to living subjects. Recent approaches for BSIP identification rely on inverse dynamic modeling. However, most of the approaches are focused on the entire body, and verification of BSIP for dynamic analysis of a distal segment or a chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained by using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamics identification models (SIM, DIM). We test the validity of SIM and DIM by comparing the results using parameters obtained from a regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both SIM and DIM are developed considering robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamics equation allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamics modeling of the head complex. Findings provide some insight into the validity not only of the proposed method but also of the application proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230) for dynamic modeling of body segments.
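
The two-stage idea, a static model for mass and centre of gravity followed by a dynamic least-squares fit for the moment of inertia, can be sketched for a single hinge joint. The numbers and function names below are hypothetical and far simpler than the robotics formalism used in the paper.

```python
G = 9.81  # gravitational acceleration, m/s^2

def static_identification(vertical_force, support_moment):
    """Static phase: mass from the measured vertical force, and the
    centre-of-gravity lever arm from the moment that force produces."""
    mass = vertical_force / G
    cog = support_moment / vertical_force
    return mass, cog

def dynamic_identification(torques, accels):
    """Dynamic phase: moment of inertia as the least-squares solution of
    tau = I * alpha over a recorded trial."""
    return sum(t * a for t, a in zip(torques, accels)) / sum(a * a for a in accels)

# Hypothetical segment: 4.5 kg, COG 0.12 m from the joint
mass, cog = static_identification(4.5 * G, 4.5 * G * 0.12)
# Hypothetical trial generated from a true moment of inertia of 0.025 kg m^2
alphas = [1.0, -2.0, 3.5, 0.5, -1.5]   # angular accelerations, rad/s^2
taus = [0.025 * a for a in alphas]     # joint torques, N m
moi = dynamic_identification(taus, alphas)
print(round(mass, 3), round(cog, 3), round(moi, 4))
```

Feeding the mass and COG from the static fit into the dynamics equation, as the paper does, decouples the estimation so that only the MOI remains unknown in the second stage.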

  11. Comparing definitions in guidelines and written standards - a case study: 'Trueness'

    International Nuclear Information System (INIS)

    Pavese, F

    2010-01-01

    This paper describes the structure of a repository initiated by IMEKO TC21 to allow the comparison of different definitions and use of the same term or concept in written standards and guidelines available internationally. The method used is illustrated for a case study: the critical concept of 'trueness' and its definitions.

  12. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    Science.gov (United States)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
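
The quantification and standardization steps can be sketched directly: an SUR from ROI means, then a linear regression fitted on a calibration set to map reconstructed SUR back onto true SUR. The calibration numbers below are synthetic, and the simple linear bias model is our assumption.

```python
def specific_uptake_ratio(roi_mean, ref_mean):
    """SUR: specific binding relative to a nondisplaceable reference region."""
    return (roi_mean - ref_mean) / ref_mean

def fit_line(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic calibration set: reconstruction biases the SUR linearly downwards
true_sur = [0.5, 1.0, 2.0, 3.0, 4.0]
recon_sur = [0.4 + 0.7 * s for s in true_sur]
slope, intercept = fit_line(recon_sur, true_sur)
standardized = [slope * s + intercept for s in recon_sur]
print([round(v, 3) for v in standardized])
```

Because the synthetic bias is exactly linear, the regression recovers the true SUR values; with real reconstructions, the residual spread around the fitted line is what produces error bands like the reported [-0.5, +0.5] range.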

  13. Standardization of the Descemet membrane endothelial keratoplasty technique: Outcomes of the first 450 consecutive cases.

    Science.gov (United States)

    Satué, M; Rodríguez-Calvo-de-Mora, M; Naveiras, M; Cabrerizo, J; Dapena, I; Melles, G R J

    2015-08-01

    To evaluate the clinical outcome of the first 450 consecutive cases after Descemet membrane endothelial keratoplasty (DMEK), as well as the effect of standardization of the technique. Comparison between 3 groups: Group I (cases 1-125), the extended learning curve; Group II (cases 126-250), transition to technique standardization; Group III (cases 251-450), surgery with the standardized technique. Best corrected visual acuity, endothelial cell density, pachymetry and intra- and postoperative complications were evaluated before, and 1, 3 and 6 months after, DMEK. At 6 months after surgery, 79% of eyes reached a best corrected visual acuity of ≥0.8 and 43% of ≥1.0. Mean endothelial cell density was 2,530±220 cells/mm² preoperatively and 1,613±495 cells/mm² at 6 months after surgery. Mean pachymetry measured 668±92 μm and 526±46 μm pre- and (6 months) postoperatively, respectively. There were no significant differences in best corrected visual acuity, endothelial cell density or pachymetry between the 3 groups (P > .05). Graft detachment presented in 17.3% of the eyes. The detachment rate declined from 24% to 12%, and the rate of secondary surgeries from 9.6% to 3.5%, from Group I to Group III. Visual outcomes and endothelial cell density after DMEK are independent of technique standardization. However, technique standardization may have contributed to a lower graft detachment rate and a relatively low number of secondary interventions. As such, DMEK may become the first choice of treatment in corneal endothelial disease. Copyright © 2014 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  14. Delayed recombination and cosmic parameters

    International Nuclear Information System (INIS)

    Galli, Silvia; Melchiorri, Alessandro; Bean, Rachel; Silk, Joseph

    2008-01-01

    Current cosmological constraints from cosmic microwave background anisotropies are typically derived assuming a standard recombination scheme; however, additional resonance and ionizing radiation sources can delay recombination, altering the cosmic ionization history and the cosmological inferences drawn from cosmic microwave background data. We show that for recent observations of the cosmic microwave background anisotropy, from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite mission 5-year survey and from the Arcminute Cosmology Bolometer Array Receiver experiment, additional resonance radiation is nearly degenerate with variations in the spectral index, n_s, and has a marked effect on uncertainties in constraints on the Hubble constant, the age of the universe, curvature and the upper bound on the neutrino mass. When a modified recombination scheme is considered, the redshift of recombination is constrained to z_* = 1078±11, with measurement uncertainties weaker by one order of magnitude than those obtained under the assumption of standard recombination, while constraints on the shift parameter are shifted by 1σ to R = 1.734±0.028. From the WMAP5 data we obtain the following constraint on the resonance and ionization source parameters: ε_{α,i} < 0.058 at 95% C.L. Although delayed recombination limits the precision of parameter estimation from the WMAP satellite, we demonstrate that this should not be the case for future, smaller angular scale measurements, such as those by the Planck satellite mission.

  15. Temporal variability in water quality parameters--a case study of drinking water reservoir in Florida, USA.

    Science.gov (United States)

    Toor, Gurpal S; Han, Lu; Stanley, Craig D

    2013-05-01

    Our objective was to evaluate changes in water quality parameters during 1983-2007 in a subtropical drinking water reservoir (area: 7 km²) located in the Lake Manatee Watershed (area: 338 km²) in Florida, USA. Most water quality parameters (color, turbidity, Secchi depth, pH, EC, dissolved oxygen, total alkalinity, cations, anions, and lead) were below the Florida potable water standards. Concentrations of copper exceeded the water quality threshold of 20 μg l⁻¹. Concentrations of total N showed a significant increase from 1983 to 1994 and a decrease from 1997 to 2007. Total P showed a significant increase during 1983-2007. Mean concentrations of total N (n = 215; 1.24 mg l⁻¹) were lower, and of total P (n = 286; 0.26 mg l⁻¹) much higher, than the EPA numeric criteria of 1.27 mg total N l⁻¹ and 0.05 mg total P l⁻¹ for Florida's colored lakes, respectively. Seasonal trends were observed for many water quality parameters, with concentrations typically elevated during the wet months (June-September). The results suggest that reducing the transport of organic N may be one option to protect water quality in this drinking water reservoir.

  16. Indigenous Peoples’ Natural Resources and Business: Inter-American Standards and Chilean Case Law

    Directory of Open Access Journals (Sweden)

    Gonzalo Aguilar Cavallo

    2015-10-01

    In this brief analysis we will review the difficulties in holding companies responsible for human rights violations, as well as the Inter-American Court of Human Rights’ case law in the field. We will analyze the international standards established on corporate responsibility. Finally, we will examine whether the Chilean national courts incorporate the standards set by the Inter-American Court of Human Rights, especially concerning private companies.

  17. Examining the Functional Specification of Two-Parameter Model under Location and Scale Parameter Condition

    OpenAIRE

    Nakashima, Takahiro

    2006-01-01

    The functional specification of the mean-standard deviation approach is examined under the location and scale parameter condition. Firstly, the full set of restrictions imposed on the mean-standard deviation function under the location and scale parameter condition is made clear. Secondly, an examination based on these restrictions derives new properties of the mean-standard deviation function concerning the applicability of additive separability and the curvature of ex...

  18. Human exposure standards in the frequency range 1 Hz To 100 kHz: the case for adoption of the IEEE standard.

    Science.gov (United States)

    Patrick Reilly, J

    2014-10-01

    Differences between the IEEE C95 standards (C95.6-2002 and C95.1-2005) and the ICNIRP-2010 guidelines in the low-frequency range (1 Hz-100 kHz) appear across the frequency spectrum. Factors accounting for the lack of convergence include differences between the IEEE standards and the ICNIRP guidelines with respect to biological induction models, stated objectives, the data trail from experimentally derived thresholds through physical and biological principles, the selection and justification of safety/reduction factors, the use of probability models, compliance standards for the limbs as distinct from the whole body, defined population categories, strategies for central nervous system protection below 20 Hz, and the correspondence of environmental electric field limits with contact currents. This paper discusses these factors and makes the case for adoption of the limits in the IEEE standards.

  19. Biomarkers of exposition to ionizing radiation and hematology parameters in fitness for work. Case Report

    International Nuclear Information System (INIS)

    Djokovic, J.; Milacic, S.; Rakic, B.; Pajic, J.; Petrovic, D.; Vuckovic, J.

    2009-01-01

    Ionizing radiation is frequently used in medicine, especially during diagnostic procedures. Workers who are exposed to radiation are required to undergo periodic check-ups. The presented case shows changes in hematological parameters and in biomarkers of exposure to ionizing radiation (chromosome aberrations, structural changes and the micronucleus test). The aim of this case report is to outline the methodology of diagnostic procedures for chronic radiation syndrome. (author) [sr

  20. Astrophysical neutrinos flavored with beyond the Standard Model physics

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Rasmus W.; Ackermann, Markus; Winter, Walter [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Lechner, Lukas [Vienna Univ. of Technology (Austria). Dept. of Physics; Kowalski, Marek [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik

    2017-07-15

    We systematically study the allowed parameter space for the flavor composition of astrophysical neutrinos measured at Earth, including beyond the Standard Model theories at production, during propagation, and at detection. One motivation is to illustrate the discrimination power of the next-generation neutrino telescopes such as IceCube-Gen2. We identify several examples that lead to potential deviations from the standard neutrino mixing expectation such as significant sterile neutrino production at the source, effective operators modifying the neutrino propagation at high energies, dark matter interactions in neutrino propagation, or non-standard interactions in Earth matter. IceCube-Gen2 can exclude about 90% of the allowed parameter space in these cases, and hence will make it possible to test and discriminate between models efficiently. More detailed information can be obtained from additional observables such as the energy dependence of the effect, the fraction of electron antineutrinos at the Glashow resonance, or the number of tau neutrino events.

  1. Astrophysical neutrinos flavored with beyond the Standard Model physics

    International Nuclear Information System (INIS)

    Rasmussen, Rasmus W.; Ackermann, Markus; Winter, Walter; Lechner, Lukas; Kowalski, Marek; Humboldt-Universitaet, Berlin

    2017-07-01

    We systematically study the allowed parameter space for the flavor composition of astrophysical neutrinos measured at Earth, including beyond the Standard Model theories at production, during propagation, and at detection. One motivation is to illustrate the discrimination power of the next-generation neutrino telescopes such as IceCube-Gen2. We identify several examples that lead to potential deviations from the standard neutrino mixing expectation such as significant sterile neutrino production at the source, effective operators modifying the neutrino propagation at high energies, dark matter interactions in neutrino propagation, or non-standard interactions in Earth matter. IceCube-Gen2 can exclude about 90% of the allowed parameter space in these cases, and hence will make it possible to test and discriminate between models efficiently. More detailed information can be obtained from additional observables such as the energy dependence of the effect, the fraction of electron antineutrinos at the Glashow resonance, or the number of tau neutrino events.

  2. Dacryocystorhinostomy ostium: parameters to evaluate and DCR ostium scoring

    Directory of Open Access Journals (Sweden)

    Ali MJ

    2014-12-01

    Mohammad Javed Ali,1 Alkis James Psaltis,2 Peter John Wormald2 — 1Dacryology Service, L V Prasad Eye Institute, Hyderabad, Telangana, India; 2Department of Surgery–Otorhinolaryngology, Head and Neck Surgery, The University of Adelaide, Adelaide, SA, Australia. Aim: This study aims to provide a systematic protocol for the evaluation of a dacryocystorhinostomy (DCR) ostium and to propose a scoring system to standardize the assessment. Methods: Retrospective evaluation of 125 consecutive lacrimal ostia post-DCR was performed. Medical records were screened, and photographs and videos were assessed to note the details of various ostial parameters. The major time points in evaluation were 4 weeks, 6 weeks, 3 months, and 6 months post-DCR. The ostia were defined and parameters like shape, size, location, and evolution of the ostium were noted. Evaluation parameters were defined for the internal common opening (ICO), ostium stents, and ostium granulomas. Ostium cicatrix and synechiae were graded based on their significance. Surgical success rates were computed and ostium characteristics in failed cases were studied. Results: A total of 125 ostia were evaluated on the aforementioned ostium parameters. Because five ostia showed a complete cicatricial closure with no recognizable features, the remaining 120 ostia were studied. The ostium location was anterior to the axilla of the middle turbinate in 85.8% (103/120) of the cases. Moreover, 76.6% (92/120) of the ostia were circular to oval in shape, with a shallow base. The ostium size was >8×5 mm in 78.3% (94/120) of the cases. The ICO was found to be located in the central or paracentral basal area in 75.8% (91/120). The anatomical and functional success rates achieved were 96% and 93.6%, respectively. All five cases with anatomical failure showed complete cicatrization, and ICO movements were poor in all three cases of functional failure. Conclusion: The article attempts to standardize the postoperative

  3. Clinical Parameters following Multiple Oral Dose Administration of a Standardized Andrographis paniculata Capsule in Healthy Thai Subjects.

    Science.gov (United States)

    Suriyo, Tawit; Pholphana, Nanthanit; Ungtrakul, Teerapat; Rangkadilok, Nuchanart; Panomvana, Duangchit; Thiantanawat, Apinya; Pongpun, Wanwisa; Satayavivad, Jutamaad

    2017-06-01

    Andrographis paniculata has been widely used in Scandinavian and Asian countries for the treatment of the common cold, fever, and noninfectious diarrhea. The present study was carried out to investigate the physiological effects of short-term multiple-dose administration of a standardized A. paniculata capsule used for treatment of the common cold and uncomplicated upper respiratory tract infections, including blood pressure, electrocardiogram, blood chemistry, hematological profiles, urinalysis, and blood coagulation in healthy Thai subjects. Twenty healthy subjects (10 males and 10 females) received 4.2 g per day of a standardized A. paniculata crude powder orally as 12 capsules (4 capsules totaling 1.4 g of A. paniculata per dose, 3 times per day at 8 h intervals) for 3 consecutive days. The results showed that all of the measured clinical parameters were within normal ranges for a healthy person. However, modulation of some parameters was observed after the third day of treatment, for example, increases in white blood cell and absolute neutrophil counts in the blood, a reduction of plasma alkaline phosphatase, and an increase in urine pH. A rapid and transient reduction in blood pressure was observed 30 min after capsule administration, resulting in a significant reduction of mean systolic blood pressure. There were no serious adverse events observed in the subjects during the treatment period. In conclusion, this study suggests that multiple oral dosing of A. paniculata at the normal therapeutic dose for the common cold and uncomplicated upper respiratory tract infections modulates various clinical parameters within normal ranges for a healthy person. Georg Thieme Verlag KG Stuttgart · New York.

  4. Comparison of the Time Domain Windows Specified in the ISO 18431 Standards Used to Estimate Modal Parameters in Steel Plates

    Directory of Open Access Journals (Sweden)

    Jhonatan Camacho-Navarro

    2016-01-01

    The procedures used to estimate structural modal parameters such as natural frequencies, damping ratios, and mode shapes are generally based on frequency methods. However, methods of time-frequency analysis are highly sensitive to the parameters used to calculate the discrete Fourier transform: windowing, resolution, and preprocessing. Thus, the uncertainty of the modal parameters increases if proper parameter selection is not considered. In this work, the influence of three different time domain window functions (Hanning, flat-top, and rectangular) used to estimate modal parameters is discussed in the framework of the ISO 18431 standard. Experiments are conducted on an AISI 1020 steel plate, which is excited by means of a hammer element. The vibration response is acquired using acceleration records according to the ISO 7626-5 reference guides. The results are compared with a theoretical method, and the flat-top window is found to be the best function for experimental modal analysis.
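    The window sensitivity the abstract describes can be reproduced in a few lines: the amplitude read off a DFT peak depends strongly on the chosen window when the tone does not fall exactly on a frequency bin. This sketch uses a synthetic off-bin sine wave rather than the paper's plate measurements; the signal parameters are arbitrary, and the flat-top coefficients are the common 5-term definition:

```python
import numpy as np

fs, n = 1024, 1024
t = np.arange(n) / fs
a_true, f0 = 2.0, 100.3          # tone deliberately between DFT bins
x = a_true * np.sin(2 * np.pi * f0 * t)

k = np.arange(n)
rect = np.ones(n)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * k / (n - 1))
# 5-term flat-top window coefficients (common definition, e.g. in SciPy/Matlab)
c = [0.21557895, 0.41663158, 0.277263158, 0.083578947, 0.006947368]
flattop = sum((-1) ** i * c[i] * np.cos(2 * np.pi * i * k / (n - 1)) for i in range(5))

def amp_estimate(win):
    # peak of the windowed spectrum, corrected for the window's coherent gain
    spec = np.abs(np.fft.rfft(x * win))
    return 2 * spec.max() / win.sum()

for name, w in [("rectangular", rect), ("Hanning", hann), ("flat-top", flattop)]:
    err = abs(amp_estimate(w) - a_true) / a_true
    print(f"{name:12s} relative amplitude error: {err:.4f}")
```

The rectangular window suffers the largest scalloping loss, the Hanning window is intermediate, and the flat-top window recovers the amplitude almost exactly, which is consistent with the paper's conclusion that flat-top is best for amplitude-accurate modal estimation.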

  5. Assessment of histological response of paediatric bone sarcomas using FDG PET in comparison to morphological volume measurement and standardized MRI parameters

    Energy Technology Data Exchange (ETDEWEB)

    Denecke, Timm; Misch, Daniel; Steffen, Ingo G.; Plotkin, Michail; Stoever, Brigitte [Charite - Universitaetsmedizin Berlin, Klinik fuer Strahlenheilkunde, Campus Virchow-Klinikum, Berlin (Germany); Hundsdoerfer, Patrick; Henze, Guenter [Charite - Universitaetsmedizin Berlin, Klinik fuer Paediatrie m.S. Onkologie und Haematologie, Otto-Heubner-Zentrum, Campus Virchow-Klinikum, Berlin (Germany); Schoenberger, Stefan [Universitaetsklinikum der Heinrich-Heine-Universitaet Duesseldorf, Klinik fuer Kinder-Onkologie, -Haematologie und -Immunologie, Duesseldorf (Germany); Furth, Christian; Ruf, Juri [Otto-von-Guericke-Universitaet Magdeburg, Klinik fuer Radiologie und Nuklearmedizin, Universitaetsklinikum Magdeburg A.oe.R., Magdeburg (Germany); Hautzel, Hubertus [Universitaetsklinikum der Heinrich Heine Universitaet Duesseldorf, Nuklearmedizinische Klinik, Duesseldorf (Germany); Kluge, Regine [Universitaetsklinikum Leipzig A.oe.R., Klinik und Poliklinik fuer Nuklearmedizin, Leipzig (Germany); Bierbach, Uta [Universitaetsklinikum Leipzig A.oe.R., Abteilung fuer Kinder-Haematologie, -Onkologie und -Haemostaseologie, Leipzig (Germany); Otto, Sylke [Universitaetsklinikum Greifswald, Institut fuer Diagnostische Radiologie und Neuroradiologie, Greifswald (Germany); Beck, James F. [Universitaetsklinikum Greifswald, Abteilung fuer Paediatrische Haematologie und Onkologie, Greifswald (Germany); Franzius, Christiane [MR- und PET/CT-Zentrum, Bremen-Mitte (Germany); Universitaetsklinikum Muenster, Klinik und Poliklinik fuer Nuklearmedizin, Muenster (Germany); Amthauer, Holger [Charite - Universitaetsmedizin Berlin, Klinik fuer Strahlenheilkunde, Campus Virchow-Klinikum, Berlin (Germany); Otto-von-Guericke-Universitaet Magdeburg, Klinik fuer Radiologie und Nuklearmedizin, Universitaetsklinikum Magdeburg A.oe.R., Magdeburg (Germany)

    2010-10-15

    The objective of this study was to evaluate positron emission tomography (PET) using ¹⁸F-fluoro-2-deoxy-D-glucose (FDG) in comparison to volumetry and standardized magnetic resonance imaging (MRI) parameters for the assessment of histological response in paediatric bone sarcoma patients. FDG PET and local MRI were performed in 27 paediatric sarcoma patients [Ewing sarcoma family of tumours (EWS), n = 16; osteosarcoma (OS), n = 11] prior to and after neoadjuvant chemotherapy before local tumour resection. Several parameters for assessment of the response of the primary tumour to therapy by FDG PET and MRI were evaluated and compared with histopathological regression of the resected tumour as defined by Salzer-Kuntschik. FDG PET significantly discriminated responders from non-responders using the standardized uptake value (SUV) reduction and the absolute post-therapeutic SUV (SUV2) in the entire patient population (∇SUV, p = 0.005; SUV2, p = 0.011) as well as in the subgroup of OS patients (∇SUV, p = 0.009; SUV2, p = 0.028), but not in the EWS subgroup. The volume reduction measured by MRI/CT did not significantly discriminate responders from non-responders either in the entire population (p = 0.170) or in the two subgroups (EWS, p = 0.950; OS, p = 1.000). The other MRI parameters, alone or in combination, were unreliable and did not improve the results. Comparing diagnostic parameters of FDG PET and local MRI, metabolic imaging was clearly superior in the subgroup of OS patients, while similar results were observed in the EWS population. FDG PET appears to be a useful tool for non-invasive response assessment in OS patients and is superior to MRI. In EWS patients, however, neither FDG PET nor volumetry or standardized MRI criteria enabled a reliable response assessment after neoadjuvant treatment. (orig.)

  6. Standard model parameters determination and validity tests in Z⁰ hadronic disintegrations; Determination des parametres du modele standard et tests de sa validite dans les desintegrations hadroniques du Z⁰

    Energy Technology Data Exchange (ETDEWEB)

    Todorov, T

    1993-05-01

    This thesis describes the determination of the electroweak parameters from measurements of the total hadronic cross-section by the DELPHI experiment at LEP-I. The analysed data were taken in 1991 and 1992; a previous analysis of the data taken in 1990 is included in the final fits. The first part of the thesis describes the interest of measuring the Z⁰ resonance parameters in the framework of the Standard Model, as well as their implications for alternative models. The Standard Model predictions are described in some detail, and their precision is estimated. Then follows a brief description of the LEP collider, the measurement of the collision energy, and the experimental setup. A chapter is devoted to the luminosity measurement, essential for the determination of total cross-sections. The measurement of the hadronic cross-section (event selection, study of backgrounds, study of sources of systematic uncertainty) is described in detail in the next chapter. Then follows a description of the method of extraction of the resonance parameters and a discussion of the uncertainties in their determination. The values obtained are interpreted in the framework of the Standard Model, as well as in the framework of some more general theories. Finally, the event generator for hadron production in two-photon collisions is described in the appendix. (author). 69 refs., 51 figs., 9 tabs., 1 ann.

  7. On Drift Parameter Estimation in Models with Fractional Brownian Motion by Discrete Observations

    Directory of Open Access Journals (Sweden)

    Yuliya Mishura

    2014-06-01

    We study the problem of estimating an unknown drift parameter in a stochastic differential equation driven by fractional Brownian motion. We represent the likelihood ratio as a function of the observable process. The form of this representation is in general rather complicated. However, in the simplest case it can be simplified, and we can discretize it to establish the a.s. convergence of the discretized version of the maximum likelihood estimator to the true value of the parameter. We also investigate a non-standard estimator of the drift parameter, showing further its strong consistency.

  8. Starworld: Preparing Accountants for the Future: A Case-Based Approach to Teach International Financial Reporting Standards Using ERP Software

    Science.gov (United States)

    Ragan, Joseph M.; Savino, Christopher J.; Parashac, Paul; Hosler, Jonathan C.

    2010-01-01

    International Financial Reporting Standards now constitute an important part of educating young professional accountants. This paper looks at a case-based process to teach International Financial Reporting Standards using integrated Enterprise Resource Planning software. The case contained within the paper can be used within a variety of courses…

  9. Asymptotic scaling laws for precision of parameter estimates in dynamical systems

    International Nuclear Information System (INIS)

    Horbelt, W.; Timmer, J.

    2003-01-01

    When parameters are estimated from noisy data, the uncertainty of the estimates in terms of their standard deviation typically scales like the inverse square root of the number of data points. In the case of deterministic dynamical systems with added observation noise, superior scaling laws can be achieved. This is demonstrated numerically for the logistic map, the van der Pol oscillator and the Lorenz system, where exponential scaling laws and power laws have been found, depending on the number of degrees of freedom. For some special cases, analytical expressions are derived.
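    The baseline inverse-square-root scaling mentioned above is easy to verify numerically. The following sketch uses an arbitrary mean-estimation problem (not one of the paper's dynamical systems) to show the standard deviation of an estimate dropping by half whenever the number of data points is quadrupled:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma = 1.5, 0.3  # illustrative parameter and noise level

def estimator_std(n, trials=2000):
    # standard deviation of the mean-estimator over many noisy realizations
    estimates = rng.normal(mu_true, sigma, size=(trials, n)).mean(axis=1)
    return estimates.std()

for n in [100, 400, 1600]:
    # expected: sigma / sqrt(n), i.e. halving when n is quadrupled
    print(f"n = {n:5d}  std of estimate ≈ {estimator_std(n):.4f}")
```

Chaotic systems can beat this 1/√N rate because a trajectory's sensitivity to the parameter grows with the observation window, which is what the exponential scaling laws in the abstract refer to.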

  10. Premise for Standardized Sepsis Models.

    Science.gov (United States)

    Remick, Daniel G; Ayala, Alfred; Chaudry, Irshad; Coopersmith, Craig M; Deutschman, Clifford; Hellman, Judith; Moldawer, Lyle; Osuchowski, Marcin

    2018-06-05

    Sepsis morbidity and mortality exact a toll on patients and contribute significantly to healthcare costs. Preclinical models of sepsis have been used to study disease pathogenesis and test new therapies, but divergent outcomes have been observed with the same treatment even when using the same sepsis model. Other disorders such as diabetes, cancer, malaria, obesity and cardiovascular diseases have used standardized preclinical models that allow laboratories to compare results. Standardized models accelerate the pace of research, and such models have been used to test new therapies or changes in treatment guidelines. The National Institutes of Health (NIH) has mandated that investigators increase data reproducibility and the rigor of scientific experiments, and has also issued research funding announcements about the development and refinement of standardized models. Our premise is that refinement and standardization of preclinical sepsis models may accelerate the development and testing of potential therapeutics for human sepsis, as has been the case with preclinical models for other disorders. As a first step towards creating standardized models, we suggest 1) standardizing the technical aspects of the widely used cecal ligation and puncture model and 2) creating a list of appropriate organ injury and immune dysfunction parameters. Standardized sepsis models could enhance reproducibility, allow comparison of results between laboratories and accelerate our understanding of the pathogenesis of sepsis.

  11. Development of standard reference samples for diffractometry

    International Nuclear Information System (INIS)

    Galvao, Antonio de Sant'Ana

    2011-01-01

    In this work, samples of standard reference materials for diffractometry of polycrystals were developed. High-purity materials were submitted to mechanical and thermal treatments in order to attain the properties required of high-quality standard reference materials for powder diffraction, comparable to the internationally recognized materials produced by the USA National Institute of Standards and Technology (NIST), but at lower cost. The characterization of the standard materials was performed by measurements in conventional X-ray diffractometers, high-resolution neutron diffraction and high-resolution synchrotron diffraction. The lattice parameters were calculated by extrapolating the values obtained from each X-ray reflection against cos²θ by the least-squares method. The adjustments were compared to the values obtained by the Rietveld method, using the program GSAS. The materials thus obtained were α-alumina, yttrium oxide, silicon, cerium oxide, lanthanum hexaboride and lithium fluoride. The standard reference materials produced are of quality similar to or, in some cases, superior to the standard reference materials produced and commercialized by NIST. (author)
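    The extrapolation described above (fitting the apparent lattice parameter from each reflection against cos²θ and reading off the intercept, where the systematic error vanishes) can be sketched as follows; the reflection angles, lattice value and error slope are illustrative, not measured data:

```python
import numpy as np

# synthetic apparent lattice parameters carrying a systematic error
# proportional to cos^2(theta), as in classic extrapolation methods
a0_true, slope = 4.0500, 0.0030                      # angstroms (made up)
theta_deg = np.array([20.0, 30.0, 45.0, 60.0, 75.0, 85.0])
x = np.cos(np.radians(theta_deg)) ** 2
a_apparent = a0_true + slope * x

# least-squares line a = a0 + m*x; the intercept (x -> 0, i.e. theta -> 90°)
# is the extrapolated, error-free lattice parameter
m, a0_fit = np.polyfit(x, a_apparent, 1)
print(f"extrapolated lattice parameter: {a0_fit:.4f} Å")
```

High-angle reflections (small cos²θ) dominate the extrapolation, which is why reference materials with strong back-reflection lines are preferred for this procedure.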

  12. ASTM F1717 standard for the preclinical evaluation of posterior spinal fixators: can we improve it?

    Science.gov (United States)

    La Barbera, Luigi; Galbusera, Fabio; Villa, Tomaso; Costa, Francesco; Wilke, Hans-Joachim

    2014-10-01

    Preclinical evaluation of spinal implants is a necessary step to ensure their reliability and safety before implantation. The American Society for Testing and Materials reapproved the F1717 standard for the assessment of the mechanical properties of posterior spinal fixators, which simulates a vertebrectomy model and recommends mimicking vertebral bodies using polyethylene blocks. This set-up should represent clinical use, but available data in the literature are few. Anatomical parameters depending on the spinal level were compared to published data or to measurements on biplanar stereoradiography in 13 patients. Other mechanical variables describing implant design were considered, and all parameters were investigated using a numerical parametric finite element model. Stress values were calculated by considering either the combination of the average values for each parameter or their worst-case combination depending on the spinal level. The standard set-up represents quite well the anatomy of an instrumented average thoracolumbar segment. The stress on the pedicle screw is significantly influenced by the lever arm of the applied load, the unsupported screw length, the position of the centre of rotation of the functional spinal unit and the pedicle inclination with respect to the sagittal plane. The worst-case combination of parameters demonstrates that devices implanted below T5 could potentially undergo higher stresses than those described in the standard suggestions (maximum increase of 22.2% at L1). We propose to revise F1717 in order to describe the anatomical worst-case condition we found at the L1 level: this will guarantee higher safety of the implant for a wider population of patients. © IMechE 2014.

  13. Use of standardized multidimensional evaluation tools and the emergence of the case manager's professional identity in France.

    Science.gov (United States)

    Nugue, Mathilde; De Stampa, Matthieu; Couturier, Yves; Somme, Dominique

    2012-01-01

    In France, the national public health plan proposes a group of innovations including the initiation of case management for older adults in complex situations, particularly those with cognitive disorders. In this context, public authorities asked case managers to use a standardized multidimensional evaluation tool. The results of a qualitative study on the pertinence of such a tool relative to the emergence of this new professional field are described. Early use of an evaluation tool seems to be linked to the emergence of a new professional identity for recently recruited case managers. Factors determining the strength of this link are training tool standardization, computerization, and local structure's involvement. Our results contribute to identifying one of the ways by which professional identity can be changed to become a case manager.

  14. THE ADAPTATION VS. STANDARDIZATION DILEMMA: THE CASE OF AN AMERICAN COMPANY IN BRAZIL

    Directory of Open Access Journals (Sweden)

    Thelma Valéria Rocha

    2011-01-01

    This study analyzes the adaptation versus standardization dilemma in International Marketing in subsidiaries of multinational corporations. It highlights the importance of GMS – global marketing strategies – for the ability to innovate in subsidiaries in emerging economies such as Brazil. The objective is to find out how the level of autonomy displayed by subsidiaries influences the adaptation vs. standardization dilemma and, consequently, the marketing-mix program. The methodology followed is qualitative research using a case-study approach in an American multinational from the food sector. In this case, we found that the firm's brands are very important to its success overseas, which supports the view that brand policies should be defined carefully at both levels: subsidiaries and headquarters. This brand policy directly influences the autonomy to innovate in marketing at the subsidiary level. This study is useful for managers at subsidiaries who need to understand the importance of global marketing strategies, and also for managers at headquarters who need to verify in which circumstances autonomy pays off.

  15. Gold standards and expert panels: a pulmonary nodule case study with challenges and solutions

    Science.gov (United States)

    Miller, Dave P.; O'Shaughnessy, Kathryn F.; Wood, Susan A.; Castellino, Ronald A.

    2004-05-01

    Comparative evaluations of reader performance using different modalities, e.g. CT with computer-aided detection (CAD) vs. CT without CAD, generally require a "truth" definition based on a gold standard. There are many situations in which a true invariant gold standard is impractical or impossible to obtain. For instance, small pulmonary nodules are generally not assessed by biopsy or resection. In such cases, it is common to use a unanimous consensus or majority agreement from an expert panel as a reference standard for actionability in lieu of the unknown gold standard for disease. Nonetheless, there are three major concerns about expert panel reference standards: (1) actionability is not synonymous with disease, (2) it may be possible to obtain different conclusions about which modality is better using different rules (e.g. majority vs. unanimous consensus), and (3) the variability associated with the panelists is not formally captured in the p-values or confidence intervals that are generally produced for estimating the extent to which one modality is superior to the other. A multi-reader-multi-case (MRMC) receiver operating characteristic (ROC) study was performed using 90 cases, 15 readers, and a reference truth based on 3 experienced panelists. The primary analyses were conducted using a reference truth of unanimous consensus regarding actionability (3 out of 3 panelists). To assess the three concerns noted above: (1) additional data from the original radiology reports were compared to the panel, (2) the complete analysis was repeated using different definitions of truth, and (3) bootstrap analyses were conducted in which new truth panels were constructed by picking 1, 2, or 3 panelists at random. The definition of the reference truth affected the results for each modality (CT with CAD and CT without CAD) considered by itself, but the effects were similar, so the primary analysis comparing the modalities was robust to the choice of the reference truth.
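
    Concern (3) above can be illustrated with a small resampling sketch: resample panelist columns with replacement to form alternative truth panels and check how often the unanimous-consensus truth label changes. The votes and helper names below are hypothetical illustrations, not the study's data or analysis code.

```python
import random

# Hypothetical panel votes: for each of 10 cases, 3 panelists vote
# 1 (actionable) or 0 (not actionable).
votes = [
    [1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 0, 0], [1, 1, 1],
    [0, 0, 1], [1, 1, 1], [0, 0, 0], [1, 1, 0], [1, 1, 1],
]

def truth_labels(panel_votes, rule):
    """Derive a reference truth from panel votes.
    rule='unanimous' requires all panelists to call a case actionable;
    rule='majority' requires more than half."""
    labels = []
    for case in panel_votes:
        if rule == "unanimous":
            labels.append(int(all(case)))
        else:  # majority
            labels.append(int(sum(case) > len(case) / 2))
    return labels

def bootstrap_truths(panel_votes, n_resamples, panel_size, seed=0):
    """Build alternative truth panels by resampling panelist columns
    with replacement, in the spirit of the paper's third analysis."""
    rng = random.Random(seed)
    n_panelists = len(panel_votes[0])
    truths = []
    for _ in range(n_resamples):
        picked = [rng.randrange(n_panelists) for _ in range(panel_size)]
        resampled = [[case[j] for j in picked] for case in panel_votes]
        truths.append(truth_labels(resampled, "unanimous"))
    return truths

primary = truth_labels(votes, "unanimous")
majority = truth_labels(votes, "majority")
boots = bootstrap_truths(votes, n_resamples=200, panel_size=3)
# Fraction of cases whose truth label flips under at least one resampled panel
unstable = sum(
    any(t[i] != primary[i] for t in boots) for i in range(len(votes))
)
```

    Only cases with mixed votes can flip under resampling, which is exactly the panel variability that fixed-truth p-values ignore.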

  16. Histopathological Parameters predicting Occult Nodal Metastases in Tongue Carcinoma Cases: An Indian Perspective.

    Science.gov (United States)

    Jacob, Tina Elizabeth; Malathi, N; Rajan, Sharada T; Augustine, Dominic; Manish, N; Patil, Shankargouda

    2016-01-01

    It is a well-established fact that in squamous cell carcinoma cases, the presence of lymph node metastases decreases the 5-year survival rate by 50% and also causes recurrence of the primary tumor with development of distant metastases. To date, the predictive factors for occult cervical lymph node metastases in cases of tongue squamous cell carcinoma remain inconclusive. Therefore, it is imperative to identify patients who are at the greatest risk for occult cervical metastases. This study was thus performed with the aim of identifying various histopathologic parameters of the primary tumor that predict occult nodal metastases. The clinicopathologic features of 56 cases of lateral tongue squamous cell carcinoma staged cT1N0M0/cT2N0M0 and without prior radiotherapy or chemotherapy were considered. The surgical excision of the primary tumor was followed by elective neck dissection. The glossectomy specimen along with the neck nodes was fixed in formalin and 5 μm thick sections were obtained. The hematoxylin & eosin stained sections were then subjected to microscopic examination. The primary tumor characteristics that were analyzed included tumor grade, invading front, depth of tumor, lymphovascular invasion, perineural invasion and inflammatory response. The nodes were examined for possible metastases using hematoxylin & eosin staining followed by cytokeratin immunohistochemistry. A total of 12 cases were found with positive occult nodal metastases. On univariate analysis, the histopathologic parameters that were found to be statistically significant were lymphovascular invasion (p = 0.004) and perineural invasion (p = 0.003), along with a cut-off depth of infiltration of more than 5 mm (p = 0.01). Histopathologic assessment of the primary tumor specimen therefore continues to provide information that is central to guiding clinical management, particularly in cases of occult nodal metastases. Clinical significance The study highlights the importance of

  17. Measuring the Michel parameter ξ''

    International Nuclear Information System (INIS)

    Knowles, P.; Deutsch, J.; Egger, J.; Fetscher, W.; Foroughi, F.; Govaerts, J.; Hadri, M.; Kirch, K.; Kistryn, S.; Lang, J.; Morelle, X.; Naviliat, O.; Ninane, A.; Prieels, R.; Severijns, N.; Simons, L.; Sromicki, J.; Vandormael, S.; Hove, P. van

    1999-01-01

    Unlike the majority of Michel parameters, which are consistent with the Standard Model V-A interaction, the experimental value of ξ'' (= 0.65 ± 0.36) [1] is poorly known. Our experiment will measure the longitudinal polarization, P_L, of positrons emitted from the decay of polarized muons. The value of P_L, equal to unity in the Standard Model, will decrease for high-energy positrons emitted antiparallel to the muon spin if the combination of Michel parameters ξ''/(ξξ') − 1 deviates from the Standard Model value of zero.

  18. Effect of pesticide fate parameters and their uncertainty on the selection of 'worst-case' scenarios of pesticide leaching to groundwater.

    Science.gov (United States)

    Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry

    2011-03-01

    For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.
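
    The scenario-selection logic above — a percentile of a spatial concentration distribution, evaluated under uncertain pesticide fate parameters — can be sketched with a deliberately simplified leaching index. The function, parameter values and distributions below are illustrative assumptions only, not the registration models (e.g. FOCUS/GeoPEARL) used in practice.

```python
import math
import random

rng = random.Random(42)

# Toy leaching index: higher half-life (dt50, days) and lower sorption
# (koc, L/kg) mean more leaching for a given recharge (mm/year).
def leaching_index(dt50, koc, recharge):
    return recharge * dt50 / (koc + 1.0)

# Hypothetical region: combinations of soil sorption and climate recharge.
soils = [(koc, recharge) for koc in (20, 50, 100, 200)
         for recharge in (150, 250, 350)]

def spatial_percentile(dt50, q=0.9):
    """Empirical q-th percentile of the spatial concentration distribution:
    the worst-case scenario is the combination near this percentile."""
    vals = sorted(leaching_index(dt50, koc, rech) for koc, rech in soils)
    idx = min(len(vals) - 1, int(q * len(vals)))
    return vals[idx]

# Fixed (best-estimate) fate parameter vs an uncertain parameter:
best = spatial_percentile(dt50=30.0)
# sample dt50 from a lognormal around 30 days and average the percentile
samples = [spatial_percentile(dt50=math.exp(rng.gauss(math.log(30.0), 0.4)))
           for _ in range(2000)]
mean_uncertain = sum(samples) / len(samples)
```

    With multiplicative (lognormal) parameter uncertainty the mean of the sampled percentiles exceeds the best-estimate percentile, consistent with the paper's observation that fate-parameter uncertainty raises the higher percentiles of the spatial distribution.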

  19. Randomization and Data-Analysis Items in Quality Standards for Single-Case Experimental Studies

    Science.gov (United States)

    Heyvaert, Mieke; Wendt, Oliver; Van den Noortgate, Wim; Onghena, Patrick

    2015-01-01

    Reporting standards and critical appraisal tools serve as beacons for researchers, reviewers, and research consumers. Parallel to existing guidelines for researchers to report and evaluate group-comparison studies, single-case experimental (SCE) researchers are in need of guidelines for reporting and evaluating SCE studies. A systematic search was…

  20. Study of inflationary generalized cosmic Chaplygin gas for standard and tachyon scalar fields

    Energy Technology Data Exchange (ETDEWEB)

    Sharif, M.; Saleem, Rabia [University of the Punjab, Department of Mathematics, Lahore (Pakistan)

    2014-07-15

    We consider an inflationary universe model in the context of the generalized cosmic Chaplygin gas by taking the matter field as standard and tachyon scalar fields. We evaluate the corresponding scalar fields and scalar potentials during the intermediate and logamediate inflationary regimes by modifying the first Friedmann equation. In each case, we evaluate the number of e-folds, scalar as well as tensor power spectra, scalar spectral index, and the important observational parameter, the tensor-scalar ratio in terms of inflation. The graphical behavior of this parameter shows that the model remains incompatible with WMAP7 and Planck observational data in each case. (orig.)

  1. Study of inflationary generalized cosmic Chaplygin gas for standard and tachyon scalar fields

    International Nuclear Information System (INIS)

    Sharif, M.; Saleem, Rabia

    2014-01-01

    We consider an inflationary universe model in the context of the generalized cosmic Chaplygin gas by taking the matter field as standard and tachyon scalar fields. We evaluate the corresponding scalar fields and scalar potentials during the intermediate and logamediate inflationary regimes by modifying the first Friedmann equation. In each case, we evaluate the number of e-folds, scalar as well as tensor power spectra, scalar spectral index, and the important observational parameter, the tensor-scalar ratio in terms of inflation. The graphical behavior of this parameter shows that the model remains incompatible with WMAP7 and Planck observational data in each case. (orig.)

  2. Consideration of rainwater quality parameters for drinking purposes: A case study in rural Vietnam.

    Science.gov (United States)

    Lee, Minju; Kim, Mikyeong; Kim, Yonghwan; Han, Mooyoung

    2017-09-15

    Rainwater, which is used for drinking purposes near Hanoi, Vietnam, was analysed for water quality based on 1.5 years of monitoring data. In total, 23 samples were collected from different points within two rainwater harvesting systems (RWHSs). Most parameters met the standard, except micro-organisms. Coliform and Escherichia coli (E. coli) were detected when the rainwater was not treated with ultraviolet (UV) light; however, analysis of rainwater after UV sterilisation showed no trace of micro-organisms. The RWHSs appear to provide drinking water of relatively good quality compared with surface water and groundwater. The superior quality of the rainwater suggests the need for new drinking rainwater standards, because applying all of the drinking water quality standards to rainwater is highly inefficient. The traditionally implemented standards could cause more difficulties for developing countries using decentralized RWHSs as a source of drinking water, particularly in areas not well supplied with testing equipment, because such countries must bear the expense and time of these measurements. This paper proposes a rainwater quality guideline, which could serve as a safe and cost-effective alternative for providing access to safe drinking water. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Nuclear microprobe analysis of the standard reference materials

    International Nuclear Information System (INIS)

    Jaksic, M.; Fazinic, S.; Bogdanovic, I.; Tadic, T.

    2002-01-01

    Most of the presently existing Standard Reference Materials (SRMs) for nuclear analytical methods are certified for an analyzed mass of the order of a few hundred mg. The typical mass of a sample analyzed by PIXE or XRF methods is very often below 1 mg. With the development of focused proton or X-ray beams, the masses that can typically be analyzed go down to the μg or even ng level. It is difficult to make biological or environmental SRMs with the desired homogeneity at such a small scale. However, the use of fundamental parameter quantitative evaluation procedures (an absolute method) minimizes the need for SRMs. In the PIXE and micro-PIXE setup at our Institute, the fundamental parameter approach is used. For exact calibration of the quantitative analysis procedure just one standard sample is needed. In our case, glass standards which showed homogeneity down to the micron scale were used. Of course, it is desirable to use SRMs for quality assurance, and therefore the need for homogeneous materials can be justified even for the micro-PIXE method. In this presentation, a brief overview of the PIXE setup calibration is given, along with some recent results of tests of several SRMs

  4. Realistic sampling of anisotropic correlogram parameters for conditional simulation of daily rainfields

    Science.gov (United States)

    Gyasi-Agyei, Yeboah

    2018-01-01

    This paper establishes a link between the spatial structure of radar rainfall, which describes the spatial structure more robustly, and gauge rainfall, for improved daily rainfield simulation conditioned on limited gauge data in regions with or without radar records. A two-dimensional anisotropic exponential function, with parameters for the major and minor axis lengths and the direction, is used to describe the correlogram (spatial structure) of daily rainfall in the Gaussian domain. The link is a copula-based joint distribution of the radar-derived correlogram parameters that uses the gauge-derived correlogram parameters and maximum daily temperature as covariates of the Box-Cox power exponential margins and Gumbel copula. While the gauge-derived, radar-derived and copula-derived correlogram parameters reproduced the mean estimates similarly under leave-one-out cross-validation of ordinary kriging, the gauge-derived parameters yielded a higher standard deviation (SD) of the Gaussian quantile, which reflects uncertainty, in over 90% of cases. However, the distributions of the SD generated by the radar-derived and the copula-derived parameters could not be distinguished. For the validation case, the percentage of cases with higher SD from the gauge-derived parameter sets decreased to 81.2% and 86.6% for the non-calibration and the calibration periods, respectively. It has been observed that a 1% reduction in the Gaussian quantile SD can cause over a 39% reduction in the SD of the median rainfall estimate, the actual reduction being dependent on the distribution of rainfall of the day. Hence the main advantage of using the most correct radar correlogram parameters is to reduce the uncertainty associated with conditional simulations that rely on the SD through kriging.
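
    A two-dimensional anisotropic exponential correlation function of the kind described above can be sketched as follows: rotate the separation vector into the principal-axis frame, scale by the major and minor correlation lengths, and apply exponential decay. The function and parameter names are illustrative assumptions, not the paper's notation.

```python
import math

def anisotropic_exp_correlogram(dx, dy, major, minor, direction_deg):
    """2-D anisotropic exponential correlation for a separation (dx, dy):
    `major` and `minor` are correlation lengths along the principal axes,
    `direction_deg` is the orientation of the major axis."""
    theta = math.radians(direction_deg)
    # rotate the separation vector into the principal-axis frame
    u = dx * math.cos(theta) + dy * math.sin(theta)   # along major axis
    v = -dx * math.sin(theta) + dy * math.cos(theta)  # along minor axis
    # scaled (elliptical) separation distance
    h = math.hypot(u / major, v / minor)
    return math.exp(-h)

# at zero separation the correlation is 1
r0 = anisotropic_exp_correlogram(0.0, 0.0, major=80.0, minor=30.0,
                                 direction_deg=30.0)
# correlation decays more slowly along the major axis than across it
r_along = anisotropic_exp_correlogram(40.0, 0.0, 80.0, 30.0, 0.0)
r_across = anisotropic_exp_correlogram(0.0, 40.0, 80.0, 30.0, 0.0)
```

    The same separation distance gives a higher correlation along the major axis, which is what makes the direction parameter worth sampling jointly with the axis lengths.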

  5. Existence and Multiplicity Results for Nonlinear Differential Equations Depending on a Parameter in Semipositone Case

    Directory of Open Access Journals (Sweden)

    Hailong Zhu

    2012-01-01

    Full Text Available The existence and multiplicity of solutions for second-order differential equations with a parameter are discussed in this paper. We are mainly concerned with the semipositone case. The analysis relies on the nonlinear alternative principle of Leray-Schauder and Krasnosel'skii's fixed point theorem in cones.

  6. Teaching Undergraduate Accounting Majors How to Interpret the Accounting Standards Codification: An Alternative to Research Cases

    Science.gov (United States)

    Toerner, Michael C.; Swindle, C. Bruce; Burckel, Daryl V.

    2014-01-01

    Professional accountants regularly search the FASB'S Accounting Standards Codification to find answers to financial accounting questions. Accounting educators know this and frequently use research cases in an attempt to help students begin developing this ability. But many students struggle with these cases because they have not been taught how to…

  7. Feasibility and efficacy of an isocaloric high-protein vs. standard diet on insulin requirement, body weight and metabolic parameters in patients with type 2 diabetes on insulin therapy

    DEFF Research Database (Denmark)

    Luger, M; Holstein, B; Schindler, K

    2013-01-01

    To determine the feasibility and efficacy of a high-protein diet compared with a standard diet aiming for weight maintenance in insulin treated type-2 diabetic patients on insulin requirement, body weight and metabolic parameters over 12 weeks.

  8. Immediate extubation versus standard postoperative ventilation: Our experience in on pump open heart surgery

    Directory of Open Access Journals (Sweden)

    Srikanta Gangopadhyay

    2010-01-01

    Full Text Available Elective postoperative ventilation in patients undergoing "on pump" open heart surgery has been standard practice. Ultra fast-track extubation in the operating room is now an accepted technique for "off pump" coronary artery bypass grafting. We tried to incorporate these experiences into on-pump open heart surgery and to compare the haemodynamic and respiratory parameters in the immediate postoperative period with those of patients on standard postoperative ventilation for 8-12 hours. After the ethical committee's approval and informed consent were obtained, 72 patients, between 28 and 45 years of age, undergoing on-pump open heart surgery were selected for our study. We followed the same standard anaesthetic, cardiopulmonary bypass (CPB) and cardioplegic protocol. Thirty-six patients (Group E) were randomly allocated for immediate extubation following the operation, after fulfillment of standard extubation criteria. Those who failed to meet these criteria were not extubated and were excluded from the study. The remaining 36 patients (Group V) were electively ventilated and extubated after 8-12 hours. Standard monitoring for on-pump open heart surgery, including bispectral index, was done. The demographic data, surgical procedures, preoperative parameters, aortic cross clamp and cardiopulmonary bypass times were comparable in both groups. Extubation was possible in more than 88% of cases (n=32 out of 36) in Group E, and none required reintubation for respiratory insufficiency. Respiratory and haemodynamic parameters and postoperative complications were comparable in both groups in the postoperative period. Therefore, we can safely conclude that immediate extubation in the operating room after on-pump open heart surgery is an acceptable alternative method to avoid postoperative ventilation and its related complications in selected patients.

  9. Biofuel sustainability standards and public policy: A case study of Swedish ethanol imports from Brazil

    DEFF Research Database (Denmark)

    Bolwig, Simon; Gibbon, Peter

    sustainability standards for those fuels. Central to these standards are criteria addressing the direct, and sometimes also indirect, greenhouse gas emissions resulting from the production, transport and use of the biofuels. This case study examines the first scheme applied to a traded biofuel, the Verified...... Sustainable Ethanol Initiative (VSEI), a private initiative of the Swedish fuel-ethanol supplier, SEKAB. VSEI went into operation in August 2008 to verify that the ethanol it was importing from Brazil met its own minimum standards for "field-to-wheel" (life-cycle) greenhouse-gas emission standards...... is that it reduces consumer doubts about their product, and reduces competition from producers not participating in the Initiative; for SEKAB it increases the company's credibility in various private and public forums working on sustainability standards for biofuels, and gives it a first-mover advantage once......

  10. Treatment simulation approaches for the estimation of the distributions of treatment quality parameters generated by geometrical uncertainties

    International Nuclear Information System (INIS)

    Baum, C; Alber, M; Birkner, M; Nuesslin, F

    2004-01-01

    Geometric uncertainties arise during treatment planning and treatment and mean that dose-dependent parameters such as EUD are random variables with a patient specific probability distribution. Treatment planning with highly conformal treatment techniques such as intensity modulated radiation therapy requires new evaluation tools which allow us to estimate this influence of geometrical uncertainties on the probable treatment dose for a planned dose distribution. Monte Carlo simulations of treatment courses with recalculation of the dose according to the daily geometric errors are a gold standard for such an evaluation. Distribution histograms which show the relative frequency of a treatment quality parameter in the treatment simulations can be used to evaluate the potential risks and chances of a planned dose distribution. As treatment simulations with dose recalculation are very time consuming for sufficient statistical accuracy, it is proposed to do treatment simulations in the dose parameter space where the result is mainly determined by the systematic and random component of the geometrical uncertainties. Comparison of the parameter space simulation method with the gold standard for prostate cases and a head and neck case shows good agreement as long as the number of fractions is high enough and the influence of tissue inhomogeneities and surface curvature on the dose is small
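
    The gold-standard evaluation described above — Monte Carlo simulation of treatment courses with daily dose recalculation, yielding distribution histograms of a treatment quality parameter — can be sketched in miniature. The 1-D dose profile, error magnitudes and fraction numbers below are illustrative assumptions only, not the paper's setup.

```python
import random

def dose_profile(x):
    """Toy 1-D planned dose: flat 2 Gy inside a 6 cm field, zero outside
    (a stand-in for the planned dose distribution)."""
    return 2.0 if -3.0 <= x <= 3.0 else 0.0

def simulate_course(n_fractions, sigma_systematic, sigma_random, rng):
    """One simulated treatment course: a systematic setup error drawn once
    per course plus an independent random error per fraction, with the dose
    'recalculated' at the shifted point of interest each day."""
    systematic = rng.gauss(0.0, sigma_systematic)
    total = 0.0
    for _ in range(n_fractions):
        shift = systematic + rng.gauss(0.0, sigma_random)
        total += dose_profile(2.5 + shift)  # a point near the field edge
    return total

rng = random.Random(7)
courses = [simulate_course(30, 0.3, 0.3, rng) for _ in range(1000)]
planned = 30 * dose_profile(2.5)  # 60 Gy if geometry were perfect
# distribution histogram material: accumulated point dose over many courses
underdosed = sum(c < 0.95 * planned for c in courses) / len(courses)
```

    The fraction of simulated courses falling below 95% of the planned dose is one example of the risk information such distribution histograms provide for a planned dose distribution.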

  11. GENERIC QUALITY STANDARDS VS. SPECIFIC QUALITY STANDARDS: THE CASE OF HIGHER EDUCATION

    Directory of Open Access Journals (Sweden)

    Laila El Abbadi

    2011-06-01

    Full Text Available Quality as a new requirement for the field of higher education leads institutions to seek to satisfy generic or specific quality standards imposed directly or indirectly by their customers. The aim of this study is to compare ISO9001, as a generic quality standard, with the Code of Practice of the Quality Assurance Agency for Higher Education (QAA), as a specific quality standard. A correlation matrix is drawn and correlation rates are calculated to show the similarities and differences between them. This paper shows, first, that ISO9001 and the QAA Code of Practice are compatible. Second, implementing a quality management system in accordance with ISO9001 requirements can constitute an adequate framework for the application of the QAA Code of Practice requirements. Third, to bring the ISO9001 requirements closer to a specific quality standard for the field of higher education, it is recommended to supplement them with requirements specific to higher education.

  12. MISTRA facility for containment lumped parameter and CFD codes validation. Example of the International Standard Problem ISP47

    International Nuclear Information System (INIS)

    Tkatschenko, I.; Studer, E.; Paillere, H.

    2005-01-01

    During a severe accident in a Pressurized Water Reactor (PWR), the formation of a combustible gas mixture in the complex geometry of the reactor depends on the understanding of hydrogen production, the complex 3D thermal-hydraulic flow due to gas/steam injection, natural convection, heat transfer by condensation on walls, and the effect of mitigation devices. Numerical simulation of such flows may be performed either by Lumped Parameter (LP) or by Computational Fluid Dynamics (CFD) codes. The advantages and drawbacks of LP and CFD codes are well known. LP codes are mainly developed for full-size containment analysis but they need improvements, especially since they are not able to accurately predict the local gas mixing within the containment. CFD codes require a process of validation on well-instrumented experimental data before they can be used with a high degree of confidence. The MISTRA coupled-effect test facility has been built at CEA to fulfil this validation objective, with numerous measurement points in the gaseous volume - temperature, gas concentration, velocity and turbulence - and with well-controlled boundary conditions. As an illustration of both the experimental and simulation aspects of this topic, a recent example of the use of MISTRA test data is presented for the case of the International Standard Problem ISP47. The proposed experimental work in the MISTRA facility provides essential data to fill the gaps in the modelling/validation of computational tools. (author)

  13. Determination of reference intervals and comparison of venous blood gas parameters using standard and non-standard collection methods in 24 cats.

    Science.gov (United States)

    Bachmann, Karin; Kutter, Annette Pn; Schefer, Rahel Jud; Marly-Voquer, Charlotte; Sigrist, Nadja

    2017-08-01

    Objectives The aim of this study was to determine in-house reference intervals (RIs) for venous blood analysis with the RAPIDPoint 500 blood gas analyser using blood gas syringes (BGSs) and to determine whether immediate analysis of venous blood collected into lithium heparin (LH) tubes can replace anaerobic blood sampling into BGSs. Methods Venous blood was collected from 24 healthy cats and directly transferred into a BGS and an LH tube. The BGS was immediately analysed on the RAPIDPoint 500, followed by the LH tube. The BGSs and LH tubes were compared using the paired t-test or Wilcoxon matched-pairs signed-rank test, Bland-Altman and Passing-Bablok analysis. To assess clinical relevance, the bias or percentage bias between BGSs and LH tubes was compared with the allowable total error (TEa) recommended for the respective parameter. Results Based on the values obtained from the BGSs, RIs were calculated for the evaluated parameters, including blood gases, electrolytes, glucose and lactate. Values derived from LH tubes showed no significant difference for standard bicarbonate, whole blood base excess, haematocrit, total haemoglobin, sodium, potassium, chloride, glucose and lactate, while pH, partial pressure of carbon dioxide and oxygen, actual bicarbonate, extracellular base excess, ionised calcium and anion gap were significantly different to the samples collected in BGSs. Assessment of the unaffected parameters, including glucose and lactate, can therefore be made based on blood collected in LH tubes and analysed within 5 mins. For pH, partial pressure of carbon dioxide and oxygen, extracellular base excess, anion gap and ionised calcium, clinically relevant alterations have to be considered if analysed in LH tubes.
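
    Two of the statistical tools named above — a parametric reference interval and Bland-Altman agreement analysis — can be sketched as follows. The paired lactate values and the mean ± 1.96 SD interval are illustrative assumptions; the study's own RI calculation method is not specified here.

```python
import statistics

def reference_interval(values):
    """Parametric 95% reference interval (mean ± 1.96 SD); a common simple
    approach, though guidelines may prefer nonparametric percentiles."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return (m - 1.96 * s, m + 1.96 * s)

def bland_altman(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement between two
    measurement methods (e.g. BGS vs LH tube)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    s = statistics.stdev(diffs)
    return bias, (bias - 1.96 * s, bias + 1.96 * s)

# Hypothetical paired lactate values (mmol/l) from the two collection methods
bgs = [1.2, 0.9, 1.5, 1.1, 1.3, 1.0, 1.4, 1.2]
lh = [1.1, 0.9, 1.6, 1.0, 1.3, 1.1, 1.4, 1.1]
lo, hi = reference_interval(bgs)
bias, loa = bland_altman(bgs, lh)
# a bias small relative to the allowable total error (TEa) suggests the
# methods are clinically interchangeable for this parameter
```

    Comparing the Bland-Altman bias against the parameter's TEa, as the study does, turns a purely statistical difference into a clinical-relevance judgement.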

  14. Experiences from Implementation of Lean Production: Standardization versus Self-management: A Swedish Case Study

    Directory of Open Access Journals (Sweden)

    Margareta Oudhuis

    2013-01-01

    Full Text Available In this article, we discuss important aspects of the perceived problematic relationship between self-management and standardization. The article presents data from three case studies conducted within manufacturing companies in Sweden, where the popularity of lean production has led to a renaissance for short-cycle and standardized assembly work in settings that traditionally have made use of sociotechnical production design. The data suggest that the implementation has not contributed to increased commitment, smooth operations, or capacity for change and innovation. Despite these not so positive results, it is argued that it is possible to combine self-management principles with lean production and standardization if (1) the implementation of lean is done with contextual sensitivity, (2) a balance is reached between the use of standards on the one hand and work enrichment on the other, and (3) a feeling of ownership as regards both the implementation and the production process is upheld among the production personnel.

  15. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  16. [Studies on the standardization of parameters for jaw movement analysis--6 degree-of-freedom jaw movements analysis].

    Science.gov (United States)

    Takeuchi, Hisahiro; Bando, Eiichi; Abe, Susumu

    2008-07-01

    To establish standardized evaluation methods for jaw movement analysis, we investigated evaluating parameters for 6 degree-of-freedom jaw movement data in this paper. Recorded data of jaw border movements from 20 male adults were employed as basic samples. The main parameters were as follows: 1. The displacement of an intercondylar midpoint: the length of a straight line between 2 positions of this point, the intercuspal position and another jaw position. 2. The angle of intercondylar axes: the angle between 2 positions of the intercondylar axis, the intercuspal position and another jaw position. 3. The angle of incisal-condylar planes: the angle between 2 positions of the plane, the intercuspal position and another jaw position (this plane was defined with the incisal point and the condylar points of both sides). 4. The mandibular motion range index: quantitative values calculated with 2 of the 3 parameters described above. The mandibular motion range index showed a close correlation with the respective projected areas of the incisal paths, all statistically significant: with the projected area of sagittal border movements on the sagittal plane, r = 0.82; with the projected areas of border movements on the frontal plane, r = 0.92 for left lateral border movements and r = 0.84 for right lateral border movements. These parameters quantitatively evaluated 6 degree-of-freedom jaw movement data and the relative relationship between the intercuspal position and other jaw positions. They were independent of reference coordinate systems and could measure jaw movement quantitatively.
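
    The first two parameters defined above reduce to elementary 3-D geometry: the displacement is the Euclidean distance between two positions of the intercondylar midpoint, and the axis angle is the angle between two directions of the intercondylar axis. The coordinates below are illustrative values, not the paper's measurements.

```python
import math

def displacement(p_icp, p_other):
    """Parameter 1: length of the straight line between the intercondylar
    midpoint at the intercuspal position and at another jaw position."""
    return math.dist(p_icp, p_other)

def axis_angle(axis_icp, axis_other):
    """Parameter 2: angle (degrees) between the intercondylar axis at the
    intercuspal position and at another jaw position."""
    dot = sum(a * b for a, b in zip(axis_icp, axis_other))
    na = math.sqrt(sum(a * a for a in axis_icp))
    nb = math.sqrt(sum(b * b for b in axis_other))
    cosang = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cosang))

# Illustrative 3-D coordinates (mm) and axis directions.
midpoint_icp = (0.0, 0.0, 0.0)
midpoint_open = (0.0, 4.0, 3.0)  # midpoint after an opening movement
axis_icp = (1.0, 0.0, 0.0)       # intercondylar axis at intercuspal position
axis_open = (1.0, 0.1, 0.0)      # slightly rotated axis at the other position

d = displacement(midpoint_icp, midpoint_open)
ang = axis_angle(axis_icp, axis_open)
```

    Because both quantities depend only on relative positions and directions, they are independent of the reference coordinate system, which is the property the abstract emphasises.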

  17. Exploring Chinese cultural standards through the lens of German managers: A case study approach

    Directory of Open Access Journals (Sweden)

    Roger Moser

    2011-06-01

    Full Text Available The ability to understand one’s own culture and to deal with specificities of foreign cultures is one of the core requirements in today’s international business. Management skills are partially culture specific and a management approach that is appropriate in one cultural context may not be appropriate in another. Several business activities of companies nowadays take place abroad, which requires managers to interact with different cultures. This paper aims to analyse cultural characteristics, especially in a Sino-German business context. Based on literature analysis and case study research, relevant cultural standards in China were identified from the German perspective. The result differentiates three superordinate cultural areas and five specific cultural standards and analyses different influence factors on the dimensions of the identified Chinese cultural standards.

  18. Mathematical determination of setup parameters for carcinoma breast cases

    International Nuclear Information System (INIS)

    Prasad, P.B.L.D.; Suresh, P.; Sridhar, A.

    2008-01-01

    Determining proper patient set-up parameters such as IFD, gantry angles and field width in carcinoma of the breast is of prime importance for achieving precise treatment. In a centre where a 3D treatment planning system (TPS) and a simulator are not available to determine the set-up parameters, contouring of the target region is essential, which is time consuming. The mathematical formula described here provides instant patient set-up parameters using machine parameters. (author)

  19. Predicting surgical outcome in cases of cervical myelopathy with magnetic resonance imaging. Critical parameters

    International Nuclear Information System (INIS)

    Akiyama, Takashi

    1997-01-01

    In this study, the author attempted to correlate clinical factors significant in cases of cervical myelopathy with postoperative recovery. It is hoped that the results will aid in the preoperative prediction of surgical outcomes. The factors considered were the transverse area of the spinal cord, the cord compression rate, the presence of a high intensity area on T2-weighted MRI, the duration of symptoms before surgery, and age at surgery. Because there are variations in the transverse area of the spinal cord, 100 normal individuals were selected and the standard transverse area was calculated. The transverse area of the spinal cord and the cord constriction rate in the myelopathy cases were then measured and compared to the standard. The data indicated that the constriction rate was most relevant to recovery rate. Clinical thresholds found to correlate with a better than average rate of recovery in cases of cervical spondylotic myelopathy (CSM) were: a cord constriction rate under 28.7%, a cord compression rate over 0.38, a duration of symptoms before surgery of less than 9.2 months, and an age at surgery under 59.2 yrs. In patients with ossification of the posterior longitudinal ligament (OPLL), a cord constriction rate under 36.2%, a cord compression rate over 0.30, a duration of symptoms before surgery of less than 14.2 months, and an age at surgery under 57.6 yrs. all correlated with superior recovery, as did a cord constriction rate under 22.3% and a duration of symptoms before surgery of less than 3.7 months in patients suffering from cervical disc herniation (CDH). Furthermore, the absence of a T2-weighted high intensity area in CSM and OPLL patients also correlated with improved recovery. These results suggest that a favorable postoperative recovery rate can be expected in cases of cervical myelopathy that conform to the above criteria. (author)
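
    The reported thresholds can be collected into a simple rule-of-thumb check. A minimal sketch for the CSM criteria only (illustrative, not a validated clinical decision tool; the function and parameter names are assumptions):

```python
def favorable_csm_prognosis(constriction_rate_pct, compression_rate,
                            symptom_duration_months, age_years):
    """Check a CSM case against the thresholds quoted in the abstract.

    These are reported correlates of better-than-average recovery,
    not a validated decision rule.
    """
    return (constriction_rate_pct < 28.7
            and compression_rate > 0.38
            and symptom_duration_months < 9.2
            and age_years < 59.2)

# example: a case meeting all four CSM criteria
print(favorable_csm_prognosis(25.0, 0.45, 6.0, 52))  # → True
```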

  20. Predictive significance of standardized uptake value parameters of FDG-PET in patients with non-small cell lung carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Duan, X-Y.; Wang, W.; Li, M.; Li, Y.; Guo, Y-M. [PET-CT Center, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi (China)]

    2015-02-03

    ¹⁸F-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) is widely used to diagnose and stage non-small cell lung cancer (NSCLC). The aim of this retrospective study was to evaluate the predictive ability of different FDG standardized uptake values (SUVs) in 74 patients with newly diagnosed NSCLC. ¹⁸F-FDG PET/CT scans were performed, different SUV parameters (SUVmax, SUVavg, SUV(T/L), and SUV(T/A)) were obtained, and their relationships with clinical characteristics were investigated. Meanwhile, correlation and multiple stepwise regression analyses were performed to determine the primary SUV predictor for NSCLC. Age, gender, and tumor size significantly affected SUV parameters. The mean SUVs of squamous cell carcinoma were higher than those of adenocarcinoma. Poorly differentiated tumors exhibited higher SUVs than well-differentiated ones. Further analyses based on the pathologic type revealed that the SUVmax, SUVavg, and SUV(T/L) of poorly differentiated adenocarcinoma tumors were higher than those of moderately or well-differentiated tumors. Among these four SUV parameters, SUV(T/L) was the primary predictor of tumor differentiation. However, in adenocarcinoma, SUVmax was the determining factor for tumor differentiation. Our results showed that these four SUV parameters had predictive significance for NSCLC tumor differentiation; SUV(T/L) appeared to be the most useful overall, but SUVmax was the best index for adenocarcinoma tumor differentiation.

  1. Effect of textiles structural parameters on surgical healing; a case study

    Science.gov (United States)

    Marwa, A. Ali

    2017-10-01

    Medical textiles is one of the most rapidly expanding sectors of the technical textile market, and applications of medical textiles have grown enormously over the last 12 years. “Biomedical textiles” is a subcategory of medical textiles covering those applications intended for active tissue contact, tissue regeneration or surgical implantation. The current wave of usage, under way since the mid-1960s, has come as a result of new fibers and new technologies for constructing textile materials. The term “biotextiles” covers structures composed of textile fibers designed for use in specific biological environments. The medical textile field utilizes different materials, textile techniques and structures to bring new medical products with high functionality to the market, along with the various associated treatment and finishing processes. The aim of this article is to draw attention to the medical field, in both its in vitro and in vivo directions, and its relation to textile structural parameters, with regard to fiber material, production techniques, and fabric structures. It also focuses on case studies from our research that were produced with different textile parameters. Finally, an overview of modern and innovative applications of medical textiles is presented.

  2. Standard method for economic analyses of inertial confinement fusion power plants

    International Nuclear Information System (INIS)

    Meier, W.R.

    1986-01-01

    A standard method for calculating the total capital cost and the cost of electricity for a typical inertial confinement fusion electric power plant has been developed. A standard code of accounts at the two-digit level is given for the factors making up the total capital cost of the power plant. Equations are given for calculating the indirect capital costs, the project contingency, and the time-related costs. Expressions for calculating the fixed charge rate, which is necessary to determine the cost of electricity, are also described. Default parameters are given to define a reference case for comparative economic analyses.
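
    The quantities named above (indirect costs, contingency, fixed charge rate, cost of electricity) combine along the following lines. A minimal sketch with invented numbers; the paper's actual code of accounts, time-related cost treatment, and default parameters are not reproduced here:

```python
def cost_of_electricity(direct_capital, indirect_frac, contingency_frac,
                        fixed_charge_rate, annual_om, net_power_mw,
                        capacity_factor):
    """Levelized cost of electricity in $/kWh (simplified sketch:
    time-related costs and fuel-cycle charges are folded into annual_om)."""
    # total capital cost = direct cost grossed up by indirects and contingency
    total_capital = direct_capital * (1 + indirect_frac) * (1 + contingency_frac)
    # net annual electricity production in kWh
    annual_kwh = net_power_mw * 1_000 * 8_760 * capacity_factor
    return (fixed_charge_rate * total_capital + annual_om) / annual_kwh

# all inputs below are hypothetical placeholders, not the paper's defaults
coe = cost_of_electricity(direct_capital=2.0e9, indirect_frac=0.35,
                          contingency_frac=0.15, fixed_charge_rate=0.10,
                          annual_om=60e6, net_power_mw=1000,
                          capacity_factor=0.75)
```

    With these placeholder inputs the sketch yields a cost of electricity of a few cents per kWh; the structure, not the numbers, is the point.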

  3. Probing non-standard interactions at Daya Bay

    Energy Technology Data Exchange (ETDEWEB)

    Agarwalla, Sanjib Kumar; Bagchi, Partha [Institute of Physics, Sachivalaya Marg,Sainik School Post, Bhubaneswar 751005 (India); Forero, David V. [AHEP Group, Institut de Física Corpuscular - C.S.I.C./Universitat de València,Parc Cientific de Paterna, C/ Catedratico José Beltrán, 2 E-46980 Paterna (València) (Spain); Center for Neutrino Physics, Virginia Tech,Blacksburg, VA 24061 (United States); Tórtola, Mariam [AHEP Group, Institut de Física Corpuscular - C.S.I.C./Universitat de València,Parc Cientific de Paterna, C/ Catedratico José Beltrán, 2 E-46980 Paterna (València) (Spain)

    2015-07-13

    In this article we consider the presence of neutrino non-standard interactions (NSI) in the production and detection processes of reactor antineutrinos at the Daya Bay experiment. We report, for the first time, new constraints on the flavor non-universal and flavor universal charged-current NSI parameters, estimated using the currently released 621 days of Daya Bay data. New limits are placed assuming that the new physics effects are just the inverse of each other in the production and detection processes. With this special choice of the NSI parameters, we observe a shift in the oscillation amplitude without distorting the L/E pattern of the oscillation probability. This shift in the depth of the oscillation dip can be caused by the NSI parameters as well as by θ13, making it quite difficult to disentangle the NSI effects from the standard oscillations. We explore the correlations between the NSI parameters and θ13 that may lead to significant deviations in the reported value of the reactor mixing angle with the help of iso-probability surface plots. Finally, we present the limits on electron, muon/tau, and flavor universal (FU) NSI couplings with and without considering the uncertainty in the normalization of the total event rates. Assuming a perfect knowledge of the event rates normalization, we find strong upper bounds ∼ 0.1% for the electron and FU cases improving the present limits by one order of magnitude. However, for a conservative error of 5% in the total normalization, these constraints are relaxed by almost one order of magnitude.
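
    The amplitude shift described above can be illustrated with the standard two-flavor reactor survival probability. A schematic sketch in which `eps` is a hypothetical effective amplitude shift standing in for the paper's charged-current NSI parametrization, and the baseline, energy, and oscillation parameters are representative values, not the paper's fit results:

```python
import math

def survival_prob(L_km, E_GeV, sin2_2theta13, dm2_ee=2.5e-3):
    """Standard two-flavor reactor antineutrino survival probability."""
    arg = 1.267 * dm2_ee * L_km / E_GeV  # oscillation phase
    return 1.0 - sin2_2theta13 * math.sin(arg) ** 2

def survival_prob_nsi(L_km, E_GeV, sin2_2theta13, eps, dm2_ee=2.5e-3):
    """With the special NSI choice in the abstract, the net effect is a
    rescaled oscillation amplitude with the L/E shape unchanged."""
    return survival_prob(L_km, E_GeV, sin2_2theta13 + eps, dm2_ee)

# far hall (~1.6 km), ~4 MeV antineutrino: the dip deepens with eps,
# exactly degenerate with a larger value of sin^2(2*theta13)
p_std = survival_prob(1.6, 0.004, 0.084)
p_nsi = survival_prob_nsi(1.6, 0.004, 0.084, 0.005)
```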

  4. Probing non-standard interactions at Daya Bay

    International Nuclear Information System (INIS)

    Agarwalla, Sanjib Kumar; Bagchi, Partha; Forero, David V.; Tórtola, Mariam

    2015-01-01

    In this article we consider the presence of neutrino non-standard interactions (NSI) in the production and detection processes of reactor antineutrinos at the Daya Bay experiment. We report, for the first time, new constraints on the flavor non-universal and flavor universal charged-current NSI parameters, estimated using the currently released 621 days of Daya Bay data. New limits are placed assuming that the new physics effects are just the inverse of each other in the production and detection processes. With this special choice of the NSI parameters, we observe a shift in the oscillation amplitude without distorting the L/E pattern of the oscillation probability. This shift in the depth of the oscillation dip can be caused by the NSI parameters as well as by θ13, making it quite difficult to disentangle the NSI effects from the standard oscillations. We explore the correlations between the NSI parameters and θ13 that may lead to significant deviations in the reported value of the reactor mixing angle with the help of iso-probability surface plots. Finally, we present the limits on electron, muon/tau, and flavor universal (FU) NSI couplings with and without considering the uncertainty in the normalization of the total event rates. Assuming a perfect knowledge of the event rates normalization, we find strong upper bounds ∼ 0.1% for the electron and FU cases improving the present limits by one order of magnitude. However, for a conservative error of 5% in the total normalization, these constraints are relaxed by almost one order of magnitude.

  5. The improvement of environmental performances by applying ISO 14001 standard: A case study

    Directory of Open Access Journals (Sweden)

    Živković Snežana

    2013-01-01

    Full Text Available This paper presents an analysis of the advantages of applying the ISO 14001 system in an environmental protection management system. An environmental protection management system that is not certified, i.e., not compatible with the principles and standard preconditions, considerably increases the likelihood of ecological risk. Some issues remain to be solved in areas that are not expressed by financial values alone but also have a non-financial character, with the aim of expanding markets, improving the company image and improving the environmental performance indicators. By improving the efficiency of a company's environmental management system we expect to achieve the minimization and elimination of damaging influences on the environment which are a consequence of the company's activities. A case study in the Oil Refinery Belgrade (RNB) analyses the implementation of the standard ISO 14001:2004 in its environmental protection management system, particularly emphasizing the company's own way of evaluating environmental aspects with the aim of establishing improvements in the ecological performance indicators. The average values of the first ecological indicator of the plant, the total amount of waste water in m3 per ton of product, clearly show a downward trend, which is confirmed by the proportional reduction of the second ecological plant indicator, the flocculant consumption (Al2(SO4)3, Na2CO3) in kg per m3 of waste water of the Oil Refinery of Belgrade, for the given period 2008-2010. The RNB case study confirms the improvement of environmental performance using the ISO 14001 standard.

  6. ESIP's Emerging Provenance and Context Content Standard Use Cases: Developing Examples and Models for Data Stewardship

    Science.gov (United States)

    Ramdeen, S.; Hills, D. J.

    2013-12-01

    Earth science data collections range from individual researchers' private collections to large-scale data warehouses, from computer-generated data to field or lab based observations. These collections require stewardship. Fundamentally, stewardship ensures long term preservation and the provision of access to the user community. In particular, stewardship includes capturing appropriate metadata and documentation--and thus the context of the data's creation and any changes they underwent over time--to enable data reuse. But scientists and science data managers must translate these ideas into practice. How does one balance the needs of current and (projected) future stakeholders? In 2011, the Data Stewardship Committee (DSC) of the Federation of Earth Science Information Partners (ESIP) began developing the Provenance and Context Content Standard (PCCS). As an emerging standard, PCCS provides a framework for 'what' must be captured or preserved as opposed to describing only 'how' it should be done. Originally based on the experiences of NASA and NOAA researchers within ESIP, the standard currently provides data managers with content items aligned to eight key categories. While the categories and content items are based on data life cycles of remote sensing missions, they can be generalized to cover a broader set of activities, for example, preservation of physical objects. These categories will include the information needed to ensure the long-term understandability and usability of earth science data products. In addition to the PCCS, the DSC is developing a series of use cases based on the perspectives of the data archiver, data user, and the data consumer that will connect theory and practice. These cases will act as specifications for developing PCCS-based systems.
They will also provide for examination of the categories and content items covered in the PCCS to determine if any additions are needed to cover the various use cases, and also provide rationale and

  7. Chest X-rays and associated clinical parameters in pulmonary Tuberculosis cases from the National Tuberculosis Program, Mumbai, India

    Directory of Open Access Journals (Sweden)

    Yatin N. Dholakia

    2012-01-01

    Full Text Available The study was carried out in pulmonary tuberculosis (PTB) patients from the local tuberculosis control programme, Mumbai, India. It examined features of chest X-rays and their correlation with clinical parameters for possible application in suspected multidrug-resistant TB (MDRTB) and to predict outcome in new and treatment-failure PTB cases. X-ray features (infiltrate, cavitation, miliary shadows, pleural effusion, mediastinal lymphadenopathy and extent of lesions) were analyzed to identify associations with biological/clinical parameters through univariate and multivariate logistic regression. Failures demonstrated associations between extensive lesions and high glycosylated hemoglobin (GHb) levels (P=0.028) and male gender (P=0.03). An association was also detected between cavitation and MDR (P=0.048). In new cases, bilateral cavities were associated with MDR (P=0.018) and male gender (P=0.01), low body mass index with infiltrates (P=0.008), and smoking with cavitation (P=0.0238). Strains belonging to the Manu1 spoligotype were associated with mild lesions (P=0.002). Poor outcome showed borderline significance with extensive lesions at onset (P=0.053). Furthermore, amongst new cases, smoking, the Central Asian Strain (CAS) spoligotype and high GHb were associated with cavitation, whereas only CAS spoligotypes and high GHb were associated with extensive lesions. The study highlighted associations between certain clinical parameters and X-ray evidence which support the potential of X-rays to predict TB, MDRTB and poor outcome. The use of X-rays as an additional tool to shorten diagnostic delay and shortlist MDR suspects amongst non-responders to TB treatment should be explored in a setting with limited resources coping with a high MDR case load such as Mumbai.
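
    Univariate associations of the kind reported (e.g. cavitation vs. MDR) are conventionally summarized by an odds ratio from a 2x2 table. A minimal sketch with hypothetical counts, since the study's actual contingency tables are not given in the abstract:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% Wald confidence interval from a
    2x2 table: a, b = outcome present/absent among exposed;
    c, d = outcome present/absent among unexposed."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # std error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# hypothetical counts: MDR among cavitary vs. non-cavitary cases
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
```

    A confidence interval excluding 1.0 corresponds to the kind of significant association the study reports via logistic regression.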

  8. Standardizing data exchange for clinical research protocols and case report forms: An assessment of the suitability of the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM).

    Science.gov (United States)

    Huser, Vojtech; Sastry, Chandan; Breymaier, Matthew; Idriss, Asma; Cimino, James J

    2015-10-01

    Efficient communication of a clinical study protocol and case report forms during all stages of a human clinical study is important for many stakeholders. An electronic and structured study representation format that can be used throughout the whole study life-span can improve such communication and potentially lower total study costs. The most relevant standard for representing clinical study data, applicable to unregulated as well as regulated studies, is the Operational Data Model (ODM) in development since 1999 by the Clinical Data Interchange Standards Consortium (CDISC). ODM's initial objective was exchange of case report forms data but it is increasingly utilized in other contexts. An ODM extension called Study Design Model, introduced in 2011, provides additional protocol representation elements. Using a case study approach, we evaluated ODM's ability to capture all necessary protocol elements during a complete clinical study lifecycle in the Intramural Research Program of the National Institutes of Health. ODM offers the advantage of a single format for institutions that deal with hundreds or thousands of concurrent clinical studies and maintain a data warehouse for these studies. For each study stage, we present a list of gaps in the ODM standard and identify necessary vendor or institutional extensions that can compensate for such gaps. The current version of ODM (1.3.2) has only partial support for study protocol and study registration data mainly because it is outside the original development goal. ODM provides comprehensive support for representation of case report forms (in both the design stage and with patient level data). Inclusion of requirements of observational, non-regulated or investigator-initiated studies (outside Food and Drug Administration (FDA) regulation) can further improve future revisions of the standard. Published by Elsevier Inc.

  9. Pattern statistics on Markov chains and sensitivity to parameter estimation

    Directory of Open Access Journals (Sweden)

    Nuel Grégory

    2006-10-01

    Full Text Available Abstract Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition, and its parameters usually must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant common words in a set of sequences, ....). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of a high-order Markov model can easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.
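
    The binomial approximation discussed above can be sketched as follows: estimate an order-1 Markov model from the sequence itself, compute the probability p of the word occurring at one fixed position, and approximate the count as Binomial(n, p). This is a simplified illustration only; it ignores the overlap corrections and the parameter-estimation variance that the paper's delta-method quantifies:

```python
from collections import Counter

def word_count_stat(seq, word):
    """Approximate mean and standard deviation of the number of
    occurrences of `word` in `seq` under an order-1 Markov model
    estimated from `seq` (binomial approximation)."""
    pairs = Counter(zip(seq, seq[1:]))   # transition counts
    starts = Counter(seq[:-1])           # origin-state counts
    trans = {pq: n / starts[pq[0]] for pq, n in pairs.items()}
    p = seq.count(word[0]) / len(seq)    # start-letter frequency
    for a, b in zip(word, word[1:]):
        p *= trans.get((a, b), 0.0)
    n_positions = len(seq) - len(word) + 1
    mean = n_positions * p
    std = (n_positions * p * (1 - p)) ** 0.5
    return mean, std

mean, std = word_count_stat("AB" * 50, "AB")
```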

  10. Investigation of metrological parameters of measuring system for small temperature changes

    Directory of Open Access Journals (Sweden)

    Samynina M. G.

    2014-02-01

    Full Text Available Metrological parameters of a non-standard contact device were investigated to characterize its performance in temperature change measurements over a specified temperature range. Several series-connected thermistors with a negative temperature coefficient of resistance, connected into a linearization circuit, were used as the sensing element of the semiconductor device. Increasing the number of thermistors improves the resolving power of the circuitry and reduces the dispersion of this parameter. However, there is the question of the optimal trade-off between the number of thermistors and the achievable temperature resolution, because the resolution depends nonlinearly on the number of series-connected thermoelements. An example scheme with four similar thermistors as the primary sensor and a standard measuring instrument working in ohmmeter mode demonstrates the ability to measure temperature changes at the level of hundredths of a degree Celsius. In this case, the quantization error, which is determined by the resolution of the measuring system, and the ohmmeter accuracy make the main contributions to the overall accuracy of measuring small temperature changes.
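
    The gain from stacking thermistors can be illustrated with the simple B-parameter NTC model. A minimal sketch: the linearization circuit is ignored (in this idealization resolution scales as 1/n, whereas the linearization network in the actual device makes the dependence nonlinear), and R0/B are typical datasheet values assumed for illustration:

```python
import math

def temperature_resolution(n_thermistors, ohm_resolution,
                           T=298.15, R0=10_000.0, B=3950.0, T0=298.15):
    """Smallest detectable temperature change (K) for n identical
    series-connected NTC thermistors read by an ohmmeter with a given
    resistance resolution (ohm)."""
    R = R0 * math.exp(B * (1.0 / T - 1.0 / T0))  # B-parameter model
    # |dR/dT| of the whole series stack, in ohm/K
    sensitivity = n_thermistors * R * B / T ** 2
    return ohm_resolution / sensitivity

# four thermistors read with 10-ohm resolution resolve changes at the
# level of hundredths of a degree, matching the abstract's claim
dT4 = temperature_resolution(4, 10.0)
dT1 = temperature_resolution(1, 10.0)
```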

  11. Estimating kinetic parameters from dynamic contrast-enhanced T(1)-weighted MRI of a diffusable tracer: standardized quantities and symbols

    DEFF Research Database (Denmark)

    Tofts, P.S.; Brix, G; Buckley, D.L.

    1999-01-01

    We describe a standard set of quantity names and symbols related to the estimation of kinetic parameters from dynamic contrast-enhanced T(1)-weighted magnetic resonance imaging data, using diffusable agents such as gadopentetate dimeglumine (Gd-DTPA). These include a) the volume transfer constant K(trans)... Under flow-limited conditions K(trans) equals the blood plasma flow per unit volume of tissue; under permeability-limited conditions K(trans) equals the permeability surface area product per unit volume of tissue. We relate these quantities to previously published work from our groups; our future publications will refer...
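
    The quantities being standardized enter the widely used kinetic model relation C_t(t) = Ktrans * ∫ Cp(τ) exp(-k_ep (t - τ)) dτ with k_ep = Ktrans / v_e. A minimal numerical sketch of that convolution (rectangle rule, consistent units of minutes assumed; this is an illustration of the standard model, not code from the paper):

```python
import math

def tissue_concentration(times, cp, k_trans, v_e):
    """Standard kinetic model:
    C_t(t) = Ktrans * integral_0^t Cp(tau) * exp(-k_ep*(t - tau)) dtau,
    with k_ep = Ktrans / v_e.
    times: uniformly spaced sample times (min); cp: plasma curve."""
    k_ep = k_trans / v_e
    dt = times[1] - times[0]
    ct = []
    for i, t in enumerate(times):
        s = sum(cp[j] * math.exp(-k_ep * (t - times[j])) for j in range(i + 1))
        ct.append(k_trans * s * dt)
    return ct

# constant plasma input: C_t(t) tends to v_e * (1 - exp(-k_ep * t))
times = [0.01 * i for i in range(1001)]  # 0..10 min
ct = tissue_concentration(times, [1.0] * 1001, k_trans=0.2, v_e=0.4)
```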

  12. A Systematic Approach to Analyse Critical Tribological Parameters in an Industrial Case Study of Progressive Die Sequence Production

    DEFF Research Database (Denmark)

    Üstünyagiz, Esmeray; Nielsen, Chris V.; Bay, Niels

    the tribologically critical parameters in an industrial production line in which a progressive tool sequence is used. The current industrial case is based on multistage deep drawing followed by an ironing operation. Severe reduction in the ironing stage leads to high interface temperature and pressure. As a result......, subsequent lubricant film breakdown in the production line occurs. The methodology combines finite element simulations and experimental measurements to determine tribological parameters which will later be used in laboratory testing of possible tribology systems....

  13. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation for which, in general, likelihood surface is probed. For the standard cosmological model with six parameters, likelihood surface is quite smooth and does not have local maxima, and sampling based methods like Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated application of another method inspired from artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
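
    The swarm update the abstract refers to can be sketched on a toy likelihood. A minimal PSO illustration: the quadratic below is an invented stand-in for a real CMB likelihood, and all hyperparameters (inertia w, acceleration coefficients c1/c2) are common defaults, not the paper's settings:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization: particles track their personal
    best and the global best, and velocities pull them toward both."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy chi-square standing in for a CMB likelihood; minimum at (0.3, 70.0)
toy_nll = lambda p: ((p[0] - 0.3) / 0.02) ** 2 + ((p[1] - 70.0) / 1.0) ** 2
best, val = pso_minimize(toy_nll, [(0.1, 0.5), (50.0, 90.0)])
```

    Unlike MCMC, this returns only the best-fit point, not a posterior sample; that is the trade-off the paper discusses.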

  14. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Prasad, J; Souradeep, T

    2014-01-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation for which, in general, likelihood surface is probed. For the standard cosmological model with six parameters, likelihood surface is quite smooth and does not have local maxima, and sampling based methods like Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated application of another method inspired from artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite

  15. Calculations of standard-Higgs-boson production cross sections in e+e- collisions by means of a reasonable set of parameters

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Terazawa, O.

    1987-01-01

    We calculate cross sections for production of the standard Higgs boson in e+e- collisions and compare our results with those of several authors. It is found that there are appreciable differences among them, which can be attributed to the coupling constants used, α(0) (= 1/137) and G_F. We also observe that the cross sections depend on the magnitude of the total width of the Z particle. The use of a reasonable set of parameters in calculations is emphasized.

  16. Planck 2013 results. XVI. Cosmological parameters

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cappellini, B.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.R.; Chen, X.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dolag, K.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Haissinski, J.; Hamann, J.; Hansen, F.K.; Hanson, D.; Harrison, D.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K.M.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D.J.; 
Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Melin, J.B.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Pearson, T.J.; Peiris, H.V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Platania, P.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Reach, W.T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-10-29

    We present the first results based on Planck measurements of the CMB temperature and lensing-potential power spectra. The Planck spectra at high multipoles are extremely well described by the standard spatially-flat six-parameter LCDM cosmology. In this model Planck data determine the cosmological parameters to high precision. We find a low value of the Hubble constant, H0=67.3+/-1.2 km/s/Mpc and a high value of the matter density parameter, Omega_m=0.315+/-0.017 (+/-1 sigma errors) in excellent agreement with constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find that the Universe is consistent with spatial flatness to percent-level precision using Planck CMB data alone. We present results from an analysis of extensions to the standard cosmology, using astrophysical data sets in addition to Planck and high-resolution CMB data. None of these models are favoured significantly over standard LCDM. The deviation of the scalar spectral index from unity is insensitive to the additi...

  17. Standardized Index of Shape (SIS): a quantitative DCE-MRI parameter to discriminate responders by non-responders after neoadjuvant therapy in LARC

    Energy Technology Data Exchange (ETDEWEB)

    Petrillo, Antonella; Fusco, Roberta; Petrillo, Mario; Granata, Vincenza [Istituto Nazionale Tumori Fondazione Giovanni Pascale - IRCCS, Naples (Italy). Div. of Radiology; Sansone, Mario [Naples Univ. 'Federico II' (Italy). Dept. of Biomedical, Electronics and Telecommunications Engineering; Avallone, Antonio [Istituto Nazionale Tumori Fondazione Giovanni Pascale - IRCCS, Naples (Italy). Div. of Gastrointestinal Medical Oncology; Delrio, Paolo [Istituto Nazionale Tumori Fondazione Giovanni Pascale - IRCCS, Naples (Italy). Div. of Gastrointestinal surgical Oncology; Pecori, Biagio [Istituto Nazionale Tumori Fondazione Giovanni Pascale - IRCCS, Naples (Italy). Div. of Radiotherapy; Tatangelo, Fabiana [Istituto Nazionale Tumori Fondazione Giovanni Pascale - IRCCS, Naples (Italy). Div. of Diagnostic Pathology; Ciliberto, Gennaro [Istituto Nazionale Tumori Fondazione Giovanni Pascale - IRCCS, Naples (Italy)

    2015-07-15

    To investigate the potential of DCE-MRI to discriminate responders from non-responders after neoadjuvant chemo-radiotherapy (CRT) for locally advanced rectal cancer (LARC). We investigated several shape parameters for the time-intensity curve (TIC) in order to identify the best combination of parameters between two linear parameter classifiers. Seventy-four consecutive patients with LARC were enrolled in a prospective study approved by our ethics committee. Each patient gave written informed consent. After surgery, pathological TNM and tumour regression grade (TRG) were estimated. DCE-MRI semi-quantitative analysis (sqMRI) was performed to identify the best parameter or parameter combination to discriminate responders from non-responders in response monitoring to CRT. Percentage changes of TIC shape descriptors from the baseline to the presurgical scan were assessed and correlated with TRG. Receiver operating characteristic analysis and linear classifier were applied. Forty-six patients (62.2 %) were classified as responders, while 28 subjects (37.8 %) were considered as non-responders. sqMRI reached a sensitivity of 93.5 % and a specificity of 82.1 % combining the percentage change in Maximum Signal Difference (ΔMSD) and Wash-out Slope (ΔWOS), the Standardized Index of Shape (SIS). SIS obtains the best result in discriminating responders from non-responders after CRT in LARC, with a cut-off value of -3.0 %. (orig.)
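
    The reported decision rule can be paraphrased in a few lines. This is an illustrative sketch only: the abstract gives the -3.0 % cut-off but not the weights of the linear combination of ΔMSD and ΔWOS, so the equal-weight average and the sign convention (responders fall at or below the cut-off) are assumptions.

```python
def percent_change(baseline, presurgical):
    """Percentage change of a TIC shape descriptor from baseline to the presurgical scan."""
    return 100.0 * (presurgical - baseline) / baseline

def sis_classify(delta_msd, delta_wos, cutoff=-3.0):
    """Classify a patient from the combined index of the two percentage changes.

    The linear-combination weights are not given in the abstract; an
    equal-weight average is assumed here purely for illustration.
    """
    sis = 0.5 * (delta_msd + delta_wos)
    return "responder" if sis <= cutoff else "non-responder"
```

    For example, a patient whose MSD dropped by 40 % and whose wash-out slope dropped by 20 % would be labelled a responder under these assumptions.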

  18. ORAL CLINICAL LONG CASE PRESENTATION, THE NEED FOR STANDARDIZATION AND DOCUMENTATION.

    Science.gov (United States)

    Agodirin, S O; Olatoke, S A; Rahman, G A; Agbakwuru, E A; Kolawole, O A

    2015-01-01

    The oral presentation of the clinical long case is commonly an implied knowledge. The challenge of the presentation is compounded by the examiners' preferences and sometimes inadequate understanding of what should be assessed. To highlight the different opinions and misconceptions of trainers as the basis for improving our understanding and assessment of oral presentation of the clinical long case. A questionnaire was administered during the West African College of Surgeons fellowship clinical examinations and at the respondents' workplaces. Eligibility criteria included being a surgeon, a trainer and responding to all questions. Of the 72 questionnaires that were returned, 36 (50%) were eligible for the analysis. The 36 respondents were from 14 centers in Nigeria and Ghana. Fifty-two percent were examiners at the postgraduate medical colleges and 9 (25%) were professors. Eight (22.2%) indicated they were unaware of the separate methods of oral presentation for different occasions, while 21 (58.3%) respondents were aware that candidates used the "5Cs" method and the traditional compartmentalized method in long case oral presentation. Eleven (30.6%) wanted postgraduates to present differently, on a much higher level than undergraduates, despite not encountering the same in the literature, and 21 (58.3%) indicated it was an unwritten rule. Seventeen (47.2%) had not previously encountered the "5Cs" of the history of presenting complaint in the literature, yet 17 (47.2%) teach it to medical students and their junior residents. This study has shown that examiners have varying opinions on what form the oral presentation of the clinical long case at the surgery fellowship/professional examination should take, and this translates to their expectations of the residents or clinical students. This highlights the need for standardization and consensus on what is expected at a formal oral presentation during the clinical long case examination in order to avoid subjectivity and bias.

  19. Fermionic extensions of the Standard Model in light of the Higgs couplings

    Science.gov (United States)

    Bizot, Nicolas; Frigerio, Michele

    2016-01-01

    As the Higgs boson properties settle, the constraints on the Standard Model extensions tighten. We consider all possible new fermions that can couple to the Higgs, inspecting sets of up to four chiral multiplets. We confront them with direct collider searches, electroweak precision tests, and current knowledge of the Higgs couplings. The focus is on scenarios that may depart from the decoupling limit of very large masses and vanishing mixing, as they offer the best prospects for detection. We identify exotic chiral families that may receive a mass from the Higgs only, still in agreement with the hγγ signal strength. A mixing θ between the Standard Model and non-chiral fermions induces order θ² deviations in the Higgs couplings. The mixing can be as large as θ ∼ 0.5 in case of custodial protection of the Z couplings or accidental cancellation in the oblique parameters. We also notice some intriguing effects for much smaller values of θ, especially in the lepton sector. Our survey includes a number of unconventional pairs of vector-like and Majorana fermions coupled through the Higgs, that may induce order one corrections to the Higgs radiative couplings. We single out the regions of parameters where hγγ and hgg are unaffected, while the hγZ signal strength is significantly modified, turning a few times larger than in the Standard Model in two cases. The second run of the LHC will effectively test most of these scenarios.

  20. Investigating Electrostatic Precipitator Design Parameters for Efficient Control of Particulate Matter in Thermal Power Plant: A Case Study

    Science.gov (United States)

    Rai, P.; Gautam, N.; Chandra, H.

    2018-02-01

    This work deals with the analysis and modification of the operational parameters of an electrostatic precipitator (ESP) for meeting the emission standards set from time to time by the Central Pollution Control Board (CPCB)/State Pollution Control Board (SPCB). The analysis is carried out by using standard chemical analysis supplemented by the relevant data collected from the Korba East Phase (Ph)-III thermal power plant, under the Chhattisgarh State Electricity Board (CSEB), operating at Korba, Chhattisgarh. Chemical analysis is used to predict the emission level for different parameters of the ESP. The results reveal that for a constant outlet PM concentration and fly ash percentage, the total collection area decreases with the increase in migration velocity. For constant migration velocity and outlet PM concentration, the total collection area increases with the increase in the fly ash percent. For constant migration velocity and outlet PM concentration, the total collection area also increases with the ash content in the coal, i.e. from minimum to maximum ash. As far as the efficiency is concerned, it increases with the fly ash percent, ash content and the inlet dust concentration, but decreases with the outlet PM concentration at constant migration velocity, fly ash and ash content.
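
    The trends reported above are the ones implied by the classical Deutsch-Anderson design equation for an ESP, η = 1 − exp(−wA/Q). The sketch below uses that textbook relation, not the paper's own chemical-analysis model.

```python
import math

def collection_area(flow_rate, migration_velocity, efficiency):
    """Required collecting-plate area A (m^2) from the Deutsch-Anderson
    equation eta = 1 - exp(-w*A/Q), solved for A.

    flow_rate Q in m^3/s, migration velocity w in m/s, efficiency eta in (0, 1).
    """
    return -(flow_rate / migration_velocity) * math.log(1.0 - efficiency)

def esp_efficiency(flow_rate, migration_velocity, area):
    """Collection efficiency for a given plate area."""
    return 1.0 - math.exp(-migration_velocity * area / flow_rate)
```

    For Q = 100 m³/s, w = 0.1 m/s and 99 % collection, the required area is about 4605 m²; doubling the migration velocity halves it, matching the abstract's stated trend.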

  2. Fungus-mediated synthesis of gold nanoparticles and standardization of parameters for its biosynthesis.

    Science.gov (United States)

    Tidke, Pritish R; Gupta, Indarchand; Gade, Aniket K; Rai, Mahendra

    2014-12-01

    We report the extracellular biosynthesis of gold nanoparticles (AuNPs) using the fungus Fusarium acuminatum. Mycosynthesis of AuNPs was carried out by challenging the fungal cell filtrate with HAuCl4 solution (1 mM), as the nanoparticle-synthesizing enzymes are secreted extracellularly by the fungus. The AuNPs were characterized with the help of UV-Visible spectrophotometry, Fourier Transform Infrared spectroscopy, zeta potential, X-ray diffraction (XRD) and transmission electron microscopy (TEM). We observed an absorbance peak between 520 nm and 550 nm corresponding to the surface plasmon absorbance of gold nanoparticles. The nanoparticles synthesized in the present investigation were found to be capped by proteins. XRD results showed the distinctive formation of crystalline gold nanoparticles in the solution. Spherical and polydispersed AuNPs in the range of 8 to 28 nm, with an average size of 17 nm, were observed by TEM analysis. We also standardized parameters such as the effect of pH, temperature and salt concentration on the biosynthesis of gold nanoparticles. Acidic pH, 1 mM salt concentration and a temperature of 37 °C were found to be optimum for the synthesis of AuNPs. Therefore, the present study introduces an easy, better and cheaper method for the biosynthesis of AuNPs.

  3. Calibration of surface roughness standards

    DEFF Research Database (Denmark)

    Thalmann, R.; Nicolet, A.; Meli, F.

    2016-01-01

    organisations. Five surface texture standards of different type were circulated and on each of the standards several roughness parameters according to the standard ISO 4287 had to be determined. 32 out of 395 individual results were not consistent with the reference value. After some corrective actions...

  4. The Diffusion of Labour Standards: The Case of the US and Guatemala

    Directory of Open Access Journals (Sweden)

    Gerda van Roozendaal

    2015-05-01

    Full Text Available The number of free trade agreements (FTAs concluded by the United States of America (US has grown vastly over the past two decades. While FTAs contribute to increased global competition and as such may also contribute to socially-undesirable practices in the area of working conditions and the environment, the proliferation in FTAs has paradoxically also augmented the potential for making free trade more fair as some of these agreements now include labour provisions. However, the question is whether these trade agreements have also actually diffused internationally recognised labour standards. This article studies the FTA the US signed in 2004 with a number of Central American countries and which, at a later stage, also included the Dominican Republic. This FTA is commonly referred to as CAFTA-DR and includes a chapter on labour standards. The article argues that the effects of the inclusion of labour standards in CAFTA-DR have been limited and therefore should be viewed as an unsuccessful attempt at policy transfer. This is illustrated by the case of Guatemala, a country known for its lack of respect for labour standards and which is currently the subject of a complaints procedure under the CAFTA-DR. It is maintained that this lack of effectiveness is the result of many factors. Among these is the weakness of the labour chapter of CAFTA-DR resulting from the fact that the chapter is the outcome of bargaining processes both within the US and between the US and Guatemala, where symbolic results were valued more highly than actual substance.

  5. The case for improved HEPA-filter mechanical performance standards revisited

    Energy Technology Data Exchange (ETDEWEB)

    Ricketts, C.I.; Smith, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Under benign operating conditions, High Efficiency Particulate Air (HEPA) filter units serve as reliable and relatively economical components in the air cleaning systems of nuclear facilities worldwide. Despite more than four decades of filter-unit evaluation and improvements, however, the material strength characteristics of the glass fiber filter medium continue to ultimately limit filter functional reliability. In worst-case scenarios involving fire suppression, loss-of-coolant accidents (LOCAs), or exposure to shock waves or tornado-induced flows, rupture of the filter medium of units meeting current qualification standards cannot be entirely ruled out. Even under so-called normal conditions of operation, instances of filter failure reported in the literature leave open questions of filter-unit reliability. Though development of filter units with improved burst strengths has been pursued outside the United States, support for such efforts in this country has been comparatively minimal, despite user requests for filters with greater moisture resistance, for example, and despite the fact that conventional filter designs result in not only the least robust component to be found in a nuclear air cleaning system, but also the one most sensitive to the adverse effects of conditions deviating from those of normal operation. Filter qualification-test specifications of current codes, standards, and regulatory guidelines in the United States are based primarily upon research performed in a 30-year period beginning in the 1950s. They do not reflect the benefits of the more significant developments and understanding of filter failure modes and mechanisms achieved since that time. One overseas design, based on such knowledge, has proven reliability under adverse operating conditions involving combined and serial challenges. Its widespread use, however, has faltered on a lack of consensus in upgrading filter performance standards. 34 refs., 2 figs., 3 tabs.

  6. Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger

    Science.gov (United States)

    Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.

    2016-12-01

    Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and of an unknown parameter in a Nm model. We suppose that the yearly number of Nm-induced deaths and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers, which play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data from Niger.
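
    The first step described above, recovering the recruitment rate from total-population and mortality data, can be sketched with a discrete-time population balance. The yearly balance N_{k+1} = N_k + Λ_k − μN_k − D_k and the assumption that the natural per-capita death rate μ is known are simplifications standing in for the paper's continuous-time model.

```python
def estimate_recruitment(populations, nm_deaths, mu):
    """Estimate yearly recruitment Lambda_k from the population balance
    N_{k+1} = N_k + Lambda_k - mu*N_k - D_k,
    where populations is the yearly total-population series, nm_deaths the
    yearly Nm-induced mortality D_k, and mu the (assumed known) natural
    per-capita death rate."""
    return [populations[k + 1] - populations[k]
            + mu * populations[k] + nm_deaths[k]
            for k in range(len(populations) - 1)]
```

    Forward-simulating the same balance with a known recruitment series and then applying the estimator returns that series exactly, which is the basic consistency check before feeding in real census data.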

  7. Environmental effectiveness of GAEC cross-compliance standard 4.1 (b, c) ‘Protection of permanent pasture land’ and economic evaluation of the competitiveness gap for farmers

    Directory of Open Access Journals (Sweden)

    Mauro Salis

    2015-11-01

    The paper presents the main results of monitoring the effectiveness of the cross-compliance Standard 4.1 ‘Permanent pasture protection: lett. b, c’ carried out in two case studies within the project MO.NA.CO. Soil, botanical, productive and economic (competitiveness gap) parameters were monitored. In the short term, Standard 4.1 showed its effectiveness on soil quality, biomass productivity and the competitiveness gap in both case studies. Botanical parameters showed differing results; therefore, their generalization is not applicable to the heterogeneity of the Italian pasture land system. Shallow soil tillage could be suggested every 40-50 years, when an appropriate soil organic matter content and the absence of runoff phenomena occur.

  8. Correcting binding parameters for interacting ligand-lattice systems

    Science.gov (United States)

    Hervy, Jordan; Bicout, Dominique J.

    2017-07-01

    Binding of ligands to macromolecules is central to many functional and regulatory biological processes. Key parameters characterizing ligand-macromolecule interactions are the stoichiometry, giving the number of ligands per macromolecule binding site, and the dissociation constant, quantifying the ligand-binding site affinity. Both parameters can be obtained from analyses of classical saturation experiments using the standard binding equation, which offers the great advantage of mathematical simplicity but becomes an approximation in the situations of interest here, when a ligand binds to and covers more than one binding site on the macromolecule. Using the framework of the car-parking problem with lattice-like macromolecules, where each ligand can simultaneously cover several consecutive binding sites, we show that employing the standard analysis leads to underestimation of binding parameters, i.e., ligands appear larger than they actually are and their affinity appears greater than it is. We have therefore derived expressions allowing one to determine the ligand size and the true binding parameters (stoichiometry and dissociation constant) as a function of the apparent binding parameters retrieved from standard saturation experiments.
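
    A classical closed form for this lattice ("car-parking") overlap effect is the McGhee-von Hippel isotherm. The sketch below solves it numerically as an illustration of the effect; it is not the authors' own corrected expressions. For footprint n = 1 it reduces to the standard one-site (Langmuir) binding equation.

```python
def binding_density(K, L, n, tol=1e-12):
    """Ligands bound per lattice site, nu, from the McGhee-von Hippel isotherm
        nu = K*L*(1 - n*nu) * ((1 - n*nu) / (1 - (n-1)*nu))**(n-1),
    for association constant K, free ligand concentration L and ligand
    footprint n (sites covered per ligand), solved by bisection on [0, 1/n).
    """
    def f(nu):
        free = 1.0 - n * nu              # fraction of sites not covered
        gap = 1.0 - (n - 1) * nu         # conditional gap-availability factor
        return nu - K * L * free * (free / gap) ** (n - 1)

    lo, hi = 0.0, 1.0 / n - 1e-15        # f(lo) < 0, f(hi) > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    At K·L = 1 the n = 1 case gives the familiar half-saturation nu = 0.5, while a ligand covering n = 3 sites binds at a lower density. Fitting such n > 1 data with the standard one-site equation yields the biased apparent parameters the paper quantifies.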

  9. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the method proposed is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.
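
    The core idea, minimizing the quadratic differences between a spectrum and an analytical model over the model's parameters, can be illustrated with a deliberately simplified model: a single Gaussian peak on a constant background, whose two linear parameters are found in closed form from the normal equations. POEMA itself optimizes a far richer analytical description; this is only a sketch of the least-squares machinery.

```python
import math

def fit_peak(energies, counts, peak_center, peak_sigma):
    """Least-squares amplitude a and background b for the model
        y(E) = a * exp(-(E - E0)^2 / (2*sigma^2)) + b,
    minimizing sum_i (counts_i - y(E_i))^2 via the 2x2 normal equations."""
    g = [math.exp(-(e - peak_center) ** 2 / (2.0 * peak_sigma ** 2))
         for e in energies]
    n = len(energies)
    s_gg = sum(x * x for x in g)
    s_g = sum(g)
    s_gy = sum(x * y for x, y in zip(g, counts))
    s_y = sum(counts)
    # Normal equations:  a*s_gg + b*s_g = s_gy ;  a*s_g + b*n = s_y
    det = s_gg * n - s_g * s_g
    a = (s_gy * n - s_g * s_y) / det
    b = (s_gg * s_y - s_g * s_gy) / det
    return a, b
```

    On noise-free synthetic data the fit recovers the generating amplitude and background exactly; in the full problem the peak positions, widths and physical parameters are optimized as well, which requires an iterative minimizer rather than a closed form.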

  10. Treatise on water hammer in hydropower standards and guidelines

    International Nuclear Information System (INIS)

    Bergant, A; Mazij, J; Karney, B; Pejović, S

    2014-01-01

    This paper reviews critical water hammer parameters as they are presented in official hydropower standards and guidelines. Particular emphasis is given to a number of IEC standards and guidelines that are used worldwide. The paper critically assesses water hammer control strategies including operational scenarios (closing and opening laws), surge control devices (surge tank, pressure regulating valve, flywheel, etc.), redesign of the water conveyance system components (tunnel, penstock), or limitation of operating conditions (limited operating range) that are variably covered in standards and guidelines. Little information is given on industrial water hammer models and solutions elsewhere. These are briefly introduced and discussed in the light of capability (simple versus complex systems), availability of expertise (in house and/or commercial) and uncertainty. The paper concludes with an interesting water hammer case study referencing the rules and recommendations from existing hydropower standards and guidelines with a view to effective water hammer control. Recommendations are given for further work on the development of a special guideline on water hammer (hydraulic transients) in hydropower plants
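
    The most basic of the critical parameters such standards quantify is the Joukowsky pressure rise for an instantaneous velocity change, Δp = ρ·a·Δv. The sketch below evaluates this textbook relation; it is not taken from any specific IEC document.

```python
def joukowsky_surge(density, wave_speed, velocity_change):
    """Joukowsky pressure rise (Pa) for an instantaneous velocity change:
    delta_p = rho * a * delta_v, with rho in kg/m^3, the pressure-wave
    speed a in m/s and delta_v in m/s."""
    return density * wave_speed * velocity_change

def pressure_head_rise(wave_speed, velocity_change, g=9.81):
    """Equivalent head rise (m of water column): delta_h = a * delta_v / g."""
    return wave_speed * velocity_change / g
```

    Stopping a 1 m/s flow instantly in a penstock with a 1200 m/s wave speed gives a 1.2 MPa (about 122 m head) surge, which is why closing laws and surge control devices are central to these guidelines.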

  12. Craniospinal Germinomas in Patient with Down Syndrome Successfully Treated with Standard-Dose Chemotherapy and Craniospinal Irradiation: Case Report and Literature Review.

    Science.gov (United States)

    Miyake, Yohei; Adachi, Jun-Ichi; Suzuki, Tomonari; Mishima, Kazuhiko; Sasaki, Atsushi; Nishikawa, Ryo

    2017-12-01

    Patients with Down syndrome (DS) are more likely to develop chemotherapy-related complications. The standard treatment for these patients with cancer has not yet been established, and the risks of standard chemotherapy are unclear. In this paper, a rare case of multiple craniospinal germinomas in a patient with DS, successfully treated with standard-dose chemotherapy combined with craniospinal irradiation, is reported. The patient presented with bilateral oculomotor and facial nerve palsy and hearing loss, and underwent 3 courses of combination chemotherapy using a standard dose of carboplatin and etoposide and 23.4 Gy of concurrent craniospinal irradiation. Posttreatment magnetic resonance imaging showed reduction of the tumors. Both fluorodeoxyglucose- and methionine-positron emission tomography demonstrated no uptake in the residual tumors. Follow-up magnetic resonance imaging and positron emission tomography did not reveal tumor recurrence for 18 months. As far as we know, this is the first case of multiple craniospinal germinomas in a patient with DS who achieved a successful treatment result without fatal adverse events. The literature review indicated that disseminated germinomas may need intensive treatment to reduce recurrence risk. However, intensive chemotherapy using a combination of 3 or more anticancer drugs can increase the rate of treatment-related death during the early stage. Our case indicates that multiple craniospinal germinomas in DS patients can be treated with a standard-dose carboplatin and etoposide regimen with concurrent craniospinal irradiation, along with appropriate supportive therapy and careful observation.

  13. DICOM Standard Conformance in Veterinary Medicine in Germany: a Survey of Imaging Studies in Referral Cases.

    Science.gov (United States)

    Brühschwein, Andreas; Klever, Julius; Wilkinson, Tom; Meyer-Lindenberg, Andrea

    2018-02-01

    In 2016, the recommendations of the DICOM Standards Committee for the use of veterinary identification DICOM tags marked their 10th anniversary. The goal of our study was to survey veterinary DICOM standard conformance in Germany regarding the specific identification tags veterinarians should use in veterinary diagnostic imaging. We hypothesized that most veterinarians in Germany do not follow the guidelines of the DICOM Standards Committee. We analyzed the metadata of 488 imaging studies of referral cases from 115 different veterinary institutions in Germany by computer-aided DICOM header readout. We found that 25 (5.1%) of the imaging studies fully complied with the "veterinary DICOM standard" in this survey. The results confirmed our hypothesis that the recommendations of the DICOM Standards Committee for the consistent and advantageous use of veterinary identification tags have found minimal acceptance amongst German veterinarians. DICOM not only enables connectivity between machines; it also improves communication between veterinarians by sharing correct and valuable metadata for better patient care. Therefore, we recommend that lecturers, universities, societies, authorities, vendors, and other stakeholders should increase their efforts to improve the spread of the veterinary DICOM standard in the veterinary world.
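
    The veterinary identification attributes in question include Patient Species Description (0010,2201), Patient Breed Description (0010,2292) and Responsible Person (0010,2297) from the DICOM Patient Module. A toolkit-agnostic sketch of such a conformance check is given below, with the header reduced to a plain keyword-to-value mapping; this is an assumption for illustration, as the study read real DICOM headers.

```python
# DICOM keywords of veterinary identification attributes (PS3.3 Patient Module).
VETERINARY_TAGS = (
    "PatientSpeciesDescription",   # (0010,2201)
    "PatientBreedDescription",     # (0010,2292)
    "ResponsiblePerson",           # (0010,2297)
)

def missing_veterinary_tags(header):
    """Return the veterinary identification keywords that are absent or empty
    in a DICOM header represented as a keyword -> value mapping."""
    return [tag for tag in VETERINARY_TAGS if not header.get(tag)]
```

    A study record would then count as conformant when `missing_veterinary_tags` returns an empty list for its header.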

  14. Behaviour of Lyapunov exponents near crisis points in the dissipative standard map

    Science.gov (United States)

    Pompe, B.; Leven, R. W.

    1988-11-01

    We numerically study the behaviour of the largest Lyapunov characteristic exponent λ1 as a function of a control parameter in the 2D standard map with dissipation. In order to investigate the system's motion in parameter intervals slightly above crisis points we introduce "partial" Lyapunov exponents, which characterize the average exponential divergence of nearby orbits on a semi-attractor at a boundary crisis and on distinct parts of a "large" chaotic attractor near an interior crisis. In the former case we find no significant difference between λ1 in the pre-crisis regime and the partial Lyapunov exponent describing transient chaotic motions slightly above the crisis. For the latter case we give a quantitative description of the drastic increase of λ1. Moreover, a formula which connects the critical exponent of a chaotic transient above a boundary crisis with a pointwise dimension is derived.
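
    One common parametrization of the dissipative standard map is p' = b·p + K·sin θ, θ' = θ + p', where b = 1 recovers the area-preserving Chirikov map; the paper's exact form may differ. Under that assumption, λ1 can be estimated by iterating the tangent map and renormalizing the tangent vector:

```python
import math

def largest_lyapunov(K, b, theta=0.5, p=0.3, transient=1000, iters=20000):
    """Estimate the largest Lyapunov exponent of the dissipative standard map
        p' = b*p + K*sin(theta),  theta' = theta + p'   (theta mod 2*pi),
    by iterating the tangent map along the orbit and renormalizing."""
    for _ in range(transient):                 # settle onto the attractor
        p = b * p + K * math.sin(theta)
        theta = (theta + p) % (2.0 * math.pi)
    v_theta, v_p = 1.0, 0.0                    # tangent vector
    log_sum = 0.0
    for _ in range(iters):
        c = K * math.cos(theta)                # evaluated before the update
        p = b * p + K * math.sin(theta)
        theta = (theta + p) % (2.0 * math.pi)
        # Jacobian: d(theta', p')/d(theta, p) = [[1 + c, b], [c, b]]
        v_theta, v_p = (1.0 + c) * v_theta + b * v_p, c * v_theta + b * v_p
        norm = math.hypot(v_theta, v_p)
        log_sum += math.log(norm)
        v_theta, v_p = v_theta / norm, v_p / norm
    return log_sum / iters
```

    For b = 1 and large K a generic orbit is strongly chaotic (λ1 ≈ ln(K/2)), while for K = 0 the map is integrable and the estimate goes to zero; crisis studies like the one above sweep K (or b) and watch λ1 jump.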

  15. The legitimacy of transnational private governance arrangements related to nanotechnologies: the case of international organization for standardization

    NARCIS (Netherlands)

    Kica, Evisa

    2015-01-01

    The core of this thesis consists of developing a comprehensive empirical assessment on the legitimacy of nanotechnology related transnational private governance arrangements (TPGAs), explored through the case study of the International Organization for Standardization (ISO) Technical Committee on

  16. Strategies for using international domain standards within a national context: The case of the Dutch temporary staffing industry

    NARCIS (Netherlands)

    Folmer, Erwin Johan Albert; van Bekkum, Michael; Verhoosel, Jack

    2009-01-01

    This paper will discuss strategies for using international domain standards within a national context. The various strategies are illustrated by means of a case study of the temporary staffing industry.

  17. The Solubility Parameters of Ionic Liquids

    Science.gov (United States)

    Marciniak, Andrzej

    2010-01-01

    The Hildebrand solubility parameters have been calculated for 18 ionic liquids from inverse gas chromatography measurements of the activity coefficients at infinite dilution. Retention data were used for the calculation. The solubility parameters are helpful for predicting solubility in binary solvent mixtures. From the solubility parameters, the standard enthalpies of vaporization of the ionic liquids were estimated. PMID:20559495
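
    The link between the two quantities in the last sentence is the standard thermodynamic relation δ = sqrt((ΔHvap − RT)/Vm), which can be inverted in either direction. A minimal sketch, evaluated with illustrative handbook values for water at 25 °C (an assumption, not data from the paper):

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)

def hildebrand_delta(delta_h_vap, molar_volume, temperature=298.15):
    """Hildebrand solubility parameter (MPa^0.5) from the molar enthalpy of
    vaporization (J/mol) and molar volume (m^3/mol):
        delta = sqrt((dHvap - R*T) / Vm).
    The /1000 converts Pa^0.5 to MPa^0.5."""
    return math.sqrt((delta_h_vap - R * temperature) / molar_volume) / 1000.0

def enthalpy_of_vaporization(delta_mpa, molar_volume, temperature=298.15):
    """Inverse relation: dHvap = delta^2 * Vm + R*T, with delta in MPa^0.5."""
    return (delta_mpa * 1000.0) ** 2 * molar_volume + R * temperature
```

    With ΔHvap ≈ 40.65 kJ/mol and Vm ≈ 18.07 cm³/mol for water, this gives δ ≈ 46 MPa^0.5, close to the commonly quoted value; the paper runs the inverse direction, from δ to ΔHvap, for the ionic liquids.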

  19. IEEE C37.98-1987: IEEE standard seismic testing of relays

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This standard specifies the procedures to be used in the seismic testing of relays used in power system facilities. The standard is concerned with the determination of the seismic fragility level of relays and also gives recommendations for proof testing. The purpose of this standard is to establish procedures for determining the seismic capabilities of protective and auxiliary relays. These procedures employ what has been called fragility testing in IEEE Std 344-1987. To define the conditions for fragility testing of relays, parameters in three separate areas must be specified. In general, they are (1) the electrical settings and inputs to the relay, and other information to define its conditions during the test; (2) the change in state, deviation in operating characteristics or tolerances, or other change of performance of the relay that constitutes failure; (3) the seismic vibration environment to be imposed during the test. Since it is not possible to define the conditions for every conceivable application for all relays, those parameters, which in practice encompass the majority of applications, have been specified in this standard. When the application of the relay is other than as specified under any of (1), (2), and (3), or if it is not practical to apply existing results of fragility tests to that new application, then proof testing must be performed for that new case

  20. IEEE C37.98-1978: IEEE standard seismic testing of relays

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This standard specifies the procedures to be used in the seismic testing of relays used in power system facilities. The standard is concerned with the determination of the seismic fragility level of relays and also gives recommendations for proof testing. The purpose of this standard is to establish procedures for determining the seismic capabilities of protective and auxiliary relays. These procedures employ what has been called fragility testing in ANSI/IEEE Std 344-1975, Recommended Practices for Seismic Qualification of Class 1E Equipment for Nuclear Power Generating Stations. In order to define the conditions for fragility testing of relays, parameters in three separate areas must be specified. In general they are: (1) the electrical settings and inputs to the relay, and other information to define its conditions during the test; (2) the change in state, deviation in operating characteristics or tolerances, or other change of performance of the relay which constitutes failure; (3) the seismic vibration environment to be imposed during the test. Since it is not possible to define the conditions for every conceivable application for all relays, those parameters, which in practice encompass the majority of applications, have been specified in this standard. When the application of the relay is other than as specified under any of (1), (2), and (3), or if it is not practical to apply existing results of fragility tests to that new case

  1. Measurement of specific parameters for dose calculation after inhalation of aerosols containing transuranium elements

    International Nuclear Information System (INIS)

    Ramounet-le Gall, B.; Fritsch, P.; Abram, M.C.; Rateau, G.; Grillon, G.; Guillet, K.; Baude, S.; Berard, P.; Ansoborlo, E.; Delforge, J.

    2002-01-01

    A review on specific parameter measurements to calculate doses per unit of incorporation according to recommendations of the International Commission on Radiological Protection has been performed for inhaled actinide oxides. Alpha activity distribution of the particles can be obtained by autoradiography analysis using aerosol sampling filters at the workplace. This allows us to characterize granulometric parameters of 'pure' actinide oxides, but complementary analysis by scanning electron microscopy is needed for complex aerosols. Dissolution parameters with their standard deviation are obtained after rat inhalation exposure, taking into account both mechanical lung clearance and actinide transfer to the blood estimated from bone retention. In vitro experiments suggest that the slow dissolution rate might decrease as a function of time following exposure. Dose calculation software packages have been developed to take into account granulometry and dissolution parameters as well as specific physiological parameters of exposed individuals. In the case of poorly soluble actinide oxides, granulometry and physiology appear as the main parameters controlling dose value, whereas dissolution only alters dose distribution. Validation of these software packages is in progress. (author)
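
The clearance behaviour described above can be sketched as a pair of first-order rate processes: mechanical lung clearance and dissolution compete for the deposited activity. The rate constants below are purely illustrative placeholders, not the paper's measured values:

```python
import math

def lung_retention(a0, lambda_mech, lambda_diss, t):
    """Activity (Bq) remaining in the lung at time t (days), assuming
    first-order removal by both mechanical clearance and dissolution."""
    return a0 * math.exp(-(lambda_mech + lambda_diss) * t)

def fraction_to_blood(lambda_mech, lambda_diss, t):
    """Cumulative fraction of the initial deposit dissolved and
    transferred to blood by time t (days)."""
    lam = lambda_mech + lambda_diss
    return (lambda_diss / lam) * (1.0 - math.exp(-lam * t))

# Illustrative rates for a slowly dissolving actinide oxide
a0 = 1000.0      # Bq deposited
lam_m = 0.01     # per day, mechanical clearance (assumed)
lam_d = 0.0005   # per day, dissolution (assumed)
retained = lung_retention(a0, lam_m, lam_d, 100)
to_blood = fraction_to_blood(lam_m, lam_d, 100)
```

With these assumed rates, most of the removal after 100 days is mechanical, and only a few percent of the dissolved fraction has reached the blood, which is why dissolution mainly redistributes dose rather than controlling its magnitude.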

  2. 106-17 Telemetry Standards Front Matter

    Science.gov (United States)

    2017-07-01

    Table-of-contents excerpt: … Frequency Division Multiplexing Telemetry Standards; Chapter 4: Pulse Code Modulation Standards; Chapter 5: Digitized Audio Telemetry Standard; Chapter 6: … Transfer Standard (Chapter 9, Appendix 9-A); Appendix I, Telemetry Attributes Transfer Standard Cover Sheet (Chapter 9, Appendix 9-B); Telemetry Standards … Derived Parameter Specification (Chapter 9, Appendix 9-E); Appendix Q, Extended Binary Golay Code (Chapter 7, Appendix 7-A); Appendix R, Low-Density Parity

  3. Metabolic management of glioblastoma multiforme using standard therapy together with a restricted ketogenic diet: Case Report

    Directory of Open Access Journals (Sweden)

    Servadei Franco

    2010-04-01

    Abstract. Background: Management of glioblastoma multiforme (GBM) has been difficult using standard therapy (radiation with temozolomide chemotherapy). The ketogenic diet is used commonly to treat refractory epilepsy in children and, when administered in restricted amounts, can also target energy metabolism in brain tumors. We report the case of a 65-year-old woman who presented with progressive memory loss, chronic headaches, nausea, and a right hemisphere multi-centric tumor seen with magnetic resonance imaging (MRI). Following incomplete surgical resection, the patient was diagnosed with glioblastoma multiforme expressing hypermethylation of the MGMT gene promoter. Methods: Prior to initiation of the standard therapy, the patient conducted water-only therapeutic fasting and a restricted 4:1 (fat : carbohydrate + protein) ketogenic diet that delivered about 600 kcal/day. The patient also received the restricted ketogenic diet concomitantly during the standard treatment period. The diet was supplemented with vitamins and minerals. Steroid medication (dexamethasone) was removed during the course of the treatment. The patient was followed using MRI and positron emission tomography with fluoro-deoxy-glucose (FDG-PET). Results: After two months of treatment, the patient's body weight was reduced by about 20% and no discernible brain tumor tissue was detected using either FDG-PET or MRI imaging. Biomarker changes showed reduced levels of blood glucose and elevated levels of urinary ketones. MRI evidence of tumor recurrence was found 10 weeks after suspension of strict diet therapy. Conclusion: This is the first report of confirmed GBM treated with standard therapy together with a restricted ketogenic diet. As rapid regression of GBM is rare in older patients following incomplete surgical resection and standard therapy alone, the response observed in this case could result in part from the action of the calorie-restricted ketogenic diet. Further studies are needed.

  4. Report About a New Standard for Radiation Protection Training of Intervention Persons in the Case of Radiological Emergency Situations

    International Nuclear Information System (INIS)

    Geringer, T.; Steurer, A.; Schmitzer, C.

    2004-01-01

    In autumn 2003 the Austrian standard OENORM S 5207, entitled "Radiation protection training of intervention persons in the case of radiological emergency situations", will be published. The standard is directed to persons who have to intervene in case of a radiological emergency, to security forces, and to training centres. The standard has to fulfil three objectives: 1. Regulation of the minimum requirements for the radiation protection training and education of intervention persons. 2. Harmonization of the radiation protection training of different security forces, for instance the Austrian army, Red Cross Austria, Fire Department, Police Department. 3. Mutual recognition of parts of the education between the different security forces. To fulfil these aims the standard is structured in different education modules. If, for instance, a person attended a special training module at the Austrian military, this part of the education is also valid for a career at the Fire Department. Further, the modular structure of the education gives persons of a special security force the possibility to attend one or more modules at another security force. This will lead to improved cooperation between the different security forces in case of a radiological emergency situation. The education is structured in four levels. The topics of the standard are: 1. Requirements for training centres 2. Guidelines for the examinations of the candidates 3. Topics and goals of the basic education 4. Topics and goals of the advanced education level one 5. Topics and goals of the advanced education level two 6. Topics and examples of specialised education 7. Obligatory further education once every year. (Author)

  5. Standard Errors for National Trends in International Large-Scale Assessments in the Case of Cross-National Differential Item Functioning

    Science.gov (United States)

    Sachse, Karoline A.; Haag, Nicole

    2017-01-01

    Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…

  6. A medical-legal review regarding the standard of care for epidural injections, with particular reference to a closed case.

    Science.gov (United States)

    Helm, Standiford; Glaser, Scott; Falco, Frank; Henry, Brian

    2010-01-01

    Interventional pain management is an evolving field, with a primary focus on the safety of the patient. One major source of risk to patients is intraarterial or intraneural injections. Interventional pain physicians have considerable interest in identifying techniques which avoid these complications. A recent article has reviewed complications associated with interventional procedures and concluded that the complications were due to deviation from a specific prescribed protocol. One of the cases reviewed went to jury trial and the record of that case is in the public domain. Two of the authors of the recent review were expert witnesses in the trial. They provided conflicting testimony as to alleged violations of the standard of care. Their criticisms also differed from a third criticism contained in the article as well as the protocol being advocated in the article, thus contravening the claim that there is one prescribed protocol which must be followed. The definition of standard of care varies amongst jurisdictions, but is generally defined as either that care which a reasonably well-trained physician in that specialty would provide under similar circumstances or as what would constitute reasonable medical care under the circumstances presented. Analysis of the case which went to trial indicates that there is not one prescribed protocol which must be followed; the definition of standard of care is broader than that. Interventional pain management is an evolving field and the standard of care is broadly defined.

  7. Normal standards for kidney length as measured with US in premature infants

    International Nuclear Information System (INIS)

    Schlesinger, A.E.; Hedlund, G.L.; Pierson, W.P.; Null, D.M.

    1986-01-01

    In order to develop normal standards for kidney length in premature infants, the authors measured kidney length by US imaging in 39 (to date) premature infants less than 72 hours old and without known renal disease. Kidney length was compared with four different parameters of body size, including gestational age, birth weight, birth length, and body surface area. Similar standards have been generated previously for normal renal length as measured by US imaging in full-term infants and older children. These standards have proven utility in cases of congenital and acquired disorders that abnormally increase or decrease renal size. Scatter plots of kidney length versus body weight and kidney length versus body surface area conformed well to a logarithmic distribution, with a high correlation coefficient and close-fitting 95% confidence limits (SEE = 2.05)
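
A logarithmic fit of the kind described, kidney length against a body-size parameter with approximate 95% limits from the standard error of estimate, can be sketched in a few lines. The data points below are invented for illustration, not the study's measurements:

```python
import math

def fit_log(xs, ys):
    """Least-squares fit of y = a + b*ln(x); returns (a, b, see),
    where see is the standard error of estimate."""
    lx = [math.log(x) for x in xs]
    n = len(xs)
    mx = sum(lx) / n
    my = sum(ys) / n
    sxy = sum((u - mx) * (v - my) for u, v in zip(lx, ys))
    sxx = sum((u - mx) ** 2 for u in lx)
    b = sxy / sxx
    a = my - b * mx
    resid = [v - (a + b * u) for u, v in zip(lx, ys)]
    see = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return a, b, see

# Hypothetical data: birth weight (g) vs kidney length (mm)
weights = [800, 1000, 1200, 1500, 1800, 2200, 2600, 3000]
lengths = [32, 34, 36, 38, 40, 42, 44, 45]
a, b, see = fit_log(weights, lengths)
pred = a + b * math.log(1500)           # predicted length at 1500 g
low, high = pred - 1.96 * see, pred + 1.96 * see  # approximate 95% limits
```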

  8. Using standardized patients versus video cases for representing clinical problems in problem-based learning.

    Science.gov (United States)

    Yoon, Bo Young; Choi, Ikseon; Choi, Seokjin; Kim, Tae-Hee; Roh, Hyerin; Rhee, Byoung Doo; Lee, Jong-Tae

    2016-06-01

    The quality of problem representation is critical for developing students' problem-solving abilities in problem-based learning (PBL). This study investigates preclinical students' experience with standardized patients (SPs) as a problem representation method compared to using video cases in PBL. A cohort of 99 second-year preclinical students from Inje University College of Medicine (IUCM) responded to a Likert scale questionnaire on their learning experiences after they had experienced both video cases and SPs in PBL. The questionnaire consisted of 14 items with eight subcategories: problem identification, hypothesis generation, motivation, collaborative learning, reflective thinking, authenticity, patient-doctor communication, and attitude toward patients. The results reveal that using SPs led to the preclinical students having significantly positive experiences in boosting patient-doctor communication skills; the perceived authenticity of their clinical situations; development of proper attitudes toward patients; and motivation, reflective thinking, and collaborative learning when compared to using video cases. The SPs also provided more challenges than the video cases during problem identification and hypotheses generation. SPs are more effective than video cases in delivering higher levels of authenticity in clinical problems for PBL. The interaction with SPs engages preclinical students in deeper thinking and discussion; growth of communication skills; development of proper attitudes toward patients; and motivation. Considering the higher cost of SPs compared with video cases, SPs could be used most advantageously during the preclinical period in the IUCM curriculum.

  9. Design and analysis of control charts for standard deviation with estimated parameters

    NARCIS (Netherlands)

    Schoonhoven, M.; Riaz, M.; Does, R.J.M.M.

    2011-01-01

    This paper concerns the design and analysis of the standard deviation control chart with estimated limits. We consider an extensive range of statistics to estimate the in-control standard deviation (Phase I) and design the control chart for real-time process monitoring (Phase II) by determining the
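
As a rough sketch of the two-phase design the abstract describes, the snippet below estimates the in-control standard deviation from Phase I subgroups and derives Phase II limits for an S chart. The pooled-average estimator, 3-sigma width, and c4 unbiasing constant are textbook defaults, not necessarily the estimators studied in the paper:

```python
import math
import statistics

def c4(n):
    """Unbiasing constant for the sample standard deviation."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def s_chart_limits(phase1_samples, n, z=3.0):
    """Estimate sigma from Phase I subgroups (average of subgroup
    standard deviations) and return (LCL, CL, UCL) for a Phase II S chart."""
    s_bar = statistics.mean(statistics.stdev(s) for s in phase1_samples)
    sigma_hat = s_bar / c4(n)
    cl = c4(n) * sigma_hat
    width = z * sigma_hat * math.sqrt(1 - c4(n) ** 2)
    return max(0.0, cl - width), cl, cl + width

# Illustrative Phase I data: three subgroups of size 5
phase1 = [[10.1, 9.8, 10.0, 10.2, 9.9],
          [10.0, 10.3, 9.7, 10.1, 10.0],
          [9.9, 10.0, 10.2, 9.8, 10.1]]
lcl, cl, ucl = s_chart_limits(phase1, n=5)
```

In practice many more Phase I subgroups would be used; the paper's point is precisely that the choice of Phase I estimator changes the Phase II performance.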

  10. Obsolete Laws: Economic and Moral Aspects, Case Study-Composting Standards.

    Science.gov (United States)

    Vochozka, Marek; Maroušková, Anna; Šuleř, Petr

    2017-12-01

    From the early days of philosophy, ethics and justice, there is wide consensus that the constancy of the laws establishes the legal system. On the other hand, the rate at which we accumulate knowledge is gaining speed like never before. Due to the recently increased attention of academics to climate change and other environmental issues, a lot of new knowledge has been obtained about carbon management, its role in nature and mechanisms regarding the formation and degradation of organic matter. A multidisciplinary techno-economic assessment of current composting standards and laws that took into account the current state of knowledge about carbon management was carried out as a case study. Economic and environmental damage caused by outdated laws was revealed. In addition, it was found that the introduction of the best composts into the market is permitted, causing additional negative environmental as well as economic impacts.

  11. Quantitative micro x-ray fluorescence analyses without reference standard material; Referenzprobenfreie quantitative Mikro-Roentgenfluoreszenzanalyse

    Energy Technology Data Exchange (ETDEWEB)

    Wolff, Timo

    2009-07-15

    X-ray fluorescence analysis (XRF) is a standard method for non-destructive investigations. Due to the development of polycapillary optics and SDD detectors requiring no cooling with liquid nitrogen, XRF has become a suitable method for a large number of applications, e.g. for the analysis of objects in arts and archaeology. Spectrometers developed for those purposes allow investigations outside of laboratories and provide excitation areas with diameters of 10-70 µm. In most applications, quantification of XRF data is realized by the usage of standard reference materials. Due to absorption processes in the samples, the accuracy of the results depends strongly on the similarity of the sample and the reference standard. In cases where no suitable references are available, quantification can be done based on the "fundamental parameter (fp) method". This quantification procedure is based on a set of equations describing the fluorescence production and detection mathematically. The cross sections for the interaction of x-rays with matter can be taken from different databases. During an iteration process the element concentrations can be determined. Quantitative XRF based on fundamental parameters requires an accurate knowledge of the excitation spectrum. In the case of a conventional setup this spectrum is given by the X-ray tube spectrum and can be calculated. The use of polycapillary optics in micro-XRF spectrometers changes the spectral distribution of the excitation radiation. For this reason it is necessary to assess the transmission function of the optic used. The aim of this work is to find a procedure to describe this function for routine quantification based on fundamental parameters. Most of the measurements have been carried out using a commercial spectrometer developed for applications in arts and archaeology. On the one hand, the parameters of the lens used in the spectrometer have been investigated by different experimental characterization

  12. Strategy for a consistent selection of radionuclide migration parameters for the Belgian safety and feasibility case-1

    International Nuclear Information System (INIS)

    Bruggeman, C.; Maes, N.; Salah, S.; Brassinnes, S.; Van Geet, M.

    2010-01-01

    Document available in extended abstract form only. The purpose of this presentation is to describe the strategy for the selection of retention and migration parameters for safety-relevant nuclides that was developed in the framework of the Belgian Safety and Feasibility Case SFC-1. A geochemical database containing state-of-the-art retention and migration parameters of all safety-relevant radionuclides is ideally based on a thermodynamic understanding and an ability to accurately describe the geochemical and transport behaviour of all these radionuclides under the geochemical conditions that are considered for a reference host formation. In Belgium, this reference formation is Boom Clay. The parameters will be used in Performance Assessment (PA) calculations, and therefore must also be adapted to PA models. Since these models currently use only four parameters for every radionuclide, the whole geochemical and transport behaviour must be condensed into a very limited parameter set describing, on the one hand, chemical retention within the Boom Clay formation and, on the other hand, transport through the Boom Clay formation. Chemical retention involves two concepts: 1) a concentration limit (S), which represents the mobile concentration of a nuclide present in the aqueous phase under undisturbed far-field Boom Clay conditions; 2) a retardation factor (R/Kd), which represents the uptake of a mobile nuclide by the inorganic and organic phases present in the Boom Clay formation. For mobility/migration two additional concepts are introduced: 3) the diffusion-accessible porosity (η), which is the total physical space available for transport of a nuclide; the maximum value of η is limited by the water content of the formation; 4) the pore diffusion coefficient (Dp), which represents the transport velocity of a nuclide in a diffusion-dominated system. Within the framework of SFC-1, primary focus is laid on the compilation of parameter ranges, instead of individual '
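
The four-parameter description lends itself to a simple transport sketch: under sorption-retarded diffusion the apparent diffusivity is Da = Dp/R, and for a constant-concentration boundary on a semi-infinite clay layer the profile follows a complementary error function. The parameter values below are illustrative placeholders, not SFC-1 data:

```python
import math

def apparent_diffusivity(dp, retardation):
    """Da = Dp / R: pore diffusion slowed by sorption onto the clay."""
    return dp / retardation

def concentration(x, t, c0, dp, retardation):
    """Relative concentration at depth x (m) after time t (s), for a
    constant-concentration boundary in a semi-infinite medium:
    C/C0 = erfc(x / (2*sqrt(Da*t)))."""
    da = apparent_diffusivity(dp, retardation)
    return c0 * math.erfc(x / (2.0 * math.sqrt(da * t)))

# Illustrative values: Dp = 2e-10 m^2/s, R = 100
dp, R = 2e-10, 100.0
year = 3.156e7  # seconds per year
c = concentration(1.0, 1e4 * year, 1.0, dp, R)  # 1 m into the clay after 10,000 years
```

The concentration limit S would cap the source term c0, and the porosity η enters when converting pore-water concentrations to fluxes; both are omitted here for brevity.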

  13. Geological constraints for muon tomography: The world beyond standard rock

    Science.gov (United States)

    Lechmann, Alessandro; Mair, David; Ariga, Akitaka; Ariga, Tomoko; Ereditato, Antonio; Käser, Samuel; Nishiyama, Ryuichi; Scampoli, Paola; Vladymyrov, Mykhailo; Schlunegger, Fritz

    2017-04-01

    In present-day muon tomography practice, one often encounters an experimental setup in which muons propagate several tens to a few hundreds of meters through a material to the detector. The goal of such an undertaking is usually centred on an attempt to make inferences from the measured muon flux to an anticipated subsurface structure. This can either be an underground interface geometry or a spatial material distribution. Inferences in this direction have until now mostly been done using the so-called "standard rock" approximation, which rests on a set of empirically determined parameters from several rocks found in the vicinity of physicists' laboratories. While this approach is reasonable to account for the effects of the tens of meters of soil/rock around a particle accelerator, we show that, for material thicknesses beyond that dimension, the elementary composition of the material (average atomic weight and atomic number) has a noticeable effect on the measured muon flux. Accordingly, the continued use of this approximation could potentially lead to a serious model bias, which in turn might invalidate any tomographic inference based on the standard rock approximation. The parameters for standard rock are naturally close to a granitic (SiO2-rich) composition and thus can be safely used in such environments. As geophysical surveys are not restricted to any particular lithology, we investigated the effect of alternative rock compositions (carbonatic, basaltic and even ultramafic) and consequently prefer to replace the standard rock approach with a dedicated geological investigation. Structural field data and laboratory measurements of density (He-pycnometer) and composition (XRD) can be merged into an integrative geological model that can be used as an a priori constraint for the rock parameters of interest (density and composition) in the geophysical inversion. Modelling results show that when facing a non-granitic lithology the measured muon
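
The composition effect can be illustrated by computing the mean Z/A of a mixture, which controls ionization losses per unit mass, for a few end-member compositions. The mass fractions below are approximate stoichiometric values, and hematite is included only as an iron-rich extreme to exaggerate the contrast:

```python
# (Z, A) for the elements involved
ELEMENTS = {"H": (1, 1.008), "C": (6, 12.011), "O": (8, 15.999),
            "Mg": (12, 24.305), "Si": (14, 28.086),
            "Ca": (20, 40.078), "Fe": (26, 55.845)}

def mean_z_over_a(mass_fractions):
    """<Z/A> of a mixture, from elemental mass fractions."""
    return sum(w * ELEMENTS[el][0] / ELEMENTS[el][1]
               for el, w in mass_fractions.items())

# Approximate elemental mass fractions of three end-member materials
quartz   = {"Si": 0.467, "O": 0.533}              # SiO2, "standard rock"-like
calcite  = {"Ca": 0.400, "C": 0.120, "O": 0.480}  # CaCO3, carbonatic
hematite = {"Fe": 0.699, "O": 0.301}              # Fe2O3, iron-rich extreme
```

Light-element rocks all sit near Z/A = 0.5, while heavy-element (e.g. iron-rich) material drops noticeably below it, which is one ingredient of the lithology dependence the abstract describes; density and <Z^2/A> (radiative losses) add further contrast.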

  14. Isolating DNA from sexual assault cases: a comparison of standard methods with a nuclease-based approach

    Science.gov (United States)

    2012-01-01

    Background Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. Large amounts of the victim’s epithelial cells contaminate the sperm present on swabs, however, and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim’s DNA, and to then physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim’s fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim’s fraction, and then digest the residual victim’s DNA with a nuclease. Methods The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons have also been performed on timed post-coital vaginal swabs and evidence collected from sexual assault cases. Results For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern of Erase providing superior profiles. Conclusions In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods. PMID:23211019

  15. Study of electroweak parameters at LEP

    International Nuclear Information System (INIS)

    Blum, W.

    1991-10-01

    The measurements of the line shape and asymmetry parameters of the Z0 in its leptonic and hadronic decays are reviewed. Progress is reported about a considerable increase in measurement accuracy. Several tests of the Standard Model confirm it to better than one per cent. New values for the effective mixing parameter are derived from the line shape parameters averaged over the four LEP experiments. The corresponding limits on the top mass are presented. (orig.)

  16. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. Highlights: A method for standardless quantification in EPMA is presented. It gives better results than the commercial software GENESIS Spectrum. It gives better results than the software DTSA. It allows the determination of the conductive coating thickness. It gives an estimation of the concentration uncertainties.
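
The core idea, minimizing the quadratic difference between a measured spectrum and an analytical model with respect to the model parameters, can be sketched for the simplest case: a single characteristic line on a known linear background, where the least-squares amplitude has a closed form. This is a toy illustration, not the POEMA algorithm:

```python
import math

def gaussian(e, mu, sigma):
    """Unit-height Gaussian line profile at energy e."""
    return math.exp(-0.5 * ((e - mu) / sigma) ** 2)

def fit_peak_amplitude(energies, counts, mu, sigma, b0, b1):
    """Least-squares amplitude of one line on a known linear background:
    minimizing sum (counts - b0 - b1*E - a*g(E))^2 over a gives the
    closed form a = sum(r*g) / sum(g*g), with r the background-subtracted
    residual."""
    num = den = 0.0
    for e, y in zip(energies, counts):
        g = gaussian(e, mu, sigma)
        r = y - (b0 + b1 * e)
        num += r * g
        den += g * g
    return num / den

# Synthetic spectrum: background 5 + 0.1*E plus a Cu K-alpha-like peak
mu, sigma, true_a = 8.04, 0.08, 120.0
energies = [6.0 + 0.01 * i for i in range(400)]   # 6-10 keV grid
counts = [5 + 0.1 * e + true_a * gaussian(e, mu, sigma) for e in energies]
a_hat = fit_peak_amplitude(energies, counts, mu, sigma, 5.0, 0.1)
```

In a full standardless scheme the peak positions, widths, background, and the physics relating amplitudes to concentrations are all part of the optimized parameter set, which makes the problem nonlinear and iterative.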

  17. Standardized uptake value in pediatric patients: an investigation to determine the optimum measurement parameter

    International Nuclear Information System (INIS)

    Yeung, H.W.; Squire, O.D.; Larson, S.M.; Erdi, Y.E.; Sanches, A.; Macapinlac, H.A.

    2002-01-01

    Although the standardized uptake value (SUV) is currently used in fluorine-18 fluorodeoxyglucose positron emission tomography (FDG-PET) imaging, concerns have been raised over its accuracy and clinical relevance. Dependence of the SUV on body weight has been observed in adults and this should be of concern in the pediatric population, since there are significant body changes during childhood. The aim of the present study was to compare SUV measurements based on body weight, body surface area and lean body mass in the pediatric population and to determine a more reliable parameter across all ages. Sixty-eight pediatric FDG-PET studies were evaluated. Age ranged from 2 to 17 years and weight from 11 to 77 kg. Regions of interest were drawn at the liver for physiologic comparison and at FDG-avid malignant lesions. SUV based on body weight (SUV_bw) varied across different weights, a phenomenon less evident when body surface area (SUV_bsa) normalization is applied. Lean body mass-based SUV (SUV_lbm) also showed a positive correlation with weight, which again was less evident when normalized to bsa (SUV_bsa-lbm). The measured liver SUV_bw was 1.1±0.3, a much lower value than in our adult population (1.9±0.3). The liver SUV_bsa was 7.3±1.3. The tumor sites had an SUV_bw of 4.0±2.7 and an SUV_bsa of 25.9±15.4 (65% of the patients had neuroblastoma). The bsa-based SUVs were more constant across the pediatric ages and were less dependent on body weight than the SUV_bw. These results indicate that SUV calculated on the basis of body surface area is a more uniform parameter than SUV based on body weight in pediatric patients and is probably the most appropriate approach for the follow-up of these patients. (orig.)
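
The competing normalizations can be sketched as follows. The Du Bois formula is one common choice for body surface area; note that unit conventions for BSA-normalized SUV vary between studies (this sketch expresses BSA in cm² purely for illustration), so the absolute numbers are hypothetical and not comparable to the values reported above:

```python
def suv_bw(conc_bq_ml, dose_bq, weight_kg):
    """SUV normalized to body weight: tissue concentration divided by
    injected dose per gram of body weight (1 mL of tissue ~ 1 g)."""
    return conc_bq_ml / (dose_bq / (weight_kg * 1000.0))

def bsa_dubois(weight_kg, height_cm):
    """Du Bois body surface area (m^2), one common BSA formula."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def suv_bsa(conc_bq_ml, dose_bq, weight_kg, height_cm):
    """SUV normalized to body surface area; BSA expressed in cm^2 here,
    an arbitrary unit convention chosen for this sketch."""
    bsa_cm2 = bsa_dubois(weight_kg, height_cm) * 1e4
    return conc_bq_ml / (dose_bq / bsa_cm2)

# Hypothetical numbers: 4 kBq/mL lesion, 150 MBq injected, 30 kg, 130 cm
s_bw = suv_bw(4000.0, 150e6, 30.0)
s_bsa = suv_bsa(4000.0, 150e6, 30.0, 130.0)
```

Because BSA grows more slowly than weight during childhood, dividing the dose by BSA rather than weight damps the weight dependence, which is the effect the study reports.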

  18. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  19. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. This algorithm is applied to a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
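
For reference, the standard (bootstrap) particle filter that serves as the baseline here can be sketched on the classic 1-D nonlinear benchmark model; the paper's steepest-descent proposal is not reproduced, and the model and noise levels below are the commonly used textbook choices, not necessarily the paper's exact setup:

```python
import math
import random

random.seed(0)

def transition(x, k):
    """Classic 1-D benchmark dynamics (deterministic part)."""
    return 0.5 * x + 25 * x / (1 + x * x) + 8 * math.cos(1.2 * k)

def bootstrap_pf(ys, n_particles=500, q=math.sqrt(10.0), r=1.0):
    """Bootstrap particle filter for:
    x_k = f(x_{k-1}, k) + N(0, q^2),  y_k = x_k^2 / 20 + N(0, r^2).
    The prior transition is used as the proposal (standard PF)."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for k, y in enumerate(ys, start=1):
        # propagate through the dynamics (proposal = prior)
        particles = [transition(x, k) + random.gauss(0.0, q) for x in particles]
        # weight by the measurement likelihood
        weights = [math.exp(-0.5 * ((y - x * x / 20.0) / r) ** 2) for x in particles]
        total = sum(weights)
        if total == 0.0:  # guard against weight underflow
            weights = [1.0 / n_particles] * n_particles
        else:
            weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

# simulate a short trajectory and filter it
xs, ys = [], []
x = 0.1
for k in range(1, 21):
    x = transition(x, k) + random.gauss(0.0, math.sqrt(10.0))
    xs.append(x)
    ys.append(x * x / 20.0 + random.gauss(0.0, 1.0))
est = bootstrap_pf(ys)
```

The optimization-based variant replaces the prior proposal with one shifted toward the measurement by steepest-descent steps, which concentrates particles in high-likelihood regions before weighting.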

  20. An Analysis of Anthropometric Indicators and Modifiable Lifestyle Parameters Associated with Hypertensive Nephropathy

    Directory of Open Access Journals (Sweden)

    Christiana Aryee

    2016-01-01

    The surge in prevalence of chronic noncommunicable diseases like hypertension and chronic kidney disease has been linked with modifiable lifestyle practices and increased body fat. This study sought to compare the association between different modifiable lifestyle practices, adiposity indices, renal function parameters, and hypertension, as well as the predictive implications for levels of these parameters in target cardiac organ damage, among an urban Ghanaian hypertensive population. Using a hospital-based case-control study design, 241 Ghanaian indigenes from the Kumasi metropolis were recruited for this study. The case group was made up of 180 hypertensives, and 61 normotensives served as controls. In addition to sociodemographic data, standard haemodynamic, anthropometric, renal function, and cardiac organ damage assessments were done. The prevalence of chronic kidney disease (CKD) ranged from 13.3% to 16.6% depending on the equation used in estimating the glomerular filtration rate (eGFR). Percentage cluster distribution by chronic kidney disease was observed to be significantly tilted toward the upper quartiles (3rd and 4th) of the haemodynamic parameters measured. Chronic kidney disease was significantly higher among self-reported smokers and alcoholic hypertensives. In this urban population, adiposity was associated with hypertension and renal insufficiency. Chronic kidney disease was associated with hypertension and cardiac abnormalities.

  1. Effects of non-standard interactions in the MINOS experiment

    International Nuclear Information System (INIS)

    Blennow, Mattias; Ohlsson, Tommy; Skrotzki, Julian

    2008-01-01

    We investigate the effects of non-standard interactions on the determination of the neutrino oscillation parameters Δm^2_31, θ_23, and θ_13 in the MINOS experiment. We show that adding non-standard interactions to the analysis leads to an extension of the allowed parameter space to larger values of Δm^2_31 and smaller θ_23, and basically removes all predictability for θ_13. In addition, we discuss the sensitivities of the MINOS experiment alone to the non-standard interaction parameters. In particular, we examine the degeneracy between θ_13 and the non-standard interaction parameter ε_eτ. We find that this degeneracy is responsible for the removal of the θ_13 predictability and that the possible bound on |ε_eτ| is competitive with direct bounds only if a more stringent external bound on θ_13 is applied.

  2. Removal of foot-and-mouth disease virus infectivity in salted natural casings by minor adaptation of standardized industrial procedures.

    Science.gov (United States)

    Wijnker, J J; Haas, B; Berends, B R

    2007-04-10

    Intestines are used for the production of natural casings as edible sausage containers. Derived from animals (pigs and sheep) experimentally infected with FMDV (initial dose 10^7.3 PFU/ml, strain O₁ Kaufbeuren), these natural casings were treated with sodium chloride or a phosphate salts/sodium chloride mixture, and the residual FMDV titres were measured. After storage at about 20 °C, no remaining infectivity was found after either treatment, whereas casings stored at 4 °C still contained infectivity. Storage of salted casings at about 20 °C for 30 days is already part of the Standard Operating Procedures (included in HACCP) of the international casing industry and can therefore be considered a protective measure for the international trade in natural casings.

  3. Setting Standards: Three Tales and a Proposition

    DEFF Research Database (Denmark)

    Knudsen, Mette Præst; Nielsen, Jørgen Ulff-Møller; Pedersen, Kurt

    2001-01-01

    Technological development has a tendency to be reflected in standards. In some cases, the market is the vehicle for defining standards as the outcome of a competitive battle between (groups of) firms. In other cases, authorities will attempt to influence the technology process - and the standards. The background may be industry policy in combination with the nature of the technology and the products. The battle thus may turn into a battle between the market and the administrators. This paper presents three industry cases: one representing the pure market solution, and two presenting conflicts of interests, both leading to the final victory for the market-driven solution.

  4. Governing through standards

    DEFF Research Database (Denmark)

    Brøgger, Katja

    This abstract addresses the ways in which new education standards have become integral to new modes of education governance. The paper explores the role of standards in accelerating the shift from national to transnational governance in higher education. Drawing on the case of higher education...

  5. Salivary and microbiological parameters of chronic periodontitis subjects with and without type 2 diabetes mellitus: a case-control study

    Directory of Open Access Journals (Sweden)

    José Roberto CORTELLI

    Background: Several studies have investigated the differences in salivary parameters and microbial composition between diabetic and non-diabetic patients; however, specific differences are still not clear, mainly due to the effects of confounders. Aim: The aim of this case-control study was to evaluate the salivary and microbial parameters of chronic periodontitis subjects with and without type 2 diabetes mellitus. Material and method: This case-control study included 60 chronic periodontitis subjects, 30 diabetics (case group) and 30 non-diabetics (control group), paired according to periodontitis severity, gender and age. Stimulated whole saliva was collected from all volunteers to measure the salivary pH and the salivary flow rate. Bacterial samples were collected with paper points from periodontal sites showing the deepest periodontal pocket depth associated with the highest clinical attachment loss. The frequency of A. actinomycetemcomitans, P. intermedia, P. gingivalis, T. forsythia and C. rectus was evaluated by PCR. Data were statistically analyzed by Student's t, Mann-Whitney and Chi-square tests (p<0.05). Result: Diabetic subjects showed higher salivary glucose levels and lower stimulated flow rates in comparison to non-diabetic controls. P. gingivalis and T. forsythia were the most frequent pathogens (p<0.05). Bacterial frequency did not differ between case and control groups. Conclusion: Diabetes status influenced salivary glucose levels and flow rate. Within the same severity of chronic periodontitis, diabetic subjects did not show higher frequency of periodontal pathogens in comparison to their paired controls.

  6. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    Science.gov (United States)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open-source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space so as to bring the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach in which the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This
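
    The AUROC performance indicator named above can be computed directly from per-pixel III scores and observed impacts. A minimal sketch with hypothetical data (not r.ranger's actual implementation):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen impacted pixel scores
    higher than a randomly chosen non-impacted one (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# impact indicator index (III) per pixel vs. observed impact (1/0)
iii   = [0.9, 0.7, 0.4, 0.8, 0.2, 0.1]
truth = [1,   1,   0,   1,   0,   0]
score = auroc(iii, truth)  # 1.0 here: every impacted pixel outranks every non-impacted one
```

    With real data, the III pattern produced by each subrange combination would be scored this way and the best-performing combinations retained.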

  7. STUDY OF BIOCHEMICAL PARAMETERS IN AMNIOTIC FLUID FOR ASSESSMENT OF FOETAL MATURITY IN CASES OF NORMAL PREGNANCY

    Directory of Open Access Journals (Sweden)

    Leela

    2015-11-01

    Assessment of foetal maturity has proven valuable in evaluating the foetal condition. Accurate assessment of foetal maturity is essential for the proper timing of delivery in various risk pregnancies, and amniotic fluid analysis has been of proven value for this purpose. In the present study, the biochemical parameters creatinine, uric acid, urea, total proteins, and the electrolytes sodium, potassium and chloride were studied in amniotic fluid, along with serum electrolytes. Standard methodologies were adopted. The observations in the present study correlated with the works of Chadick et al. and Pitkin and Zwirek. The levels of creatinine, uric acid and urea in amniotic fluid rose, while total proteins and serum sodium declined, as gestation progressed. The serum and amniotic fluid potassium and chloride levels remained almost constant throughout pregnancy. Thus, it is observed that the use of multiple parameters is desirable for accurate assessment of foetal maturity.

  8. Photovoltaic module parameters acquisition model

    Energy Technology Data Exchange (ETDEWEB)

    Cibira, Gabriel, E-mail: cibira@lm.uniza.sk; Koščová, Marcela, E-mail: mkoscova@lm.uniza.sk

    2014-09-01

    Highlights: • A photovoltaic five-parameter model is proposed using Matlab® and Simulink. • The model acquires a sparse input data matrix from stigmatic measurement. • Computer simulations lead to continuous I–V and P–V characteristics. • Extrapolated I–V and P–V characteristics are in hand. • The model allows us to predict photovoltaics exploitation in different conditions. - Abstract: This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a MATLAB and Simulink theoretical model is set up to calculate I–V and P–V characteristics for a PV module based on an equivalent electrical circuit. Then, a limited I–V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to keep up with the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, PV module parameters can be acquired at different realistic irradiation and temperature conditions as well as series resistances. Besides the output power characteristics and efficiency calculation for a PV module or system, the proposed model is validated by computing the statistical deviation from the reference model.
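
    The equivalent-circuit calculation referred to here is commonly the single-diode five-parameter model (Iph, I0, n, Rs, Rsh). A minimal sketch with illustrative parameter values, not those of any measured module; the simple fixed-point solver stands in for what a MATLAB/Simulink implementation would do:

```python
import math

def iv_current(V, Iph=8.0, I0=1e-9, n=1.3, Rs=0.2, Rsh=300.0,
               Vt=0.02585, cells=60, iters=50):
    """Single-diode five-parameter model solved for I at a given V by
    fixed-point iteration of the implicit equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh."""
    nVt = n * cells * Vt
    I = Iph
    for _ in range(iters):
        I = Iph - I0 * math.expm1((V + I * Rs) / nVt) - (V + I * Rs) / Rsh
    return I

# sweep a continuous I-V curve and locate the maximum-power point
points = [(v / 10.0, iv_current(v / 10.0)) for v in range(0, 450)]
v_mp, i_mp = max(points, key=lambda p: p[0] * p[1])
```

    The same curve sweep yields the P–V characteristic, so module efficiency can be read off once the incident irradiance is known.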

  9. Photovoltaic module parameters acquisition model

    International Nuclear Information System (INIS)

    Cibira, Gabriel; Koščová, Marcela

    2014-01-01

    Highlights: • A photovoltaic five-parameter model is proposed using Matlab® and Simulink. • The model acquires a sparse input data matrix from stigmatic measurement. • Computer simulations lead to continuous I–V and P–V characteristics. • Extrapolated I–V and P–V characteristics are in hand. • The model allows us to predict photovoltaics exploitation in different conditions. - Abstract: This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a MATLAB and Simulink theoretical model is set up to calculate I–V and P–V characteristics for a PV module based on an equivalent electrical circuit. Then, a limited I–V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to keep up with the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, PV module parameters can be acquired at different realistic irradiation and temperature conditions as well as series resistances. Besides the output power characteristics and efficiency calculation for a PV module or system, the proposed model is validated by computing the statistical deviation from the reference model

  10. Towards automatic parameter tuning of stream processing systems

    KAUST Repository

    Bilal, Muhammad; Canini, Marco

    2017-01-01

    for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing

  11. Electroweak interaction parameters

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1984-01-01

    After a presentation of the experimentally determined parameters of the standard SU(3) × SU(2) × U(1) model, the author discusses the definition of the Weinberg angle. Then the masses and widths of the intermediate vector bosons are considered in the framework of the Weinberg-Salam theory with radiative corrections. Furthermore, the radiative decays of these bosons are discussed. Then the relations between the masses of the Higgs boson and the top quark are considered. Thereafter, grand unification is briefly discussed with special regard to the SU(5) prediction of some observable parameters. Finally, some speculations are made concerning the observation of radiative decays in the UA1 experiments. (HSI)

  12. Precision measurements of Standard Model parameters and Review of Drell-Yan and vector boson plus jets measurements with the ATLAS detector

    International Nuclear Information System (INIS)

    Calace, Noemi

    2016-01-01

    The inclusive productions of the W and the on- or off-shell Z/γ* bosons are standard candles at hadron colliders, while the production of light- and heavy-flavour jets in association with a W or a Z boson is an important process for studying quantum chromodynamics (QCD) in multi-scale environments. Their production cross-sections, integrated and differential in several variables, have been measured at 7 and 8 TeV centre-of-mass energies and are compared to high-order QCD calculations and Monte Carlo simulations. These measurements have an impact on our knowledge of the parton densities of the proton, and test soft resummation effects and hard emissions for small and large momentum transfers and in multi-scale processes. Precision measurements of fundamental Standard Model parameters in Drell-Yan final states are also performed to describe the angular distributions of the decay lepton. Run-1 studies carried out by the ATLAS Collaboration are reviewed and first LHC Run-2 results are included

  13. Higgs detectability in the extended supersymmetric standard model

    International Nuclear Information System (INIS)

    Kamoshita, Jun-ichi

    1995-01-01

    Higgs detectability at a future linear collider is discussed in the minimal supersymmetric standard model (MSSM) and a supersymmetric standard model with a gauge singlet Higgs field (NMSSM). First, in the MSSM, at least one of the neutral scalar Higgs bosons is shown to be detectable, irrespective of the parameters of the model, at a future e⁺e⁻ linear collider at √s = 300-500 GeV. Next, the Higgs sector of the NMSSM is considered; since the lightest Higgs boson can be singlet-dominated and therefore decouple from the Z⁰ boson, it is important to consider the production of heavier Higgses. It is shown that in this case, too, at least one of the neutral scalar Higgs bosons will be detectable at a future linear collider. We extend the analysis and show that the same holds even if three singlets are included. Thus the detectability of the Higgs bosons of these models is guaranteed. (author)

  14. A standardized method for sampling and extraction methods for quantifying microplastics in beach sand.

    Science.gov (United States)

    Besley, Aiken; Vijver, Martina G; Behrens, Paul; Bosker, Thijs

    2017-01-15

    Microplastics are ubiquitous in the environment, are frequently ingested by organisms, and may potentially cause harm. A range of studies have found significant levels of microplastics in beach sand. However, there is a considerable amount of methodological variability among these studies. Methodological variation currently limits comparisons as there is no standard procedure for sampling or extraction of microplastics. We identify key sampling and extraction procedures across the literature through a detailed review. We find that sampling depth, sampling location, number of repeat extractions, and settling times are the critical parameters of variation. Next, using a case-study we determine whether and to what extent these differences impact study outcomes. By investigating the common practices identified in the literature with the case-study, we provide a standard operating procedure for sampling and extracting microplastics from beach sand.

  15. The problem of epistemic jurisdiction in global governance: The case of sustainability standards for biofuels.

    Science.gov (United States)

    Winickoff, David E; Mondou, Matthieu

    2017-02-01

    While there is ample scholarly work on regulatory science within the state, or single-sited global institutions, there is less on its operation within complex modes of global governance that are decentered, overlapping, multi-sectorial and multi-leveled. Using a co-productionist framework, this study identifies 'epistemic jurisdiction' - the power to produce or warrant technical knowledge for a given political community, topical arena or geographical territory - as a central problem for regulatory science in complex governance. We explore these dynamics in the arena of global sustainability standards for biofuels. We select three institutional fora as sites of inquiry: the European Union's Renewable Energy Directive, the Roundtable on Sustainable Biomaterials, and the International Organization for Standardization. These cases allow us to analyze how the co-production of sustainability science responds to problems of epistemic jurisdiction in the global regulatory order. First, different problems of epistemic jurisdiction beset different standard-setting bodies, and these problems shape both the content of regulatory science and the procedures designed to make it authoritative. Second, in order to produce global regulatory science, technical bodies must manage an array of conflicting imperatives - including scientific virtue, due process and the need to recruit adoptees to perpetuate the standard. At different levels of governance, standard drafters struggle to balance loyalties to country, to company or constituency and to the larger project of internationalization. Confronted with these sometimes conflicting pressures, actors across the standards system quite self-consciously maneuver to build or retain authority for their forum through a combination of scientific adjustment and political negotiation. Third, the evidentiary demands of regulatory science in global administrative spaces are deeply affected by 1) a market for standards, in which firms and states can

  16. Evaluation of the Reference Numerical Parameters of the Monthly Method in ISO 13790 Considering S/V Ratio

    Directory of Open Access Journals (Sweden)

    Hee-Jeong Kwak

    2015-01-01

    Many studies have investigated the accuracy of the numerical parameters in the application of the quasi-steady-state calculation method. The aim of this study is to derive the reference numerical parameters of the ISO 13790 monthly method by reflecting the surface-to-volume (S/V) ratio and the characteristics of the structures. A calculation process was established, and the parameters necessary to derive the reference numerical parameters were calculated from the input data prepared for it. The reference numerical parameters were then derived through regression analyses of the calculated parameters and the time constant. The parameters obtained from an apartment building and the parameters of the international standard were both applied to the Passive House Planning Package (PHPP) and EnergyPlus programs, and the results were analyzed in order to evaluate their validity. The analysis revealed that the calculation results based on the parameters derived in this study yielded lower error rates than those based on the default parameters in ISO 13790. However, the differences were shown to be negligible in the case of high heat capacity.
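
    For context, the reference numerical parameters of the monthly method (a₀ and the reference time constant τ₀) enter through the gain utilisation factor. A minimal sketch for the heating case, assuming the standard monthly-method defaults a₀ = 1 and τ₀ = 15 h:

```python
def gain_utilisation_factor(gains, losses, tau_h, a0=1.0, tau0=15.0):
    """ISO 13790-style gain utilisation factor for heating.
    gamma = gains/losses, a = a0 + tau/tau0 (tau = building time
    constant in hours), eta = (1 - gamma**a) / (1 - gamma**(a + 1)),
    with the special cases gamma == 1 -> a/(a + 1) and losses <= 0 -> 1."""
    a = a0 + tau_h / tau0
    if losses <= 0:
        return 1.0
    gamma = gains / losses
    if gamma == 1.0:
        return a / (a + 1.0)
    return (1.0 - gamma ** a) / (1.0 - gamma ** (a + 1.0))

# e.g. monthly gains 1000 kWh against losses 4000 kWh, time constant 30 h
eta = gain_utilisation_factor(1000.0, 4000.0, 30.0)
```

    Re-deriving a₀ and τ₀ by regression, as the study does, amounts to refitting this curve family to dynamic-simulation results.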

  17. Status of conversion of NE standards to national consensus standards

    International Nuclear Information System (INIS)

    Jennings, S.D.

    1990-06-01

    One major goal of the Nuclear Standards Program is to convert existing NE standards into national consensus standards (where possible). This means that an NE standard is replaced by a standard in the same subject area developed using the national consensus process. This report is a summary of the activities that have evolved to effect conversion of NE standards to national consensus standards, and the status of current conversion activities. In some cases, all requirements in an NE standard will not be incorporated into the published national consensus standard because these requirements may be considered too restrictive or too specific for broader application by the nuclear industry. If these requirements are considered necessary for nuclear reactor program applications, the program standard will be revised and issued as a supplement to the national consensus standard. The supplemental program standard will contain only those necessary requirements not reflected by the national consensus standard. Therefore, while complete conversion of program standards may not always be realized, the standards policy has been fully supported in attempting to make maximum use of the national consensus standard. 1 tab

  18. Considerations about the correct evaluation of sorption thermodynamic parameters from equilibrium isotherms

    International Nuclear Information System (INIS)

    Salvestrini, Stefano; Leone, Vincenzo; Iovino, Pasquale; Canzano, Silvana; Capasso, Sante

    2014-01-01

    Highlights: • Different methods to derive sorption thermodynamic parameters are discussed. • ΔG° and ΔS° values depend on the selected standard states. • Isosteric heat values help in evaluating the applicability of the sorption models. -- Abstract: This is a comparative analysis of popular methods currently in use to derive sorption thermodynamic parameters from the temperature dependence of sorption isotherms. It is emphasized that the standard and isosteric thermodynamic parameters have sharply different meanings. Moreover, it is shown with examples how the adopted sorption model conditions the standard state and consequently the values of ΔG° and ΔS°. These trivial but often neglected aspects should be carefully considered when comparing thermodynamic parameters from different literature sources. An effort by the scientific community is needed to define criteria for the choice of the standard state in sorption processes
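
    One standard (though not the only) route from isotherm data to thermodynamic parameters is the van 't Hoff analysis; the choice of standard state discussed above is precisely what fixes the units and magnitude of K and hence the resulting values. A minimal sketch with synthetic equilibrium constants:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def vant_hoff(K_by_T):
    """Estimate standard enthalpy and entropy from equilibrium constants
    at several temperatures via a least-squares fit of the van 't Hoff
    relation ln K = -dH/(R*T) + dS/R."""
    xs = [1.0 / T for T in K_by_T]            # 1/T values (keys are temperatures)
    ys = [math.log(K) for K in K_by_T.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    dH = -R * slope           # J/mol
    dS = R * intercept        # J/(mol*K)
    return dH, dS

# synthetic dimensionless isotherm constants: exothermic sorption, K falls with T
K = {298.0: 12.0, 308.0: 9.5, 318.0: 7.8}
dH, dS = vant_hoff(K)
dG298 = dH - 298.0 * dS       # Gibbs energy at 298 K on the fitted line
```

    Changing the standard state rescales every K by a constant factor, which shifts the intercept (ΔS°, and hence ΔG°) while leaving the slope (ΔH°) untouched, illustrating the abstract's point.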

  19. Testing Pneumonia Vaccines in the Elderly: Determining a Case Definition for Pneumococcal Pneumonia in the Absence of a Gold Standard.

    Science.gov (United States)

    Jokinen, Jukka; Snellman, Marja; Palmu, Arto A; Saukkoriipi, Annika; Verlant, Vincent; Pascal, Thierry; Devaster, Jeanne-Marie; Hausdorff, William P; Kilpi, Terhi M

    2017-12-15

    Clinical assessments of vaccines to prevent pneumococcal (Pnc) community-acquired pneumonia (CAP) require sensitive and specific case definitions, but there is no gold standard diagnostic test. To develop a new case definition suitable for vaccine efficacy studies, we applied latent class analysis (LCA) to the results from seven diagnostic tests for Pnc etiology on clinical specimens from 323 elderly radiologically-confirmed pneumonia cases enrolled in The Finnish Community-Acquired Pneumonia Epidemiology study during 2005-2007. Compared to the conventional use of LCA, which is mainly to determine sensitivities and specificities of different tests, we instead used LCA as an appropriate instrument to predict the probability of Pnc etiology for each CAP case based on their test profiles, and utilized the predictions to minimize the sample size that would be needed for a vaccine efficacy trial. When compared to the conventional laboratory criteria of encapsulated Pnc in blood culture or in high-quality sputum culture or urine antigen positivity, our optimized case definition for PncCAP resulted in a trial sample size which was almost 20,000 subjects smaller. We believe that our novel application of LCA detailed here to determine a case definition for PncCAP could also be similarly applied to other diseases without a gold standard.
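
    The prediction step described above assigns each case a posterior probability of pneumococcal etiology from its test profile. Under the usual latent-class assumption of conditionally independent tests, and with class-conditional positivity rates treated as known, this reduces to a Bayes computation; the rates and prior below are hypothetical, not the study's fitted values:

```python
def etiology_probability(profile, sens, fpr, prior):
    """Posterior probability that a case is pneumococcal given a binary
    test profile, assuming conditionally independent tests (the standard
    latent class model assumption).
    sens[i]: P(test i positive | pneumococcal)
    fpr[i]:  P(test i positive | not pneumococcal)."""
    like_pos = prior
    like_neg = 1.0 - prior
    for r, se, fp in zip(profile, sens, fpr):
        like_pos *= se if r else (1.0 - se)
        like_neg *= fp if r else (1.0 - fp)
    return like_pos / (like_pos + like_neg)

# hypothetical operating characteristics for three assays
sens  = [0.70, 0.60, 0.80]
fpr   = [0.05, 0.10, 0.15]
prior = 0.40

p_all_pos = etiology_probability([1, 1, 1], sens, fpr, prior)
p_all_neg = etiology_probability([0, 0, 0], sens, fpr, prior)
```

    Thresholding such posteriors is one way an optimized case definition can trade specificity against sensitivity to shrink the required trial size.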

  20. Monopolization by "Raising Rivals' Costs": The Standard Oil Case.

    OpenAIRE

    Granitz, Elizabeth; Klein, Benjamin

    1996-01-01

    Standard monopolized the petroleum industry during the 1870s by cartelizing the stage of production where entry was difficult--petroleum transportation. Standard enforced the transportation cartel by shifting its refinery shipments among railroads to stabilize individual railroad market shares at collusively agreed-on levels. This method of cartel policing was effective because Standard possessed a dominant share of refining, a dominance made possible with the assistance of the railroads. The...

  1. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    Directory of Open Access Journals (Sweden)

    Kristjan Korjus

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance and choosing better parameters to fit a better model. We propose a novel approach that we term "cross-validation and cross-testing", which improves this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.
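
    For contrast, the standard workflow the authors improve upon (cross-validation for parameter selection, then a single held-out test set) can be sketched as follows; the data, classifier and parameter grid are hypothetical:

```python
import random

def knn_predict(train, x, k):
    """Majority vote among the k training points nearest to x (1-D)."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return int(sum(y for _, y in nearest) * 2 > k)

def accuracy(train, data, k):
    return sum(knn_predict(train, x, k) == y for x, y in data) / len(data)

def cv_select_k(train, ks, folds=5):
    """Standard cross-validation: for each candidate k, average the
    validation accuracy over the folds, then pick the best k."""
    parts = [train[i::folds] for i in range(folds)]
    def score(k):
        return sum(
            accuracy([p for j, f in enumerate(parts) if j != i for p in f],
                     parts[i], k)
            for i in range(folds)) / folds
    return max(ks, key=score)

random.seed(1)
# two 1-D Gaussian classes; the held-out test set estimates generalization
data = [(random.gauss(2.0 * y - 1.0, 1.0), y) for y in (0, 1) for _ in range(80)]
random.shuffle(data)
train, test = data[:120], data[120:]   # the classical train/test split
best_k = cv_select_k(train, ks=[1, 3, 5, 7, 9])
test_acc = accuracy(train, test, best_k)
```

    The proposed "cross-validation and cross-testing" scheme differs exactly in how the 40 test points above are re-used; this sketch shows only the classical baseline it is compared against.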

  2. Method for matching customer and manufacturer positions for metal product parameters standardization

    Science.gov (United States)

    Polyakova, Marina; Rubin, Gennadij; Danilova, Yulija

    2018-04-01

    Decision making is the main stage of regulating the relations between customer and manufacturer when designing the requirements of norms in standards. It is necessary to match the positions of the negotiating sides in order to reach consensus. To take into consideration the differences between customer and manufacturer estimations of the object under standardization, special methods of analysis are needed. It is proposed to establish relationships between product properties and product functions using functional-target analysis. The special feature of this type of functional analysis is the joint consideration of the research object's functions and properties. The possibility of establishing links between functions and properties is shown on the example of a hexagonal head screw. Such an approach allows a quantitative assessment of the closeness of the positions of customer and manufacturer during decision making in the establishment of standard norms.

  3. Weibull Distribution for Estimating the Parameters and Application of Hilbert Transform in case of a Low Wind Speed at Kolaghat

    Directory of Open Access Journals (Sweden)

    P Bhattacharya

    2016-09-01

    The wind resource varies with the time of day and the season of the year, and even to some extent from year to year. Wind energy thus has inherent variability and is therefore described by distribution functions. In this paper, we present methods for estimating the Weibull parameters in the case of low-wind-speed characterization, namely the shape parameter (k) and the scale parameter (c), and we characterize the discrete wind data sample by the discrete Hilbert transform. The Weibull distribution is an important distribution, especially for reliability and maintainability analysis. Suitable values for both the shape and scale parameters of the Weibull distribution are important for selecting locations for installing wind turbine generators. The scale parameter of the Weibull distribution is also important in determining whether a wind farm is good or not. The use of the discrete Hilbert transform (DHT) for wind speed characterization opens a new avenue for the DHT beyond its applications in digital signal processing. In this paper, the discrete Hilbert transform has been applied to characterize wind sample data measured at the College of Engineering and Management, Kolaghat, East Midnapore, India in January 2011.
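
    A common empirical-moment route to the Weibull parameters (the Justus approximation; the paper's own estimators may differ) can be sketched as follows, with a synthetic low-wind-speed sample:

```python
import math

def weibull_params(speeds):
    """Empirical-moment estimate of the Weibull shape k and scale c
    (Justus approximation): k = (sigma/mean)**-1.086,
    c = mean / gamma(1 + 1/k)."""
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((v - mean) ** 2 for v in speeds) / (n - 1)   # sample variance
    k = (math.sqrt(var) / mean) ** -1.086
    c = mean / math.gamma(1.0 + 1.0 / k)
    return k, c

# synthetic hourly mean wind speeds (m/s), illustrative of a low-wind site
speeds = [1.2, 2.1, 0.8, 1.7, 2.9, 1.1, 1.9, 2.4, 0.6, 1.5,
          2.0, 1.3, 2.6, 0.9, 1.8]
k, c = weibull_params(speeds)
```

    The scale parameter c tracks the mean wind speed, which is why it is the headline number when judging a prospective wind-farm site.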

  4. Activities of Radiation Standard Section

    International Nuclear Information System (INIS)

    Kannan, A.; Rao, P.S.; Sachadev, R.N.; Shaha, V.V.; Sharma, D.; Srivastava, P.K.

    1992-01-01

    A brief account of the various facilities and services provided by the Radiation Standards Section (RSS) of the Bhabha Atomic Research Centre, Bombay is given. RSS maintains the primary and secondary standards of various parameters of radiation measurements. It ensures accurate radiological measurements as per international requirements, through periodic international intercomparisons of national standards. It also provides calibration services to various users of radiation sources and instruments. The activities of RSS are described under the headings: (1) Radiological Metrology Standards, (2) Radionuclide Standards, (3) Neutron Metrology, (4) Instruments Calibration, (5) Non-ionizing Radiations, and (6) Instrumentation. (author). figs., tabs

  5. CP violation in the standard model and beyond

    International Nuclear Information System (INIS)

    Buras, A.J.

    1984-01-01

    The present status of CP violation in the standard six-quark model is reviewed and a combined analysis with B-meson decays is presented. The theoretical uncertainties in the analysis are discussed, and the resulting KM weak mixing angles, the phase δ, and the ratio ε'/ε are given as functions of τ_B, Γ(b → u)/Γ(b → c), m_t, and the B parameter. For certain ranges of the values of these parameters, the standard model is not capable of reproducing the experimental values of the ε' and ε parameters. Anticipating possible difficulties, we discuss various alternatives to the standard explanation of CP violation, such as horizontal interactions, left-right symmetric models and supersymmetry. CP violation outside the kaon system is also briefly discussed. (orig.)

  6. Non-standard interactions with high-energy atmospheric neutrinos at IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Salvado, Jordi; Mena, Olga; Palomares-Ruiz, Sergio; Rius, Nuria [Instituto de Física Corpuscular (IFIC), CSIC-Universitat de València,Apartado de Correos 22085, E-46071 Valencia (Spain)

    2017-01-31

    Non-standard interactions in the propagation of neutrinos in matter can lead to significant deviations from expectations within the standard neutrino oscillation framework, and atmospheric neutrino detectors have been considered to set constraints. However, most previous works have focused on relatively low-energy atmospheric neutrino data. Here, we consider the one-year high-energy through-going muon data in IceCube, which has already been used to search for light sterile neutrinos, to constrain new interactions in the μτ-sector. In our analysis we include several systematic uncertainties on both the atmospheric neutrino flux and the detector properties, which are accounted for via nuisance parameters. After considering different primary cosmic-ray spectra and hadronic interaction models, we improve over previous analyses by using the latest data and showing that systematics currently have very little effect on the bound on the off-diagonal ε_μτ, with the 90% credible interval given by −6.0×10⁻³ < ε_μτ < 5.4×10⁻³, comparable to previous results. In addition, we also estimate the expected sensitivity after 10 years of collected data in IceCube and study the precision at which non-standard parameters could be determined for the case of ε_μτ near its current bound.

  7. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
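A minimal sketch of the two-stage idea (not the authors' algorithm; the model forms, inputs, and noise level are invented for illustration): with two scalar models that are linear in their parameter, the "model distance" is the best-fit discrepancy when one model tries to mimic the other on the chosen experiment, and the second stage is an ordinary least-squares fit for the selected model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate parameter-dependent models (assumed forms, for illustration)
models = [lambda u, a: a * u, lambda u, a: a * u**2]

def fit(model, u, y):
    """Least-squares estimate of the scalar coefficient a (models linear in a)."""
    basis = model(u, 1.0)
    a = float(basis @ y / (basis @ basis))
    return a, float(np.sum((y - model(u, a)) ** 2))

# Stage 1: model distance = best-fit discrepancy when one model mimics the other
u = np.tile(np.array([0.25, 0.5, 0.75, 1.0]), 5)   # experiment inputs
a_mimic, dist = fit(models[1], u, models[0](u, 1.0))
# dist > 0 means this experiment can in principle separate the two models

# Stage 2: run the experiment (true system: model 0 with a = 2) and identify
y = models[0](u, 2.0) + 0.01 * rng.standard_normal(u.size)
fits = [fit(m, u, y) for m in models]
best = min(range(len(models)), key=lambda i: fits[i][1])
a_hat = fits[best][0]
```

The residual of the wrong model stays bounded below by the model distance (up to noise), which is the mechanism behind the sufficient conditions mentioned in the abstract.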

  8. Eating Habits and Standard Body Parameters Among Students at University of Banja Luka

    Directory of Open Access Journals (Sweden)

    Raseta Nela

    2018-03-01

    Poor dietary habits have become one of the most important concerns among public health policy makers in recent years, due to the impact they have on both the economic and health systems of a country. The transitional period toward young adulthood, marked by high school graduation and the beginning of college years, has been identified as critical in terms of its influence on young people's bad eating habits. The aim of this study was to assess whether the results obtained through the Food Frequency Questionnaire significantly correlate with standard body parameters. Participants included 210 students from the University of Banja Luka, with a mean age of 21.94 ± 2.73 years. Factorization of the Food Frequency Questionnaire instrument extracted seven factors, which were subjected to multiple regression analysis as independent variables and correlated with the dependent variables, the anthropological measurements. This study shows that the factors labeled as consumption of bread, consumption of healthy food, and intake of carbohydrates are significantly related to Body Fat Percentage, whereas the factors labeled as intake of food of animal origin and intake of fruits and vegetables are statistically significant in terms of their relation to Waist-to-Hip Ratio. Only one factor, labeled as intake of unhealthy food, is significantly related to Body Mass Index; this suggests that Body Mass Index has again shown many limitations with regard to its research relevance. This research has also found that students of the University of Banja Luka typically consume white bread, known to have a direct link with overweight and obesity.
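The study's core computation, an ordinary least-squares regression of an anthropometric outcome on seven extracted factor scores, can be sketched as follows. All data here are synthetic stand-ins (the real factor scores and BMI values are not public), so only the mechanics carry over.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: 210 students, 7 diet factor scores (all illustrative)
n, k = 210, 7
factors = rng.standard_normal((n, k))
# Suppose only factor 7 ("unhealthy food") drives BMI, echoing the study's finding
bmi = 23.0 + 1.5 * factors[:, 6] + rng.standard_normal(n)

# Multiple regression: BMI ~ intercept + 7 factor scores, via ordinary least squares
X = np.column_stack([np.ones(n), factors])
beta, *_ = np.linalg.lstsq(X, bmi, rcond=None)
# beta[0] is the intercept; beta[1:] are the factor coefficients
```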

  9. IEEE Std 382-1985: IEEE standard for qualification of actuators for power operated valve assemblies with safety-related functions for nuclear power plants

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This standard describes the qualification of all types of power-driven valve actuators, including damper actuators, for safety-related functions in nuclear power generating stations. This standard may also be used to separately qualify actuator components. This standard establishes the minimum requirements for, and guidance regarding, the methods and procedures for qualification of power-driven valve actuators with safety-related functions. Part I describes the qualification process. Part II describes the standard qualification cases and their environmental parameters for the usual locations of safety-related equipment in a nuclear generating station. Part III describes the qualification tests outlined in 6.3.3.

  10. Interchanging parameters and integrals in dynamical systems: the mapping case

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, John A.G. [Department of Mathematics, La Trobe University, Bundoora, VIC (Australia) and School of Mathematics, University of New South Wales, Sydney, NSW (Australia)]. E-mail: jagr@maths.unsw.edu.au; Apostolos, Iatrou; Quispel, G.R.W. [Department of Mathematics, La Trobe University, Bundoora, VIC (Australia)]. E-mails: A.Iatrou@latrobe.edu.au; R.Quispel@latrobe.edu.au

    2002-03-08

    We consider dynamical systems with discrete time (maps) that possess one or more integrals depending upon parameters. We show that integrals can be used to replace parameters in the original map so as to construct a different map with different integrals. We also highlight a process of reparametrization that can be used to increase the number of parameters in the original map prior to using integrals to replace them. Properties of the original map and the new map are compared. The theory is motivated by, and illustrated with, examples of a three-dimensional trace map and some four-dimensional maps previously shown to be integrable. (author)
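A concrete instance of a map with a parameter-dependent integral is the McMillan map, a standard integrable example (used here purely as an illustration, not one of the paper's trace maps). Its integral I is conserved along orbits, and solving I(x, y; μ) = I₀ for μ is precisely the kind of parameter/integral interchange described above.

```python
MU = 0.8  # map parameter (illustrative value)

def mcmillan(x, y, mu=MU):
    """One step of the integrable McMillan map (a textbook example)."""
    return y, -x + 2.0 * mu * y / (1.0 + y * y)

def integral(x, y, mu=MU):
    """Conserved quantity I(x, y) = x^2 y^2 + x^2 + y^2 - 2 mu x y."""
    return x * x * y * y + x * x + y * y - 2.0 * mu * x * y

x, y = 0.3, 0.5
I0 = integral(x, y)
for _ in range(1000):
    x, y = mcmillan(x, y)
# The orbit stays on its initial level set of I; conversely, solving
# I(x, y; mu) = I0 for mu (when x*y != 0) replaces the parameter by the
# integral's value, the interchange discussed in the abstract.
```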

  11. EVALUATION OF DIFFERENT INTERNAL STANDARDS FOR ...

    African Journals Online (AJOL)

    internal standard addition method was evaluated with the variation in solution ... background emissions involved in generating accurate and reliable .... The most suitable atomic and ionic wavelengths (λ) for the internal standards and precious ..... spectral interferences occur and using validation parameters is currently ...

  12. A simple but accurate procedure for solving the five-parameter model

    International Nuclear Information System (INIS)

    Mares, Oana; Paulescu, Marius; Badescu, Viorel

    2015-01-01

    Highlights: • A new procedure for extracting the parameters of the one-diode model is proposed. • Only the basic information listed in the datasheet of PV modules is required. • Results demonstrate a simple, robust and accurate procedure. - Abstract: The current–voltage characteristic of a photovoltaic module is typically evaluated by using a model based on the solar cell equivalent circuit. The complexity of the procedure applied for extracting the model parameters depends on the data available in the manufacturer's datasheet. Since the datasheet is not detailed enough, simplified models have to be used in many cases. This paper proposes a new procedure for extracting the parameters of the one-diode model in standard test conditions, using only the basic data listed by all manufacturers in the datasheet (short-circuit current, open-circuit voltage and maximum power point). The procedure is validated using manufacturers' data for six commercial crystalline silicon photovoltaic modules. Comparing the computed and measured current–voltage characteristics, the coefficient of determination is in the range 0.976–0.998. Thus, the proposed procedure represents a feasible tool for solving the five-parameter model applied to crystalline silicon photovoltaic modules. The procedure is described in detail, to guide potential users to derive similar models for other types of photovoltaic modules.
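For context, the one-diode (five-parameter) model gives the module current implicitly: I = I_ph − I_0·(exp((V + I·R_s)/(n·V_t)) − 1) − (V + I·R_s)/R_sh. A hedged sketch with invented parameter values (not from any real module, and not the paper's extraction procedure) shows how the implicit equation is solved by Newton iteration once the five parameters are known:

```python
import math

# Illustrative five-parameter set (invented, not from any datasheet)
I_PH, I_0 = 8.21, 1.7e-9       # photocurrent, diode saturation current [A]
N_VT = 1.3 * 0.02585 * 60      # ideality factor * thermal voltage * cells [V]
R_S, R_SH = 0.25, 300.0        # series and shunt resistance [ohm]

def current(v, iters=50):
    """Solve the implicit single-diode equation for I at voltage v by Newton
    iteration on f(I) = I_ph - I_0*(exp((v+I*Rs)/(n*Vt)) - 1)
                              - (v+I*Rs)/Rsh - I."""
    i = I_PH  # initial guess near the short-circuit current
    for _ in range(iters):
        e = math.exp((v + i * R_S) / N_VT)
        f = I_PH - I_0 * (e - 1.0) - (v + i * R_S) / R_SH - i
        df = -I_0 * e * R_S / N_VT - R_S / R_SH - 1.0
        i -= f / df
    return i

i_sc = current(0.0)   # short-circuit current, slightly below I_PH
```

At V = 0 the shunt term skims a few milliamps off I_ph, which is why I_sc sits just below the photocurrent.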

  13. Standardization as an Arena for Open Innovation

    Science.gov (United States)

    Grøtnes, Endre

    This paper argues that anticipatory standardization can be viewed as an arena for open innovation and shows this through two cases from mobile telecommunication standardization. One case is the Android initiative by Google and the Open Handset Alliance, while the second case is the general standardization work of the Open Mobile Alliance. The paper shows how anticipatory standardization intentionally uses inbound and outbound streams of research and intellectual property to create new innovations. This is at the heart of the open innovation model. The standardization activities use both pooling of R&D and the distribution of freely available toolkits to create products and architectures that can be utilized by the participants and third parties to leverage their innovation. The paper shows that the technology being standardized needs to have a systemic nature to be part of an open innovation process.

  14. Environmental standards as strategic outcomes: A simple model

    International Nuclear Information System (INIS)

    Bhattacharya, Rabindra N.; Pal, Rupayan

    2010-01-01

    This paper analyses the strategic nature of the choice of environmental standards, considering both local and global pollution, under alternative regimes of international trade. It also compares and contrasts the strategic equilibrium environmental standards and levels of pollution, local and global, with the world optimum levels. It shows that, in the case of open economies, environmental standards can be either strategic substitutes or strategic complements; in the case of closed economies, by contrast, environmental standards are always strategic substitutes. It also shows that the strategic equilibrium environmental standards in the case of open economies are higher than the world optimum in certain situations, whereas, in the absence of international trade, countries set, in equilibrium, lower environmental standards than the world optimum. (author)

  15. Safeguards systems parameters

    International Nuclear Information System (INIS)

    Avenhaus, R.; Heil, J.

    1979-01-01

    In this paper analyses are made of the values of those parameters that characterize the present safeguards system applied to a national fuel cycle; those values have to be fixed quantitatively so that all actions of the safeguards authority are specified precisely. The analysis starts by introducing three categories of quantities: the design parameters (number of MBAs, inventory frequency, variance of MUF, verification effort and false-alarm probability) describe those quantities whose values have to be specified before the safeguards system can be implemented; the performance criteria (probability of detection, expected detection time, goal quantity) measure the effectiveness of a safeguards system; and the standards (threshold amount and critical time) characterize the magnitude of the proliferation problem. We then investigate the means by which the values of the individual design parameters can be determined with the help of the performance criteria, which qualitative arguments can narrow down the arbitrariness of the choice of values of the remaining parameters, and which parameter values have to be fixed more or less arbitrarily. As a result of these considerations, which include the optimal allocation of a given inspection effort, the problem of analysing the structure of the safeguards system is reduced to an evaluation of the interplay of only a few parameters: essentially the quality of the measurement system (variance of MUF), verification effort, false-alarm probability, goal quantity and probability of detection.
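The interplay of the last few parameters follows the textbook one-sided test on MUF: fixing the false-alarm probability α sets the alarm threshold at u_{1−α}·σ_MUF, and the detection probability for a diverted goal quantity M is then 1 − Φ(u_{1−α} − M/σ_MUF). A sketch with illustrative numbers (not values from the paper):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (adequate for a sketch)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def detection_probability(goal_quantity, sigma_muf, false_alarm):
    """One-sided test on MUF: alarm if MUF > u_{1-alpha} * sigma_MUF.
    Returns the detection probability when the goal quantity has been diverted."""
    u = phi_inv(1.0 - false_alarm)
    return 1.0 - phi(u - goal_quantity / sigma_muf)

# Illustrative numbers: goal quantity 8 kg, sigma_MUF 2 kg, 5% false alarm
p_d = detection_probability(goal_quantity=8.0, sigma_muf=2.0, false_alarm=0.05)
```

The trade-off visible in the abstract follows directly: a larger σ_MUF (poorer measurement system) or a smaller α drives the detection probability down for a fixed goal quantity.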

  16. Effect of noise on the standard mapping

    International Nuclear Information System (INIS)

    Karney, C.F.F.; Rechester, A.B.; White, R.B.

    1981-03-01

    The effect of a small amount of noise on the standard mapping is considered. Whenever the standard mapping possesses accelerator modes (where the action increases approximately linearly with time), the diffusion coefficient contains a term proportional to the reciprocal of the variance of the noise term. At large values of the stochasticity parameter, the accelerator modes exhibit a universal behavior. As a result, the dependence of the diffusion coefficient on the stochasticity parameter also shows some universal behavior.
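The noisy standard map can be simulated directly; a minimal sketch (parameter values illustrative) that estimates the diffusion coefficient from an ensemble of orbits:

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusion_coefficient(K, noise_sigma, n_particles=2000, n_steps=500):
    """Estimate D = <(p_t - p_0)^2> / (2 t) for the noisy standard map
    p' = p + K sin(theta) + xi,  theta' = theta + p'  (xi Gaussian noise)."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n_particles)
    p0 = rng.uniform(0.0, 2.0 * np.pi, n_particles)
    p = p0.copy()
    for _ in range(n_steps):
        p = p + K * np.sin(theta) + noise_sigma * rng.standard_normal(n_particles)
        theta = (theta + p) % (2.0 * np.pi)
    return np.mean((p - p0) ** 2) / (2.0 * n_steps)

# K = 10 lies away from the accelerator-mode windows near K = 2*pi*n,
# so D should be of the order of the quasilinear estimate K^2/4
D = diffusion_coefficient(K=10.0, noise_sigma=0.01)
```

Near the accelerator-mode windows the same estimator shows the enhancement discussed in the abstract, with D growing as the noise variance shrinks.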

  17. Using standardized video cases for assessment of medical communication skills: reliability of an objective structured video examination by computer

    NARCIS (Netherlands)

    Hulsman, R. L.; Mollema, E. D.; Oort, F. J.; Hoos, A. M.; de Haes, J. C. J. M.

    2006-01-01

    OBJECTIVE: Using standardized video cases in a computerized objective structured video examination (OSVE) aims to measure cognitive scripts underlying overt communication behavior by questions on knowledge, understanding and performance. In this study the reliability of the OSVE assessment is

  18. Quantitative Analysis of Torso FDG-PET Scans by Using Anatomical Standardization of Normal Cases from Thorough Physical Examinations.

    Directory of Open Access Journals (Sweden)

    Takeshi Hara

    Understanding of the standardized uptake value (SUV) of 2-deoxy-2-[18F]fluoro-d-glucose positron emission tomography (FDG-PET) depends on the background accumulation of glucose, because SUV often varies with patient status. The purpose of this study was to develop a new method for quantitative analysis of SUV on FDG-PET scan images. The method included an anatomical standardization and a statistical comparison with normal cases using Z-scores, as often used in the SPM or 3D-SSP approaches for brain function analysis. Our scheme consisted of two parts: the construction of a normal model and the determination of SUV scores as a Z-score index measuring the abnormality of an FDG-PET scan image. To construct the normal torso model, all of the normal images were registered into one shape, which indicated the normal range of SUV at all voxels. The image deformation process consisted of a whole-body rigid registration of the shoulder-to-bladder region, a liver registration, and a non-linear registration of the body surface using the thin-plate spline technique. To validate the usefulness of our method, we segmented suspicious regions on FDG-PET images manually and obtained the Z-scores of the regions based on the corresponding voxels, which store the means and standard deviations from the normal model. We collected 243 normal cases (143 males and 100 females) to construct the normal model. We also extracted 432 abnormal spots from 63 abnormal cases (73 cancer lesions) to validate the Z-scores. The Z-scores of 417 out of 432 abnormal spots were higher than 2.0, which statistically indicated the severity of the spots. In conclusion, the Z-scores obtained by our computerized scheme with anatomical standardization of the torso region would be useful for visualization and detection of subtle lesions on FDG-PET scan images even when the SUV may not clearly show an abnormality.
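The voxel-wise comparison is simple to state: z = (SUV − μ_voxel) / σ_voxel against the normal model. A toy sketch with synthetic volumes (shapes and values invented; the study worked on anatomically standardized whole-torso scans):

```python
import numpy as np

rng = np.random.default_rng(7)

# Normal model: per-voxel mean and standard deviation of SUV, built here from
# synthetic "normal cases" purely for illustration (the study used 243 scans)
normals = 2.0 + 0.3 * rng.standard_normal((243, 16, 16, 16))
mu = normals.mean(axis=0)
sd = normals.std(axis=0)

# Z-score map of a new scan: how many SDs each voxel lies above the normal model
scan = 2.0 + 0.3 * rng.standard_normal((16, 16, 16))
scan[8, 8, 8] = 5.0                  # implant one hot "lesion" voxel
z = (scan - mu) / sd

suspicious = np.argwhere(z > 2.0)    # the threshold reported in the study
```

Note that a z > 2 threshold also flags a few percent of normal voxels by chance, which is why the study reports it per segmented region rather than per voxel.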

  19. The Value of Data and Metadata Standardization for Interoperability in Giovanni

    Science.gov (United States)

    Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.

  20. Status of conversion of DOE standards to non-Government standards

    Energy Technology Data Exchange (ETDEWEB)

    Moseley, H.L.

    1992-07-01

    One major goal of the DOE Technical Standards Program is to convert existing DOE standards into non-Government standards (NGS's) where possible. This means that a DOE standard may form the basis for a standards-writing committee to produce a standard in the same subject area using the non-Government standards consensus process. This report is a summary of the activities that have evolved to effect conversion of DOE standards to NGSs, and the status of current conversion activities. In some cases, all requirements in a DOE standard will not be incorporated into the published non-Government standard because these requirements may be considered too restrictive or too specific for broader application by private industry. If requirements in a DOE standard are not incorporated in a non-Government standard and the requirements are considered necessary for DOE program applications, the DOE standard will be revised and issued as a supplement to the non-Government standard. The DOE standard will contain only those necessary requirements not reflected by the non-Government standard. Therefore, while complete conversion of DOE standards may not always be realized, the Department's technical standards policy as stated in Order 1300.2A has been fully supported in attempting to make maximum use of the non-Government standard.

  2. Efficient Ensemble State-Parameters Estimation Techniques in Ocean Ecosystem Models: Application to the North Atlantic

    Science.gov (United States)

    El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.

    2016-02-01

    Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of ocean biology. Furthermore, various biological parameters remain poorly known, and wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often achieved by separating the update of the state and the parameters using the so-called Dual EnKF. The dual filter is computationally more expensive than the joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step, after which a model propagation step is performed. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profile (up to 2000 m deep) data and surface partial CO2 measurements from the Mike weather station (66° N, 2° E) to estimate
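The joint (augmented-state) stochastic EnKF that the dual and smoothing-based schemes are compared against can be sketched on a toy scalar model; everything here (model form, noise levels, ensemble size) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system: x_{k+1} = a * x_k + 1 + process noise, with unknown parameter a
A_TRUE, Q, R = 0.8, 0.05, 0.1
N_ENS, N_STEPS = 200, 300

x_true = 1.0
ens = np.vstack([rng.standard_normal(N_ENS),               # state ensemble
                 0.5 + 0.2 * rng.standard_normal(N_ENS)])  # parameter ensemble

for _ in range(N_STEPS):
    # Truth and a noisy observation y = x + v
    x_true = A_TRUE * x_true + 1.0 + np.sqrt(Q) * rng.standard_normal()
    y = x_true + np.sqrt(R) * rng.standard_normal()
    # Forecast: propagate each member with its own parameter value
    ens[0] = ens[1] * ens[0] + 1.0 + np.sqrt(Q) * rng.standard_normal(N_ENS)
    # Stochastic EnKF update of the augmented state [x, a]
    y_pert = y + np.sqrt(R) * rng.standard_normal(N_ENS)
    anomalies = ens - ens.mean(axis=1, keepdims=True)
    cov_xy = anomalies @ anomalies[0] / (N_ENS - 1)   # cov([x, a], x_forecast)
    gain = cov_xy / (cov_xy[0] + R)
    ens += np.outer(gain, y_pert - ens[0])

a_hat = ens[1].mean()   # parameter estimate drawn toward A_TRUE by the updates
```

The parameter is corrected only through its sampled covariance with the observed state, which is exactly the coupling that can become inconsistent in strongly nonlinear regimes and motivates the dual and smoothing-based variants.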

  3. Towards an Interoperable Field Spectroscopy Metadata Standard with Extended Support for Marine Specific Applications

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2015-11-01

    This paper presents an approach to developing robust metadata standards for specific applications that serves to ensure a high level of reliability and interoperability for a spectroscopy dataset. The challenges of designing a metadata standard that meets the unique requirements of specific user communities are examined, including in situ measurement of reflectance underwater, using coral as a case in point. Metadata schema mappings from seven existing metadata standards demonstrate that they consistently fail to meet the needs of field spectroscopy scientists for general and specific applications (μ = 22%, σ = 32% conformance with the core metadata requirements, and μ = 19%, σ = 18% for the special case of a benthic (e.g., coral) reflectance metadataset). Issues such as field measurement methods, instrument calibration, and data representativeness for marine field spectroscopy campaigns are investigated within the context of submerged benthic measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. A hybrid standard that serves as a "best of breed", incorporating useful modules and parameters within the standards, is proposed. This paper is Part 3 in a series of papers in this journal examining the issues central to a metadata standard for field spectroscopy datasets. The results presented in this paper are an important step towards field spectroscopy metadata standards that address the specific needs of field spectroscopy data stakeholders while facilitating dataset documentation, quality assurance, discoverability and data exchange within large-scale information sharing platforms.

  4. Building a gold standard to construct search filters: a case study with biomarkers for oral cancer.

    Science.gov (United States)

    Frazier, John J; Stein, Corey D; Tseytlin, Eugene; Bekhuis, Tanja

    2015-01-01

    To support clinical researchers, librarians and informationists may need search filters for particular tasks. Development of filters typically depends on a "gold standard" dataset. This paper describes generalizable methods for creating a gold standard to support future filter development and evaluation using oral squamous cell carcinoma (OSCC) as a case study. OSCC is the most common malignancy affecting the oral cavity. Investigation of biomarkers with potential prognostic utility is an active area of research in OSCC. The methods discussed here should be useful for designing quality search filters in similar domains. The authors searched MEDLINE for prognostic studies of OSCC, developed annotation guidelines for screeners, ran three calibration trials before annotating the remaining body of citations, and measured inter-annotator agreement (IAA). We retrieved 1,818 citations. After calibration, we screened the remaining citations (n = 1,767; 97.2%); IAA was substantial (kappa = 0.76). The dataset has 497 (27.3%) citations representing OSCC studies of potential prognostic biomarkers. The gold standard dataset is likely to be high quality and useful for future development and evaluation of filters for OSCC studies of potential prognostic biomarkers. The methodology we used is generalizable to other domains requiring a reference standard to evaluate the performance of search filters. A gold standard is essential because the labels regarding relevance enable computation of diagnostic metrics, such as sensitivity and specificity. Librarians and informationists with data analysis skills could contribute to developing gold standard datasets and subsequent filters tuned for their patrons' domains of interest.
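The inter-annotator agreement reported (kappa = 0.76) is Cohen's kappa, the chance-corrected agreement between two annotators. A self-contained sketch with toy screening labels (invented, not the study's data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed - expected agreement) / (1 - expected)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Expected chance agreement from each annotator's marginal label frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1.0 - expected)

# Toy screening labels (1 = relevant citation, 0 = not relevant)
a = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
b = [1, 0, 0, 0, 1, 0, 0, 1, 1, 1]
kappa = cohens_kappa(a, b)
```

Here the two annotators agree on 8 of 10 labels, but half of that agreement is expected by chance, so kappa lands well below the raw agreement rate.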

  5. Assessment of severe malaria in a multicenter, phase III, RTS, S/AS01 malaria candidate vaccine trial: case definition, standardization of data collection and patient care.

    Science.gov (United States)

    Vekemans, Johan; Marsh, Kevin; Greenwood, Brian; Leach, Amanda; Kabore, William; Soulanoudjingar, Solange; Asante, Kwaku Poku; Ansong, Daniel; Evans, Jennifer; Sacarlal, Jahit; Bejon, Philip; Kamthunzi, Portia; Salim, Nahya; Njuguna, Patricia; Hamel, Mary J; Otieno, Walter; Gesase, Samwel; Schellenberg, David

    2011-08-04

    An effective malaria vaccine, deployed in conjunction with other malaria interventions, is likely to substantially reduce the malaria burden. Efficacy against severe malaria will be a key driver for decisions on implementation. An initial study of an RTS,S vaccine candidate showed promising efficacy against severe malaria in children in Mozambique. Further evidence of its protective efficacy will be gained in a pivotal, multi-centre, phase III study. This paper describes the case definitions of severe malaria used in this study and the programme for standardized assessment of severe malaria according to the case definitions. Case definitions of severe malaria were developed from a literature review and a consensus meeting of expert consultants and the RTS,S Clinical Trial Partnership Committee, in collaboration with the World Health Organization and the Malaria Clinical Trials Alliance. The same groups, with input from an Independent Data Monitoring Committee, developed and implemented a programme for standardized data collection. The case definitions developed reflect the typical presentations of severe malaria in African hospitals. Markers of disease severity were chosen on the basis of their association with poor outcome, occurrence in a significant proportion of cases, and an ability to standardize their measurement across research centres. For the primary case definition, one or more clinical and/or laboratory markers of disease severity have to be present, four major co-morbidities (pneumonia, meningitis, bacteraemia or gastroenteritis with severe dehydration) are excluded, and a Plasmodium falciparum parasite density threshold is introduced, in order to maximize the specificity of the case definition. Secondary case definitions allow inclusion of co-morbidities and/or allow for the presence of parasitaemia at any density. The programmatic implementation of standardized case assessment included a clinical algorithm for evaluating seriously sick children

  6. Nuclear EMP: key suppression device parameters for EMP hardening

    International Nuclear Information System (INIS)

    Durgin, D.L.; Brown, R.M.

    1975-03-01

    The electrical transients induced by EMP exhibit unique characteristics which differ considerably from transients associated with other phenomena such as lightning, switching, and circuit malfunctions. The suppression techniques developed to handle more common transients, though not necessarily the same devices, can be used for EMP damage protection. The suppression devices used for circuit-level EMP protection are referred to as Terminal Protection Devices (TPDs). Little detailed data describing the response of TPDs to EMP-related transients have been published. While most vendors publish specifications for TPD performance, there is little standardization of parameters, and TPD response models are not available. This lack of parameter standardization has resulted in a proliferation of test data that is sometimes conflicting and often not directly comparable. This paper derives and/or defines a consistent set of parameters based on EMP circuit hardening requirements and on measurable component parameters, and is concerned only with the use of TPDs to prevent permanent damage. Three sets of parameters pertaining to pertinent TPD functional characteristics were defined: standby parameters, protection parameters, and failure parameters. These parameters are used to evaluate a representative sample of TPDs, and the results are presented in matrix form to facilitate the selection of devices for specific hardening problems.

  7. Fertilizer standards for controlling groundwater nitrate pollution from agriculture: El Salobral-Los Llanos case study, Spain

    Science.gov (United States)

    Peña-Haro, S.; Llopis-Albert, C.; Pulido-Velazquez, M.; Pulido-Velazquez, D.

    2010-10-01

    would be required to reduce nitrate concentrations in groundwater below the standard of 50 mg/l. In this particular case, it is more cost-efficient to apply standards to fertilizer use than taxes, although the instrument of fertilizer standards is more difficult to implement and control.

  8. Supersymmetric contribution to the electroweak ρ parameter

    International Nuclear Information System (INIS)

    Drees, M.; Hagiwara, K.

    1990-01-01

    Contributions to the electroweak ρ parameter, the ratio of the neutral- and charged-current strengths at zero momentum transfer, are studied in the minimal extension of the standard model (SM) with softly broken supersymmetry. The effects of the extended Higgs sector, the gaugino-Higgsino sector, the light-squark–slepton sector and the stop-sbottom sector are studied separately, and the role of the custodial SU(2)_V symmetry in each sector is clarified. The stop-sbottom sector is found to give potentially the most significant contribution to δρ, which can double the standard-model contribution from the top-bottom sector, whereas all the remaining sectors contribute to δρ at the level of at most a few ×10⁻³. In the supergravity model with radiative electroweak gauge symmetry breaking there are no extra sources of SU(2)_V breaking at the grand unification scale other than those present already in the SM, and the resulting δρ is found to be significantly smaller than in the general case. Constraints on the allowed range of δρ in the supergravity models are given by taking account of existing and prospective experimental mass limits on additional particles at CERN LEP and Spp̄S and the Fermilab Tevatron.

  9. Radioactive standards type EK. Radioaktivni etalony EK

    Energy Technology Data Exchange (ETDEWEB)

    1978-01-01

    The standard gives the terminology and data on accuracy, properties, uses and technical parameters of EK type standards with the radionuclides ³H and ¹⁴C. Also given are conditions for testing, packing, transportation and storage.

  10. The Role of Large-Format Histopathology in Assessing Subgross Morphological Prognostic Parameters: A Single Institution Report of 1000 Consecutive Breast Cancer Cases

    Directory of Open Access Journals (Sweden)

    Tibor Tot

    2012-01-01

    Full Text Available Breast cancer subgross morphological parameters (disease extent, lesion distribution, and tumor size) provide significant prognostic information and guide therapeutic decisions. Modern multimodality radiological imaging can determine these parameters with increasing accuracy in most patients. Large-format histopathology preserves the spatial relationship of the tumor components and their relationship to the resection margins, and has clear advantages over traditional routine pathology techniques. We report a series of 1000 consecutive breast cancer cases worked up with large-format histology with detailed radiological-pathological correlation. We confirmed that breast carcinomas often exhibit complex subgross morphology in both early and advanced stages. Half of the cases were extensive tumors that occupied a tissue space ≥40 mm in its largest dimension. Because both in situ and invasive tumor components may exhibit unifocal, multifocal, and diffuse lesion distribution, 17 different breast cancer growth patterns can be observed. Combining in situ and invasive tumor components, most cases fall into three aggregate growth patterns: unifocal (36%), multifocal (35%), and diffuse (28%). Large-format histology categories of tumor size and disease extent were concordant with radiological measurements in approximately 80% of the cases. Noncalcified low-grade in situ foci and invasive tumor foci <5 mm were the most frequent causes of discrepant findings.

  11. Standard model extended by a heavy singlet: Linear vs. nonlinear EFT

    Energy Technology Data Exchange (ETDEWEB)

    Buchalla, G., E-mail: gerhard.buchalla@lmu.de; Catà, O.; Celis, A.; Krause, C.

    2017-04-15

    We consider the Standard Model extended by a heavy scalar singlet in different regions of parameter space and construct the appropriate low-energy effective field theories up to first nontrivial order. This top-down exercise in effective field theory is meant primarily to illustrate with a simple example the systematics of the linear and nonlinear electroweak effective Lagrangians and to clarify the relation between them. We discuss power-counting aspects and the transition between both effective theories on the basis of the model, confirming in all cases the rules and procedures derived in previous works from a bottom-up approach.

  12. Essential patents in industry standards : the case of UMTS

    NARCIS (Netherlands)

    Bekkers, R.N.A.; Bongard, R.; Nuvolari, A.

    2009-01-01

    We study the determinants of essential patents in industry standards. In particular, we assess the role of two main factors: the significance of the technological solution contained in the patent and the involvement of the applicant of the patent in the standardization process. To this end, we

  13. Adoption of the Spanish eco design standard UNE 150301. A case study; Adopcion de la norma UNE 150301 de ecodeseno. Un estudio de casos

    Energy Technology Data Exchange (ETDEWEB)

    Arana-Landin, G.; Heras-Saizarbitoria, I.

    2010-07-01

    This article analyses the implementation of the Spanish National Standard UNE 150301, launched by AENOR in 2003. After studying the framework of this standard, its objectives and potential impact, the actual process of its implementation is examined for the first time, using three Spanish industrial companies as case studies, all of which are pioneers in the adoption of this environmental standard. (Author)

  14. Identification of parameters underlying emotions and a classification of emotions

    OpenAIRE

    Kumar, N. Arvind

    2008-01-01

    The standard classification of emotions involves categorizing the expression of emotions. In this paper, parameters underlying some emotions are identified and a new classification based on these parameters is suggested.

  15. A Non-standard Empirical Likelihood for Time Series

    DEFF Research Database (Denmark)

    Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.

    Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version of BEL based on a simple, though non-standard, data-blocking rule which uses a data block of every possible length. Consequently, the method involves no block selection and is also anticipated to exhibit better coverage performance. Its non-standard blocking scheme, however, induces non-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi...

  16. A case-control analysis of common variants in GIP with type 2 diabetes and related biochemical parameters in a South Indian population

    Directory of Open Access Journals (Sweden)

    Kumar Harish

    2010-07-01

    Full Text Available Abstract Background Glucose-dependent insulinotropic polypeptide (GIP) is one of the incretins, which plays a crucial role in the secretion of insulin upon food stimulus and in the regulation of postprandial glucose level. It also exerts an effect on the synthesis and secretion of lipoprotein lipase from adipocytes, important for lipid metabolism. The aim of our study was to perform a case-control association analysis of common variants in GIP in association with type 2 diabetes and related biochemical parameters. Method A total of 2000 subjects, comprising 1000 (584M/416F) cases with type 2 diabetes and 1000 (470M/530F) normoglycemic control subjects of Dravidian ethnicity from South India, were recruited to assess the effect of single nucleotide polymorphisms (SNPs) in GIP (rs2291725, rs2291726, rs937301) on type 2 diabetes in a case-control manner. The SNPs were genotyped using tetra-primer amplification refractory mutation system PCR (ARMS-PCR). For statistical analysis, our study population was divided into subgroups based on gender (male and female). Association analysis was carried out using the chi-squared test, and the comparison of biochemical parameters among the three genotypes was performed using analysis of covariance (ANCOVA). Result Initial analysis revealed that, of the three SNPs selected for the present study, two (rs2291726 and rs937301) were in complete linkage disequilibrium (LD) with each other. Therefore, only two SNPs, rs2291725 and rs2291726, were genotyped for the association studies. No significant difference in the allele frequency or genotype distribution of any of the SNPs in GIP was observed between cases and controls (P > 0.05). Analysis of biochemical parameters among the three genotypes showed a significant association of total cholesterol (P = 0.042) and low-density lipoprotein (LDL) (P = 0.004) with the G allele of the SNP rs2291726 in GIP, but this was observed only in the case of female
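    The allele-frequency comparison described above can be sketched with a chi-squared contingency test. The allele counts below are hypothetical (the record does not report raw counts), and SciPy's `chi2_contingency` stands in for the chi-squared test named in the abstract:

```python
import numpy as np
from scipy.stats import chi2_contingency

def allele_association(case_alleles, control_alleles):
    """Chi-squared test on a 2x2 table of allele counts
    (rows: cases/controls; columns: allele A / allele G)."""
    table = np.array([case_alleles, control_alleles])
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p, dof

# Hypothetical allele counts for 1000 cases and 1000 controls
# (2N chromosomes per group of N subjects); NOT the study's data.
chi2, p, dof = allele_association((1150, 850), (1100, 900))
significant = p < 0.05
```

With balanced counts like these, the test correctly reports no significant allele-frequency difference, mirroring the study's P > 0.05 finding.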

  17. Hecke algebras with unequal parameters

    CERN Document Server

    Lusztig, G

    2003-01-01

    Hecke algebras arise in representation theory as endomorphism algebras of induced representations. One of the most important classes of Hecke algebras is related to representations of reductive algebraic groups over p-adic or finite fields. In 1979, in the simplest (equal parameter) case of such Hecke algebras, Kazhdan and Lusztig discovered a particular basis (the KL-basis) in a Hecke algebra, which is very important in studying relations between representation theory and geometry of the corresponding flag varieties. It turned out that the elements of the KL-basis also possess very interesting combinatorial properties. In the present book, the author extends the theory of the KL-basis to a more general class of Hecke algebras, the so-called algebras with unequal parameters. In particular, he formulates conjectures describing the properties of Hecke algebras with unequal parameters and presents examples verifying these conjectures in particular cases. Written in the author's precise style, the book gives rese...

  18. varying elastic parameters distributions

    KAUST Repository

    Moussawi, Ali

    2014-12-01

    The experimental identification of mechanical properties is crucial in mechanics for understanding material behavior and for the development of numerical models. Classical identification procedures employ standard shaped specimens, assume that the mechanical fields in the object are homogeneous, and recover global properties. Thus, multiple tests are required for full characterization of a heterogeneous object, leading to a time consuming and costly process. The development of non-contact, full-field measurement techniques from which complex kinematic fields can be recorded has opened the door to a new way of thinking. From the identification point of view, suitable methods can be used to process these complex kinematic fields in order to recover multiple spatially varying parameters through one test or a few tests. The requirement is the development of identification techniques that can process these complex experimental data. This thesis introduces a novel identification technique called the constitutive compatibility method. The key idea is to define stresses as compatible with the observed kinematic field through the chosen class of constitutive equation, making possible the uncoupling of the identification of stress from the identification of the material parameters. This uncoupling leads to parametrized solutions in cases where the solution is non-unique (due to unknown traction boundary conditions), as demonstrated on 2D numerical examples. First the theory is outlined and the method is demonstrated in 2D applications. Second, the method is implemented within a domain decomposition framework in order to reduce the cost for processing very large problems. Finally, it is extended to 3D numerical examples. Promising results are shown for 2D and 3D problems.

  19. Comparison of dynamic contrast-enhanced MRI parameters of breast lesions at 1.5 and 3.0 T: a pilot study

    Science.gov (United States)

    Pineda, F D; Medved, M; Fan, X; Ivancevic, M K; Abe, H; Shimauchi, A; Newstead, G M

    2015-01-01

    Objective: To compare dynamic contrast-enhanced (DCE) MRI parameters from scans of breast lesions at 1.5 and 3.0 T. Methods: 11 patients underwent paired MRI examinations on both Philips 1.5 and 3.0 T systems (Best, Netherlands) using a standard clinical fat-suppressed, T1 weighted DCE-MRI protocol with 70–76 s temporal resolution. Signal intensity vs time curves were fit with an empirical mathematical model to obtain semi-quantitative measures of uptake and washout rates as well as time-to-peak enhancement (TTP). Maximum percent enhancement and signal enhancement ratio (SER) were also measured for each lesion. Percent differences between parameters measured at the two field strengths were compared. Results: TTP and SER parameters measured at 1.5 and 3.0 T were similar, with mean absolute differences of 19% and 22%, respectively. Maximum percent signal enhancement was significantly higher at 3 T than at 1.5 T (p = 0.006). Qualitative assessment showed that image quality was significantly higher at 3 T (p = 0.005). Conclusion: Our results suggest that TTP and SER are more robust to field strength change than the other measured kinetic parameters, and therefore measurements of these parameters can be more easily standardized than measurements of other parameters derived from DCE-MRI. Semi-quantitative measures of overall kinetic curve shape showed higher reproducibility than discrete classification of kinetic curve early and delayed phases in a majority of the cases studied. Advances in knowledge: Qualitative measures of curve shape are not consistent across field strengths even when acquisition parameters are standardized. Quantitative measures of overall kinetic curve shape, by contrast, have higher reproducibility. PMID:25785918
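    The empirical curve fitting can be illustrated as follows. The uptake-washout form used here is a common choice and an assumption, not necessarily the study's exact model, and the parameter values are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# A common empirical uptake-washout form (an assumption; the study's exact
# model is not given here): S(t) = A * (1 - exp(-alpha*t)) * exp(-beta*t)
def emp_model(t, A, alpha, beta):
    return A * (1.0 - np.exp(-alpha * t)) * np.exp(-beta * t)

def fit_kinetics(t, s):
    popt, _ = curve_fit(emp_model, t, s, p0=(1.0, 0.05, 0.005), maxfev=5000)
    A, alpha, beta = popt
    ttp = np.log((alpha + beta) / beta) / alpha   # time-to-peak from dS/dt = 0
    return A, alpha, beta, ttp

# Synthetic noiseless curve sampled at ~73 s, matching the record's
# 70-76 s temporal resolution
t = np.arange(0.0, 600.0, 73.0)
s = emp_model(t, 2.0, 0.02, 0.001)
A, alpha, beta, ttp = fit_kinetics(t, s)
```

Semi-quantitative parameters such as TTP fall out of the fitted coefficients analytically, which is why they can be compared directly across field strengths.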

  20. On standardization of low symmetry crystal fields

    Science.gov (United States)

    Gajek, Zbigniew

    2015-07-01

    Standardization methods for low-symmetry (orthorhombic, monoclinic and triclinic) crystal fields are formulated and discussed. Two alternative approaches are presented: the conventional one, based on the second-rank parameters, and a standardization based on the fourth-rank parameters. Mainly f-electron systems are considered, but some guidelines for d-electron systems and the spin Hamiltonian describing the zero-field splitting are given. The discussion focuses on premises for choosing the most suitable method, in particular on the inadequacy of the conventional one. A few examples from the literature illustrate this situation.

  1. Tibiofemoral wear in standard and non-standard squat: implication for total knee arthroplasty.

    Science.gov (United States)

    Fekete, Gusztáv; Sun, Dong; Gu, Yaodong; Neis, Patric Daniel; Ferreira, Ney Francisco; Innocenti, Bernardo; Csizmadia, Béla M

    2017-01-01

    Due to more resilient biomaterials, problems related to wear in total knee replacements (TKRs) have decreased but not disappeared. Among the design-related factors, wear is still the second most important mechanical factor limiting the lifetime of TKRs, and it is also highly influenced by the local kinematics of the knee. During wear experiments, a constant load and a constant slide-roll ratio are frequently applied in tribo-tests, among other important parameters. Nevertheless, numerous studies have demonstrated that a constant slide-roll ratio is not an accurate approach when TKR wear is modelled, and that instead of a constant load a flexion-angle-dependent tibiofemoral force should be included in the wear model to obtain realistic results. A new analytical wear model, based upon Archard's law, is introduced, which can determine the effect of the tibiofemoral force and the varying slide-roll ratio on wear in the tibiofemoral connection under standard and non-standard squat movements. The calculated total wear with constant slide-roll during standard squat was 5.5 times higher than the reference value, while if total wear includes varying slide-roll during standard squat, the calculated wear was approximately 6.25 times higher. With regard to non-standard squat, total wear with constant slide-roll was 4.16 times higher than the reference value; if total wear included varying slide-roll, the calculated wear was approximately 4.75 times higher. It was demonstrated that the augmented force parameter alone caused a 65% higher wear volume, while the slide-roll ratio itself increased wear volume by 15% compared to the reference value. These results indicate that the force component has the major effect on wear propagation, and that non-standard squat should be proposed for TKR patients as a rehabilitation exercise.

  2. Relic abundance of WIMPs in non-standard cosmological scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Yimingniyazi, W.

    2007-08-06

    In this thesis we study the relic density n{sub {chi}} of non-relativistic long-lived or stable particles {chi} in various non-standard cosmological scenarios. First, we discuss the relic density in the non-standard cosmological scenario in which the temperature is too low for the particles {chi} to achieve full chemical equilibrium. We also investigate the case where {chi} particles are non-thermally produced from the decay of heavier particles in addition to the usual thermal production. In the low-temperature scenario, we calculate the relic abundance starting from arbitrary initial temperatures T{sub 0} of the radiation-dominated epoch and derive approximate solutions for the temperature dependence of the relic density which accurately reproduce numerical results when full thermal equilibrium is not achieved. If full equilibrium is reached, our ansatz no longer reproduces the correct temperature dependence of the {chi} number density. However, we can contrive a semi-analytic formula which gives the correct final relic density, to an accuracy of about 3% or better, for all cross sections and initial temperatures. We also derive the lower bound on the initial temperature T{sub 0}, assuming that the relic particle accounts for the dark matter energy density in the universe. The observed cold dark matter abundance constrains the initial temperature to T{sub 0} {>=}m{sub {chi}}/23, where m{sub {chi}} is the mass of {chi}. Second, we discuss the {chi} density in the scenario where the Hubble parameter is modified. Even in this case, an approximate formula similar to the standard one is found to be capable of predicting the final relic abundance correctly. Choosing the {chi} annihilation cross section such that the observed cold dark matter abundance is reproduced in standard cosmology, we constrain possible modifications of the expansion rate at T {proportional_to}m{sub {chi}}/20, well before Big Bang Nucleosynthesis. (orig.)

  3. Relic abundance of WIMPs in non-standard cosmological scenarios

    International Nuclear Information System (INIS)

    Yimingniyazi, W.

    2007-01-01

    In this thesis we study the relic density n χ of non-relativistic long-lived or stable particles χ in various non-standard cosmological scenarios. First, we discuss the relic density in the non-standard cosmological scenario in which the temperature is too low for the particles χ to achieve full chemical equilibrium. We also investigate the case where χ particles are non-thermally produced from the decay of heavier particles in addition to the usual thermal production. In the low-temperature scenario, we calculate the relic abundance starting from arbitrary initial temperatures T 0 of the radiation-dominated epoch and derive approximate solutions for the temperature dependence of the relic density which accurately reproduce numerical results when full thermal equilibrium is not achieved. If full equilibrium is reached, our ansatz no longer reproduces the correct temperature dependence of the χ number density. However, we can contrive a semi-analytic formula which gives the correct final relic density, to an accuracy of about 3% or better, for all cross sections and initial temperatures. We also derive the lower bound on the initial temperature T 0 , assuming that the relic particle accounts for the dark matter energy density in the universe. The observed cold dark matter abundance constrains the initial temperature to T 0 ≥m χ /23, where m χ is the mass of χ. Second, we discuss the χ density in the scenario where the Hubble parameter is modified. Even in this case, an approximate formula similar to the standard one is found to be capable of predicting the final relic abundance correctly. Choosing the χ annihilation cross section such that the observed cold dark matter abundance is reproduced in standard cosmology, we constrain possible modifications of the expansion rate at T ∝m χ /20, well before Big Bang Nucleosynthesis. (orig.)

  4. The Spatial Relationship between Apparent Diffusion Coefficient and Standardized Uptake Value of 18F-Fluorodeoxyglucose Has a Crucial Influence on the Numeric Correlation of Both Parameters in PET/MRI of Lung Tumors.

    Science.gov (United States)

    Sauter, Alexander W; Stieltjes, Bram; Weikert, Thomas; Gatidis, Sergios; Wiese, Mark; Klarhöfer, Markus; Wild, Damian; Lardinois, Didier; Bremerich, Jens; Sommer, Gregor

    2017-01-01

    The minimum apparent diffusion coefficient (ADCmin) derived from diffusion-weighted MRI (DW-MRI) and the maximum standardized uptake value (SUVmax) of FDG-PET are markers of aggressiveness in lung cancer. The numeric correlation of the two parameters has been extensively studied, but their spatial interplay is not well understood. After FDG-PET and DW-MRI coregistration, the values and locations of the ADCmin and SUVmax voxels were analyzed. The upper limit of the 95% confidence interval for registration accuracy of sequential PET/MRI was 12 mm, and the mean distance (D) between the ADCmin and SUVmax voxels was 14.0 mm (average of two readers). Spatial mismatch (D > 12 mm) between ADCmin and SUVmax was found in 9/25 patients. A considerable number of mismatch cases (65%) was also seen in a control group that underwent simultaneous PET/MRI. In the entire patient cohort, no statistically significant correlation between SUVmax and ADCmin was seen, while a moderate negative linear relationship (r = -0.5) between SUVmax and ADCmin was observed in tumors with a spatial match (D ≤ 12 mm). In conclusion, spatial mismatch between ADCmin and SUVmax is found in a considerable percentage of patients. The spatial relationship of the two parameters SUVmax and ADCmin has a crucial influence on their numeric correlation.
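    The match/mismatch criterion and the within-subgroup correlation can be sketched as follows. All voxel indices, voxel sizes and parameter values are hypothetical illustrations, not data from the study:

```python
import numpy as np
from scipy.stats import pearsonr

def voxel_distance_mm(idx_a, idx_b, voxel_size_mm):
    """Euclidean distance (mm) between two voxel indices after
    co-registration, e.g. the ADCmin and SUVmax locations."""
    diff = (np.asarray(idx_a) - np.asarray(idx_b)) * np.asarray(voxel_size_mm)
    return float(np.linalg.norm(diff))

# Hypothetical voxel indices on an anisotropic grid (illustration only)
d = voxel_distance_mm((10, 22, 5), (13, 25, 6), (3.0, 3.0, 4.0))
matched = d <= 12.0        # the study's registration-accuracy threshold

# Hypothetical per-patient values for a spatially matched subgroup;
# an inverse SUVmax-ADCmin relationship gives a negative Pearson r
suv_max = [8.1, 12.4, 5.2, 9.7, 15.0]
adc_min = [0.95, 0.60, 1.20, 0.80, 0.55]    # x10^-3 mm^2/s
r, p = pearsonr(suv_max, adc_min)
```

Restricting the correlation analysis to cases with `matched == True` is the step that, per the abstract, turns a null result in the whole cohort into a moderate negative correlation.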

  5. Productivity standards for histology laboratories.

    Science.gov (United States)

    Buesa, René J

    2010-04-01

    The information from 221 US histology laboratories (histolabs) and 104 from 24 other countries with workloads from 600 to 116 000 cases per year was used to calculate productivity standards for 23 technical and 27 nontechnical tasks and for 4 types of work flow indicators. The sample includes 254 human, 40 forensic, and 31 veterinary pathology services. Statistical analyses demonstrate that most productivity standards are not different between services or worldwide. The total workload for the US human pathology histolabs averaged 26 061 cases per year, with 54% between 10 000 and less than 30 000. The total workload for 70% of the histolabs from other countries was less than 20 000, with an average of 15 226 cases per year. The fundamental manual technical tasks in the histolab and their productivity standards are as follows: grossing (14 cases per hour), cassetting (54 cassettes per hour), embedding (50 blocks per hour), and cutting (24 blocks per hour). All the other tasks, each with their own productivity standards, can be completed by auxiliary staff or using automatic instruments. Depending on the level of automation of the histolab, all the tasks derived from a workload of 25 cases will require 15.8 to 17.7 hours of work completed by 2.4 to 2.7 employees with 18% of their working time not directly dedicated to the production of diagnostic slides. This article explains how to extrapolate this productivity calculation for any workload and different levels of automation. The overall performance standard for all the tasks, including 8 hours for automated tissue processing, is 3.2 to 3.5 blocks per hour; and its best indicator is the value of the gross work flow productivity that is essentially dependent on how the work is organized. This article also includes productivity standards for forensic and veterinary histolabs, but the staffing benchmarks for histolabs will be the subject of a separate article. Copyright 2010 Elsevier Inc. All rights reserved.
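    The extrapolation of staffing from the reported task-level standards can be sketched as below. Only the four fundamental manual tasks plus the reported 18% indirect time are modelled (the article's full calculation spans 50 tasks, so it yields larger totals), and the blocks-per-case ratio is an assumption:

```python
# Reported manual productivity standards (units per hour)
GROSSING = 14.0      # cases per hour
CASSETTING = 54.0    # cassettes per hour
EMBEDDING = 50.0     # blocks per hour
CUTTING = 24.0       # blocks per hour

def staff_hours(cases, blocks_per_case=2.5, indirect=0.18, day_hours=8.0):
    """Hours and full-time staff for a daily workload; blocks_per_case
    is an assumed ratio, and one cassette is taken to yield one block."""
    blocks = cases * blocks_per_case
    direct = (cases / GROSSING + blocks / CASSETTING
              + blocks / EMBEDDING + blocks / CUTTING)
    hours = direct / (1.0 - indirect)     # add indirect working time
    return hours, hours / day_hours

hours, fte = staff_hours(25)              # the article's 25-case example
```

Because the sketch omits the remaining technical and nontechnical tasks, it lands well under the article's 15.8-17.7 hour figure for 25 cases; the scaling logic, not the absolute number, is the point.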

  6. Wind Climate Parameters for Wind Turbine Fatigue Load Assessment

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Svenningsen, Lasse; Moser, Wolfgang

    2016-01-01

    Site-specific assessment of wind turbine design requires verification that the individual wind turbine components can survive the site-specific wind climate. The wind turbine design standard, IEC 61400-1 (third edition), describes how this should be done using a simplified, equivalent wind climate. ... climate required by the current design standard, by comparing damage equivalent fatigue loads estimated based on wind climate parameters for each 10 min time-series with fatigue loads estimated based on the equivalent wind climate parameters. Wind measurements from Boulder, CO, in the United States ...

  7. A fully-automated software pipeline for integrating breast density and parenchymal texture analysis for digital mammograms: parameter optimization in a case-control breast cancer risk assessment study

    Science.gov (United States)

    Zheng, Yuanjie; Wang, Yan; Keller, Brad M.; Conant, Emily; Gee, James C.; Kontos, Despina

    2013-02-01

    Estimating a woman's risk of breast cancer is becoming increasingly important in clinical practice. Mammographic density, estimated as the percent of dense (PD) tissue area within the breast, has been shown to be a strong risk factor. Studies also support a relationship between mammographic texture and breast cancer risk. We have developed a fully-automated software pipeline for computerized analysis of digital mammography parenchymal patterns that quantitatively measures both breast density and texture properties. Our pipeline combines advanced computer algorithms of pattern recognition, computer vision, and machine learning, and offers a standardized tool for breast cancer risk assessment studies. Unlike many existing methods that perform parenchymal texture analysis within specific breast subregions, our pipeline extracts texture descriptors for points on a regular spatial lattice and from a surrounding window of each lattice point, to characterize the local mammographic appearance throughout the whole breast. To demonstrate the utility of our pipeline, and to optimize its parameters, we perform a case-control study by retrospectively analyzing a total of 472 digital mammography studies. Specifically, we investigate the window size, a lattice-related parameter, and compare the performance of texture features to that of breast PD in classifying case-control status. Our results suggest that different window sizes may be optimal for raw (12.7 mm²) versus vendor post-processed images (6.3 mm²). We also show that the combination of PD and texture features outperforms PD alone. The improvement is significant (p=0.03) when raw images and a window size of 12.7 mm² are used, yielding an ROC AUC of 0.66. The combination of PD and our texture features computed from post-processed images with a window size of 6.3 mm² achieves an ROC AUC of 0.75.

  8. Optimization of shell-and-tube heat exchangers conforming to TEMA standards with designs motivated by constructal theory

    International Nuclear Information System (INIS)

    Yang, Jie; Fan, Aiwu; Liu, Wei; Jacobi, Anthony M.

    2014-01-01

    Highlights: • A design method of heat exchangers motivated by constructal theory is proposed. • A genetic algorithm is applied and the TEMA standards are rigorously followed. • Three cases are studied to illustrate the advantage of the proposed design method. • The design method will reduce the total cost compared to two other methods. - Abstract: A modified optimization design approach motivated by constructal theory is proposed for shell-and-tube heat exchangers in the present paper. In this method, a shell-and-tube heat exchanger is divided into several in-series heat exchangers. The Tubular Exchanger Manufacturers Association (TEMA) standards are rigorously followed for all design parameters. The total cost of the whole shell-and-tube heat exchanger is set as the objective function, including the investment cost for initial manufacture and the operational cost involving the power consumption to overcome the frictional pressure loss. A genetic algorithm is applied to minimize the cost function by adjusting parameters such as the tube and shell diameters, tube length and tube arrangement. Three cases are studied which indicate that the modified design approach can significantly reduce the total cost compared to the original design method and traditional genetic algorithm design method

  9. Transfer parameters for routine release of HTO. Consideration of OBT

    International Nuclear Information System (INIS)

    Galeriu, D.; Paunescu, N.; Cotarlea

    1997-01-01

    Knowledge of the transfer parameters for tritium is a key requirement for assessing the public dose or establishing Derived Release Limits (DRL) appropriate for a heavy water reactor. This report revises the transfer parameters used to assess tritium doses via the ingestion pathway. First, the procedure used in the Canadian standard CSA-N288.1 to assess the DRL for tritium is revisited, clearing up some misunderstandings about the derivation of transfer parameters from air to forage and animal products. Secondly, we derive the transfer parameters by applying conditions of full equilibrium to dynamic equations that describe the transfer of tritiated water in food. The new transfer parameters for tritiated water in food are more plant- and site-specific than the generic transfer parameters. The most important improvement is the introduction of organically bound tritium (OBT) production in plants and animal products. Bulk transfer parameters are introduced, which include OBT as well as HTO. Based on a standard Canadian diet, the dose increase when OBT is considered is almost 50%. Recent experimental data obtained under equilibrium conditions are discussed, and the appropriateness of the revised transfer parameters for assessment purposes is demonstrated. (authors)

  10. Transfer parameters for routine release of HTO - consideration of OBT

    International Nuclear Information System (INIS)

    Galeriu, D.

    1994-06-01

    Knowledge of the transfer parameters for tritium is a key requirement to assess the public dose or to establish Derived Release Limits (DRL) appropriate for a heavy-water reactor. This report revises the transfer parameters used to assess tritium doses via the ingestion pathway. First, the procedure used in Canadian standard CSA-N288.1 to assess the DRL for tritium is revised, clearing up some misunderstandings about the derivation of transfer parameters for air to forage and animal products. Second, we rederive the transfer parameters, applying conditions of full equilibrium to dynamic equations that describe the transfer of tritiated water in food. The new transfer parameters for tritiated water in food are more plant- and site-specific than the generic transfer parameters. The most important improvement is the introduction of organically bound tritium (OBT) production in plants or animal products. Bulk transfer parameters are introduced, which include OBT as well as HTO. Based on a standard Canadian diet, the dose increase considering OBT is almost 50%. Recent experimental data obtained under equilibrium conditions are discussed, and the appropriateness of the revised transfer parameters for assessment purposes is demonstrated. (author). 26 refs., 7 tabs

  11. Desert plains classification based on Geomorphometrical parameters (Case study: Aghda, Yazd)

    Science.gov (United States)

    Tazeh, mahdi; Kalantari, Saeideh

    2013-04-01

    This research focuses on plains. Several methods and classification schemes have been presented for plain classification. One natural-resource-based classification, widely used in Iran, divides plains into three types: Erosional Pediment, Denudational Pediment, and Aggradational Piedmont. Qualitative and quantitative factors are used to differentiate them from each other. In this study, Geomorphometrical parameters effective in differentiating landforms were applied to plains. Geomorphometrical parameters are calculable and can be extracted using mathematical equations and the corresponding relations on a digital elevation model. The Geomorphometrical parameters used in this study included Percent of Slope, Plan Curvature, Profile Curvature, Minimum Curvature, Maximum Curvature, Cross-sectional Curvature, Longitudinal Curvature and Gaussian Curvature. The results indicated that the most important Geomorphometrical parameters for plain and desert classification include Percent of Slope, Minimum Curvature, Profile Curvature, and Longitudinal Curvature. Key Words: Plain, Geomorphometry, Classification, Biophysical, Yazd Khezarabad.
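    Extracting such parameters from a digital elevation model can be sketched with finite differences. The curvature function below is a simplified Laplacian proxy rather than the full Evans-style quadratic-surface fit commonly used in geomorphometry:

```python
import numpy as np

def slope_percent(dem, cell):
    """Slope in percent from a DEM: 100 * sqrt(zx^2 + zy^2)."""
    zy, zx = np.gradient(dem, cell)
    return 100.0 * np.sqrt(zx**2 + zy**2)

def curvature_proxy(dem, cell):
    """Simplified curvature (Laplacian of elevation); full plan/profile
    curvature would use the Evans quadratic-surface coefficients."""
    zy, zx = np.gradient(dem, cell)
    zyy, _ = np.gradient(zy, cell)
    _, zxx = np.gradient(zx, cell)
    return zxx + zyy

# Synthetic check: an inclined plane on a 30 m grid has constant slope
# and zero curvature everywhere.
cell = 30.0
x_idx, y_idx = np.meshgrid(np.arange(50), np.arange(50))
dem = 0.05 * cell * x_idx          # 5% grade along x
slope = slope_percent(dem, cell)
curv = curvature_proxy(dem, cell)
```

On real DEMs these rasters would then feed the classification: near-zero curvature with gentle slope suggests a pediment surface, while the curvature terms separate concave and convex piedmont forms.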

  12. A Case for Standards of Counseling Practice.

    Science.gov (United States)

    Anderson, Donald

    1992-01-01

    A mature counseling profession has entered the decade of the 1990s. Several factors including professionalism, accountability, health care consumerism, credentialism, and public demands for quality mental health care indicate a need for more definitive statements on standards of practice in counseling. In response to this need, an eight-point…

  13. Windowed Multitaper Correlation Analysis of Multimodal Brain Monitoring Parameters

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Although multimodal monitoring sets the standard in daily practice of neurocritical care, problem-oriented analysis tools to interpret the huge amount of data are lacking. Recently a mathematical model was presented that simulates cerebral perfusion and oxygen supply in case of a severe head trauma, predicting the appearance of distinct correlations between arterial blood pressure and intracranial pressure. In this study we present a set of mathematical tools that reliably detect the predicted correlations in data recorded at a neurocritical care unit. The time-resolved correlations are identified by a windowing technique combined with Fourier-based coherence calculations. The phasing of the data is detected by means of the Hilbert phase difference within the above-mentioned windows. A statistical testing method is introduced that allows tuning the parameters of the windowing method in such a way that a predefined accuracy is reached. With this method the data of fifteen patients were examined, and the predicted correlation was found in each patient. Additionally, it could be shown that the occurrence of a distinct correlation parameter, called scp, represents a predictive value of high quality for the patient's outcome.
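    The described pipeline, sliding windows plus Fourier-based coherence plus Hilbert phase differences, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the window length, step, and frequency band are hypothetical choices:

```python
import numpy as np
from scipy.signal import coherence, hilbert

def windowed_correlation(abp, icp, fs, win_s=300.0, step_s=60.0,
                         band=(0.003, 0.05)):
    """Sliding-window coherence and mean Hilbert phase difference between
    arterial blood pressure (abp) and intracranial pressure (icp).
    Returns one (mean coherence, mean phase difference) pair per window."""
    win, step = int(win_s * fs), int(step_s * fs)
    out = []
    for start in range(0, len(abp) - win + 1, step):
        a = abp[start:start + win]
        b = icp[start:start + win]
        # Welch-averaged magnitude-squared coherence within the window
        f, coh = coherence(a, b, fs=fs, nperseg=win // 4)
        sel = (f >= band[0]) & (f <= band[1])
        mean_coh = coh[sel].mean()
        # Instantaneous phase difference via the analytic signal,
        # wrapped to [-pi, pi] before averaging
        dphi = np.angle(hilbert(a - a.mean())) - np.angle(hilbert(b - b.mean()))
        out.append((mean_coh, np.angle(np.exp(1j * dphi)).mean()))
    return np.array(out)
```

    Two signals sharing a common slow oscillation yield coherence near one and a phase difference near zero in every window, which is the kind of time-resolved correlation the paper's detector looks for.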

  14. Windowed multitaper correlation analysis of multimodal brain monitoring parameters.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Although multimodal monitoring sets the standard in daily practice of neurocritical care, problem-oriented analysis tools to interpret the huge amount of data are lacking. Recently a mathematical model was presented that simulates cerebral perfusion and oxygen supply in case of a severe head trauma, predicting the appearance of distinct correlations between arterial blood pressure and intracranial pressure. In this study we present a set of mathematical tools that reliably detect the predicted correlations in data recorded at a neurocritical care unit. The time-resolved correlations are identified by a windowing technique combined with Fourier-based coherence calculations. The phasing of the data is detected by means of the Hilbert phase difference within the above-mentioned windows. A statistical testing method is introduced that allows tuning the parameters of the windowing method in such a way that a predefined accuracy is reached. With this method the data of fifteen patients were examined, and the predicted correlation was found in each patient. Additionally, it could be shown that the occurrence of a distinct correlation parameter, called scp, represents a predictive value of high quality for the patient's outcome.

  15. Sea surface stability parameters

    International Nuclear Information System (INIS)

    Weber, A.H.; Suich, J.E.

    1978-01-01

    A number of studies dealing with climatology of the Northwest Atlantic Ocean have been published in the last ten years. These published studies have dealt with directly measured meteorological parameters, e.g., wind speed, temperature, etc. This information has been useful because of the increased focus on the near coastal zone where man's activities are increasing in magnitude and scope, e.g., offshore power plants, petroleum production, and the subsequent environmental impacts of these activities. Atmospheric transport of passive or nonpassive material is significantly influenced by the turbulence structure of the atmosphere in the region of the atmosphere-ocean interface. This research entails identification of the suitability of standard atmospheric stability parameters which can be used to determine turbulence structure; the calculation of these parameters for the near-shore and continental shelf regions of the U.S. east coast from Cape Hatteras to Miami, Florida; and the preparation of a climatology of these parameters. In addition, a climatology for average surface stress for the same geographical region is being prepared

  16. Spectroscopic properties of a two-dimensional time-dependent Cepheid model. II. Determination of stellar parameters and abundances

    Science.gov (United States)

    Vasilyev, V.; Ludwig, H.-G.; Freytag, B.; Lemasle, B.; Marconi, M.

    2018-03-01

    Context. Standard spectroscopic analyses of variable stars are based on hydrostatic 1D model atmospheres. This quasi-static approach has not been theoretically validated. Aim. We aim at investigating the validity of the quasi-static approximation for Cepheid variables. We focus on the spectroscopic determination of the effective temperature Teff, surface gravity log g, microturbulent velocity ξt, and a generic metal abundance log A, here taken as iron. Methods: We calculated a grid of 1D hydrostatic plane-parallel models covering the ranges in effective temperature and gravity that are encountered during the evolution of a 2D time-dependent envelope model of a Cepheid computed with the radiation-hydrodynamics code CO5BOLD. We performed 1D spectral syntheses for artificial iron lines in local thermodynamic equilibrium by varying the microturbulent velocity and abundance. We fit the resulting equivalent widths to corresponding values obtained from our dynamical model for 150 instances in time, covering six pulsational cycles. In addition, we considered 99 instances during the initial non-pulsating stage of the temporal evolution of the 2D model. In the most general case, we treated Teff, log g, ξt, and log A as free parameters, and in two more limited cases, we fixed Teff and log g by independent constraints. We argue analytically that our approach of fitting equivalent widths is closely related to current standard procedures focusing on line-by-line abundances. Results: For the four-parametric case, the stellar parameters are typically underestimated and exhibit a bias in the iron abundance of ≈-0.2 dex. To avoid biases of this type, it is favorable to restrict the spectroscopic analysis to photometric phases ϕph ≈ 0.3…0.65 using additional information to fix the effective temperature and surface gravity. Conclusions: Hydrostatic 1D model atmospheres can provide unbiased estimates of stellar parameters and abundances of Cepheid variables for particular

  17. SPOTting Model Parameters Using a Ready-Made Python Package.

    Directory of Open Access Journals (Sweden)

    Tobias Houska

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
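    The workflow such a package automates (draw parameter sets from a prior, run the model, score with an objective function, keep track of the best sets) can be sketched in plain NumPy. This illustrative stand-in uses the Rosenbrock function as the "model" and is not SPOTPY's actual API:

```python
import numpy as np

def rosenbrock(theta):
    """Rosenbrock benchmark used as a stand-in model; optimum at (1, 1)."""
    x, y = theta
    return 100.0 * (y - x**2) ** 2 + (1.0 - x) ** 2

def mc_sample(objective, bounds, n=20000, seed=42):
    """Plain Monte Carlo sampler: draw parameter sets uniformly from the
    prior bounds, evaluate the objective, and keep the best set found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best_theta, best_obj = None, np.inf
    for _ in range(n):
        theta = rng.uniform(lo, hi)
        obj = objective(theta)
        if obj < best_obj:
            best_theta, best_obj = theta, obj
    return best_theta, best_obj

best_theta, best_obj = mc_sample(rosenbrock, [(-2.0, 2.0), (-2.0, 2.0)])
```

    Replacing the plain Monte Carlo loop with a smarter search strategy, while keeping the model and objective fixed, is exactly the swap a model-independent calibration package makes cheap.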

  18. Standardization in smart grids. Introduction to IT-related methodologies, architectures and standards

    Energy Technology Data Exchange (ETDEWEB)

    Uslar, Mathias; Specht, Michael; Daenekas, Christian; Trefke, Joern; Rohjans, Sebastian; Gonzalez, Jose M.; Rosinger, Christine; Bleiker, Robert [OFFIS - Institut fuer Informatik, Oldenburg (Germany)

    2013-03-01

    Introduction to standardization for Smart Grids. Presents a tutorial and best practices from Smart Grid prototype projects. Written by leading experts in the field. Besides the regulatory and market aspects, the technical level, dealing with knowledge from multiple disciplines and the aspects of technical system integration needed to achieve interoperability, has been a strong focus in the Smart Grid. This topic is typically covered by means of (technical) standards for processes, data models, functions and communication links. Standardization is a key issue for Smart Grids due to the involvement of many different sectors along the value chain, from generation to the appliances. The scope of the Smart Grid is broad; therefore, the standards landscape is unfortunately very large and complex. This is why the three European Standards Organizations ETSI, CEN and CENELEC created a so-called Joint Working Group (JWG). This was the first harmonized effort in Europe to bring together the needed disciplines and experts, delivering the final report in May 2011. After this approach proved useful, the Commission issued Mandate M/490: Standardization Mandate to European Standardization Organizations (ESOs) to support European Smart Grid deployment. The focal point addressing the ESOs' response to M/490 is the CEN, CENELEC and ETSI Smart Grids Coordination Group (SG-CG). Based on this mandate, meaningful standardization of architectures, use cases, communication technologies, data models and security standards takes place in the four existing working groups. This book provides an overview of the various building blocks and standards identified as the most prominent ones by the JWG report as well as by the first set of standards: IEC 61850 and CIM, IEC PAS 62559 for documenting Smart Grid use cases, security requirements from the SGIS group, and an introduction on how to apply the Smart Grid Architecture Model (SGAM) for utilities. In addition

  19. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A; Mert, M [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H A [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of semi-empirical nature. Resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the `tracking error` between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  20. Physico-chemical parameters and heavy metal contents of Ibuya ...

    African Journals Online (AJOL)

    The physico-chemical parameters and heavy metal contents of the Ibuya River were investigated between September 2012 and August 2013 at four stations, using standard methods, to determine compliance with acceptable water quality standards and to evaluate the possible sustainability of a thriving fisheries cum tourist sport fishing venture.

  1. Standard audit procedure for continuous emission monitors and ambient air monitoring instruments

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-06-15

    The audit procedures for these instruments were published in an operational policy manual in 2009. This policy introduces standard audit criteria that can be used to determine whether continuous emission monitors and ambient air monitoring devices are operating within acceptable parameters. Each piece of audit equipment used in the field is required to be at normal operating conditions before upscale points are delivered to the instrument being audited. Before the beginning of the audit, all meteorological and flow measurement equipment must be conditioned to current conditions. If the audit fails, the instrument will have to be audited quarterly. For non-continuous monitoring instruments whose operational principles fall outside of the audit procedures listed in the document, specific procedures must be established based on the operational standards of the instrument manufacturer or certifying body.

  2. Radiation safety assessment and development of environmental radiation monitoring technology; standardization of input parameters for the calculation of annual dose from routine releases from commercial reactor effluents

    Energy Technology Data Exchange (ETDEWEB)

    Rhee, I. H.; Cho, D.; Youn, S. H.; Kim, H. S.; Lee, S. J.; Ahn, H. K. [Soonchunhyang University, Ahsan (Korea)

    2002-04-01

    This research develops a standard methodology for determining the input parameters that have a substantial impact on the radiation doses of residents in the vicinity of the four nuclear power plants in Korea. We selected critical nuclides, pathways and organs related to human exposure via simulated estimation with K-DOSE 60, based on the updated ICRP-60, and sensitivity analyses. From the results, we found that 1) the critical nuclides were {sup 3}H, {sup 133}Xe, {sup 60}Co for the Kori plants and {sup 14}C, {sup 41}Ar for the Wolsong plants. The most critical pathway was 'vegetable intake' for adults and 'milk intake' for infants. However, there was no preference among the effective organs; and 2) sensitivity analyses showed that the chemical composition of a nuclide influenced the radiation dose much more than any other input parameter, such as food intake, radiation discharge, and transfer/concentration coefficients, by a factor of more than 10{sup 2}. The effect of transfer/concentration coefficients on the radiation dose was negligible. All input parameters showed a high correlation with the radiation dose, close to 1.0, except for food intake at the Wolsong power plant (partial correlation coefficient (PCC)=0.877). Consequently, we suggest that a prediction model or scenarios for food intake reflecting current living trends, and formal publications including details of the chemical components of the critical nuclides from each plant, are needed. Also, standardized domestic values of the parameters used in the calculation must replace the existing or default-set imported factors, via properly designed experiments and/or modelling, such as transport of liquid discharge in waters near the plants, exposure tests on crops and plants, and so on. 4 figs., 576 tabs. (Author)

  3. A matched case-control study comparing udder health, production and fertility parameters in dairy farms before and after the eradication of Bovine Virus Diarrhoea in Switzerland.

    Science.gov (United States)

    Tschopp, A; Deiss, R; Rotzer, M; Wanda, S; Thomann, B; Schüpbach-Regula, G; Meylan, M

    2017-09-01

    An obligatory eradication programme for Bovine Virus Diarrhoea (BVD) was implemented in Switzerland in 2008. Between 2008 and 2012, all bovines were tested for antigen or antibodies against BVDV. By the year 2012, eradication was completed in the majority of farms, and a marked decrease in the prevalence of persistently infected (PI) newborn calves was observed from the 1.4% recorded in 2008. The aim of this study was to assess the effects of BVD eradication on different parameters of animal health, production and fertility in Swiss dairy herds which had completed the eradication programme. A matched case-control study was carried out using data from two periods, before (Period 1) and after (Period 2) the active phase of eradication. Case farms had at least two PI animals detected before or during the eradication; controls were BVD-free and matched for region, herd size and use of alpine pasture. A total of 110 farmers (55 pairs) were recruited. During a phone interview, a questionnaire about farm characteristics, animal health and appreciation of the BVD eradication programme was filled in. Breeding data and milk test day records were also analyzed. Parameters were first compared between (i) case and control herds before eradication, and (ii) Period 1 and Period 2 for case herds only. Milk yield (MY), bulk milk somatic cell count (BMSCC), prevalence of subclinical mastitis (SCM), and non-return rate (NRR) showed a p-value below the threshold in these comparisons, so an interaction term (period x case-control) was created (IA). Except for MY, the IA was significant for all parameters modelled. Despite an overall p-value of 0.27, case herds tended to have a higher MY after eradication (β=0.53, p=0.050). For BMSCC and SCM, case herds had higher values than controls in both periods; udder health was significantly improved in control herds and it remained stable in case herds, with a slight decrease of BMSCC (β=-0.19, p=0.010). Finally, among fertility parameters, NRR showed a general improvement but it was significant only in control herds (β=0.29, p=0.019). Even though the

  4. Application of the Elitist-Mutated PSO and an Improved GSA to Estimate Parameters of Linear and Nonlinear Muskingum Flood Routing Models.

    Directory of Open Access Journals (Sweden)

    Ling Kang

    Heuristic search algorithms, which are characterized by faster convergence rates and can obtain better solutions than traditional mathematical methods, are extensively used in engineering optimization. In this paper, a newly developed elitist-mutated particle swarm optimization (EMPSO) technique and an improved gravitational search algorithm (IGSA) are successively applied to parameter estimation problems of Muskingum flood routing models. First, the global optimization performance of the EMPSO and IGSA are validated on nine standard benchmark functions. Then, to further analyse the applicability of the EMPSO and IGSA to various forms of Muskingum models, three typical structures are considered: the basic two-parameter linear Muskingum model (LMM), a three-parameter nonlinear Muskingum model (NLMM) and a four-parameter nonlinear Muskingum model which incorporates lateral flow (NLMM-L). The problems are formulated as optimization procedures to minimize the sum of the squared deviations (SSQ) or the sum of the absolute deviations (SAD) between the observed and the estimated outflows. Comparative results for the selected numerical cases (Cases 1-3) show that the EMPSO and IGSA not only converge rapidly but also obtain the same best optimal parameter vector in every run. The EMPSO and IGSA exhibit superior robustness and provide two efficient alternative approaches that can be confidently employed to estimate the parameters of both linear and nonlinear Muskingum models in engineering applications.
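    For reference, the two-parameter linear Muskingum model (LMM) that such optimizers calibrate routes an inflow hydrograph with coefficients derived from the storage relation S = K[XI + (1-X)O]. A minimal sketch of the routing step and the SSQ objective (the quantity a search algorithm would minimize over K and X; variable names are illustrative) is:

```python
import numpy as np

def muskingum_route(inflow, K, X, dt, O0=None):
    """Linear Muskingum routing: O[t+1] = C0*I[t+1] + C1*I[t] + C2*O[t],
    with coefficients derived from the storage relation S = K*(X*I + (1-X)*O).
    Note C0 + C1 + C2 = 1, so steady flow is preserved."""
    denom = 2.0 * K * (1.0 - X) + dt
    C0 = (dt - 2.0 * K * X) / denom
    C1 = (dt + 2.0 * K * X) / denom
    C2 = (2.0 * K * (1.0 - X) - dt) / denom
    out = np.empty(len(inflow), dtype=float)
    out[0] = inflow[0] if O0 is None else O0
    for t in range(len(inflow) - 1):
        out[t + 1] = C0 * inflow[t + 1] + C1 * inflow[t] + C2 * out[t]
    return out

def ssq(params, inflow, observed, dt):
    """Sum of squared deviations between routed and observed outflows."""
    K, X = params
    return float(np.sum((muskingum_route(inflow, K, X, dt) - observed) ** 2))
```

    Any of the search algorithms in the abstract can then be applied to `ssq` over a bounded (K, X) space; the nonlinear variants differ only in the storage relation.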

  5. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
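    The empirical risk idea, choosing the regularization parameter that minimizes average error over training pairs, can be illustrated for standard-form Tikhonov via the SVD filter-factor formula. This sketch uses a simple grid search rather than the paper's optimization framework, and the lambda-squared convention in the filter is one common choice:

```python
import numpy as np

def tikhonov_solve(U, s, Vt, b, lam):
    """Standard-form Tikhonov solution via the SVD A = U diag(s) Vt:
    x_lam = V diag(s / (s^2 + lam^2)) U^T b (filter-factor form)."""
    coeff = s / (s**2 + lam**2) * (U.T @ b)
    return Vt.T @ coeff

def learn_lambda(A, training_pairs, grid):
    """Empirical-risk choice of lambda: pick the grid value minimizing the
    average squared error over (b, x_true) training pairs."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    errs = []
    for lam in grid:
        e = [np.sum((tikhonov_solve(U, s, Vt, b, lam) - x) ** 2)
             for b, x in training_pairs]
        errs.append(np.mean(e))
    return grid[int(np.argmin(errs))]
```

    On an ill-conditioned problem with noisy data, the learned lambda gives a far smaller reconstruction error than the essentially unregularized solution, which is the behaviour the training-based approach is designed to deliver for whole families of problems.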

  6. MFV Reductions of MSSM Parameter Space

    CERN Document Server

    AbdusSalam, S.S.; Quevedo, F.

    2015-01-01

    The 100+ free parameters of the minimal supersymmetric standard model (MSSM) make it computationally difficult to compare systematically with data, motivating the study of specific parameter reductions such as the cMSSM and pMSSM. Here we instead study the reductions of parameter space implied by using minimal flavour violation (MFV) to organise the R-parity conserving MSSM, with a view towards systematically building in constraints on flavour-violating physics. Within this framework the space of parameters is reduced by expanding soft supersymmetry-breaking terms in powers of the Cabibbo angle, leading to a 24-, 30- or 42-parameter framework (which we call MSSM-24, MSSM-30, and MSSM-42 respectively), depending on the order kept in the expansion. We provide a Bayesian global fit to data of the MSSM-30 parameter set to show that this is manageable with current tools. We compare the MFV reductions to the 19-parameter pMSSM choice and show that the pMSSM is not contained as a subset. The MSSM-30 analysis favours...

  7. Recent Result from E821 Experiment on Muon g-2 and Unconstrained Minimal Supersymemtric Standard Model

    CERN Document Server

    Komine, S; Yamaguchi, M; Komine, Shinji; Moroi, Takeo; Yamaguchi, Masahiro

    2001-01-01

    Recently, the E821 experiment at the Brookhaven National Laboratory announced the latest result of their muon g-2 measurement, which is about 2.6-\sigma away from the standard model prediction. Taking this result seriously, we examine the possibility of explaining this discrepancy by the supersymmetric contribution. Our analysis is performed in the framework of the unconstrained supersymmetric standard model, which has seven free parameters relevant to the muon g-2. We found that, in the case of large \tan\beta, sparticle masses are allowed to be large in the region where the SUSY contribution to the muon g-2 is large enough, and hence the conventional SUSY search may fail even at the LHC. On the contrary, to explain the discrepancy in the case of small \tan\beta, we found that (i) sleptons and SU(2)_L gauginos should be light, and (ii) negative searches for the Higgs boson severely constrain the model in the framework of the mSUGRA and gauge-mediated models.

  8. Can thyroid parotid ratio replace radioactive iodine uptake as a new objective parameter in the evaluation of thyroid function in cases of hyperthyroidism? A comparative study

    International Nuclear Information System (INIS)

    Malhotra, G.; Seshadri, N.; Gupta, A.; Das, B.K.; Gambhir, S.; Pradhan, P.K.; Bhagat, J.K.

    2002-01-01

    Aim: Radioactive iodine uptake (RAIU) measurements are used to distinguish other causes of thyrotoxicosis from hyperthyroidism. However, it is a cumbersome technique requiring patient visits on two or more days. The thyroid to parotid ratio (TP ratio), the ratio of Technetium-99m pertechnetate counts in the thyroid to those in the parotid glands, is a new objective parameter for the evaluation of thyroid function. The aim of the present study was to compare the TP ratio with RAIU in patients with thyrotoxicosis. Materials and Methods: The study group comprised 15 patients (13 males and 2 females) who were clinically and biochemically hyperthyroid. All of these patients underwent Tc-99m pertechnetate thyroid scintigraphy followed by RAIU on two separate occasions. The technetium thyroid scan was done 20 minutes after intravenous administration of 185 MBq (5 mCi) of the radiotracer. Both anterior and oblique views of the neck region were acquired with a standard large-field-of-view gamma camera with LEAP collimation. The TP ratio was calculated from the anterior view by drawing a region of interest over the thyroid and two regions of interest, one over each parotid gland. The counts of the thyroid region were compared with each parotid, and the average of the two sides was taken as the TP ratio. The radioactive iodine uptake was estimated at 2, 4, 24, and 48 hours after oral administration of 925 kBq (25 microcuries) of I-131 in all patients, with a thyroid probe, by the standard technique. In addition, 10 controls (5 males and 5 females) who were clinically and biochemically euthyroid were included in the study and underwent the same studies as the cases. All patients with nodular disease of the thyroid or a past history of thyroiditis or parotitis were excluded from the study. Results: The scintigraphic findings in all patients revealed a diffusely concentrating thyroid gland with no evidence of hot or cold areas. The RAIU and TP ratio in both cases and controls are presented (S.E.: standard error). There was a

  9. Methodology for Evaluating Safety System Operability using Virtual Parameter Network

    International Nuclear Information System (INIS)

    Park, Sukyoung; Heo, Gyunyoung; Kim, Jung Taek; Kim, Tae Wan

    2014-01-01

    KAERI (Korea Atomic Energy Research Institute) and UTK (University of Tennessee Knoxville) are working on the I-NERI project to address this problem. Together with KAERI, this research proposes a methodology that provides an alternative signal when the reliability of some instrumentation cannot be guaranteed. The proposed methodology assumes that several instruments are working normally under the power-supply condition, because instrument survivability itself is not considered. The concept of the Virtual Parameter Network (VPN) is used to identify the associations between plant parameters. This paper is an extended version of the paper submitted at the last KNS meeting, with a changed methodology and the added results of a case study. In previous research, we used an Artificial Neural Network (ANN) inferential technique as the estimation model, but this model produced a different estimate each time due to random bias. Therefore, an Auto-Associative Kernel Regression (AAKR) model, which has the same number of inputs and outputs, is used for estimation. In addition, the importance measures in the previous method depended on the estimation model, whereas the importance measure of the improved method is independent of the estimation model. In this study, we proposed a methodology to identify the internal state of a power plant when a severe accident happens, and it has been validated through a case study. SBLOCA, which is a large contributor to severe accidents, is considered as the initiating event, and the relationships among parameters have been identified. The VPN can identify which parameters have to be observed and which parameter can serve as an alternative to a missing parameter when some instruments fail in a severe accident. Through the results, we have identified that parameters number 2, 3 and 4 commonly have high connectivity while

  10. Methodology for Evaluating Safety System Operability using Virtual Parameter Network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sukyoung; Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Kim, Jung Taek [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Tae Wan [Kepco International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-05-15

    KAERI (Korea Atomic Energy Research Institute) and UTK (University of Tennessee Knoxville) are working on the I-NERI project to address this problem. Together with KAERI, this research proposes a methodology that provides an alternative signal when the reliability of some instrumentation cannot be guaranteed. The proposed methodology assumes that several instruments are working normally under the power-supply condition, because instrument survivability itself is not considered. The concept of the Virtual Parameter Network (VPN) is used to identify the associations between plant parameters. This paper is an extended version of the paper submitted at the last KNS meeting, with a changed methodology and the added results of a case study. In previous research, we used an Artificial Neural Network (ANN) inferential technique as the estimation model, but this model produced a different estimate each time due to random bias. Therefore, an Auto-Associative Kernel Regression (AAKR) model, which has the same number of inputs and outputs, is used for estimation. In addition, the importance measures in the previous method depended on the estimation model, whereas the importance measure of the improved method is independent of the estimation model. In this study, we proposed a methodology to identify the internal state of a power plant when a severe accident happens, and it has been validated through a case study. SBLOCA, which is a large contributor to severe accidents, is considered as the initiating event, and the relationships among parameters have been identified. The VPN can identify which parameters have to be observed and which parameter can serve as an alternative to a missing parameter when some instruments fail in a severe accident. Through the results, we have identified that parameters number 2, 3 and 4 commonly have high connectivity while
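    The Auto-Associative Kernel Regression (AAKR) estimator mentioned above reconstructs a query observation as a kernel-weighted average of fault-free historical observations. A minimal sketch (the bandwidth and data layout are illustrative, not from the paper) is:

```python
import numpy as np

def aakr_estimate(memory, query, bandwidth=1.0):
    """Auto-associative kernel regression: estimate a corrected version of
    `query` as a Gaussian-kernel-weighted average of historical fault-free
    `memory` vectors (rows = observations, columns = plant parameters)."""
    d2 = np.sum((memory - query) ** 2, axis=1)     # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))       # Gaussian kernel weights
    w /= w.sum()                                   # normalize to sum to 1
    return w @ memory                              # weighted average row
```

    Because inputs and outputs are the same parameter vector, the model can supply an estimate for a channel whose instrument has failed, using the remaining correlated channels; robust AAKR variants additionally down-weight the faulty channel in the distance computation.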

  11. Parameter identification of chaos system based on unknown parameter observer

    International Nuclear Information System (INIS)

    Wang Shaoming; Luo Haigeng; Yue Chaoyuan; Liao Xiaoxin

    2008-01-01

    Parameter identification of chaos systems based on unknown parameter observers is discussed in general. Based on the work of Guan et al. [X.P. Guan, H.P. Peng, L.X. Li, et al., Acta Phys. Sinica 50 (2001) 26], the design of the unknown parameter observer is improved, and the applicability of the improved approach is greatly extended. The works in some earlier literature [X.P. Guan, H.P. Peng, L.X. Li, et al., Acta Phys. Sinica 50 (2001) 26; J.H. Lü, S.C. Zhang, Phys. Lett. A 286 (2001) 148; X.Q. Wu, J.A. Lu, Chaos Solitons Fractals 18 (2003) 721; J. Liu, S.H. Chen, J. Xie, Chaos Solitons Fractals 19 (2004) 533] are only special cases of our Corollaries 1 and 2. Observers for the Lü system and a new chaos system are designed to test the improved method, and simulation results demonstrate the effectiveness and feasibility of the improved approach.

  12. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. The modeled road load is then compared to the measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined by the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
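    The described acceptance rule, comparing the probability of a proposed parameter set to that of the current state, is essentially random-walk Metropolis over the road-load parameters. A minimal sketch, assuming a flat road, a Gaussian measurement likelihood, and illustrative step sizes (none of these specifics are from the report), is:

```python
import numpy as np

G, RHO = 9.81, 1.2  # gravity (m/s^2) and air density (kg/m^3); flat road assumed

def road_load(theta, v, a):
    """Road load force from mass m, drag-area CdA, rolling resistance Crr:
    F = m*a + m*g*Crr + 0.5*rho*CdA*v^2."""
    m, cda, crr = theta
    return m * a + m * G * crr + 0.5 * RHO * cda * v ** 2

def metropolis(v, a, f_meas, n_iter=20000, sigma=500.0, seed=0):
    """Random-walk Metropolis over (mass, CdA, Crr) with a Gaussian
    measurement likelihood of noise std `sigma`. Steps are illustrative."""
    rng = np.random.default_rng(seed)
    theta = np.array([10000.0, 5.0, 0.01])       # initial guess
    step = np.array([100.0, 0.2, 0.0005])        # proposal step sizes
    def logp(th):
        if np.any(th <= 0):                      # flat prior on positives
            return -np.inf
        r = f_meas - road_load(th, v, a)
        return -0.5 * np.sum((r / sigma) ** 2)
    lp = logp(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob. ratio
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)
```

    The post-burn-in chain is the distribution of parameter sets the report describes; its spread directly quantifies how well the drive cycle constrains each parameter.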

  13. Computer simulation for prediction of performance and thermodynamic parameters of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Talawar, M.B.; Asthana, S.N.

    2004-01-01

    A new code, viz. Linear Output Thermodynamic User-friendly Software for Energetic Systems (LOTUSES), developed during this work predicts theoretical performance parameters such as density, detonation factor, velocity of detonation and detonation pressure, as well as thermodynamic properties such as heat of detonation, heat of explosion and volume of gaseous explosion products. The same code also assists in the prediction of the possible explosive decomposition products after explosion and of the power index. The developed code has been validated by calculating the parameters of standard explosives such as TNT, PETN, RDX, and HMX. Theoretically predicted parameters are accurate to within ±5% deviation. To the best of our knowledge, no code is reported in the literature that can predict such a wide range of characteristics of known/unknown explosives with a minimum of input parameters. The code can be used to obtain thermochemical and performance parameters of high energy materials (HEMs) with reasonable accuracy. The code has been developed in Visual Basic with an enhanced Windows environment, and thereby has advantages over conventional codes written in Fortran. The theoretically predicted HEM performance can be printed directly as well as stored in text (.txt), HTML (.htm), Microsoft Word (.doc) or Adobe Acrobat (.pdf) format on the hard disk. The output can also be copied to the clipboard and imported/pasted into other software, as in the case of other codes.

  14. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs

  15. Comparison of Parameter Identification Techniques

    Directory of Open Access Journals (Sweden)

    Eder Rafael

    2016-01-01

    Full Text Available Model-based control of mechatronic systems requires excellent knowledge of the physical behavior of each component. For several types of components of a system, e.g. mechanical or electrical ones, the dynamic behavior can be described by a mathematical model consisting of a set of differential equations, difference equations and/or algebraic constraint equations. Knowledge of a realistic mathematical model and its parameter values is essential to represent the behavior of a mechatronic system. Frequently it is hard or impossible to obtain all required values of the model parameters from the producer, so an appropriate parameter estimation technique is required to compute missing parameters. A variety of parameter identification techniques can be found in the literature, but their suitability depends on the mathematical model. Previous work dealt with the automatic assembly of mathematical models of serial and parallel robots with drives and controllers within the dynamic multibody simulation code HOTINT as a fully fledged mechatronic simulation. Several parameters of such robot models were identified successfully by our embedded algorithm. The present work proposes an improved version of the identification algorithm with higher performance. The quality of the identified parameter values and the computational effort are compared with those of another standard technique.
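    As a toy instance of such a parameter identification task (the mass-spring-damper model and all numbers are illustrative, not taken from the paper), missing damping and stiffness values can be recovered by linear least squares on the equation error of m·x'' + c·x' + k·x = F:

```python
import math

# "True" system (assumed): identify c and k from sampled trajectories,
# given the known mass m and the applied force F(t).
m_true, c_true, k_true = 2.0, 0.5, 8.0

def simulate(m, c, k, dt=1e-3, n=5000):
    """Semi-implicit Euler simulation of the forced oscillator."""
    x, v = 1.0, 0.0
    rows = []                             # (x, v, a, F) samples
    for i in range(n):
        F = math.sin(1.3 * i * dt)        # excitation force
        a = (F - c * v - k * x) / m
        rows.append((x, v, a, F))
        v += a * dt
        x += v * dt
    return rows

rows = simulate(m_true, c_true, k_true)

# Normal equations for theta = (c, k) in the linear model F - m*a = c*v + k*x.
s_vv = sum(v * v for x, v, a, F in rows)
s_vx = sum(v * x for x, v, a, F in rows)
s_xx = sum(x * x for x, v, a, F in rows)
b_v = sum(v * (F - m_true * a) for x, v, a, F in rows)
b_x = sum(x * (F - m_true * a) for x, v, a, F in rows)
det = s_vv * s_xx - s_vx ** 2
c_est = (b_v * s_xx - b_x * s_vx) / det
k_est = (s_vv * b_x - s_vx * b_v) / det
```

    With noise-free samples the recovery is exact; with measured data the same normal equations give the least-squares fit, and the residual size is one way to compare identification techniques as the abstract describes.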

  16. Development of nano-roughness calibration standards

    International Nuclear Information System (INIS)

    Baršić, Gorana; Mahović, Sanjin; Zorc, Hrvoje

    2012-01-01

    At the Laboratory for Precise Measurements of Length, currently the Croatian National Laboratory for Length, unique nano-roughness calibration standards were developed, which have been physically implemented in cooperation with the company MikroMasch Trading OU and the Ruđer Bošković Institute. In this paper, a new design for a calibration standard with two measuring surfaces is presented. One of the surfaces is for the reproduction of roughness parameters, while the other is for the traceability of length units below 50 nm. The nominal values of the groove depths on these measuring surfaces are the same. Thus, a link between the measuring surfaces has been ensured, which makes these standards unique. Furthermore, the calibration standards available on the market are generally designed specifically for individual groups of measuring instrumentation, such as interferometric microscopes, stylus instruments, scanning electron microscopes (SEM) or scanning probe microscopes. In this paper, a new design for nano-roughness standards has been proposed for use in the calibration of optical instruments, as well as for stylus instruments, SEM, atomic force microscopes and scanning tunneling microscopes. Therefore, the development of these new nano-roughness calibration standards greatly contributes to the reproducibility of the results of groove depth measurement as well as the 2D and 3D roughness parameters obtained by various measuring methods. (paper)

  17. Cosmological parameter estimation using particle swarm optimization

    Science.gov (United States)

    Prasad, Jayanti; Souradeep, Tarun

    2012-06-01

    Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of the parameters that best fits the observational data, using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option for certain problems in which the target function (likelihood) possesses local maxima or has very high dimensionality. Apart from this, there may be cases in which we are mainly interested in finding the point in parameter space at which the probability distribution has its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of the parameters, such as the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
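    A minimal PSO of the kind described (inertia weight plus cognitive and social velocity terms; the coefficients are common textbook defaults, not the authors' settings), demonstrated on a toy quadratic likelihood surface rather than real cosmological data:

```python
import random

random.seed(0)

def neg_log_like(p):
    """Toy chi-square surface with its minimum at (3, -2)."""
    x, y = p
    return (x - 3.0) ** 2 + 2.0 * (y + 2.0) ** 2

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Particle swarm minimization of f over [-10, 10]^dim."""
    pos = [[random.uniform(-10.0, 10.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal best positions
    pbest_val = [f(p) for p in pos]
    g_idx = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive (pbest) + social (gbest) terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(neg_log_like)
```

    The swarm returns only the best-fit point, which is why the abstract's final remark about a separate fitting step for the likelihood spread is needed.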

  18. Investigating the Concentration of Heavy Metals in Bottled Water and Comparing with its Standard: Case Study

    Directory of Open Access Journals (Sweden)

    Mohammad Hossien Salmani

    2017-09-01

    Results: The concentration of zinc ion was higher in Brand No. 1, while Brand No. 2 had higher concentrations of copper, nickel, and aluminum ions. The results indicated that the concentrations of the measured metal ions were below the allowable limits of the drinking water standard across all of the studied samples. Conclusion: Based on the results obtained for the investigated parameters, it can be concluded that the bottled water of both brands poses no health issue and is drinkable. Considering the changes in the concentration of ions and the increasing consumption of bottled water, monitoring and qualitative control of pollutants are very crucial in terms of public health.

  19. New standards for digital radiography and qualification of detectors; Neue Standards zur Digitalen Radiographie und Qualifizierung von Detektoren

    Energy Technology Data Exchange (ETDEWEB)

    Ewert, Uwe; Zscherpel, Uwe [Bundesanstalt fuer Materialforschung und -pruefung (BAM), Berlin (Germany)

    2017-08-01

    The contribution describes the most important standards for radiographic examinations that are being revised in different committees. Information is given on new requirements, especially on revised parameters that have to be considered in practical testing procedures and classification. Currently, user questions concerning weld seam testing are being collected for the 2018 revision of ISO 17636-1/2. The DIN 25435 standards for weld seam testing in nuclear technology have been translated into English. Revisions of ASTM standards have to be considered in the current EN and ISO standards. For several years the International Institute of Welding (IIW) has also been revising the international standards.

  20. Geotechnical parameters and behaviour of uranium tailings

    International Nuclear Information System (INIS)

    Matyas, E.L.

    1984-01-01

    The results of laboratory testing, in situ testing and test blasting, the observations made on a test embankment, and a description of actual construction practice associated with engineering studies for the management of uranium mill tailings at Elliot Lake, Ontario are presented. Relative density values inferred from standard penetration tests and cone penetrometer tests are shown to be inconsistent with relative density values determined from maximum and minimum void ratios. Some of the data contradict existing correlations. The compressibility of in situ saturated tailings is presented in graphical form in terms of void ratio, vertical effective stress, and mean grain size. Hydraulic conductivity is shown to range over many orders of magnitude, depending on the void ratio. The observations on an instrumented test embankment are used to explain the appropriate selection of geotechnical parameters that gave good agreement between back-calculated and observed settlements. One-dimensional consolidation theory was found to be valid for the embankment case. It is necessary to account for changes in soil properties that occur during the consolidation process in order to obtain a good fit between back-calculated and observed settlements. The successful use of tailings sand for embankment construction is described. On the basis of normalized standard penetration resistance values, it is concluded that localized zones of saturated tailings may be prone to liquefaction under predicted earthquake loadings.

  1. Assessment of hospital performance with a case-mix standardized mortality model using an existing administrative database in Japan

    Directory of Open Access Journals (Sweden)

    Fushimi Kiyohide

    2010-05-01

    Full Text Available Abstract Background Few studies have examined whether risk adjustment is evenly applicable to hospitals with various characteristics and case-mix. In this study, we applied a generic prediction model to nationwide discharge data from hospitals with various characteristics. Method We used standardized data of 1,878,767 discharged patients provided by 469 hospitals from July 1 to October 31, 2006. We generated and validated a case-mix in-hospital mortality prediction model using 50/50 split-sample validation. We classified hospitals into two groups based on c-index value: hospitals with c-index ≥ 0.8 and hospitals with c-index < 0.8. Results The model demonstrated excellent discrimination, as indicated by the high average c-index and small standard deviation (c-index = 0.88 ± 0.04). The expected mortality rate of each hospital was highly correlated with the observed mortality rate (r = 0.693). Conclusion The model fits well to a group of hospitals with a wide variety of acute care events, though model fit is less satisfactory for specialized hospitals and those with convalescent wards. Further sophistication of the generic prediction model would be recommended to obtain indices optimally adjusted to region-specific conditions.

  2. Key parameters analysis of hybrid HEMP simulator

    International Nuclear Information System (INIS)

    Mao Congguang; Zhou Hui

    2009-01-01

    According to the new standards on the high-altitude electromagnetic pulse (HEMP) developed by the International Electrotechnical Commission (IEC), the target parameter requirements of the key structure of the hybrid HEMP simulator are decomposed. First, the influences of different excitation sources and biconical structures on the key parameters of the radiated electric field wave shape are investigated and analyzed. Then, based on the influence curves, the target parameter requirements of the pulse generator are proposed. Finally, appropriate parameters of the biconical structure and the excitation sources are chosen, and the computed electric field in free space is presented. The results are of great value for the design of the hybrid HEMP simulator. (authors)

  3. Acoustical characterization and parameter optimization of polymeric noise control materials

    Science.gov (United States)

    Homsi, Emile N.

    2003-10-01

    The sound transmission loss (STL) characteristics of polymer-based materials are considered. Analytical models that predict, characterize and optimize the STL of polymeric materials with respect to the physical parameters that affect performance are developed for a single-layer panel configuration and adapted for layered panel construction with a homogeneous core. An optimum set of material parameters is selected and translated into practical applications for validation. Sound-attenuating thermoplastic materials designed to be used as barrier systems in the automotive and consumer industries have acoustical characteristics that vary as a function of the stiffness and density of the selected material. The validity and applicability of existing theory are explored, and since STL is influenced by factors such as the surface mass density of the panel's material, a method is modified to improve STL performance and optimize load-bearing attributes. An experimentally derived function is applied to the model for better correlation. In-phase and out-of-phase motion of the top and bottom layers is considered. It was found that layered construction of the co-injection type exhibits fused planes at the interface and moves in-phase, so the model for the single-layer case is adapted to the layered case, where the panel behaves as a single layer. Primary physical parameters that affect STL are identified and manipulated. The theoretical analysis is linked to the attributes of the resin matrix. A high-STL material with representative characteristics is evaluated against standard resins. It was found that high STL could be achieved by altering the material matrix and by integrating design solutions in the low-frequency range. A numerical approach is suggested for the STL evaluation of simple and complex geometries. In practice, validation on actual vehicle systems proved the adequacy of the acoustical characterization process.
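    The dependence on surface mass density mentioned above is usually captured by the normal-incidence mass law (a standard acoustics result, quoted here for context rather than taken from the study): for a limp panel of surface mass density $m_s$ at frequency $f$, well above coincidence effects,

```latex
\mathrm{STL}(f) \;=\; 10\log_{10}\!\left[1+\left(\frac{\pi f m_s}{\rho_0 c_0}\right)^{2}\right]
\;\approx\; 20\log_{10}\!\left(\frac{\pi f m_s}{\rho_0 c_0}\right)\ \text{dB},
```

    where $\rho_0 c_0$ is the characteristic impedance of air. Each doubling of either surface mass or frequency thus adds roughly 6 dB, which is why surface mass density is singled out as a primary STL parameter.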

  4. The physics case of the Neutrino Factory

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Cadenas, J J [IFIC, CSIC-UV, Valencia (Spain)], E-mail: gomez@ific.uv.es

    2008-11-01

    I discuss the physics case of the standard Neutrino Factory facility coupled to an iron detector to exploit the so-called 'golden channel'. The performance of the facility is impressive, although it is not free from degeneracies arising from a combination of physics and instrumental limitations. Nevertheless, one could explore in great depth the parameters of the leptonic mixing matrix as well as the mass hierarchy. The best performance is obtained with two baselines (one of them very long) and an improved magnetic detector with a low energy detection threshold.

  5. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  6. 29 CFR 1960.18 - Supplementary standards.

    Science.gov (United States)

    2010-07-01

    ... agency employees for which there exists no appropriate OSHA standards. In order to avoid any possible... adopted inconsistent with OSHA standards, or inconsistent with OSHA enforcement practices under section 5... of this finding. In such a case, the supplementary standard shall not be adopted, but the agency will...

  7. Nuclear standards

    International Nuclear Information System (INIS)

    Fichtner, N.; Becker, K.; Bashir, M.

    1981-01-01

    This compilation of all nuclear standards available to the authors by mid-1980 represents the third, carefully revised edition of a catalogue which was first published in 1975 as EUR 5362. In this third edition several changes have been made. The title has been condensed. The information has again been carefully updated, covering all changes regarding status, withdrawal of old standards, new projects, amendments, revisions, splitting of standards into several parts, combination of several standards into one, etc., as available to the authors by mid-1980. The speed with which information travels varies, and in many cases rather tedious and cumbersome inquiries were required. The classification scheme has also been revised with the goal of better adjustment to changing situations and priorities. Whenever it turned out to be difficult to attribute a standard to a single subject category, multiple listings in all relevant categories have been made. As in previous editions, within the subcategories the standards are arranged by organization (in Category 2.1, by country) alphabetically and in ascending numerical order. The compilation covers all relevant areas of power reactors, the fuel cycle, radiation protection, etc., from the basic laws and governmental regulations, regulatory guides, etc., all the way to voluntary industrial standards and codes of practice. (orig./HP)

  8. The Standard Model

    Science.gov (United States)

    Burgess, Cliff; Moore, Guy

    2012-04-01

    List of illustrations; List of tables; Preface; Acknowledgments; Part I. Theoretical Framework: 1. Field theory review; 2. The standard model: general features; 3. Cross sections and lifetimes; Part II. Applications: Leptons: 4. Elementary boson decays; 5. Leptonic weak interactions: decays; 6. Leptonic weak interactions: collisions; 7. Effective Lagrangians; Part III. Applications: Hadrons: 8. Hadrons and QCD; 9. Hadronic interactions; Part IV. Beyond the Standard Model: 10. Neutrino masses; 11. Open questions, proposed solutions; Appendix A. Experimental values for the parameters; Appendix B. Symmetries and group theory review; Appendix C. Lorentz group and the Dirac algebra; Appendix D. ξ-gauge Feynman rules; Appendix E. Metric convention conversion table; Select bibliography; Index.

  9. On Logic and Standards for Structuring Documents

    Science.gov (United States)

    Eyers, David M.; Jones, Andrew J. I.; Kimbrough, Steven O.

    The advent of XML has been widely seized upon as an opportunity to develop document representation standards that lend themselves to automated processing. This is a welcome development and much good has come of it. That said, present standardization efforts may be criticized on a number of counts. We explore two issues associated with document XML standardization efforts. We label them (i) the dynamic point and (ii) the logical point. Our dynamic point is that in many cases experience has shown that the search for a final, or even reasonably permanent, document representation standard is futile. The case is especially strong for electronic data interchange (EDI). Our logical point is that formalization into symbolic logic is materially helpful for understanding and designing dynamic document standards.

  10. Gravitational Field as a Pressure Force from Logarithmic Lagrangians and Non-Standard Hamiltonians: The Case of Stellar Halo of Milky Way

    Science.gov (United States)

    El-Nabulsi, Rami Ahmad

    2018-03-01

    Recently, the notion of non-standard Lagrangians has been discussed widely in the literature in an attempt to explore the inverse variational problem of nonlinear differential equations. Different forms of non-standard Lagrangians have been introduced in the literature and have revealed nice mathematical and physical properties. One interesting form related to the inverse variational problem is the logarithmic Lagrangian, which has a number of motivating features related to the Liénard-type and Emden nonlinear differential equations. Such Lagrangians lead to nonlinear dynamics based on non-standard Hamiltonians. In this communication, we show that some new dynamical properties are obtained in stellar dynamics if standard Lagrangians are replaced by logarithmic Lagrangians and their corresponding non-standard Hamiltonians. One interesting consequence concerns the emergence of an extra pressure term, which is related to the gravitational field, suggesting that gravitation may act as a pressure in a strong gravitational field. The case of the stellar halo of the Milky Way is considered.

  11. Parameter Estimation in Continuous Time Domain

    Directory of Open Access Journals (Sweden)

    Gabriela M. ATANASIU

    2016-12-01

    Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. For the purpose of illustrating this method, two case studies of a bridge pile located in an area of high seismic risk are considered, for which the structural parameters of mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and their comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
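    One common sequential scheme for this kind of structural parameter estimation is recursive least squares, sketched below for a single-degree-of-freedom idealization m·x'' + c·x' + k·x = F (the model and all numerical values are illustrative assumptions; the paper's bridge-pile model is not reproduced):

```python
import math

# Assumed "true" structural parameters of the idealized pile.
m, c_true, k_true = 1.0e4, 3.0e3, 2.0e6   # kg, N*s/m, N/m

# Recursive least squares for theta = (c, k) in y = c*v + k*x, processed
# sample by sample as the (noise-free, simulated) response streams in.
P = [[1e6, 0.0], [0.0, 1e6]]              # inverse-information-like matrix
theta = [0.0, 0.0]

x, v, dt = 0.05, 0.0, 1e-3
for i in range(20000):
    F = 5e4 * math.sin(7.0 * i * dt)      # excitation force
    a = (F - c_true * v - k_true * x) / m # "measured" acceleration
    phi = [v, x]                          # regressor
    y = F - m * a                         # equation-error target: c*v + k*x
    # RLS update: K = P*phi / (1 + phi'*P*phi); theta += K*(y - phi'*theta)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    err = y - (phi[0] * theta[0] + phi[1] * theta[1])
    theta[0] += K[0] * err
    theta[1] += K[1] * err
    for r in range(2):
        for s in range(2):
            P[r][s] -= K[r] * Pphi[s]
    v += a * dt                           # advance the simulated response
    x += v * dt

c_est, k_est = theta
```

    Because the estimate is refined at every sample, the same loop can run online during monitoring, which is the practical appeal of continuous-time formulations in civil engineering.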

  12. Quantifying Parameter Sensitivity, Interaction and Transferability in Hydrologically Enhanced Versions of Noah-LSM over Transition Zones

    Science.gov (United States)

    Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue

    2009-01-01

    We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm-season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indices, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying that it is overparameterized and/or underconstrained. A case study at a wet site shows that GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests that a priori assignment of parameters should also consider climatic differences.
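    The first-order and total indices mentioned above can be estimated with standard Monte Carlo estimators (a Saltelli-type estimator for first-order, a Jansen-type estimator for total order). A generic sketch on the Ishigami test function, not on the Noah LSM itself:

```python
import math
import random

random.seed(42)

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    """Standard Ishigami test function with known Sobol indices."""
    return math.sin(x1) + a * math.sin(x2) ** 2 + b * x3 ** 4 * math.sin(x1)

N, D = 20000, 3
U = lambda: random.uniform(-math.pi, math.pi)
A = [[U() for _ in range(D)] for _ in range(N)]   # base sample matrix
B = [[U() for _ in range(D)] for _ in range(N)]   # resample matrix

fA = [ishigami(*row) for row in A]
fB = [ishigami(*row) for row in B]
mean = sum(fA) / N
var = sum((y - mean) ** 2 for y in fA) / N

S, ST = [], []
for i in range(D):
    # A with column i replaced by the corresponding column of B.
    fABi = [ishigami(*(row[:i] + [B[j][i]] + row[i + 1:]))
            for j, row in enumerate(A)]
    # Saltelli (2010) first-order and Jansen total-order estimators.
    S.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(N)) / (N * var))
    ST.append(sum((fA[j] - fABi[j]) ** 2 for j in range(N)) / (2 * N * var))
```

    A gap between a parameter's total and first-order index (ST[i] > S[i]) is exactly the signature of parameter interaction that the study uses to compare the STD, GW, and DV versions.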

  13. Integrating standardization into engineering education: the case of forerunner Korea

    NARCIS (Netherlands)

    D.G. Choi (Dong Geun); H.J. de Vries (Henk)

    2013-01-01

    The Republic of Korea is a forerunner in integrating the topic of standardization into engineering education at the academic level. This study investigates developments and evolution in the planning and operation of the University Education Promotion on Standardization (UEPS) in Korea.

  14. Implementation experiences of ISO/IEEE11073 standard applied to new use cases for e-health environments.

    Science.gov (United States)

    Martinez, I; Escayola, J; Martinez-Espronceda, M; Serrano, L; Trigo, J D; Led, S; Garcia, J

    2009-01-01

    Recent advances in biomedical engineering and continuous technological innovation in the last decade are creating new challenges, especially in e-Health environments. In this context, medical device interoperability is one of the fields in which these improvements require a standard-based design in order to achieve homogeneous solutions. Furthermore, the spread of wearable devices, oriented to the paradigm of the patient environment and supported by wireless technologies such as Bluetooth or ZigBee, is bringing new medical use cases based on Ambient Assisted Living: home monitoring of the elderly and of patients with heart failure or chronic conditions, patients under palliative care or recovering from surgery, urgencies and emergencies, or even fitness self-monitoring and health follow-up. In this paper, several implementation experiences based on the ISO/IEEE 11073 standard are detailed. These evolved e-Health services can improve the quality of patient care, increase user interaction, and ensure that these e-Health applications are fully compatible with global telemedicine systems.

  15. Standardization of the cumulative absolute velocity

    International Nuclear Information System (INIS)

    O'Hara, T.F.; Jacobson, J.P.

    1991-12-01

    EPRI NP-5930, ''A Criterion for Determining Exceedance of the Operating Basis Earthquake,'' was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method to standardize the CAV calculation is to window the calculation on a second-by-second basis for a given time history: a one-second interval contributes to the CAV only if the absolute acceleration exceeds 0.025 g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
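    The windowing rule lends itself to a direct implementation (a generic sketch following the described procedure; the sample record below is invented for illustration):

```python
# Standardized CAV: integrate |a(t)| only over one-second windows whose peak
# absolute acceleration exceeds 0.025 g. Acceleration is in g; result in g-sec.
def standardized_cav(accel, dt):
    samples_per_sec = int(round(1.0 / dt))
    cav = 0.0
    for start in range(0, len(accel), samples_per_sec):
        window = accel[start:start + samples_per_sec]
        # Skip windows of low, nondamaging acceleration.
        if max(abs(a) for a in window) > 0.025:
            cav += sum(abs(a) for a in window) * dt
    return cav

# Toy record at 100 Hz: one quiet second (0.01 g) and one strong second (0.1 g).
dt = 0.01
record = [0.01] * 100 + [0.1] * 100
cav = standardized_cav(record, dt)   # only the second window contributes
```

    Here the quiet second is excluded by the 0.025 g check, so a long record of low-level shaking cannot inflate the CAV, which is the motivation for the standardization.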

  16. Can establishment success be determined through demographic parameters? A case study on five introduced bird species.

    Directory of Open Access Journals (Sweden)

    Ana Sanz-Aguilar

    Full Text Available The dominant criterion for determining when an introduced species is established relies on the maintenance of a self-sustaining population in the area of introduction, i.e. on the viability of the population from a demographic perspective. There is, however, a paucity of demographic studies of introduced species, and establishment success is thus generally determined by expert opinion without undertaking population viability analyses (PVAs). By means of an intensive five-year capture-recapture monitoring program (involving >12,000 marked individuals), we studied the demography of five introduced passerine bird species in southern Spain which are established and have undergone a fast expansion over the last decades. We obtained useful estimates of demographic parameters (survival and reproduction) for one colonial species (Ploceus melanocephalus), confirming the long-term viability of its local population through PVAs. However, extremely low recapture rates prevented the estimation of survival parameters and population growth rates for widely distributed species with low local densities (Estrilda troglodytes and Amandava amandava), but also for highly abundant yet non-colonial species (Estrilda astrild and Euplectes afer). Therefore, determining the establishment success of introduced passerine species by demographic criteria alone may often be troublesome, even when devoting much effort to fieldwork. Alternative quantitative methodologies, such as the analysis of spatio-temporal species distributions complemented with expert opinion, thus deserve a role in the assessment of the establishment success of introduced species when estimates of demographic parameters are difficult to obtain, as is generally the case for non-colonial, highly mobile passerines.

  17. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    A sensitivity analysis of the input parameters for the dynamic food chain model DYNACON was conducted as a function of deposition data for the long-lived radionuclides 137Cs and 90Sr. The influence of the input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was also investigated. The input parameters were sampled using the LHS technique, and their sensitivity indices were represented as partial rank correlation coefficients (PRCCs). The sensitivity index was strongly dependent on the contamination period as well as on the deposition data. In the case of deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important in both short-term and long-term contamination. They were also important in short-term contamination in the case of deposition during the non-growing stages. In long-term contamination, the influence of the input parameters associated with foliar absorption decreased, while the influence of those associated with root uptake increased. These phenomena were more remarkable for deposition during the non-growing stages than during the growing stages, and for 90Sr deposition than for 137Cs deposition. In the case of deposition during the growing stages of pasture, the input parameters associated with the characteristics of cattle, such as the feed-milk transfer factor and the daily intake rate of cattle, were relatively important for the contamination of milk.
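    The LHS (Latin hypercube sampling) technique mentioned above stratifies each parameter's range into equal-probability intervals and draws exactly one sample per interval, shuffling the strata independently across dimensions. A generic sketch (the parameter ranges are illustrative, not DYNACON's):

```python
import random

random.seed(7)

def latin_hypercube(n_samples, bounds):
    """Latin hypercube sample: one point per equal-width stratum in each
    dimension, shuffled independently across dimensions to decorrelate them."""
    dims = len(bounds)
    columns = []
    for lo, hi in bounds:
        # One jittered point inside each of the n_samples strata of [0, 1).
        pts = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(pts)
        columns.append([lo + p * (hi - lo) for p in pts])
    # Re-assemble column samples into parameter-set rows.
    return [[columns[d][j] for d in range(dims)] for j in range(n_samples)]

# e.g. 10 parameter sets over two hypothetical input-parameter ranges
samples = latin_hypercube(10, [(0.0, 1.0), (10.0, 20.0)])
```

    Compared with plain random sampling, this guarantees coverage of each parameter's full range even with few model runs, which is why LHS is a common front end to PRCC-based sensitivity analysis.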

  18. Measurement of the Michel Parameters in Leptonic Tau Decays

    CERN Document Server

    Ackerstaff, K.; Allison, John; Altekamp, N.; Anderson, K.J.; Anderson, S.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Bartoldus, R.; Batley, J.R.; Baumann, S.; Bechtluft, J.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bentvelsen, S.; Bethke, S.; Betts, S.; Biebel, O.; Biguzzi, A.; Bird, S.D.; Blobel, V.; Bloodworth, I.J.; Bobinski, M.; Bock, P.; Bohme, J.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brown, Robert M.; Burckhart, H.J.; Burgard, C.; Burgin, R.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Ciocca, C.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Conboy, J.E.; Cooke, O.C.; Couyoumtzelis, C.; Coxe, R.L.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Davis, R.; De Jong, S.; del Pozo, L.A.; de Roeck, A.; Desch, K.; Dienes, B.; Dixit, M.S.; Doucet, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Eatough, D.; Estabrooks, P.G.; Etzion, E.; Evans, H.G.; Fabbri, F.; Fanfani, A.; Fanti, M.; Faust, A.A.; Fiedler, F.; Fierro, M.; Fischer, H.M.; Fleck, I.; Folman, R.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gascon, J.; Gascon-Shotkin, S.M.; Geich-Gimbel, C.; Geralis, T.; Giacomelli, G.; Giacomelli, P.; Gibson, V.; Gibson, W.R.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Gorn, W.; Grandi, C.; Gross, E.; Grunhaus, J.; Gruwe, M.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Hargrove, C.K.; Hartmann, C.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Herndon, M.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hillier, S.J.; Hobson, P.R.; Hocker, James Andrew; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Imrie, D.C.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jimack, M.; Joly, A.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Klier, A.; Kluth, 
S.; Kobayashi, T.; Kobel, M.; Koetke, D.S.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kyberd, P.; Lafferty, G.D.; Lanske, D.; Lauber, J.; Lautenschlager, S.R.; Lawson, I.; Layter, J.G.; Lazic, D.; Lee, A.M.; Lefebvre, E.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Ludwig, J.; Lui, D.; Macchiolo, A.; Macpherson, A.; Mannelli, M.; Marcellini, S.; Markopoulos, C.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; Mckigney, E.A.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menke, S.; Merritt, F.S.; Mes, H.; Meyer, J.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mir, R.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nellen, B.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Oreglia, M.J.; Orito, S.; Palinkas, J.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Perez-Ochoa, R.; Petzold, S.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poffenberger, P.; Poli, B.; Polok, J.; Przybycien, M.; Rembser, C.; Rick, H.; Robertson, S.; Robins, S.A.; Rodning, N.; Roney, J.M.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sang, W.M.; Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharf, F.; Scharff-Hansen, P.; Schieck, J.; Schmitt, B.; Schmitt, S.; Schoning, A.; Schorner, T.; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Sittler, A.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Sproston, M.; Stahl, A.; Stephens, K.; Steuerer, J.; Stoll, K.; Strom, David M.; Strohmer, R.; Tafirout, R.; Talbot, S.D.; Tanaka, S.; Taras, P.; Tarem, S.; Teuscher, R.; Thiergen, M.; Thomson, M.A.; von 
Torne, E.; Torrence, E.; Towers, S.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turcot, A.S.; Turner-Watson, M.F.; Van Kooten, Rick J.; Vannerem, P.; Verzocchi, M.; Vikas, P.; Voss, H.; Wackerle, F.; Wagner, A.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wermes, N.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Yekutieli, G.; Zacek, V.; Zer-Zion, D.

    1999-01-01

    The Michel parameters of the leptonic tau decays are measured using the OPAL detector at LEP. The Michel parameters are extracted from the energy spectra of the charged decay leptons and from their energy-energy correlations. A new method involving a global likelihood fit of Monte Carlo generated events with complete detector simulation and background treatment has been applied to the data recorded at center-of-mass energies close to sqrt(s) = M(Z) corresponding to an integrated luminosity of 155 pb-1 during the years 1990 to 1995. Assuming e-mu universality and inferring the tau polarization from neutral-current data, combined Michel parameters are extracted. Limits on non-standard coupling constants and on the masses of new gauge bosons are obtained. The results are in agreement with the V-A prediction of the Standard Model.
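    For reference, the decay-lepton spectrum from which these parameters are extracted has the standard (PDG-style) form, written here schematically and neglecting radiative corrections:

```latex
% Michel spectrum of the charged lepton in tau -> l nu nu (schematic)
\frac{d\Gamma}{dx\,d\cos\theta} \;\propto\; x^{2}\Bigl[\,3(1-x)
  + \tfrac{2}{3}\rho\,(4x-3)
  + 3\eta\,x_{0}\,\frac{1-x}{x}
  - P_{\tau}\,\xi\cos\theta\,\bigl(1-x+\tfrac{2}{3}\delta\,(4x-3)\bigr)\Bigr]
```

    where x = E_l/E_max, x_0 = m_l/E_max, and P_tau is the tau polarization; the V-A Standard Model predicts rho = 3/4, eta = 0, xi = 1, delta = 3/4.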

  19. Meeting the Standards for Foreign Language Learning through an Internet-Based Newspaper Project: Case Studies of Advanced-Level Japanese Learners

    Directory of Open Access Journals (Sweden)

    Miyuki Fukai

    2005-01-01

    Published in 1996, the Standards for Foreign Language Learning (Standards) define the knowledge and abilities that foreign language learners should acquire in the U.S. The Internet is believed to facilitate standards-based instruction because of its capabilities as a communication medium, information provider, and publication tool. This paper presents one part of a study that investigated this claim by examining an Internet-based newspaper project in an advanced-level college Japanese course in light of the Japanese Standards. Six students were selected to serve as case studies, and their experiences in relation to the project were analyzed in depth. The results show that the students found using the Internet to read authentic materials with the help of an online dictionary to be a positive experience, which in turn led them to actively use Japanese for personal enjoyment outside the classroom. These results suggest that the project was particularly successful in two goal areas: Communication and Communities.

  20. Standards for labelling and storage of anaesthetic medications--an audit.

    Science.gov (United States)

    Imran, Muhammad; Khan, Fauzia Anis; Abbasi, Shemila

    2009-12-01

    To check the compliance of anaesthetists with current policies for the use of medication within the operating room and for induction room floor stock. The initial audit was conducted from 1st October to 31st November 2006, and the re-audit, after dissemination and sharing of the results within the department, was conducted in July-August 2007. In each audit four operating rooms were visited twice a week. Syringes were checked for standard drug labelling for narcotic and non-narcotic preparations. The drug trolley was checked for any expired drugs and for whether the trolley was locked in any operating room (OR) where the list had ended or was on hold. Any unattended drug was noted, and the induction room was checked twice weekly for an accurate drug inventory and for standard drug storage recommendations. Labels met the standard on 25% of non-narcotic syringes in the first audit and 63% in the second; likewise, narcotic labels met the standard in 41% of cases in the first audit and 57% in the second. Unattended drugs were found once in the first audit and twice in the second. There was 100% compliance with the other drug storage policy parameters in both audits. Overall, compliance with drug labelling standards for both narcotic and non-narcotic drugs was poor; however, the second audit revealed improvement in all areas of drug handling. Dissemination of policies and reminders are important for continuing improvement in the use of medication within the operating room and induction room floor stock.

  1. An enhancement to the NA4 gear vibration diagnostic parameter

    Science.gov (United States)

    Decker, Harry J.; Handschuh, Robert F.; Zakrajsek, James J.

    1994-01-01

    A new vibration diagnostic parameter for health monitoring of gears, NA4*, is proposed and tested. A recently developed gear vibration diagnostic parameter NA4 outperformed other fault detection methods at indicating the start and initial progression of damage. However, in some cases, as the damage progressed, the sensitivity of the NA4 and FM4 parameters tended to decrease and no longer indicated damage. A new parameter, NA4* was developed by enhancing NA4 to improve the trending of the parameter. This allows for the indication of damage both at initiation and also as the damage progresses. The NA4* parameter was verified and compared to the NA4 and FM4 parameters using experimental data from single mesh spur and spiral bevel gear fatigue rigs. The primary failure mode for the test cases was naturally occurring tooth surface pitting. The NA4* parameter is shown to be a more robust indicator of damage.
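    The NA4 family of metrics is usually described in the literature as a kurtosis-like statistic of the residual vibration signal, normalized by a variance averaged over the run history. A minimal sketch under that common textbook definition (illustrative only, not NASA's exact implementation):

```python
import numpy as np

def na4(residual_history):
    """NA4-style statistic: fourth central moment of the current residual
    record, normalized by the squared variance averaged over all records
    seen so far (this running average is what makes NA4 trend with damage).
    Illustrative textbook form, not the authors' exact implementation."""
    current = np.asarray(residual_history[-1], dtype=float)
    m4 = np.mean((current - current.mean())**4)
    avg_var = np.mean([np.var(r) for r in residual_history])
    return m4 / avg_var**2
```

    NA4* is usually described as freezing this denominator at its healthy-gear baseline once damage is suspected, so the indicator keeps growing as damage progresses instead of re-normalizing itself away.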

  2. Multi-parameter CAMAC compatible ADC scanner

    Energy Technology Data Exchange (ETDEWEB)

    Midttun, G J; Ingebretsen, F [Oslo Univ. (Norway), Fysisk Inst.]; Johnsen, P J [Norsk Data A.S., Box 163, Oekern, Oslo 5, Norway]

    1979-02-15

    A fast ADC scanner for multi-parameter nuclear physics experiments is described. The scanner is based on a standard CAMAC crate, and data from several different experiments can be handled simultaneously through a direct memory access (DMA) channel. The implementation on a PDP-7 computer is outlined.

  3. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria used to compare the accuracy of flood routing across models, and its estimated outflows were closer to the observed outflows than those of the other models.
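    The fixed-parameter nonlinear Muskingum routing that such models extend can be sketched as follows; the function and parameter names are illustrative, and the paper's NVPNLMM additionally lets the parameters vary during the flood, which is not reproduced here:

```python
import numpy as np

def route(params, inflow, dt=1.0):
    """Nonlinear Muskingum routing with storage S = K*(x*I + (1-x)*O)**m."""
    K, x, m = params
    outflow = np.empty(len(inflow))
    S = K * inflow[0]**m                      # steady initial state: O(0) = I(0)
    for t, I in enumerate(inflow):
        O = ((S / K)**(1.0 / m) - x * I) / (1.0 - x)
        O = max(O, 0.0)                       # outflow cannot be negative
        outflow[t] = O
        S = max(S + dt * (I - O), 1e-9)       # continuity: dS/dt = I - O
    return outflow

def sse(params, inflow, observed, dt=1.0):
    """Sum-of-squared-errors objective an Excel-Solver-style optimizer
    (e.g. GRG nonlinear) would minimize over (K, x, m)."""
    return float(np.sum((route(params, inflow, dt) - observed)**2))
```

    Something like `scipy.optimize.minimize(sse, x0=(12.0, 0.25, 1.1), args=(I, O_obs), method='Nelder-Mead')` plays the role of Excel Solver here (assuming SciPy is available).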

  4. Maximum Likelihood Estimates of Parameters in Various Types of Distribution Fitted to Important Data Cases.

    OpenAIRE

    HIROSE,Hideo

    1998-01-01

    TYPES OF THE DISTRIBUTION:
    Normal distribution (2-parameter)
    Uniform distribution (2-parameter)
    Exponential distribution (2-parameter)
    Weibull distribution (2-parameter)
    Gumbel distribution (2-parameter)
    Weibull/Frechet distribution (3-parameter)
    Generalized extreme-value distribution (3-parameter)
    Gamma distribution (3-parameter)
    Extended Gamma distribution (3-parameter)
    Log-normal distribution (3-parameter)
    Extended Log-normal distribution (3-parameter)
    Generalized ...
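    As a concrete instance of the maximum-likelihood machinery for the simplest family in the list, the 2-parameter normal distribution has closed-form ML estimates (a sketch only; the other families generally require numerical maximization):

```python
import numpy as np

def normal_mle(x):
    """Closed-form ML estimates for the 2-parameter normal distribution:
    mu_hat = sample mean, sigma_hat = sqrt of the biased sample variance."""
    mu = x.mean()
    sigma = np.sqrt(np.mean((x - mu)**2))
    return mu, sigma

def normal_loglik(x, mu, sigma):
    """Log-likelihood that the estimates above maximize."""
    n = x.size
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - np.sum((x - mu)**2) / (2 * sigma**2)
```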

  5. Planck 2013 results. XVI. Cosmological parameters

    DEFF Research Database (Denmark)

    Planck Collaboration,; Ade, P. A. R.; Aghanim, N.

    2013-01-01

    parameters to high precision. We find a low value of the Hubble constant, H0=67.3+/-1.2 km/s/Mpc and a high value of the matter density parameter, Omega_m=0.315+/-0.017 (+/-1 sigma errors) in excellent agreement with constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find...... over standard LCDM. The deviation of the scalar spectral index from unity is insensitive to the addition of tensor modes and to changes in the matter content of the Universe. We find a 95% upper limit of r...

  6. The need for LWR metrology standardization: the imec roughness protocol

    Science.gov (United States)

    Lorusso, Gian Francesco; Sutani, Takumichi; Rutigliani, Vito; van Roey, Frieda; Moussa, Alain; Charley, Anne-Laure; Mack, Chris; Naulleau, Patrick; Constantoudis, Vassilios; Ikota, Masami; Ishimoto, Toru; Koshihara, Shunsuke

    2018-03-01

    As semiconductor technology keeps moving forward, undeterred by the many challenges ahead, one specific deliverable is capturing the attention of many experts in the field: Line Width Roughness (LWR) specifications are expected to be less than 2nm in the near term, and to drop below 1nm in just a few years. This is a daunting challenge and engineers throughout the industry are trying to meet these targets using every means at their disposal. However, although current efforts are surely admirable, we believe they are not enough. The fact is that a specification has a meaning only if there is an agreed methodology to verify if the criterion is met or not. Such a standardization is critical in any field of science and technology and the question that we need to ask ourselves today is whether we have a standardized LWR metrology or not. In other words, if a single reference sample were provided, would everyone measuring it get reasonably comparable results? We came to realize that this is not the case and that the observed spread in the results throughout the industry is quite large. In our opinion, this makes the comparison of LWR data among institutions, or to a specification, very difficult. In this paper, we report the spread of measured LWR data across the semiconductor industry. We investigate the impact of image acquisition, measurement algorithm, and frequency analysis parameters on LWR metrology. We review critically some of the International Technology Roadmap for Semiconductors (ITRS) metrology guidelines (such as measurement box length larger than 2μm and the need to correct for SEM noise). We compare the SEM roughness results to AFM measurements. Finally, we propose a standardized LWR measurement protocol - the imec Roughness Protocol (iRP) - intended to ensure that every time LWR measurements are compared (from various sources or to specifications), the comparison is sensible and sound. We deeply believe that the industry is at a point where it is
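    One ingredient of such a protocol, the SEM-noise correction mentioned above, is commonly described as subtracting the noise variance from the measured variance. A minimal sketch under that common formulation (the 3-sigma convention and function name are illustrative, not the iRP's exact procedure):

```python
import numpy as np

def unbiased_lwr(measured_widths, noise_sigma):
    """Noise-corrected LWR: subtract the SEM noise variance from the
    measured line-width variance, then report 3*sigma (common convention)."""
    var = np.var(measured_widths, ddof=1) - noise_sigma**2
    return 3.0 * np.sqrt(max(var, 0.0))    # clip: noise estimate may exceed signal
```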

  7. Riccati-parameter solutions of nonlinear second-order ODEs

    International Nuclear Information System (INIS)

    Reyes, M A; Rosu, H C

    2008-01-01

    It has been proven by Rosu and Cornejo-Perez (Rosu and Cornejo-Perez 2005 Phys. Rev. E 71 046607, Cornejo-Perez and Rosu 2005 Prog. Theor. Phys. 114 533) that for some nonlinear second-order ODEs it is a very simple task to find one particular solution once the nonlinear equation is factorized with the use of two first-order differential operators. Here, it is shown that an interesting class of parametric solutions is easy to obtain if the proposed factorization has a particular form, which happily turns out to be the case in many problems of physical interest. The method, which we exemplify with a few explicitly solved cases, consists of using the general solution of the Riccati equation, which contributes one parameter to this class of parametric solutions. For these nonlinear cases, the Riccati parameter serves as a 'growth' parameter from the trivial null solution up to the particular solution found through the factorization procedure.
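    A sketch of the machinery, in generic notation following the factorization scheme cited above (the compatibility conditions are as given by Rosu and Cornejo-Perez; the Riccati formula is the standard one-parameter general solution):

```latex
% Factorization of u'' + g(u) u' + F(u) = 0; a particular solution
% solves the first-order equation [D - f_1(u)] u = 0.
[D - f_2(u)]\,[D - f_1(u)]\,u = 0, \qquad D \equiv \frac{d}{dt},
\qquad
f_1 f_2 = \frac{F(u)}{u}, \qquad
f_1 + f_2 + u\,\frac{df_1}{du} = -g(u).

% When the first-order equation is of Riccati type
% y' = q_0 + q_1 y + q_2 y^2 with known particular solution y_p,
% the general solution carries one constant \lambda (the Riccati parameter):
y_\lambda = y_p + \frac{e^{\int (q_1 + 2 q_2 y_p)\,dt}}
                       {\lambda - \int q_2\,e^{\int (q_1 + 2 q_2 y_p)\,dt}\,dt}.
```

    Varying lambda then interpolates between the trivial and the factorization-particular solutions, which is the 'growth' behaviour described in the abstract.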

  8. A Case for Common Core State Standards: Gifted Curriculum 3.0

    Science.gov (United States)

    VanTassel-Baska, Joyce

    2012-01-01

    The Common Core State Standards (CCSS) is the most successful attempt to gain consensus across states for 21st century standards in language arts and mathematics. So far, 46 states have accepted these standards, with two consortia organized to translate them into resources and sample activities. A consultant firm has been hired to develop the…

  9. Spatial divergence of living standards during the economic growth phase in the periphery: A case study of North Karelia

    Directory of Open Access Journals (Sweden)

    Olli Lehtonen

    2011-01-01

    The advisability of an urban-centred growth strategy in sparsely populated parts of Europe has not been much analysed at micro-levels such as that of the postcode area. This paper investigates how regional disparities in living standards continued to increase during the technology-driven growth phase of 1993−2003, as exemplified by the case of North Karelia in Finland. Urban sprawl conveyed the spread effects of the rise in incomes, and the upsurge of living standards was concentrated in the neighbourhood of the provincial centre, Joensuu. Living standards faced a process of double divergence: between the central district of Joensuu and its commuter belt, and between the provincial core area and its hinterland, the latter consisting of rural areas and small towns dependent largely on natural resources. The spatial outcome of this socio-economic reorganization is a three-zone core-periphery pattern. As the economy grew, geographical shifts in wealth were consequences of the growth and mobility of certain social groups and strata. A wave of high living standards towards the outskirts of the provincial centre was generated by an expansion in commuting. The relative decline in living standards in the periphery was due to long-term rural decline and involved spatial restructuring.

  10. DYNAMIC ESTIMATION FOR PARAMETERS OF INTERFERENCE SIGNALS BY THE SECOND ORDER EXTENDED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    P. A. Ermolaev

    2014-03-01

    Data processing in interferometer systems requires high-resolution, high-speed algorithms. Recurrence algorithms based on a parametric representation of the signal process signal samples sequentially, and in some cases they make it possible to increase the speed and quality of data processing compared with classical processing methods. The dependence of the measured interferometer signal on the parameters of its model, together with the stochastic nature of noise formation in the system, is in general nonlinear, so nonlinear stochastic filtering algorithms are appropriate for processing such signals. The extended Kalman filter, which linearizes the state and output equations using first-order derivatives with respect to the parameter vector, is one example. To decrease the approximation error of this method, a second-order extended Kalman filter is suggested, which additionally uses the second-order derivatives of the model equations. Examples of the algorithm's implementation with different sets of estimated parameters are described. The proposed algorithm makes it possible to increase the quality of data processing in interferometer systems whose signals follow the considered models. The obtained standard deviation of the estimated amplitude envelope does not exceed 4% of the maximum, and the signal-to-noise ratio of the reconstructed signal is increased by 60%.
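    A minimal sketch of one second-order EKF measurement update, for the toy interference model y = A*cos(phi) + v with state x = [A, phi]. This is the generic textbook form (Hessian correction to the predicted measurement and innovation covariance), not the paper's exact filter:

```python
import numpy as np

def ekf2_update(x, P, y, R):
    """Second-order EKF measurement update for y = A*cos(phi) + v."""
    A, phi = x
    h = A * np.cos(phi)                                  # predicted measurement (1st order)
    H = np.array([[np.cos(phi), -A * np.sin(phi)]])      # Jacobian of h
    Hess = np.array([[0.0, -np.sin(phi)],
                     [-np.sin(phi), -A * np.cos(phi)]])  # Hessian of h
    y_pred = h + 0.5 * np.trace(Hess @ P)                # 2nd-order bias correction
    S = H @ P @ H.T + R + 0.5 * np.trace(Hess @ P @ Hess @ P)  # innovation covariance
    K = P @ H.T / S                                      # Kalman gain
    x_new = x + (K * (y - y_pred)).ravel()
    P_new = P - K @ H @ P
    return x_new, P_new
```

    The trace terms are what distinguish this from the first-order EKF described in the abstract's opening.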

  11. The measures for achieving nZEB standard of retrofitted educational building for specific polish location - case study

    Science.gov (United States)

    Kwiatkowski, Jerzy; Mijakowski, Maciej; Trząski, Adrian

    2017-11-01

    Most of the EU member states have already set a definition of nZEB for new buildings, and some have also done so for existing buildings. As there is no definition of nZEB for existing buildings in Poland, the paper considers various options for such a standard. Next, a case study of retrofitting an educational building to a proposed nZEB standard is presented. The aim of the paper is to show which measures can be used to decrease energy consumption in an existing building. The measures are divided into three groups: architectural and construction, installations, and energy sources, so the full complexity of the solutions is presented. As the nZEB standard depends on available energy sources, the influence of local conditions is also considered. The building chosen for analysis is located in an area under historic protection, which makes the work even more difficult. It is shown that the solutions were chosen not only to reduce energy demand and increase energy production from renewable energy sources, but also to improve the social and aesthetic qualities of the building.

  12. [Study of echocardiographic parameters of rheumatoid arthritis black African without clinically evident cardiovascular manifestations: A cross-sectional study of 73 cases in Senegal].

    Science.gov (United States)

    Dodo-Siddo, M N; Diao, M; Ndiaye, M B; Ndongo, S; Kane, A; Mbaye, A; Bodian, M; Sarr, S A; Sarr, M; Ba, S; Diop, T M

    2016-04-01

    Screening for cardiac involvement in patients with rheumatoid arthritis can prevent complications and fits logically within secondary prevention. The objective of this study was to investigate echocardiographic parameters in a population of Senegalese patients with rheumatoid arthritis without clinically evident cardiovascular manifestations. We conducted a descriptive cross-sectional study that prospectively included outpatients of the internal medicine department of the university hospital center Aristide Le Dantec in Dakar, Senegal, with a diagnosis of rheumatoid arthritis without clinically evident cardiovascular disease. It covered a sample of 73 patients of both sexes aged at least 18 years. Following clinical examination, we conducted laboratory tests (CRP, fibrinogen, ESR, rheumatoid factors: Latex and Waaler-Rose, anti-CCP, antinuclear factors and anti-ENA antibodies), ECG, and standard echocardiography. Data were analyzed using a descriptive study of the different variables, with the calculation of proportions for categorical variables and of position and dispersion parameters for quantitative variables. A total of 73 patients with rheumatoid arthritis without obvious cardiac events and meeting the ACR 1987 criteria were included in the study. The mean age was 44.17±14.43 years, with extremes of 18 and 75 years. The mean duration of RA was 5.93±4.78 years. A family history of inflammatory arthritis was reported in 35.60% of cases, and almost one in six patients had at least one cardiovascular risk factor (16.96%). The abnormalities found on Doppler echocardiography were dominated by diastolic LV dysfunction (42.46%) and increased left ventricular mass (35.61%). Valvular leaks of variable grades were seen at all orifices but were rarely significant. Echocardiography in patients with rheumatoid arthritis without clinically evident cardiovascular manifestations helps to highlight cardiovascular

  13. Guest Editorial: The "NGSS" Case Studies: All Standards, All Students

    Science.gov (United States)

    Miller, Emily; Januszyk, Rita

    2014-01-01

    To teachers of diverse classrooms, more rigorous standards in science may seem intimidating, as the past years of rigid accountability have failed to close the achievement gap. However, the "Next Generation Science Standards" ("NGSS") were written with all students in mind, with input and full review by a diversity and equity…

  14. Activity and gamma-ray emission standards

    International Nuclear Information System (INIS)

    Debertin, K.

    1983-01-01

    The standards made available by PTB are defined with regard to their activity A. If the standard and the measuring sample do not contain the same nuclide, the comparison with the standard is made via the emission rate R of photons of a given energy. For this comparison, the value recommended by PTB for PTB standards should be used, as this value has in many cases been derived from emission rate measurements on specimens whose activity was directly correlated with the activity of the released standards. (orig./DG) [de
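    The comparison via photon emission rate amounts to R = A · p_gamma, where p_gamma is the emission probability of the photon line in question. A minimal illustration (the emission probability below is an approximate literature value, used only as an example):

```python
def photon_emission_rate(activity_bq, emission_probability):
    """Photon emission rate R = A * p_gamma (photons/s for a given line)."""
    return activity_bq * emission_probability

# Example: Cs-137, 661.7 keV line, p_gamma ~ 0.85 (approximate value)
R = photon_emission_rate(1000.0, 0.85)   # 1 kBq source
```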

  15. Green and sustainable remediation (GSR) evaluation: framework, standards, and tool. A case study in Taiwan.

    Science.gov (United States)

    Huang, Wen-Yen; Hung, Weiteng; Vu, Chi Thanh; Chen, Wei-Ting; Lai, Jhih-Wei; Lin, Chitsan

    2016-11-01

    Taiwan has a large number of poorly managed contaminated sites in need of remediation. This study proposes a framework, a set of standards, and a spreadsheet-based evaluation tool for implementing green and sustainable principles into remediation projects and evaluating the projects from this perspective. We performed a case study to understand how the framework would be applied. For the case study, we used a spreadsheet-based evaluation tool (SEFA) and performed field scale cultivation tests on a site contaminated with total petroleum hydrocarbons (TPHs). The site was divided into two lots: one treated by chemical oxidation and the other by bioremediation. We evaluated five core elements of green and sustainable remediation (GSR): energy, air, water resources, materials and wastes, and land and ecosystem. The proposed evaluation tool and field scale cultivation test were found to efficiently assess the effectiveness of the two remediation alternatives. The framework and related tools proposed herein can potentially be used to support decisions about the remediation of contaminated sites taking into account engineering management, cost effectiveness, and social reconciliation.

  16. Standard sirens and dark sector with Gaussian process*

    Directory of Open Access Journals (Sweden)

    Cai Rong-Gen

    2018-01-01

    Gravitational waves from compact binary systems are viewed as standard sirens to probe the evolution of the universe. This paper summarizes the potential of gravitational waves to constrain cosmological parameters and the dark sector interaction within the Gaussian process methodology. After briefly introducing the method of reconstructing the dark sector interaction with a Gaussian process, the concept of standard sirens and the analysis of reconstructing the dark sector interaction with LISA are outlined. Furthermore, we estimate the constraining power of gravitational waves on cosmological parameters with ET. The numerical methods we use are the Gaussian process and Markov chain Monte Carlo. Finally, we also forecast the improvements in the constraints on cosmological parameters from ET and LISA combined with Planck.
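    The Gaussian process step amounts to standard GP regression, e.g. reconstructing a smooth function such as d_L(z) from noisy siren measurements. A self-contained sketch with a squared-exponential kernel (hyperparameters and names are illustrative, not the paper's setup):

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, ell=1.0, sf=1.0, noise=0.05):
    """Minimal GP regression: posterior mean and pointwise std at x_test."""
    def k(a, b):  # squared-exponential kernel
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    Kss = k(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)                 # K^{-1} y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```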

  17. What to use to express the variability of data: Standard deviation or standard error of mean?

    OpenAIRE

    Barde, Mohini P.; Barde, Prajakt J.

    2012-01-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data from the mean. As reade...
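    The distinction is one line of arithmetic: SEM = SD / sqrt(n). A minimal illustration:

```python
import numpy as np

def sd_and_sem(x):
    """SD describes the spread of the data themselves; SEM = SD/sqrt(n)
    describes the uncertainty of the sample mean and shrinks with n."""
    sd = np.std(x, ddof=1)        # sample standard deviation
    sem = sd / np.sqrt(len(x))    # standard error of the mean
    return sd, sem
```

    Note that collecting more data leaves the SD roughly unchanged but drives the SEM toward zero, which is why the two must not be used interchangeably.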

  18. Minimal extension of the standard model scalar sector

    International Nuclear Information System (INIS)

    O'Connell, Donal; Wise, Mark B.; Ramsey-Musolf, Michael J.

    2007-01-01

    The minimal extension of the scalar sector of the standard model contains an additional real scalar field with no gauge quantum numbers. Such a field does not couple to the quarks and leptons directly but rather through its mixing with the standard model Higgs field. We examine the phenomenology of this model focusing on the region of parameter space where the new scalar particle is significantly lighter than the usual Higgs scalar and has small mixing with it. In this region of parameter space most of the properties of the additional scalar particle are independent of the details of the scalar potential. Furthermore the properties of the scalar that is mostly the standard model Higgs can be drastically modified since its dominant branching ratio may be to a pair of the new lighter scalars
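    A minimal sketch of the scalar potential behind such a model, in generic notation (not necessarily the authors' exact parametrization):

```latex
V(H,S) \;=\; -\mu^{2}\,H^{\dagger}H \;+\; \lambda\,(H^{\dagger}H)^{2}
  \;+\; \tfrac{1}{2}\,m^{2}S^{2} \;+\; a_{1}\,S\,H^{\dagger}H
  \;+\; a_{2}\,S^{2}\,H^{\dagger}H \;+\; \tfrac{b_{3}}{3}\,S^{3}
  \;+\; \tfrac{b_{4}}{4}\,S^{4}.
```

    After electroweak symmetry breaking the a_1 and a_2 terms mix S with the neutral Higgs component; for a small mixing angle theta the light eigenstate inherits all its fermion couplings suppressed by sin(theta), which is why its properties are largely independent of the potential's details, while the mostly-SM state can decay to pairs of the light scalar when kinematically allowed.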

  19. Impact of European standard EN15251 in the energy certification of services buildings-A Portuguese study case

    International Nuclear Information System (INIS)

    Alexandre, J.L.; Freire, A.; Teixeira, A.M.; Silva, M.; Rouboa, A.

    2011-01-01

    In Europe, about 40% of energy is consumed in buildings, more than by industry or transport. However, there is great potential for energy savings in this field, often at little cost. A new European directive and several European standards, including the comfort standard EN15251, were created to develop comfortable and efficient buildings. This paper examines the interaction of this specific standard with the application of energy efficiency regulations. In order to evaluate the impact of applying EN15251 to commercial buildings, a case study was analyzed using the dynamic simulation software TRNSYS (Transient System Simulation Tool). The building was set up according to the standard conditions specified by the national regulation, and its compliance with the comfort requirements of EN15251 was assessed. It was found that the current comfort requirements of the Portuguese regulation are not sufficient by themselves to achieve the comfort categories specified in EN15251: about 55% of the comfort hours could not be assured. Furthermore, meeting the main comfort requirements (temperature/fresh-air rates) of EN15251 does not lead directly to the corresponding categories; the building stayed one comfort category below the desired one even when the corresponding operative temperatures were secured. On the other hand, it was found possible to achieve the comfort categories by widening the operative temperature ranges imposed by the standard by about 1 deg. C. This has a negative consequence, namely increased energy consumption; however, there is considerable room for maneuver to reduce this consumption to acceptable levels according to the EPBD. - Highlights: → We study the interaction of a European standard with energy efficiency regulations. → We evaluate the impact of applying EN15251 to commercial buildings. → Buildings were set up according to the standards specified by the national regulation.

  20. EG type radioactive calibration standards

    International Nuclear Information System (INIS)

    1980-01-01

    EG standards are standards with a radioactive substance deposited as a solution on filtration paper and after drying sealed into a plastic disc or cylinder shaped casing. They serve the official testing of X-ray and gamma spectrometers and as test sources. The table shows the types of used radionuclides, nominal values of activity and total error of determination not exceeding +-4%. Activity of standards is calculated from the charge and the specific activity of standard solution used for the preparation of the standard. Tightness and surface contamination is measured for each standard. The manufacturer, UVVVR Praha, gives a guarantee for the given values of activity and total error of determination. (M.D.)

  1. Physical parameters for the application of MRI. Restrictions due to physiological consequences and guidelines

    International Nuclear Information System (INIS)

    Frese, G.; Hebrank, F.X.; Renz, W.; Storch, T.

    1998-01-01

    Purpose: The standards and regulations concerning the protection of patients and operator staff within the context of MRI are compiled. Resulting consequences regarding physical parameters are evaluated. Material and methods: The static magnetic field, heating effects caused by RF-fields and acoustical noise are outlined. The actual boundaries of these parameters are compared against the relevant published standards. Peripheral stimulation limits due to pulsed gradient fields have been determined in a new clinical study. Results: Many parameters recommended for the normal operating mode are already exceeded during routine MRI. Referring to our clinical study, we found that limits recommended in the MRI relevant standards are unnecessarily conservative and can actually be doubled. (orig./AJ) [de

  2. Contact parameters in two dimensions for general three-body systems

    DEFF Research Database (Denmark)

    F. Bellotti, F.; Frederico, T.; T. Yamashita, M.

    2014-01-01

    We study the two dimensional three-body problem in the general case of three distinguishable particles interacting through zero-range potentials. The Faddeev decomposition is used to write the momentum-space wave function. We show that the large-momentum asymptotic spectator function has the same...... to obtain two- and three-body contact parameters. We specialize from the general cases to examples of two identical, interacting or non-interacting, particles. We find that the two-body contact parameter is not a universal constant in the general case and show that the universality is recovered when a subsystem is composed of two identical non-interacting particles. We also show that the three-body contact parameter is negligible in the case of one non-interacting subsystem compared to the situation where all subsystems are bound. As an example, we present results for mixtures of Lithium with two Cesium......

  3. Geometrically engineering the standard model: Locally unfolding three families out of E8

    International Nuclear Information System (INIS)

    Bourjaily, Jacob L.

    2007-01-01

    This paper extends and builds upon the results of [J. L. Bourjaily, arXiv:0704.0444.], in which we described how to use the tools of geometrical engineering to deform geometrically engineered grand unified models into ones with lower symmetry. This top-down unfolding has the advantage that the relative positions of singularities giving rise to the many 'low-energy' matter fields are related by only a few parameters which deform the geometry of the unified model. And because the relative positions of singularities are necessary to compute the superpotential, for example, this is a framework in which the arbitrariness of geometrically engineered models can be greatly reduced. In [J. L. Bourjaily, arXiv:0704.0444.], this picture was made concrete for the case of deforming the representations of an SU(5) model into their standard model content. In this paper we continue that discussion to show how a geometrically engineered 16 of SO(10) can be unfolded into the standard model, and how the three families of the standard model uniquely emerge from the unfolding of a single, isolated E8 singularity.

  4. Assessment of the water quality parameters in relation to fish ...

    African Journals Online (AJOL)

    Physicochemical indices of water body changed seasonally and this necessitated an investigation to assess the water quality parameters of Osinmo reservoir in relation to its fish species. The water quality parameters were measured using standard methods. Results obtained show that the reservoir is alkaline in nature with ...

  5. Possible checking of technical parameters in nondestructive materials and products testing

    International Nuclear Information System (INIS)

    Kesl, J.

    1987-01-01

    The requirements are summed up for partial technical parameters of instruments and facilities for nondestructive testing by ultrasound, radiography, by magnetic, capillary and electric induction methods. The requirements and procedures for testing instrument performance are presented for the individual methods as listed in domestic and foreign standards, specifications and promotional literature. The parameters to be tested and the methods of testing, including the testing and calibration instruments are shown in tables. The Czechoslovak standards are listed currently valid for nondestructive materials testing. (M.D.)

  6. Work related perceived stress and muscle activity during standardized computer work among female computer users

    DEFF Research Database (Denmark)

    Larsman, P; Thorn, S; Søgaard, K

    2009-01-01

    The current study investigated the associations between work-related perceived stress and surface electromyographic (sEMG) parameters (muscle activity and muscle rest) during standardized simulated computer work (typing, editing, precision, and Stroop tasks). It was part of the European case-control study NEW (Neuromuscular assessment in the Elderly Worker). The present cross-sectional study was based on a questionnaire survey and sEMG measurements among Danish and Swedish female computer users aged 45 or older (n=49). The results show associations between work-related perceived stress and trapezius muscle activity and rest during standardized simulated computer work, and provide partial empirical support for the hypothesized pathway of stress-induced muscle activity in the association between an adverse psychosocial work environment and musculoskeletal symptoms in the neck and shoulder.

  7. Meta-Learning Approach for Automatic Parameter Tuning: A Case Study with Educational Datasets

    Science.gov (United States)

    Molina, M. M.; Luna, J. M.; Romero, C.; Ventura, S.

    2012-01-01

    This paper proposes the use of a meta-learning approach for automatic parameter tuning of a well-known decision tree algorithm, using past information about algorithm executions. Fourteen educational datasets were analysed using various combinations of parameter values to examine the effects of the parameter values on classification accuracy.…
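    The parameter-sweep step described above can be sketched as follows. This is a hypothetical illustration only: the paper's actual tool, datasets and parameter grid are not specified here, so scikit-learn, a synthetic dataset and the max_depth/min_samples_leaf grid are our assumptions.

    ```python
    # Hypothetical sketch: evaluate combinations of decision-tree parameter
    # values by cross-validated classification accuracy (the paper's own tool
    # and educational datasets are not reproduced here).
    from itertools import product

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for one of the educational datasets.
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    results = {}
    for max_depth, min_samples_leaf in product([3, 5, None], [1, 5, 10]):
        clf = DecisionTreeClassifier(max_depth=max_depth,
                                     min_samples_leaf=min_samples_leaf,
                                     random_state=0)
        # Mean 5-fold cross-validated accuracy for this parameter combination.
        results[(max_depth, min_samples_leaf)] = cross_val_score(clf, X, y, cv=5).mean()

    best_params, best_acc = max(results.items(), key=lambda kv: kv[1])
    print("best parameters:", best_params, "accuracy:", round(best_acc, 3))
    ```

    A meta-learner would then be trained on such (dataset characteristics, best parameters) pairs to predict good parameter values for unseen datasets.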

  8. Development of a safety parameter supervision system for Angra-1

    International Nuclear Information System (INIS)

    Silva, R.A. da; Thome Filho, Z.D.; Schirru, R.; Martinez, A.S.; Oliveira, L.F.S. de

    1986-01-01

    The Safety Parameter Supervision System (SSPS), a computerized system for monitoring essential parameters in real time, determining the safety status, and initiating emergency procedures for returning the reactor to normal operation in case of an anomaly, is presented. The SSPS consists of three subsystems: the integrated parameter monitoring system, which gives operators an integrated view of the values of a parameter set and is able to detect any deviation from normal reactor operation; the safety critical function system, which evaluates the safety status in terms of a set of safety critical functions defined in advance and, in case of violation of any critical function, initiates the adequate emergency procedure to return to normal operation; and the safety parameter computer system, which carries out the acquisition of analog and digital control signals of the nuclear power plant. (M.C.K.) [pt

  9. Robotic 3D scanner as an alternative to standard modalities of medical imaging.

    Science.gov (United States)

    Chromy, Adam; Zalud, Ludek

    2014-01-01

    There are special medical cases where standard medical imaging modalities offer sufficient results, but not in an optimal way: the desired results are produced with unnecessarily high expense, with redundant information, or with needless demands on the patient. This paper deals with one such case, where the only information useful for the examination is the body surface; an inner view into the body is needless. A new specialized medical imaging device was developed for this situation. In the Introduction section, an analysis of currently used medical imaging modalities is presented, showing that no available imaging device fits the mentioned purposes best. In the next section, the development of the new specialized medical imaging device is presented, and its principles and functions are described. The parameters of the new device are then compared with those of present ones, showing significant advantages over present imaging systems.

  10. Magnetic resonance imaging in the evaluation of standard radiotherapy field borders in patients with uterine cervix cancer

    International Nuclear Information System (INIS)

    Freire, Geison Moreira; Dias, Rodrigo Souza; Giordani, Adelmo Jose; Segreto, Helena Regina Comodo; Segreto, Roberto Araujo; Ribalta, Julisa Chamorro Lascasas

    2010-01-01

    Objective: to evaluate, by means of magnetic resonance imaging, the standardized field borders in radiotherapy for malignant neoplasm of the uterine cervix, and to determine the role of this method in reducing possible planning errors related to the conventional technique. Materials and methods: magnetic resonance imaging studies for treatment planning of 51 patients with uterine cervix cancer were retrospectively analyzed. The parameters assessed were the anterior and posterior field borders on sagittal section. Results: the anterior field border was inappropriate in 20 (39.2%) patients, and geographic miss was observed at the posterior border in 37.3% of cases. The inappropriateness of both field borders did not correlate with clinical parameters such as patients' age, tumor staging, histological type and grade. Conclusion: the evaluation of standardized field borders with magnetic resonance imaging demonstrated high indices of inappropriateness of the lateral field borders, as well as the relevant role of magnetic resonance imaging in radiotherapy planning for patients with uterine cervix cancer with a view to reducing the occurrence of geographic miss of the target volume. (author)

  11. Standardized perfusion value of the esophageal carcinoma and its correlation with quantitative CT perfusion parameter values

    Energy Technology Data Exchange (ETDEWEB)

    Djuric-Stefanovic, A., E-mail: avstefan@eunet.rs [Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Unit of Digestive Radiology (First University Surgical Clinic), Center of Radiology and MR, Clinical Center of Serbia, Belgrade (Serbia); Saranovic, Dj., E-mail: crvzve4@gmail.com [Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Unit of Digestive Radiology (First University Surgical Clinic), Center of Radiology and MR, Clinical Center of Serbia, Belgrade (Serbia); Sobic-Saranovic, D., E-mail: dsobic2@gmail.com [Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Center of Nuclear Medicine, Clinical Center of Serbia, Belgrade (Serbia); Masulovic, D., E-mail: draganmasulovic@yahoo.com [Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Unit of Digestive Radiology (First University Surgical Clinic), Center of Radiology and MR, Clinical Center of Serbia, Belgrade (Serbia); Artiko, V., E-mail: veraart@beotel.rs [Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Center of Nuclear Medicine, Clinical Center of Serbia, Belgrade (Serbia)

    2015-03-15

    Purpose: Standardized perfusion value (SPV) is a universal indicator of tissue perfusion, normalized to the whole-body perfusion, which was proposed to simplify, unify and allow the interchangeability among the perfusion measurements and comparison between the tumor perfusion and metabolism. The aims of our study were to assess the standardized perfusion value (SPV) of the esophageal carcinoma, and its correlation with quantitative CT perfusion measurements: blood flow (BF), blood volume (BV), mean transit time (MTT) and permeability surface area product (PS) of the same tumor volume samples, which were obtained by deconvolution-based CT perfusion analysis. Methods: Forty CT perfusion studies of the esophageal cancer were analyzed, using the commercial deconvolution-based CT perfusion software (Perfusion 3.0, GE Healthcare). The SPV of the esophageal tumor and neighboring skeletal muscle were correlated with the corresponding mean tumor and muscle quantitative CT perfusion parameter values, using Spearman's rank correlation coefficient (r_S). Results: Median SPV of the esophageal carcinoma (7.1; range: 2.8–13.4) significantly differed from the SPV of the skeletal muscle (median: 1.0; range: 0.4–2.4), (Z = −5.511, p < 0.001). The cut-off value of the SPV of 2.5 enabled discrimination of esophageal cancer from the skeletal muscle with sensitivity and specificity of 100%. SPV of the esophageal carcinoma significantly correlated with corresponding tumor BF (r_S = 0.484, p = 0.002), BV (r_S = 0.637, p < 0.001) and PS (r_S = 0.432, p = 0.005), and SPV of the skeletal muscle significantly correlated with corresponding muscle BF (r_S = 0.573, p < 0.001), BV (r_S = 0.849, p < 0.001) and PS (r_S = 0.761, p < 0.001). Conclusions: We presented a database of the SPV for the esophageal cancer and proved that SPV of the esophageal neoplasm significantly differs from the SPV of the skeletal muscle, which represented a sample of healthy
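    The two statistics reported above (Spearman's rank correlation and the sensitivity/specificity of the SPV cut-off) can be reproduced in outline as follows; the numbers below are synthetic stand-ins drawn from the ranges quoted in the abstract, not the study data.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Synthetic SPV values drawn from the ranges quoted in the abstract.
    spv_tumour = rng.uniform(2.8, 13.4, 40)   # esophageal carcinoma
    spv_muscle = rng.uniform(0.4, 2.4, 40)    # skeletal muscle

    # Hypothetical blood-flow values loosely tied to tumour SPV.
    bf_tumour = 10 * spv_tumour + rng.normal(0, 20, 40)

    r_s, p = spearmanr(spv_tumour, bf_tumour)  # Spearman's rank correlation
    print(f"r_S = {r_s:.3f}, p = {p:.4g}")

    # Sensitivity/specificity of the SPV cut-off of 2.5 for tumour vs. muscle.
    cutoff = 2.5
    sensitivity = np.mean(spv_tumour > cutoff)
    specificity = np.mean(spv_muscle <= cutoff)
    print("sensitivity:", sensitivity, "specificity:", specificity)
    ```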

  12. Right-handed neutrino dark matter in a U(1) extension of the Standard Model

    Science.gov (United States)

    Cox, Peter; Han, Chengcheng; Yanagida, Tsutomu T.

    2018-01-01

    We consider minimal U(1) extensions of the Standard Model in which one of the right-handed neutrinos is charged under the new gauge symmetry and plays the role of dark matter. In particular, we perform a detailed phenomenological study for the case of a U(1)(B‑L)3 flavoured B‑L symmetry. If perturbativity is required up to high scales, we find an upper bound on the dark matter mass of m_χ ≲ 2 TeV, significantly stronger than that obtained in simplified models. Furthermore, if the U(1)(B‑L)3 breaking scalar has significant mixing with the SM Higgs, there are already strong constraints from direct detection. On the other hand, there remains significant viable parameter space in the case of small mixing, which may be probed in the future via LHC Z' searches and indirect detection. We also comment on more general anomaly-free symmetries consistent with a TeV-scale RH neutrino dark matter candidate, and show that if two heavy RH neutrinos for leptogenesis are also required, one is naturally led to a single-parameter class of U(1) symmetries.

  13. Acceleration parameters for fluid physics with accelerating bodies

    CSIR Research Space (South Africa)

    Gledhill, Irvy MA

    2016-06-01

    Full Text Available to an acceleration parameter that appears to be new in fluid physics, but is known in cosmology. A selection of cases for rectilinear acceleration has been chosen to illustrate the point that this parameter alone does not govern regimes of flow about significantly...

  14. Parameter analysis calculation on characteristics of portable FAST reactor

    International Nuclear Information System (INIS)

    Otsubo, Akira; Kowata, Yasuki

    1998-06-01

    In this report, we performed a parameter survey analysis using the analysis program code STEDFAST (Space, TErrestrial and Deep sea FAST reactor-gas turbine system). Concerning the deep sea fast reactor-gas turbine system, calculations with many variable parameters were performed on the base case of a NaK-cooled reactor of 40 kWe. We focused on the total equipment weight and the surface area necessary to remove heat from the system as important characteristics of the system. Electric generation power and the material of the pressure hull were especially influential on the weight. Electric generation power, reactor outlet/inlet temperatures and the natural convection heat transfer coefficient of sea water were especially influential on the area. Concerning the space reactor-gas turbine system, calculations with the variable parameters of compressor inlet temperature, reactor outlet/inlet temperatures and turbine inlet pressure were performed on the base case of a Na-cooled reactor of 40 kWe. The first and second of these variable parameters were influential on the total equipment weight, the important characteristic of this system. Concerning the terrestrial fast reactor-gas turbine system, calculations were performed with the variable parameters of the number of heat transfer pipes in a heat exchanger producing hot water of 100 degC for cogeneration, the compressor stage number and the kind of primary coolant material, on the base case of a Pb-cooled reactor of 100 MWt. Comparing the calculational results for Pb and Na as primary coolant material, the primary coolant weight flow rate was naturally larger in the former case than in the latter because their densities are very different. (J.P.N.)

  15. Relationship between macular ganglion cell complex parameters and visual field parameters after tumor resection in chiasmal compression.

    Science.gov (United States)

    Ohkubo, Shinji; Higashide, Tomomi; Takeda, Hisashi; Murotani, Eiji; Hayashi, Yasuhiko; Sugiyama, Kazuhisa

    2012-01-01

    To evaluate the relationship between macular ganglion cell complex (GCC) parameters and visual field (VF) parameters in chiasmal compression, and the potential of GCC parameters to predict the short-term postsurgical VF. Twenty-three eyes of 12 patients with chiasmal compression and 33 control eyes were studied. All patients underwent transsphenoidal tumor resection. Before surgery, a 3D scan of the macula was taken using spectral-domain optical coherence tomography. All patients underwent Humphrey 24-2 VF testing after surgery. Spearman's rank correlation coefficients were used to evaluate the relationship between the GCC parameters and VF parameters [mean deviation (MD), pattern standard deviation]. Coefficients of determination (R2) were calculated using linear regression. Average thickness in the patients was significantly thinner than that of controls. Average thickness, global loss volume and focal loss volume (FLV) correlated significantly with the MD. We observed the greatest R2 between FLV and MD. Examining the macular GCC was useful for evaluating structural damage in patients with chiasmal compression. Preoperative GCC parameters, especially FLV, may be useful in predicting visual function following surgical decompression of chiasmal compression.
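    The coefficient of determination mentioned above comes from an ordinary linear regression; a minimal sketch with synthetic stand-in values (hypothetical FLV and MD numbers, not the study measurements) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical focal loss volume (FLV, %) and mean deviation (MD, dB) values.
    flv = rng.uniform(0, 20, 23)
    md = -0.8 * flv - 2 + rng.normal(0, 2, 23)

    # Least-squares line MD = slope * FLV + intercept.
    slope, intercept = np.polyfit(flv, md, 1)
    pred = slope * flv + intercept

    # Coefficient of determination R^2 = 1 - SS_res / SS_tot.
    ss_res = np.sum((md - pred) ** 2)
    ss_tot = np.sum((md - md.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot
    print("R^2 =", round(r_squared, 3))
    ```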

  16. New aspects from legislation, guidelines and safety standards for MRI

    International Nuclear Information System (INIS)

    Muehlenweg, M.; Schaefers, G.; Trattnig, S.

    2015-01-01

    Many aspects of magnetic resonance (MR) operation are not directly regulated by law but in standards, guidelines and the operating instructions of the MR scanner. The mandatory contents of the operating instructions are regulated in a central standard of the International Electrotechnical Commission (IEC) 60601-2-33. In this standard, the application of static magnetic fields in MRI up to 8 Tesla (T) in the clinical routine (first level controlled mode) has recently been approved. Furthermore, the equally necessary CE certification of ultra-high field scanners (7-8 T) in Europe is expected for future devices. The existing installations will not be automatically certified but will retain their experimental status. The current extension of IEC 60601-2-33 introduces a new add-on option, the so-called fixed parameter option (FPO). This option might also be switched on in addition to the established operating modes and defines a fixed device constellation and certain parameters of the energy output of MR scanners designed to simplify the testing of patients with implants in the future. The employment of pregnant workers in an MRI environment is still not generally regulated in Europe. In parts of Germany and Austria pregnant and lactating employees were prohibited from working in the MR control zone (0.5 mT) in 2014. This is based on the mostly unresolved question of the applicability of limits for employees (exposure of extremities to static magnetic fields up to 8 T allowed) or the thresholds for the general population (maximum 400 mT). According to the European Society of Urogenital Radiology (ESUR), the discarding of breast milk after i.v. administration of gadolinium-based contrast agents in the case of a breastfeeding woman is only recommended when using contrast agents in the nephrogenic systemic fibrosis (NSF) high-risk category. (orig.) [de

  17. Ride comfort analysis with physiological parameters for an e-health train.

    Science.gov (United States)

    Lee, Youngbum; Shin, Kwangsoo; Lee, Sangjoon; Song, Yongsoo; Han, Sungho; Lee, Myoungho

    2009-12-01

    Transportation by train has numerous advantages over road transportation, especially with regard to energy efficiency, ecological features, safety, and punctuality. However, the contrast in ride comfort between standard road transportation and train travel has become a competitive issue. The ride comfort enhancement technology of tilting trains (TTX) is a particularly important issue in the development of the Korean high-speed railroad business. Ride comfort is now defined in international standards such as UIC13 and ISO2631. The Korean standards such as KSR9216 mainly address physical parameters such as vibration and noise. In the area of ride comfort, living quality parameter techniques have recently been considered in Korea, Japan, and Europe. This study introduces biological parameters, particularly variations in heart rate, as a more direct measure of comfort. Biological parameters are based on physiological responses rather than on purely external mechanical parameters. Variability of heart rate and other physiological parameters of passengers are measured in a simulation involving changes in the tilting angle of the TTX. This research is a preliminary study for the implementation of an e-health train, which would provide passengers with optimized ride comfort. The e-health train would also provide feedback on altered ride comfort situations that can improve a passenger's experience and provide a healthcare service on the train. The aim of this research was to develop a ride comfort evaluation system for the railway industry, the automobile industry, and the air industry. The degree of tilt correlated with heart rate, fatigue, and unrelieved alertness.

  18. Correlation of pattern reversal visual evoked potential parameters with the pattern standard deviation in primary open angle glaucoma.

    Science.gov (United States)

    Kothari, Ruchi; Bokariya, Pradeep; Singh, Ramji; Singh, Smita; Narang, Purvasha

    2014-01-01

    To evaluate whether glaucomatous visual field defects, particularly the pattern standard deviation (PSD) of the Humphrey visual field, could be associated with visual evoked potential (VEP) parameters in patients having primary open angle glaucoma (POAG). Visual fields by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potentials (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for the VEP recordings consisted of the transient pattern reversal method, in which a black and white checkerboard pattern was generated (full field) and displayed on a VEP monitor (colour 14″) by an electronic pattern regenerator inbuilt in an evoked potential recorder (RMS EMG EP MARK II). The results of our study indicate that there is a highly significant (P<0.001) negative correlation of P100 amplitude and a statistically significant (P<0.05) positive correlation of N70 latency, P100 latency and N155 latency with the PSD of the Humphrey visual field in subjects with POAG in various age groups, as evaluated by Student's t-test. Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values. Conversely, as PSD increased, the magnitude of the VEP excursions was found to be diminished.

  19. [Analysis of correlation between trabecular microstructure and clinical imaging parameters in fracture region of osteoporotic hip].

    Science.gov (United States)

    Peng, Jing; Zhou, Yong; Min, Li; Zhang, Wenli; Luo, Yi; Zhang, Xuelei; Zou, Chang; Shi, Rui; Tu, Chongqi

    2014-05-01

    To analyze the correlation between the trabecular microstructure and the clinical imaging parameters in the fracture region of the osteoporotic hip, so as to provide a simple, non-invasive method to evaluate the trabecular microstructure. Between June 2012 and January 2013, 16 elderly patients with femoral neck fracture who underwent hip arthroplasty were selected as the trial group; 5 young patients with pelvic fracture were selected as the control group. Hip CT examination was done, and cancellous bone volume/marrow cavity volume (CV/MV) was analyzed with Mimics 10.01 software in the control group. CT scan and bone mineral density (BMD) measurement were performed on the normal hips of the trial group, and cuboid specimens were obtained from the femoral necks at the place of the tensional trabeculae to evaluate the trabecular microstructure parameters by micro-CT, including bone volume fraction (BV/TV), trabecular number (Tb.N), trabecular spacing (Tb.Sp), trabecular thickness (Tb.Th), connectivity density (Conn.D), and structure model index (SMI). The correlation between imaging parameters and microstructure parameters was analyzed. In the trial group, the BMD value was 0.491-0.698 g/cm2 (mean, 0.601 g/cm2); according to the World Health Organization (WHO) standard, 10 cases were diagnosed as having osteoporosis and 6 as having osteopenia. The CV/MV of the trial group (0.6701 +/- 0.1020) was significantly lower than that of the control group (0.8850 +/- 0.0891) (t = -4.567, P = 0.000). In the trial group, CV/MV had correlation with BV/TV, Tb.Th, and SMI (P < 0.05). BV/TV had correlation with Tb.Th, Tb.N, Tb.Sp, and SMI (P < 0.05), but BMD had no correlation with the microstructure parameters (P > 0.05). CV/MV obviously decreases in the osteoporotic hip, and there is a correlation between CV/MV and the microstructure parameters BV/TV, Tb.Th, and SMI, which can, to some extent, reflect the variation of the microstructure of the trabeculae. There is no correlation between BMD of the femoral neck and

  20. Lumped-parameters equivalent circuit for condenser microphones modeling.

    Science.gov (United States)

    Esteves, Josué; Rufer, Libor; Ekeom, Didace; Basrour, Skandar

    2017-10-01

    This work presents a lumped parameters equivalent model of condenser microphone based on analogies between acoustic, mechanical, fluidic, and electrical domains. Parameters of the model were determined mainly through analytical relations and/or finite element method (FEM) simulations. Special attention was paid to the air gap modeling and to the use of proper boundary condition. Corresponding lumped-parameters were obtained as results of FEM simulations. Because of its simplicity, the model allows a fast simulation and is readily usable for microphone design. This work shows the validation of the equivalent circuit on three real cases of capacitive microphones, including both traditional and Micro-Electro-Mechanical Systems structures. In all cases, it has been demonstrated that the sensitivity and other related data obtained from the equivalent circuit are in very good agreement with available measurement data.
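    As a toy illustration of the lumped-parameter idea (not the model of the paper; the element values are invented), the diaphragm can be reduced to a series mass-compliance-damping branch whose velocity response to a unit force is evaluated over frequency:

    ```python
    import numpy as np

    # Invented lumped elements for a diaphragm-like resonator.
    m = 1e-6   # moving mass (kg)
    c = 0.05   # mechanical damping (N*s/m)
    k = 1e3    # stiffness (N/m)

    freqs = np.logspace(2, 5, 200)            # 100 Hz .. 100 kHz
    w = 2 * np.pi * freqs
    z_mech = 1j * w * m + c + k / (1j * w)    # series mechanical impedance
    velocity = 1.0 / z_mech                   # velocity per unit force

    # The response peaks near the resonance f0 = sqrt(k/m) / (2*pi) ~ 5 kHz.
    resonance = freqs[np.argmax(np.abs(velocity))]
    print(f"resonance near {resonance:.0f} Hz")
    ```

    A full equivalent circuit would add the air-gap and back-cavity branches and the electrostatic coupling, with each element fitted analytically or from FEM runs as described above.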

  1. An Assessment of Physicochemical Parameters of Selected Industrial Effluents in Nepal

    Directory of Open Access Journals (Sweden)

    Abhinay Man Shrestha

    2017-01-01

    Full Text Available It is a well-known fact that industrial effluents and environmental degradation go hand in hand, a problem further aggravated by ongoing global industrialization, and Nepal is no exception. Hundreds of industries are registered in the country annually, which inevitably raises issues of environmental pollution. This study was conducted with samples of wastewater from 5 industrial sites in 4 districts of Nepal, namely Makwanpur, Sunsari, Morang, and Kathmandu, among which two were wastewater treatment plants treating combined effluents collected from various sources. The other three sites were a wires and cables industry, a paint manufacturing industry, and a plastic cutting industry. The physicochemical parameters analysed were pH, temperature, conductivity, turbidity, and Cu, Cr, SO42- and PO43- levels. Possible onsite measurements were recorded using portable, handheld devices, whereas other parameters were assessed in the laboratory. The observed parameter levels in the collected samples were compared against the available Nepal national standards for industrial effluents and, in the absence of such standards, with other relevant standard levels. Most of the parameters analysed were within the permissible limits, with the exception of pH and Cr levels at some sites.

  2. An analysis of Indonesia’s information security index: a case study in a public university

    Science.gov (United States)

    Yustanti, W.; Qoiriah, A.; Bisma, R.; Prihanto, A.

    2018-01-01

    The Ministry of Communication and Informatics of the Republic of Indonesia has issued regulation number 4-2016 on the Information Security Management System (ISMS) for all kinds of organizations. A public university, as a government institution, must apply this standard to assure that its level of information security complies with ISO 27001:2013. This research is a preliminary study evaluating the readiness of university IT services (a case study in a public university) to meet the requirements of ISO 27001:2013 using the Indonesia's Information Security Index (IISI). Six parameters are used to measure the level of information security: the ICT role, governance, risk management, framework, asset management and technology. Each parameter consists of a series of questions which must be answered and converted to a numeric value. The result shows the level of readiness and maturity to apply the ISO 27001 standard.
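    The scoring scheme described (serial questions per parameter, each converted to a numeric value and aggregated) can be sketched as below; the six parameter names come from the abstract, while the individual question scores are invented for illustration:

    ```python
    # Invented question scores grouped under the six IISI parameters.
    answers = {
        "ICT role": [2, 3, 1],
        "governance": [1, 2, 2, 3],
        "risk management": [0, 1, 2],
        "framework": [2, 2, 1],
        "asset management": [3, 2],
        "technology": [1, 1, 2, 2],
    }

    # Per-parameter subtotal and the overall index value.
    scores = {param: sum(vals) for param, vals in answers.items()}
    total = sum(scores.values())

    for param, score in scores.items():
        print(f"{param}: {score}")
    print("total index value:", total)
    ```

    In the actual IISI, the total is mapped onto readiness/maturity bands; the mapping thresholds are not reproduced here.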

  3. Circuit realization, chaos synchronization and estimation of parameters of a hyperchaotic system with unknown parameters

    Directory of Open Access Journals (Sweden)

    A. Elsonbaty

    2014-10-01

    Full Text Available In this article, the adaptive chaos synchronization technique is implemented by an electronic circuit and applied to the hyperchaotic system proposed by Chen et al. We consider the more realistic and practical case where all the parameters of the master system are unknown. We propose and implement an electronic circuit that performs the estimation of the unknown parameters and the updating of the parameters of the slave system automatically, and hence achieves synchronization. To the best of our knowledge, this is the first attempt to implement a circuit that estimates the values of the unknown parameters of a chaotic system and achieves synchronization. The proposed circuit has a variety of suitable real applications related to chaos encryption and cryptography. The outputs of the implemented circuits and numerical simulation results are shown to demonstrate the performance of the synchronized system and the proposed circuit.

  4. Standard Model CP-violation and baryon asymmetry

    CERN Document Server

    Gavela, M.B.; Orloff, J.; Pene, O.

    1994-01-01

    Simply based on CP arguments, we argue against a Standard Model explanation of the baryon asymmetry of the universe in the presence of a first order phase transition. A CP-asymmetry is found in the reflection coefficients of quarks hitting the phase boundary created during the electroweak transition. The problem is analyzed both in an academic zero temperature case and in the realistic finite temperature one. The building blocks are similar in both cases: Kobayashi-Maskawa CP-violation, CP-even phases in the reflection coefficients of quarks, and physical transitions due to fermion self-energies. In both cases an effect is present at order $\\alpha_W^2$ in rate. A standard GIM behaviour is found as intuitively expected. In the finite temperature case, a crucial role is played by the damping rate of quasi-particles in a hot plasma, which is a relevant scale together with $M_W$ and the temperature. The effect is many orders of magnitude below what observation requires, and indicates that non standard physics is ...

  5. Influence of coronal mass ejections on parameters of high-speed solar wind: a case study

    Science.gov (United States)

    Shugay, Yulia; Slemzin, Vladimir; Rodkin, Denis; Yermolaev, Yuri; Veselovsky, Igor

    2018-05-01

    We investigate a case of disagreement between predicted and observed in-situ parameters of the recurrent high-speed solar wind streams (HSSs) existing during Carrington rotation (CR) 2118 (December 2011), in comparison with CRs 2117 and 2119. The HSSs originated at the Sun from a recurrent polar coronal hole (CH) expanding to mid-latitudes, whose area in the central part of the solar disk increased with the rotation number. This part of the CH was responsible for the equatorial flank of the HSS directed towards the Earth. The time and speed of arrival of this part of the HSS at the Earth were predicted by a hierarchical empirical model based on EUV imaging and by the Wang-Sheeley-Arge ENLIL semi-empirical model. The predicted parameters were compared with those measured in-situ. It was found that for CR 2117 and CR 2119, the predicted HSS speed values agreed with the measured ones within the typical accuracy of ±100 km s-1. During CR 2118, the measured speed was 217 km s-1 less than the value predicted in accordance with the increased area of the CH. We suppose that at CR 2118, the HSS overtook and interacted with complex ejecta formed from three merged coronal mass ejections (CMEs) with a mean speed of about 400 km s-1. According to simulations with the drag-based model, this complex ejecta might have been created by several CMEs starting from the Sun in the period between 25 and 27 December 2011 and arriving at the Earth simultaneously with the HSS. Due to its higher density and magnetic field strength, the complex ejecta became an obstacle for the equatorial flank of the HSS and slowed it down. During CR 2117 and CR 2119, the CMEs appeared before the arrival of the HSSs, so the CMEs did not influence the HSS kinematics.

  6. Realizing business benefits from company IT standardization : Case study research into the organizational value of IT standards, towards a company IT standardization management framework

    NARCIS (Netherlands)

    van Wessel, R.M.

    2008-01-01

    From a practical point of view, this research provides insight into how company IT standards affect business process performance. Furthermore it gives recommendations on how to govern and manage such standards successfully with regard to their selection, implementation and usage. After evaluating

  7. Supergravity p-branes reexamined: Extra parameters, uniqueness, and topological censorship

    International Nuclear Information System (INIS)

    Gal'tsov, Dmitri V.; Lemos, Jose P.S.; Clement, Gerard

    2004-01-01

    We perform a complete integration of the Einstein-dilaton-antisymmetric form action describing black p-branes in arbitrary dimensions, assuming the transverse space to be homogeneous and possessing spherical, toroidal, or hyperbolic topology. The generic solution contains eight parameters satisfying one constraint. Asymptotically flat solutions form a five-parametric subspace, while conditions of regularity of the nondegenerate event horizon further restrict this number to 3, which can be related to the mass and charge densities and the asymptotic value of the dilaton. In the case of a degenerate horizon, this number is reduced by 1. Our derivation constitutes a constructive proof of the uniqueness theorem for p-branes with a homogeneous transverse space. Asymptotically flat solutions with toroidal or hyperbolic transverse space are shown not to exist within the considered class, a result that can be viewed as a demonstration of topological censorship for p-branes. From our considerations it follows, in particular, that some previously discussed p-brane-like solutions with extra parameters do not satisfy the standard conditions of asymptotic flatness and absence of naked singularities. We also explore the same system in the presence of a cosmological constant and derive a complete analytic solution for higher-dimensional charged topological black holes, thus proving their uniqueness.

  8. NeuroRecovery Network provides standardization of locomotor training for persons with incomplete spinal cord injury.

    Science.gov (United States)

    Morrison, Sarah A; Forrest, Gail F; VanHiel, Leslie R; Davé, Michele; D'Urso, Denise

    2012-09-01

    To illustrate the continuity of care afforded by a standardized locomotor training program across a multisite network setting within the Christopher and Dana Reeve Foundation NeuroRecovery Network (NRN). Single patient case study. Two geographically different hospital-based outpatient facilities. This case highlights a 25-year-old man diagnosed with C4 motor incomplete spinal cord injury with American Spinal Injury Association Impairment Scale grade D. Standardized locomotor training program of 5 sessions per week for 1.5 hours per session, for a total of 100 treatment sessions, with 40 sessions at 1 center and 60 at another. Ten-meter walk test and 6-minute walk test were assessed at admission and discharge across both facilities. For each of the 100 treatment sessions, percent body weight support and average and maximum treadmill speeds were evaluated. Locomotor endurance, as measured by the 6-minute walk test, and overground gait speed showed consistent improvement from admission to discharge. Throughout training, the patient decreased the need for body weight support and was able to tolerate faster treadmill speeds. Data indicate that the patient continued to improve on both treatment parameters and walking function. Standardization across the NRN centers provided a mechanism for delivering consistent and reproducible locomotor training programs across 2 facilities without disrupting training or recovery progression. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  9. Towards the standardization of time--temperature parameter usage in elevated temperature data analysis

    International Nuclear Information System (INIS)

    Goldhoff, R.M.

    1975-01-01

    Work devoted to establishment of recommended practices for correlating and extrapolating relevant data on creep-rupture properties of materials at high temperatures is described. An analysis of the time-temperature parameter is included along with descriptions of analysis and evaluation methods. Results of application of the methods are compared

  10. Robust Parameter Coordination for Multidisciplinary Design

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper introduces a robust parameter coordination method for analyzing parameter uncertainties so as to predict conflicts and coordinate parameters in multidisciplinary design. The proposed method is based on a constraint network, which gives a formulated model for analyzing the coupling effects between design variables and product specifications. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively and thereby enhance design robustness. To solve the constraint network model, a general consistency algorithm framework is designed and implemented with interval arithmetic and a genetic algorithm, which can deal with both algebraic and ordinary differential equations. With the help of this method, designers can infer the consistent solution space from the given specifications. A case study involving the design of a bogie dumping system demonstrates the usefulness of this approach.
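The interval-box idea in the abstract can be sketched with a minimal hull-consistency narrowing step; the constraint, names, and numbers below are hypothetical illustrations, not the paper's bogie-system model or its actual algorithm.

```python
# Illustrative sketch: hull-consistency narrowing of interval boxes for the
# single constraint z = x + y. Intervals are (lo, hi) tuples.

def narrow_sum(x, y, z):
    """Narrow intervals so that the constraint z = x + y stays consistent."""
    zx = (x[0] + y[0], x[1] + y[1])                          # forward: z within x + y
    z2 = (max(z[0], zx[0]), min(z[1], zx[1]))
    x2 = (max(x[0], z2[0] - y[1]), min(x[1], z2[1] - y[0]))  # backward: x = z - y
    y2 = (max(y[0], z2[0] - x2[1]), min(y[1], z2[1] - x2[0]))
    for lo, hi in (x2, y2, z2):
        if lo > hi:  # empty interval: the constraint network is inconsistent
            raise ValueError("inconsistent constraint network")
    return x2, y2, z2

# Design variables x, y in [0, 10]; product specification z in [2, 3].
x, y, z = narrow_sum((0, 10), (0, 10), (2, 3))
```

Repeating such narrowing steps over all constraints until a fixed point is the usual way a consistency framework shrinks the boxes toward the consistent solution space.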

  11. Hand 'stress' arthritis in young subjects: effects of Flexiqule (pharma-standard Boswellia extract). A preliminary case report.

    Science.gov (United States)

    Belcaro, G; Feragalli, B; Cornelli, U; Dugall, M

    2015-10-22

    This case report (supplement registry study) evaluated subjects with painful 'stress' arthritis of the hand mainly localized at the joints. The patients received a suggestion to follow a rehabilitation plan (standard management; SM). A second group used the same SM in association with the oral, pharma-standard supplement FlexiQule (Alchem), a new standardized, phytosomal preparation manufactured from the Boswellia plant, which can be used for self-management in inflammatory conditions (150 mg, 3 times daily). The two resulting registry groups included 12 subjects using SM+FlexiQule and 11 controls (SM only). The groups were comparable. Serology showed no significant alterations: only ESR was slightly elevated (minimal elevation). After 2 weeks, the ESR was normal in the supplement group and mildly elevated in controls (p<0.05). The decrease in hyperthermic areas was greater/faster (p<0.05) in the supplement group. The identification of a working stress and the localization to the dominant hand were comparable in both groups. At 2 weeks, the decrease in pain was significantly faster and greater with the supplement (p<0.05). The hand became more usable over time and the score was better with the supplement (p<0.05). No supplemented patient had to use other drugs, while in the control group 3 subjects eventually used NSAIDs to control pain and stiffness and one used corticosteroids. In conclusion, the natural extract FlexiQule was effective in controlling work-related stress arthritis (without inflammatory signs) over a 2-week period, better than standard management alone. More prolonged and larger studies are needed.

  12. Cogeneration: Key feasibility analysis parameters

    International Nuclear Information System (INIS)

    Coslovi, S.; Zulian, A.

    1992-01-01

    This paper first reviews the essential requirements, in terms of scope, objectives and methods, of technical/economic feasibility analyses applied to cogeneration systems proposed for industrial plants in Italy. Attention is given to the influence on overall feasibility of the following factors: electric power and fuel costs, equipment coefficients of performance, operating schedules, maintenance costs, Italian Government taxes and financial and legal incentives. Through an examination of several feasibility studies that were done on cogeneration proposals relative to different industrial sectors, a sensitivity analysis is performed on the effects of varying the weights of different cost benefit analysis parameters. With the use of statistical analyses, standard deviations are then determined for key analysis parameters, and guidelines are suggested for analysis simplifications

  13. Developing and enforcing internal information systems standards: InduMaker’s Standards Management Process

    Directory of Open Access Journals (Sweden)

    Claudia Loebbecke

    2016-01-01

    Full Text Available It is widely agreed that standards provide numerous benefits when available and enforced. Company-internal Information Systems (IS) management procedures and solutions, in the following coined IS 'standards', allow for harmonizing operations between company units, locations and even different service providers. However, many companies lack an organized process for defining and managing internal IS standards, which causes uncertainties and delays in decision making, planning, and design processes. In this case study of the globally operating InduMaker (anonymized company name), an established manufacturing supplier, we look into the company-internal management of IS standards. Theoretically grounded in the organizational and IS-focused literature on business process modelling and business process commoditization, we describe and investigate InduMaker's newly developed Standard Management Process (SMP) for defining and managing company-internal business and IS standards, with which the multinational pursues offering clear answers to business and IT departments about existing IS standards, their degree of obligation, applicability, and scope at any time.

  14. Polychronakos fractional statistics with a complex-valued parameter

    International Nuclear Information System (INIS)

    Rovenchak, Andrij

    2012-01-01

    A generalization of quantum statistics is proposed in a fashion similar to the suggestion of Polychronakos [Phys. Lett. B 365, 202 (1996)] with the parameter α varying between −1 (fermionic case) and +1 (bosonic case). However, unlike the original formulation, it is suggested that intermediate values are located on the unit circle in the complex plane. In doing so one can avoid the case α = 0 corresponding to the Boltzmann statistics, which is not a quantum one. The limits of α → +1 and α → −1 reproducing small deviations from the Bose and Fermi statistics, respectively, are studied in detail. The equivalence between the statistics parameter and a possible dissipative part of the excitation spectrum is established. The case of a non-conserving number of excitations is analyzed. It is defined from the condition that the real part of the chemical potential equals zero. Thermodynamic quantities of a model system of two-dimensional harmonic oscillators are calculated.
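A minimal numerical sketch of the distribution described above, assuming the conventional Polychronakos-type occupation n(ε) = 1/(e^{(ε−μ)/T} − α), with α = +1 the Bose, α = −1 the Fermi, and α = 0 the excluded Boltzmann case. Placing α = e^{iπν} on the unit circle, as the abstract suggests, yields complex-valued occupations; treating the real part as the physical quantity is only one possible reading, used here for illustration.

```python
# Sketch of a Polychronakos-type occupation number with a complex-valued
# statistics parameter alpha on the unit circle (illustrative, hedged form).
import cmath
import math

def occupation(eps_over_t, alpha, mu_over_t=0.0):
    """Mean occupation n = 1 / (exp((eps - mu)/T) - alpha)."""
    return 1.0 / (cmath.exp(eps_over_t - mu_over_t) - alpha)

x = 1.0  # (eps - mu)/T
bose  = occupation(x, +1).real   # Bose limit:  1/(e - 1)
fermi = occupation(x, -1).real   # Fermi limit: 1/(e + 1)
boltz = occupation(x,  0).real   # Boltzmann:   e**-1 (the excluded non-quantum case)

# Intermediate statistics on the unit circle, nu = 1/2 so alpha = i:
alpha_c = cmath.exp(1j * math.pi / 2)
n_c = occupation(x, alpha_c)     # complex-valued occupation
```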

  15. Standard of care: the legal view.

    Science.gov (United States)

    Curley, Arthur W; Peltier, Bruce

    2014-01-01

    The standard of care is a legal construct, a line defined by juries, based on expert testimony, marking a point where treatment failed to meet expectations for what a reasonable professional would have done. There is no before-the-fact objective definition of this standard, except for cases of law and regulation, such as the Occupational Safety and Health Administration (OSHA). Practitioners must use their judgment in determining what would be acceptable should a case come to trial. Professional codes of conduct and acting in the patient's best interests are helpful guides to practicing within the standard of care. Continuing education credit is available for this and the following article together online at www.dentalethics.org for those who wish to complete the quiz and exercises associated with them (see Course 22).

  16. Security Scheme Based on Parameter Hiding Technic for Mobile Communication in a Secure Cyber World

    Directory of Open Access Journals (Sweden)

    Jong Hyuk Park

    2016-10-01

    Full Text Available Long Term Evolution (LTE) and Long Term Evolution-Advanced (LTE-A) support a better data transmission service than 3G does and are globally commercialized technologies in a cyber world that is essential for constructing a future mobile environment, since network traffic has increased exponentially as people have started to use more than just one mobile device. However, when User Equipment (UE) is executing initial attach processes to access LTE networks, there is a vulnerability in which identification parameters like International Mobile Subscriber Identity (IMSI) and Radio Network Temporary Identities (RNTI) are transmitted as plain text. This can threaten the various services commercialized therewith in a cyber world. Therefore, a security scheme is proposed in this paper in which identification parameters can be securely transmitted and hidden in four cases where initial attach occurs between UE and the Mobility Management Entity (MME). The proposed security scheme not only supports encrypted transmission of identification parameters but also mutual authentication between the Evolved Node B (eNB) and MME to make a secure cyber world. Additionally, performance analysis results using an OPNET simulator showed satisfaction of the average delay rate specified in LTE standards.
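The general idea of hiding an identifier such as an IMSI behind a keyed, nonce-dependent token, so the plain-text value never goes over the air, can be sketched with the standard library. This is an illustrative sketch only, not the scheme proposed in the paper; the key provisioning, nonce handling, and all names here are hypothetical.

```python
# Sketch: derive a one-time pseudonym for an identifier with HMAC-SHA256,
# so only the pseudonym and a fresh nonce are transmitted in plain text.
import hmac
import hashlib
import os

def conceal_identifier(imsi: str, shared_key: bytes, nonce: bytes) -> str:
    """Return a keyed pseudonym for the identifier.

    A peer holding the same shared key can recompute the pseudonym for each
    known IMSI and match it, so the plain-text IMSI itself never travels.
    """
    return hmac.new(shared_key, nonce + imsi.encode(), hashlib.sha256).hexdigest()

key = b"pre-shared-demo-key"               # hypothetical pre-provisioned key
nonce1, nonce2 = os.urandom(16), os.urandom(16)
t1 = conceal_identifier("001010123456789", key, nonce1)
t2 = conceal_identifier("001010123456789", key, nonce2)
# A fresh nonce per attach yields unlinkable tokens for the same IMSI.
```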

  17. Reproducibility of phantom-based quality assurance parameters in real-time ultrasound imaging.

    Science.gov (United States)

    Sipilä, Outi; Blomqvist, Päivi; Jauhiainen, Mervi; Kilpeläinen, Tiina; Malaska, Paula; Mannila, Vilma; Vinnurva-Jussila, Tuula; Virsula, Sari

    2011-07-01

    In a large radiological center, the ultrasound (US) quality assurance (QA) program involves several professionals. Although the operator and the parameters utilized can contribute to the results, the selected QA parameters should still reflect the quality of the US scanner, not the measuring process. To evaluate the reproducibility of recommended phantom-based US QA parameters in a realistic environment. Six sonographers measured six high-end US scanners with 20 transducers using a general purpose phantom. Every transducer was measured altogether seven times, using one frequency per transducer. The QA parameters studied were homogeneity, visualization depth, vertical and horizontal distance measurements, axial and lateral resolution, and the correct visibility of anechoic and high-contrast masses. The evaluation of the homogeneity was based on visual observations. Inter-observer interquartile ranges were computed for the grading of the masses. For the other QA parameters, the mean inter- and intra-observer coefficients of variation (CoV) were calculated. In addition, the symmetry of the reverberations when imaging air with a clean transducer was checked. The mean inter-observer CoVs were: visualization depth 11 ± 4%, vertical distance 1.7 ± 0.4%, horizontal distance 1.4 ± 0.6%, axial resolution 22 ± 7%, and lateral resolution 16 ± 8%. The mean intra-observer values were about half of these values with similar standard deviations. The visual evaluation of the homogeneity and the symmetry of the reverberations produced false-positive findings in 5% of the cases, but were found useful in detecting a defective transducer. The grading of the masses had mean interquartile ranges of 20-30% of the grading scale. The inter-observer variability in measuring phantom-based QA parameters can be relatively high. This should be considered when implementing a phantom-based QA protocol and evaluating the results.
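The inter- and intra-observer coefficients of variation reported above reduce to a one-line statistic; the sketch below uses hypothetical readings, not the study's data.

```python
# Sketch: coefficient of variation (CoV) = sample std / mean, in percent,
# as used to summarize observer variability of a QA parameter.
import statistics

def coefficient_of_variation(values):
    """Return 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical visualization-depth readings (mm) by six sonographers
# measuring the same transducer on the same phantom:
depths = [152, 160, 148, 171, 155, 166]
cov = coefficient_of_variation(depths)   # a few percent of spread
```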

  18. Inventory control in case of unknown demand and control parameters

    NARCIS (Netherlands)

    Janssen, E.

    2010-01-01

    This thesis deals with unknown demand and control parameters in inventory control. Inventory control involves decisions on what to order when and in what quantity. These decisions are based on information about the demand. Models are constructed using complete demand information; these models ensure

  19. Managing Effectively National and Regional projects-A Case of Kenya Bureau of Standards

    International Nuclear Information System (INIS)

    Kioko, J

    2009-01-01

    Discusses the functions of the Kenya Bureau of Standards: standards development, testing, metrology, implementation of standards in commerce and industry, accreditation, certification, inspection of imports and exports, and training and education in metrology, standards, testing and quality assurance

  20. Comments on 'Standard effective doses for proliferative tumours'

    International Nuclear Information System (INIS)

    Dasu, Iuliana Livia; Dasu, Alexandru; Denekamp, Juliana; Fowler, Jack F.

    2000-01-01

    We should like to make some comments on the paper published by Jones et al (1999). The paper presents some interesting and useful contributions on the theoretical evaluation of different fractionation schedules in current use. The use of the linear quadratic equation has been very useful in focusing attention on the differences in fractionation responses of fast and slow proliferating normal tissues and tumours. Unfortunately the BED10 or BED3 units (for α/β ratios of 10 Gy and 3 Gy, respectively) do not directly relate to anything used in routine clinical practice. The purpose of the paper by Jones et al (1999) is to convert any new schedule into the equivalent total dose as if it were given in the same size fractions as are in common use in that department. They illustrate that, if proliferation is taken into account for the altered schedule, it can be compared in two ways with the standard conventional schedule: (a) the proliferative standard effective dose one (PSED1), in which the proliferation correction is applied in the altered schedule, but not in the standard schedule; (b) the proliferative standard effective dose two (PSED2), in which the proliferation correction is applied to both schedules using the same proliferation parameters. This is expected to provide a better evaluation of the response of a 'real' tumour (i.e. a tumour that also proliferates during the standard treatment). However, there seem to be two errors in the paper. First, the authors quoted a wrong equation for calculating the proliferative standard effective dose two (PSED2) (equations (2) and (A6) in their paper). There are also some special cases with respect to the time available for proliferation and the duration of the treatment that have been neglected in their paper and which require further specification. Therefore, we should like to give the full mathematical derivation of the correct equations for calculating the proliferative standard effective doses. 
We would also like to make
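For context, the time-corrected biologically effective dose underlying such comparisons is conventionally written (standard linear-quadratic notation from the fractionation literature, not reproduced from Jones et al or the comment above) as:

```latex
\mathrm{BED} \;=\; n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
\;-\; \frac{\ln 2}{\alpha}\,\frac{T - T_k}{T_p},
```

where n fractions of dose d are delivered in overall time T, T_k is the kick-off time at which repopulation begins, and T_p is the effective potential doubling time; the proliferation term is applied only for T > T_k. Applying this correction to the altered schedule alone versus to both schedules is what distinguishes the PSED1 and PSED2 quantities discussed above.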

  1. Precision Parameter Estimation and Machine Learning

    Science.gov (United States)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of ``Acceleration by Parallel Precomputation and Learning'' (AP-PLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces and costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques efficiently to explore a likelihood function, posterior distribution or χ2-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the CosmologyatHome project.
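The core APPLe idea, precompute the costly likelihood in parallel, learn a cheap surrogate, then drive an MCMC sampler with the surrogate, can be sketched in a toy one-dimensional form. This is an illustrative sketch only, not the PICo or RICO codes; the target, grid, and sampler settings are all hypothetical.

```python
# Sketch: surrogate-accelerated Metropolis sampling of a toy posterior.
import numpy as np

rng = np.random.default_rng(0)

def expensive_loglike(theta):
    """Stand-in for a costly likelihood (toy: standard normal posterior)."""
    return -0.5 * theta**2

# "Parallel precomputation" stage: tabulate the likelihood on a grid.
grid = np.linspace(-6.0, 6.0, 2001)
table = expensive_loglike(grid)

def surrogate_loglike(theta):
    """'Learning' stage, here just linear interpolation of the table."""
    return np.interp(theta, grid, table)

# Standard Metropolis-Hastings, but every evaluation hits only the surrogate.
theta, logp = 0.0, surrogate_loglike(0.0)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=1.0)
    logp_prop = surrogate_loglike(prop)
    if np.log(rng.random()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta)
samples = np.asarray(samples[2000:])   # discard burn-in
```

The sequential exploration never pays the cost of `expensive_loglike` again, which is the point of the strategy when a single likelihood call is expensive and the precomputation is trivially parallel.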

  2. Review of operating room ventilation standards

    NARCIS (Netherlands)

    Melhado, M.D.A.; Hensen, J.L.M.; Loomans, M.G.L.C.

    2006-01-01

    This article reviews standards applied to operating room ventilation design used by European, South and North American countries. Required environmental parameters are compared with regard to type of surgery, and ventilation system. These requirements as well as their relation to infection control

  3. 29 CFR 1953.5 - Special provisions for standards changes.

    Science.gov (United States)

    2010-07-01

    ... of its intent to retain the existing State standard to OSHA within 6 months of the Federal..., in the case of standards applicable to products used or distributed in interstate commerce where... standards. (1) Immediately upon publication of an emergency temporary standard in the Federal Register, OSHA...

  4. The standard model and beyond

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1989-05-01

    In these lectures, my aim is to present a status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows. I survey the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also commented on. In addition, I have included an appendix on dimensional regularization and a simple example which employs that technique. I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, extra Z' bosons, and compositeness are discussed. An overview of the physics of tau decays is also included. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, implications for grand unified theories (GUTS), extra Z' gauge bosons, and atomic parity violation. The potential for further experimental progress is also commented on. Finally, I depart from the narrowest version of the standard model and discuss effects of neutrino masses, mixings, and electromagnetic moments. 32 refs., 3 figs., 5 tabs

  5. On-shell gauge-parameter independence of contributions to electroweak quark self-energies

    International Nuclear Information System (INIS)

    Ahmady, M.R.; Elias, V.; Mendel, R.R.; Scadron, M.D.; Steele, T.

    1989-01-01

    We allow an external condensate to enter standard SU(2) x U(1) electroweak theory via the vacuum expectation value ⟨q̄q⟩, as in QCD sum-rule applications. For a given flavor, we then find that any gauge-parameter dependence of quark self-energies on the ''mass shell'' is eliminated provided that the mass shell is made to coincide with both the expansion-parameter mass occurring in the operator-product expansion of ⟨q̄q⟩ and the standard electroweak mass acquired via the Yukawa coupling to the usual scalar vacuum expectation value of spontaneous symmetry breaking. This result indicates that if the QCD-generated order parameter ⟨q̄q⟩ and associated dynamical mass(es) m_q^dyn are utilized as external input parameters in electroweak calculations involving hadrons, then new corrections must be introduced into the q̄qW and q̄qZ vertices in order to preserve SU(2) x U(1) Ward identities

  6. An Evaluation of the Intended and Implemented Curricula’s Adherence to the NCTM Standards on the Mathematics Achievement of Third Grade Students: A Case Study

    OpenAIRE

    Griffin, Cynthia C.; Xin, Yan Ping; Jitendra, Asha

    2010-01-01

    This article describes the results of a case study evaluating the influence of the intended (textbook) and implemented curricula’s (teachers’ instructional practice) adherence to the National Council of Teachers of Mathematics’ (NCTM) Standards on student outcomes in mathematics. We collected data on 72 third-grade students from four classrooms in one elementary school. Textbook and teacher adherence to the standards were evaluated using content analysis and direct observation procedures, res...

  7. Reionization history and CMB parameter estimation

    International Nuclear Information System (INIS)

    Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y.

    2013-01-01

    We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case

  9. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  10. Precision determinations of electroweak parameters from ep-collisions at Hera-energies

    International Nuclear Information System (INIS)

    Weber, A.

    1990-01-01

    The authors have studied HERA's capability of precisely measuring various parameters of the electroweak standard model. The analysis was performed in kinematical regions, x ≥ 0.01 and x ≥ 0.1, where systematic errors are expected to be under control. The statistical precision reachable for standard model parameters, extracted from R ≡ σ_NC/σ_CC and NC asymmetries A for polarized e± beams, was estimated for both regions. Heavy flavor contributions, which amount to up to 15% of the cross sections, were included via the boson-gluon fusion process. Furthermore the influence of various uncertainties (parton distributions, quark masses, σ_L/σ_T, fixing input parameters) was estimated. For x ≥ 0.01 the uncertainties due to parton densities are sizeable; the total rates (cross sections), however, increase strongly in contrast to the region x ≥ 0.1

  11. Digital subtraction radiographic evaluation of the standardize periapical intraoral radiographs

    International Nuclear Information System (INIS)

    Cho, Bong Hae; Nah, Kyung Soo

    1993-01-01

    The geometrically standardized intraoral radiographs using 5 occlusal registration materials were taken serially immediately after and 1 day, 2, 4, 8, 12 and 16 weeks after making the bite blocks. The qualities of the subtracted images were evaluated to check the degree of reproducibility of each impression material. The results were as follows: 1. The standard deviations of the grey scales of the overall subtracted images were 4.9 for Exaflex, 7.2 for Pattern resin, 9.0 for Tooth Shade Acrylic, 12.2 for XCP only, and 14.8 for Impregum. 2. The standard deviations of the grey scales of the overall subtracted images were grossly related to those of the localized horizontal line of interest. 3. Exaflex, which showed the best subtracted image quality, had 15 cases of straight, 14 cases of wave, and 1 case of canyon shape. Impregum, which showed the worst subtracted image quality, had 4 cases of straight, 8 cases of wave, and 18 cases of canyon shape.
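The quality metric in this record, the standard deviation of grey levels in the subtracted image, can be computed in a few lines; perfectly reproduced projection geometry would give a flat difference image and a low standard deviation. The arrays below are synthetic stand-ins, not radiographic data.

```python
# Sketch: subtraction-image quality as the std of the grey-level difference.
import numpy as np

rng = np.random.default_rng(1)

baseline = rng.integers(0, 256, size=(64, 64)).astype(float)  # first radiograph
noise = rng.normal(scale=5.0, size=baseline.shape)            # residual misregistration/noise
followup = baseline + noise                                   # geometrically re-standardized exposure

subtracted = followup - baseline
quality = subtracted.std()   # lower std => better geometric reproducibility
```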

  12. Correlation of pattern reversal visual evoked potential parameters with the pattern standard deviation in primary open angle glaucoma

    Directory of Open Access Journals (Sweden)

    Ruchi Kothari

    2014-04-01

    Full Text Available AIM: To evaluate whether glaucomatous visual field defect, particularly the pattern standard deviation (PSD) of the Humphrey visual field, could be associated with visual evoked potential (VEP) parameters of patients having primary open angle glaucoma (POAG). METHODS: Visual field by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potential (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for VEP recordings consisted of the transient pattern reversal method in which a black and white checkerboard pattern was generated (full field) and displayed on a VEP monitor (colour, 14 inch) by an electronic pattern regenerator inbuilt in an evoked potential recorder (RMS EMG EP MARK II). RESULTS: The results of our study indicate that there is a highly significant (P<0.001) negative correlation of P100 amplitude and a statistically significant (P<0.05) positive correlation of N70 latency, P100 latency and N155 latency with the PSD of the Humphrey visual field in the subjects of POAG in various age groups, as evaluated by Student's t-test. CONCLUSION: Prolongation of VEP latencies was mirrored by a corresponding increase of PSD values. Conversely, as PSD increased, the magnitude of VEP excursions was found to be diminished.
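The associations reported above are plain Pearson correlations between a VEP measure and PSD; the sketch below shows the computation on small synthetic arrays, not the study's data.

```python
# Sketch: Pearson correlation of VEP parameters with the PSD of the visual field.
import numpy as np

psd = np.array([2.1, 3.5, 4.8, 6.2, 7.9, 9.4])             # dB, hypothetical
p100_latency = np.array([101, 104, 106, 110, 113, 117])     # ms, hypothetical
p100_amplitude = np.array([9.8, 8.9, 7.6, 6.1, 5.0, 4.2])   # uV, hypothetical

r_latency = np.corrcoef(psd, p100_latency)[0, 1]     # positive: latency grows with PSD
r_amplitude = np.corrcoef(psd, p100_amplitude)[0, 1] # negative: amplitude shrinks with PSD
```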

  13. The Challenge of Determining SUSY Parameters in Focus-Point-Inspired Cases

    CERN Document Server

    Rolbiecki, K.; Kalinowski, J.; Moortgat-Pick, G.

    2006-01-01

    We discuss the potential of combined LHC and ILC experiments for SUSY searches in a difficult region of the parameter space, in which all sfermion masses are above the TeV scale. Precision analyses of cross sections of light chargino production and forward--backward asymmetries of decay leptons and hadrons at the ILC, together with mass information on \\tilde{\\chi}^0_2 and squarks from the LHC, allow us to fit rather precisely the underlying fundamental gaugino/higgsino MSSM parameters and to constrain the masses of the heavy virtual sparticles. For such analyses the complete spin correlations between the production and decay processes have to be taken into account. We also took into account expected experimental uncertainties.

  14. The influence of using accelerator addition on High strength self-compacting concrete (HSSCC) in case of enhancement early compressive strength and filling ability parameters

    Science.gov (United States)

    Wibowo; Fadillah, Y.

    2018-03-01

    Efficiency in construction works is very important. Concrete that is easy to work with and that rapidly reaches its service strength will determine the level of efficiency. In this research, we studied the optimization of accelerator usage for achieving concrete compressive strength as a function of time. Adding accelerator at 0.3% - 2.3% of the weight of cement has the positive effect of rapid early strength gain in the hardened concrete; however, the accompanying speed of strength development over time also increases the value of the filling ability parameter of the self-compacting concrete. The right accelerator composition, aligned with the standard range of values for the filling ability parameters of HSSCC, will be a useful guide for producers in the ready-mix concrete industry.

  15. Comparison of parameters of modern cooled and uncooled thermal cameras

    Science.gov (United States)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    During the design of a system employing thermal cameras, one always faces the problem of choosing the camera type best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor tried and tested solutions they are used to. They do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive in terms of the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements. Instead, the real settings used in normal camera operation were applied, to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to provide help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing of thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  16. Adaptations of International Standards on Educational Leadership Preparation in Egypt

    Science.gov (United States)

    Purinton, Ted; Khalil, Dalia

    2016-01-01

    This paper is a case study of one leadership preparation program, utilizing US school leadership standards and practices, offered in Egypt. This case study illuminates how cultural and policy distinctions impact differing necessities of educational leadership, and how those necessities conflict or concur with the international standards and…

  17. Experimental Standards in Sustainability Transitions

    DEFF Research Database (Denmark)

    Hale, Lara Anne

    In this thesis I address how experimental standards are used in the new governance paradigm to further sustainability transitions. Focusing on the case of the Active House standard in the building sector, I investigate experimental standards in three research papers examining the following dynamics: (1) the relationship between commensuration and legitimacy in the formulation and diffusion of a standard’s specifications; (2) the role of awareness in standardizing green default rules to establish sustainable consumption in buildings; and (3) the significance of focus on humans in the development of technological standards for sustainable building. Launching from a critical realist social ontology, I collected ethnographic data on the Active House Alliance, its cofounder VELUX, and three of their demonstration building projects in Austria, Germany, and Belgium over the course of three years from 2013...

  18. Extraction of the Susy and Higgs parameters

    International Nuclear Information System (INIS)

    Adam-Bourdarios, Claire

    2010-01-01

    If supersymmetry is discovered by the next generation of collider experiments, it will be crucial to determine its fundamental high-scale parameters. Three scenarios have recently been investigated by the SFitter collaboration: the case where the LHC 'only' measures a light Higgs-like signal, the case where SUSY signals are discovered at the LHC, and the dream scenario, where LHC and ILC measurements can be combined.

  19. Investigating the Effect of Cosmic Opacity on Standard Candles

    International Nuclear Information System (INIS)

    Hu, J.; Yu, H.; Wang, F. Y.

    2017-01-01

    Standard candles can probe the evolution of dark energy over a large redshift range. But cosmic opacity can degrade the quality of standard candles. In this paper, we use the latest observations, including Type Ia supernovae (SNe Ia) from the “joint light-curve analysis” sample and Hubble parameters, to probe the opacity of the universe. A joint fitting of the SNe Ia light-curve parameters, cosmological parameters, and opacity is used in order to avoid the cosmological dependence of SNe Ia luminosity distances. The latest gamma-ray bursts are used in order to explore the cosmic opacity at high redshifts. The cosmic reionization process is considered at high redshifts. We find that the sample supports an almost transparent universe for flat ΛCDM and XCDM models. Meanwhile, free electrons deplete photons from standard candles through (inverse) Compton scattering, which is known as an important component of opacity. This Compton dimming may play an important role in future supernova surveys. From this analysis, we find that a few per cent of the cosmic opacity is caused by Compton dimming in the two models, which can be corrected.

  20. Progress Report on the Airborne Composition Standard Variable Name and Time Series Working Groups of the 2017 ESDSWG

    Science.gov (United States)

    Evans, K. D.; Early, A. B.; Northup, E. A.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.; Arctur, D. K.; Beach, A. L., III; Silverman, M. L.

    2017-12-01

    The role of NASA's Earth Science Data Systems Working Groups (ESDSWG) is to make recommendations relevant to NASA's Earth science data systems from users' experiences and community insight. Each group works independently, focusing on a unique topic. Progress of two of the 2017 Working Groups will be presented. In a single airborne field campaign, there can be several different instruments and techniques that measure the same parameter on one or more aircraft platforms. Many of these same parameters are measured during different airborne campaigns using similar or different instruments and techniques. The Airborne Composition Standard Variable Name Working Group is working to create a list of variable standard names that can be used across all airborne field campaigns in order to assist in the transition to the ICARTT Version 2.0 file format. The overall goal is to enhance the usability of ICARTT files and the search ability of airborne field campaign data. The Time Series Working Group (TSWG) is a continuation of the 2015 and 2016 Time Series Working Groups. In 2015, we started TSWG with the intention of exploring the new OGC (Open Geospatial Consortium) WaterML 2 standards as a means for encoding point-based time series data from NASA satellites. In this working group, we realized that WaterML 2 might not be the best solution for this type of data, for a number of reasons. Our discussion with experts from other agencies, who have worked on similar issues, identified several challenges that we would need to address. As a result, we made the recommendation to study the new TimeseriesML 1.0 standard of OGC as a potential NASA time series standard. The 2016 TSWG examined closely the TimeseriesML 1.0 and, in coordination with the OGC TimeseriesML Standards Working Group, identified certain gaps in TimeseriesML 1.0 that would need to be addressed for the standard to be applicable to NASA time series data. An engineering report was drafted based on the OGC Engineering

  1. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
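    The mechanics of a model-averaged point estimate can be sketched with smoothed AIC weights, a common weighting scheme. This is a hypothetical illustration with invented numbers, not the paper's own simultaneous-inference machinery:

```python
import math

# Hypothetical illustration (not the paper's method): a model-averaged point
# estimate built from smoothed AIC weights. est[i] is the same derived
# parameter estimated under candidate model i; aic[i] is that model's AIC.

def aic_weights(aics):
    """Normalized smoothed-AIC model weights."""
    d = [a - min(aics) for a in aics]         # AIC differences
    w = [math.exp(-0.5 * di) for di in d]
    s = sum(w)
    return [wi / s for wi in w]

def model_average(estimates, aics):
    """Weight the per-model estimates of one derived parameter."""
    return sum(w * e for w, e in zip(aic_weights(aics), estimates))

est = [1.10, 1.25, 0.95]       # per-model estimates (invented)
aic = [100.0, 102.0, 108.0]    # per-model AIC values (invented)
avg = model_average(est, aic)
```

    Standard errors for such an average are the hard part the paper addresses: naive per-model standard errors ignore the uncertainty in the weights and the correlation between models fitted to the same data.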

  2. The Quality of Teaching Staff: Higher Education Institutions' Compliance with the European Standards and Guidelines for Quality Assurance--The Case of Portugal

    Science.gov (United States)

    Cardoso, Sónia; Tavares, Orlanda; Sin, Cristina

    2015-01-01

    In recent years, initiatives for the improvement of teaching quality have been pursued both at European and national levels. Such is the case of the European Standards and Guidelines for Quality Assurance (ESG) and of legislation passed by several European countries, including Portugal, in response to European policy developments driven by the…

  3. The ASDEX Upgrade Parameter Server

    Energy Technology Data Exchange (ETDEWEB)

    Neu, Gregor, E-mail: gregor.neu@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Gräter, Alex [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Lüddecke, Klaus [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Rapson, Christopher J.; Raupp, Gerhard; Treutterer, Wolfgang; Zasche, Dietrich; Zehetbauer, Thomas [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany)

    2015-10-15

    Highlights: • We describe our main tool in the plasma control configuration process. • Parameter access and computation are configurable with XML files. • Simple implementation of in situ tests by rerouting requests to test data. • Pulse specific overriding of parameters. - Abstract: Concepts for the configuration of plant systems and plasma control of modern devices such as ITER and W7-X are based on global data structures, or “pulse schedules” or “experiment programs”, which specify all physics characteristics (waveforms for controlled actuators and plasma quantities) and all technical characteristics of the plant systems (diagnostics and actuators operation settings) for a planned pulse. At ASDEX Upgrade we use a different approach. We observed that the physics characteristics driving the discharge control system (DCS) are frequently modified on a pulse-to-pulse basis. Plant system operation, however, relies on technical standard settings, or “basic configurations” to provide guaranteed resources or services, which evolve according to longer term session or campaign operation schedules. This is why AUG manages technical configuration items separately from physics items. Consistent computation of the DCS configuration requires access to all this physics and technical data, which include the discharge programme (DP), settings of actuator systems and real-time diagnostics, the current system state and a database of static parameters. A Parameter Server provides a unified view on all these parameter sets and acts as the central point of access. We describe the functionality and architecture of the Parameter Server and its embedding into the control environment.

  4. Towards automatic parameter tuning of stream processing systems

    KAUST Repository

    Bilal, Muhammad

    2017-09-27

    Optimizing the performance of big-data streaming applications has become a daunting and time-consuming task: parameters may be tuned from a space of hundreds or even thousands of possible configurations. In this paper, we present a framework for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing three benchmark applications in Apache Storm. Our results show that a hill-climbing algorithm that uses a new heuristic sampling approach based on Latin Hypercube provides the best results. Our gray-box algorithm provides comparable results while being two to five times faster.
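    The best-performing combination the abstract describes, hill climbing seeded by Latin Hypercube samples, can be sketched as follows. The quadratic objective is a hypothetical stand-in for a real Storm benchmark, the parameter ranges are invented, and the perturbations are left unconstrained for brevity:

```python
import random

# Sketch: Latin Hypercube sampling seeds a greedy hill climber.
# The objective f is a stand-in for a real throughput/latency benchmark.

def latin_hypercube(n, bounds, rng):
    """n samples over len(bounds) dimensions; each dimension's n strata
    are each used exactly once (the Latin Hypercube property)."""
    dims = len(bounds)
    strata = [rng.sample(range(n), n) for _ in range(dims)]
    samples = []
    for i in range(n):
        point = []
        for d, (lo, hi) in enumerate(bounds):
            u = (strata[d][i] + rng.random()) / n   # jitter within stratum
            point.append(lo + u * (hi - lo))
        samples.append(point)
    return samples

def hill_climb(f, x, step, iters, rng):
    """Greedy local search: keep any random perturbation that improves f."""
    best, fb = list(x), f(x)
    for _ in range(iters):
        cand = [xi + rng.uniform(-s, s) for xi, s in zip(best, step)]
        fc = f(cand)
        if fc < fb:
            best, fb = cand, fc
    return best, fb

# Stand-in objective with its optimum at (8, 32), e.g. two tuning knobs.
f = lambda p: (p[0] - 8) ** 2 + (p[1] - 32) ** 2

rng = random.Random(0)
seeds = latin_hypercube(5, [(1, 16), (1, 64)], rng)
results = [hill_climb(f, s, step=(1.0, 4.0), iters=200, rng=rng) for s in seeds]
best, fbest = min(results, key=lambda r: r[1])
```

    Compared with purely random starting points, the stratified seeds spread evenly across each parameter range, so fewer restarts are wasted in the same region.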

  5. Hail, Procrustes! Harmonized accounting standards as a Procrustean bed

    NARCIS (Netherlands)

    Stecher, J.; Suijs, J.P.M.

    2012-01-01

    This article finds that the use of a harmonized accounting standard, such as the International Financial Reporting Standards, increases the information available to markets only if institutional differences across countries using the harmonized standard are insignificant. In all other cases,

  6. Nonlinear dynamics of the relativistic standard map

    International Nuclear Information System (INIS)

    Nomura, Y.; Ichikawa, Y.H.; Horton, W.

    1991-04-01

    Heating and acceleration of charged particles by RF fields have been extensively investigated by the standard map. The question arises as to how relativistic effects change the nonlinear dynamical behavior described by the classical standard map. The relativistic standard map is a two-parameter (K, Β = ω/kc) family of dynamical systems reducing to the standard map when Β → 0. For Β ≠ 0 the relativistic mass increase suppresses the onset of stochasticity. It is shown that the speed of light limits the rate of advance of the phase in the relativistic standard map and introduces KAM surfaces persisting in the high momentum region. An intricate structure of mixing in the higher order periodic orbits and chaotic orbits is analyzed using the symmetry properties of the relativistic standard map. The interchange of the stability of the periodic orbits in the relativistic standard map is also observed and is explained by the local linear stability of the orbits. 12 refs., 16 figs
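    One commonly quoted form of the map (sign and normalization conventions vary between papers, so treat this as an assumed form) can be iterated directly; the β → 0 limit recovers the classical standard map, while for β ≠ 0 the phase advance per step is capped at 1/β, mimicking the speed-of-light limit described above:

```python
import math

# Assumed form of the relativistic standard map (conventions vary):
#   P' = P + K * sin(Q)
#   Q' = Q + P' / sqrt(1 + (beta * P')**2)   (mod 2*pi)
# beta -> 0 gives the classical standard map; for beta != 0 the phase
# advance per step is bounded by 1/beta.

def relativistic_standard_map(q, p, K, beta, steps):
    for _ in range(steps):
        p = p + K * math.sin(q)
        q = (q + p / math.sqrt(1.0 + (beta * p) ** 2)) % (2.0 * math.pi)
    return q, p

q0, p0 = 1.0, 0.5
qc, pc = relativistic_standard_map(q0, p0, K=1.0, beta=0.0, steps=100)   # classical
qr, pr = relativistic_standard_map(q0, p0, K=1.0, beta=0.05, steps=100)  # relativistic
```

    Even a small β changes the orbit after many iterations, which is the mechanism behind the suppressed stochasticity and the persisting KAM surfaces in the high momentum region.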

  7. Spillover effects in epidemiology: parameters, study designs and methodological considerations

    Science.gov (United States)

    Benjamin-Chung, Jade; Arnold, Benjamin F; Berger, David; Luby, Stephen P; Miguel, Edward; Colford Jr, John M; Hubbard, Alan E

    2018-01-01

    Abstract Many public health interventions provide benefits that extend beyond their direct recipients and impact people in close physical or social proximity who did not directly receive the intervention themselves. A classic example of this phenomenon is the herd protection provided by many vaccines. If these ‘spillover effects’ (i.e. ‘herd effects’) are present in the same direction as the effects on the intended recipients, studies that only estimate direct effects on recipients will likely underestimate the full public health benefits of the intervention. Causal inference assumptions for spillover parameters have been articulated in the vaccine literature, but many studies measuring spillovers of other types of public health interventions have not drawn upon that literature. In conjunction with a systematic review we conducted of spillovers of public health interventions delivered in low- and middle-income countries, we classified the most widely used spillover parameters reported in the empirical literature into a standard notation. General classes of spillover parameters include: cluster-level spillovers; spillovers conditional on treatment or outcome density, distance or the number of treated social network links; and vaccine efficacy parameters related to spillovers. We draw on high quality empirical examples to illustrate each of these parameters. We describe study designs to estimate spillovers and assumptions required to make causal inferences about spillovers. We aim to advance and encourage methods for spillover estimation and reporting by standardizing spillover parameter nomenclature and articulating the causal inference assumptions required to estimate spillovers. PMID:29106568

  8. Comparison of vibration damping of standard and PDCPD housing of the electric power steering system

    Science.gov (United States)

    Płaczek, M.; Wróbel, A.; Baier, A.

    2017-08-01

    A comparison of two different types of electric power steering system housing is presented. The first type of housing is a standard one made of an aluminium alloy. The second is made of polydicyclopentadiene polymer (PDCPD) and was produced using the RIM technology. The considered elements were analysed in order to verify their vibration damping properties. This property is very important given the noise generated by elements of a car’s power steering system. During the tests, vibrations of the analysed power steering housings were measured using Macro Fiber Composite (MFC) piezoelectric transducers. Results obtained for both power steering housings under the same vibration excitation parameters were measured and juxtaposed. The obtained results were analysed in order to verify whether the housing made of PDCPD polymer has better vibration damping properties than the standard one.

  9. Consensus conference on core radiological parameters to describe lumbar stenosis - an initiative for structured reporting

    Energy Technology Data Exchange (ETDEWEB)

    Andreisek, Gustav; Winklhofer, Sebastian F.X. [University Hospital Zurich, Department of Radiology, Zurich (Switzerland); Deyo, Richard A. [Oregon Health and Science University, Portland, OR (United States); Jarvik, Jeffrey G. [University of Washington, Seattle, WA (United States); Porchet, Francois [Schulthess Klinik, Zuerich (Switzerland); Steurer, Johann [University Hospital Zurich, Horten Center for patient oriented research and knowledge transfer, Zurich (Switzerland); Collaboration: On behalf of the LSOS working group

    2014-12-15

    To define radiological criteria and parameters as a minimum standard in a structured radiological report for patients with lumbar spinal stenosis (LSS) and to identify criteria and parameters for research purposes. All available radiological criteria and parameters for LSS were identified using systematic literature reviews and a Delphi survey. We invited to the consensus meeting, and provided data to, 15 internationally renowned experts from different countries. During the meeting, these experts reached consensus in a structured and systematic discussion about a core list of radiological criteria and parameters for standard reporting. We identified a total of 27 radiological criteria and parameters for LSS. During the meeting, the experts identified five of these as core items for a structured report. For central stenosis, these were "compromise of the central zone" and "relation between fluid and cauda equina". For lateral stenosis, the group agreed that "nerve root compression in the lateral recess" was a core item. For foraminal stenosis, we included "nerve root impingement" and "compromise of the foraminal zone". As a minimum standard, five radiological criteria should be used in a structured radiological report in LSS. Other parameters are well suited for research. (orig.)

  10. Standard rulers, candles, and clocks from the low-redshift universe.

    Science.gov (United States)

    Heavens, Alan; Jimenez, Raul; Verde, Licia

    2014-12-12

    We measure the length of the baryon acoustic oscillation (BAO) feature, and the expansion rate of the recent Universe, from low-redshift data only, almost model independently. We make only the following minimal assumptions: homogeneity and isotropy, a metric theory of gravity, a smooth expansion history, and the existence of standard candles (supernovæ) and a standard BAO ruler. The rest is determined by the data, which are compilations of recent BAO and type IA supernova results. Making only these assumptions, we find for the first time that the standard ruler has a length of 103.9±2.3h⁻¹ Mpc. The value is a measurement, in contrast to the model-dependent theoretical prediction determined with model parameters set by Planck data (99.3±2.1h⁻¹ Mpc). The latter assumes the cold dark matter model with a cosmological constant, and that the ruler is the sound horizon at radiation drag. Adding passive galaxies as standard clocks or a local Hubble constant measurement allows the absolute BAO scale to be determined (142.8±3.7 Mpc), and in the former case the additional information makes the BAO length determination more precise (101.9±1.9h⁻¹ Mpc). The inverse curvature radius of the Universe is weakly constrained and consistent with zero, independently of the gravity model, provided it is metric. We find the effective number of relativistic species to be N(eff)=3.53±0.32, independent of late-time dark energy or gravity physics.

  11. Retrieval of Effective Parameters of Subwavelength Periodic Photonic Structures

    DEFF Research Database (Denmark)

    Orlov, Alexey A.; Yankovskaya, Elizaveta A.; Zhukovsky, Sergei

    2014-01-01

    We revisit the standard Nicolson–Ross–Weir method of effective permittivity and permeability restoration of photonic structures for the case of subwavelength metal-dielectric multilayers. We show that the direct application of the standard method yields a false zero-epsilon point and an associated...
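    A minimal sketch of the standard slab retrieval being revisited, under the homogeneous-slab assumption it presupposes (the slab thickness and branch are chosen here so the principal logarithm applies; it is precisely this homogenization assumption that can fail for subwavelength multilayers). As a self-check, the sketch forward-computes the S-parameters of a known slab and inverts them back:

```python
import cmath

# Sketch of the standard homogeneous-slab (Nicolson-Ross-Weir-type) retrieval:
# forward-compute S11/S21 for a slab of known (eps, mu), then invert them.
# Assumes normal incidence and |n*k0*d| < pi so the principal branch is valid.

def slab_s_params(eps, mu, k0d):
    """S-parameters of a homogeneous slab at normal incidence."""
    n = cmath.sqrt(eps * mu)          # refractive index
    z = cmath.sqrt(mu / eps)          # relative wave impedance
    gamma = (z - 1) / (z + 1)         # interface reflection coefficient
    x = cmath.exp(1j * n * k0d)       # one-pass phase factor
    d = 1 - gamma**2 * x**2
    return gamma * (1 - x**2) / d, (1 - gamma**2) * x / d  # S11, S21

def nrw_retrieve(s11, s21, k0d):
    """Invert S11/S21 to (eps, mu); principal branch, passive sign choice."""
    z = cmath.sqrt(((1 + s11)**2 - s21**2) / ((1 - s11)**2 - s21**2))
    if z.real < 0:                     # pick the passive root, Re(z) >= 0
        z = -z
    x = s21 / (1 - s11 * (z - 1) / (z + 1))   # recovered exp(i*n*k0*d)
    n = -1j * cmath.log(x) / k0d              # valid while |n*k0*d| < pi
    return n / z, n * z                       # eps, mu

s11, s21 = slab_s_params(eps=4.0, mu=1.0, k0d=0.3)
eps_r, mu_r = nrw_retrieve(s11, s21, 0.3)
```

    For a genuinely homogeneous slab the round trip is exact; applied blindly to a metal-dielectric multilayer, the same inversion can produce artifacts such as the false zero-epsilon point mentioned in the abstract.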

  12. Reduction of coupling parameters and duality

    International Nuclear Information System (INIS)

    Oehme, R.; Max-Planck-Institut fuer Physik, Muenchen

    2000-01-01

    The general method of the reduction in the number of coupling parameters is discussed. Using renormalization group invariance, theories with several independent couplings are related to a set of theories with a single coupling parameter. The reduced theories may have particular symmetries, or they may not be related to any known symmetry. The method is more general than the imposition of invariance properties. Usually, there are only a few reduced theories with an asymptotic power series expansion corresponding to a renormalizable Lagrangian. There also exist 'general' solutions containing non-integer powers and sometimes logarithmic factors. As an example for the use of the reduction method, the dual magnetic theories associated with certain supersymmetric gauge theories are discussed. They have a superpotential with a Yukawa coupling parameter. This parameter is expressed as a function of the gauge coupling. Given some standard conditions, a unique, isolated power series solution of the reduction equations is obtained. After reparameterization, the Yukawa coupling is proportional to the square of the gauge coupling parameter. The coefficient is given explicitly in terms of the numbers of colors and flavors. 'General' solutions with non-integer powers are also discussed. A brief list is given of other applications of the reduction method. (orig.)

  13. Decision support systems for power plants impact on the living standard

    International Nuclear Information System (INIS)

    Chatzimouratidis, Athanasios I.; Pilavachi, Petros A.

    2012-01-01

    Highlights: ► Ten major types of power plant are evaluated as to their impact on living standard. ► Uncertainty in both criteria performance and criteria weighting is considered. ► PROMETHEE II method, 12 criteria and 13 scenarios are used. ► Results are presented per scenario and per type of power plant. ► Optimal solution depends on scenario assumptions of the decision maker. - Abstract: In developed countries, the quality of life is of first priority and an overall assessment of power plant impact on the living standard requires a multicriteria analysis of both positive and negative factors incorporating uncertainty in criteria performance and probability assessment of weighting factors. This study incorporates PROMETHEE II to assess 10 major types of power plant under 12 criteria, 13 fixed and infinite customized probability assessed weight set scenarios. The power plants considered are coal/lignite, oil, natural gas turbine, natural gas combined cycle, nuclear, hydro, wind, photovoltaic, biomass and geothermal. Geothermal, wind and photovoltaic power plants are excellent choices in most of the cases and biomass and hydro should also be preferred to nuclear and fossil fuel. Among nuclear and fossil fuel the choice is based on the specific parameters of each case examined while natural gas technologies have specific advantages. The motivation of this study was to provide a tool for the decision-maker to evaluate all major types of power plant incorporating multicriteria and customized probability assessment of weighting factors.
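    The PROMETHEE II net-flow computation at the heart of such a ranking can be sketched on toy data. The alternatives, scores, and weights below are invented, and the simplest "usual" preference function (1 if strictly better, 0 otherwise) stands in for the richer preference shapes and 12 criteria used in the study:

```python
# Toy PROMETHEE II: rank alternatives by net outranking flow.
# Uses the "usual" preference function: full preference when strictly better.

def promethee_ii(scores, weights):
    """Return net outranking flows phi for each alternative (rows of scores)."""
    m = len(scores)
    phi = [0.0] * m
    for a in range(m):
        for b in range(m):
            if a == b:
                continue
            # Weighted preference of a over b, and of b over a, across criteria.
            pi_ab = sum(w for s_a, s_b, w in zip(scores[a], scores[b], weights)
                        if s_a > s_b)
            pi_ba = sum(w for s_a, s_b, w in zip(scores[a], scores[b], weights)
                        if s_b > s_a)
            phi[a] += (pi_ab - pi_ba) / (m - 1)
    return phi

# Rows: hypothetical alternatives (e.g. wind, hydro, coal);
# columns: normalized scores on two criteria, both to be maximized.
scores = [[0.9, 0.7],
          [0.8, 0.8],
          [0.4, 0.3]]
phi = promethee_ii(scores, weights=[0.5, 0.5])
best = max(range(len(phi)), key=phi.__getitem__)
```

    Net flows always sum to zero, so the ranking is relative: changing the weight scenario, as the 13 scenarios in the study do, can reorder the alternatives without any change in their scores.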

  14. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators are biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the -style estimators.
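    The source of the bias can be illustrated with a positional ("F3x4-style") estimator, shown here as a hedged sketch rather than the paper's corrected estimator: codon frequencies are built as products of per-position nucleotide frequencies, stop codons are removed, and the remainder renormalized, after which the implied positional marginals no longer match the inputs, which is exactly what a corrected estimator must undo:

```python
from itertools import product

# Sketch of why positional codon-frequency estimators are biased: zeroing out
# stop codons and renormalizing shifts the implied per-position nucleotide
# frequencies away from the observed ones used as input.

STOPS = {"TAA", "TAG", "TGA"}   # universal genetic code stop codons
NUCS = "ACGT"

def f3x4(pos_freqs):
    """Codon frequencies from 3 positional nucleotide distributions,
    with stop codons removed and the remainder renormalized."""
    freqs = {}
    for c in product(NUCS, repeat=3):
        codon = "".join(c)
        if codon in STOPS:
            continue
        freqs[codon] = (pos_freqs[0][c[0]] * pos_freqs[1][c[1]]
                        * pos_freqs[2][c[2]])
    total = sum(freqs.values())
    return {c: f / total for c, f in freqs.items()}

def marginal(freqs, pos):
    """Implied nucleotide distribution at codon position pos."""
    out = {n: 0.0 for n in NUCS}
    for codon, f in freqs.items():
        out[codon[pos]] += f
    return out

uniform = {n: 0.25 for n in NUCS}
codon_freqs = f3x4([uniform] * 3)
m1 = marginal(codon_freqs, 0)       # first-position marginal after renormalizing
```

    Even with perfectly uniform inputs, the first-position frequency of T drops below 0.25, because all three stop codons begin with T; a corrected estimator solves for input frequencies whose stop-excluded marginals reproduce the observations.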

  15. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters. The fitting procedure weights both the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s
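    A toy version of the idea, with the standards' masses treated as fit parameters and every term weighted by its own uncertainty, can be written as an alternating minimization for a linear response curve. The paper fitted a real XRF response with the general-purpose minimizer VA02A; the linear model, data, and uncertainties below are invented for illustration:

```python
# Toy errors-in-variables calibration: minimize the joint chi-square
#   sum_i (y_i - a - b*m_i)^2 / sig_y^2 + (m_i - m_obs_i)^2 / sig_m^2
# over curve parameters (a, b) AND the standards' masses m, by alternating
# two exact sub-steps (ordinary least squares, then per-mass closed form).

def fit_with_mass_errors(m_obs, y, sig_m, sig_y, iters=200):
    m = list(m_obs)
    a = b = 0.0
    for _ in range(iters):
        # Step 1: least squares for (a, b) at fixed masses m.
        n = len(m)
        mbar = sum(m) / n
        ybar = sum(y) / n
        b = (sum((mi - mbar) * (yi - ybar) for mi, yi in zip(m, y))
             / sum((mi - mbar) ** 2 for mi in m))
        a = ybar - b * mbar
        # Step 2: closed-form optimum for each mass at fixed (a, b).
        m = [(b * (yi - a) / sig_y**2 + m0 / sig_m**2)
             / (b**2 / sig_y**2 + 1 / sig_m**2)
             for yi, m0 in zip(y, m_obs)]
    return a, b, m

m_obs = [0.1, 0.25, 0.5, 0.75, 1.0]    # nominal standard masses, mg (invented)
y = [1.02, 2.49, 5.05, 7.48, 9.95]     # instrument response (invented)
a, b, m_fit = fit_with_mass_errors(m_obs, y, sig_m=0.002, sig_y=0.05)
```

    Because each sub-step minimizes the same joint chi-square exactly, the iteration decreases it monotonically; with tight mass uncertainties the fitted masses stay close to the nominal values while still absorbing their share of the residuals.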

  16. A study of the differences between trade standards inside and outside Europe

    International Nuclear Information System (INIS)

    García-González, D.L.; Tena, N.; Romero, I.; Aparicio-Ruiz, R.; Morales, M.T.; Aparicio, R.

    2017-01-01

    The definitions of olive oil categories are common or very similar for all the international regulatory bodies, and in many cases the text is even literally the same. However, the values of some parameters which chemically define the different categories do not have the same degree of agreement. These disagreements pose a difficult task for importers and exporters, who have to deal with the differences when they need to defend the quality and genuineness of their product. This work analyzes the differences found when scrutinizing the current trade standards and regulations from a critical viewpoint, with comments and useful tips for improving the current International Olive Council methods when possible, as well as alternatives from non-targeted techniques. The values of precision associated with the International...

  17. Adaptive synchronization of fractional Lorenz systems using a reduced number of control signals and parameters

    International Nuclear Information System (INIS)

    Aguila-Camacho, Norelys; Duarte-Mermoud, Manuel A.; Delgado-Aguilera, Efredy

    2016-01-01

    This paper analyzes the synchronization of two fractional Lorenz systems in two cases: the first one considering fractional Lorenz systems with unknown parameters, and the second one considering known upper bounds on some of the fractional Lorenz systems parameters. The proposed control strategies use a reduced number of control signals and control parameters, employing mild assumptions. The stability of the synchronization errors is analytically demonstrated in all cases, and the convergence to zero of the synchronization errors is analytically proved in the case when the upper bounds on some system parameters are assumed to be known. Simulation studies are presented, which allows verifying the effectiveness of the proposed control strategies.

  18. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  19. Precision tests of the Standard Model

    International Nuclear Information System (INIS)

    Ol'shevskij, A.G.

    1996-01-01

    The present status of the precision measurements of electroweak observables is discussed, with special emphasis on the results obtained recently. Taken together, these measurements provide the basis for a stringent test of the Standard Model and determination of the SM parameters. 22 refs., 23 figs., 11 tabs

  20. Development of standardized bioassay protocols for the toxicity assessment of waste, manufactured products, and effluents in Latin America: Venezuela, a Case Study

    International Nuclear Information System (INIS)

    Rodriquez-Grau, J.

    1993-01-01

    The present status of the toxicity assessment of industrial products in Latin America is well below North America/EC standards. As an example, most Latin American regulatory laws regarding effluent discharge are still based upon concentration limits of certain major pollutants and BOD/COD measurements; no reference is made to the necessity of aquatic bioassay toxicity data. Aware of this imperative need, the Venezuelan Petroleum Industry (PDVSA), through its R&D corporate branch (INTEVEP), gave priority to the development of standardized acute/sublethal toxicity test protocols as a sound means of evaluating their products and wastes. Throughout this presentation, the Venezuelan case will be studied, showing strategies undertaken to accelerate protocol development. Results will show the assessment of 14 different protocols encompassing a variety of species of aquatic/terrestrial organisms, and a series of toxicity test endpoints including mortality, reproductive, biological and immunological measurements, most of which are currently in use or being developed. These protocols have already yielded useful results in numerous cases where toxicity assessment was required, including evaluations of effluents, oil dispersants, drilling fluids, toxic wastes, fossil fuels and newly developed products. The Venezuelan case demonstrates that the integration of Industry, Academia and Government, which is an essential part of SETAC's philosophy, is absolutely necessary for the successful advancement of environmental scientific/regulatory issues.

  1. Heart rate variability parameters do not correlate with pain intensity in healthy volunteers

    NARCIS (Netherlands)

    Meeuse, Jan J; Löwik, Marco S P; Löwik, Sabine A M; Aarden, Eline; van Roon, Arie M; Gans, Reinold O B; van Wijhe, Marten; Lefrandt, Joop D; Reyners, Anna K L

    OBJECTIVE: When patients cannot indicate pain, physiological parameters may be useful. We tested whether heart rate variability (HRV) parameters, as reflection of sympathetic and vagal tone, can be used to quantify pain intensity. DESIGN: Prospective study. SUBJECTS AND SETTING: A standardized heat…

  2. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    Science.gov (United States)

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty in dealing with individuals who have a brain anatomy that is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed to use an atlas pre-selection technique based on meta-information followed by the selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity process. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact on the segmentation quality. To pick out an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in the segmentation accuracy.
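    The Dice similarity coefficient used for this evaluation is straightforward to compute from two segmented voxel sets; a minimal sketch with hypothetical voxel indices:

```python
def dice(a, b):
    """Dice similarity coefficient between two sets of segmented voxels:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))

# 2 of 4 voxels overlap in each mask -> DSC = 2*2 / (4+4) = 0.5
score = dice({1, 2, 3, 4}, {3, 4, 5, 6})
```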

  3. Pharmacognostic standardization and preliminary phytochemical studies of Gaultheria trichophylla.

    Science.gov (United States)

    Alam, Fiaz; Najum us Saqib, Qazi

    2015-01-01

    Gaultheria trichophylla Royle (Ericaceae) has long been used for various ailments in traditional systems of medicines; most importantly it is used against pain and inflammation. This study determines various pharmacognostic and phytochemical standards helpful to ensure the purity, safety, and efficacy of medicinal plant G. trichophylla. Intact aerial parts, powdered materials, and extracts were examined macro- and microscopically and pharmacognostic standardization parameters were determined in accordance with the guidelines given by the World Health Organization (WHO). Parameters including extractive values, ash values, and loss on drying were determined. Preliminary phytochemical tests, fluorescence analysis, and chromatographic profiling were performed for the identification and standardization of G. trichophylla. The shape, size, color, odor, and surface characteristics were noted for intact drug and powdered drug material of G. trichophylla. Light and scanning electron microscope images of cross section of leaf and powdered microscopy revealed useful diagnostic features. Histochemical, phytochemical, physicochemical, and fluorescence analysis proved useful tools to differentiate the powdered drug material. High-performance liquid chromatography (HPLC) analysis showed the presence of important phytoconstituents such as gallic acid, rutin, and quercetin. The data generated from the present study help to authenticate the medicinally important plant G. trichophylla. Qualitative and quantitative microscopic features may be helpful for establishing the pharmacopeia standards. Morphology as well as various pharmacognostic aspects of different parts of the plant were studied and described along with phytochemical and physicochemical parameters, which could be helpful in further isolation and purification of medicinally important compounds.

  4. A neutron balance approach in critical parameter determination

    International Nuclear Information System (INIS)

    Dall'Osso, Aldo

    2008-01-01

    The determination of a critical parameter, a process also known as criticality or eigenvalue search, is one of the major functionalities in neutronics codes. The determination of the critical boron concentration or the critical control rod position are two examples. Classical procedures used to solve this problem are based on the iterative Newton-Raphson method, where the value of the parameter is changed until the eigenvalue matches the target. We present here a different approach in which an equation, derived from the neutron balance, is set up with the critical parameter as the unknown. Solving this equation is equivalent to solving an eigenvalue problem where the critical parameter is the eigenvalue. It is also shown that this approach can be seen as an application of inverse perturbation theory. This method reduces considerably the computation time in situations where changes in the critical parameter cause a strong distortion of the flux distribution, as is the case for control rods. Some numerical examples illustrate the performance and the gain in stability in cases of simultaneous control of criticality and axial offset of the power distribution. The application to the determination of the critical uranium enrichment in a transport code is also presented. The simplicity of the method makes its implementation in fuel bundle lattice and reactor codes very easy
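    The classical iterative search that the abstract contrasts with can be sketched as a secant (quasi-Newton) iteration that adjusts the parameter until k_eff reaches 1; the linear boron reactivity response below is purely hypothetical:

```python
def search_critical(keff, p0, p1, target=1.0, tol=1e-10, max_iter=50):
    """Secant iteration on keff(p) - target: adjust the parameter p
    until the eigenvalue keff matches the target (criticality)."""
    f0, f1 = keff(p0) - target, keff(p1) - target
    for _ in range(max_iter):
        if abs(f1) < tol:
            return p1
        # secant update; shift (p0, f0) to the previous iterate
        p0, p1, f0 = p1, p1 - f1 * (p1 - p0) / (f1 - f0), f1
        f1 = keff(p1) - target
    return p1

# hypothetical reactivity response: k_eff decreases with boron concentration (ppm)
keff = lambda c_boron: 1.05 - 8.0e-5 * c_boron
c_crit = search_critical(keff, 0.0, 1000.0)  # ≈ 625 ppm for this toy response
```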

  5. Kinetic parameter estimation from attenuated SPECT projection measurements

    International Nuclear Information System (INIS)

    Reutter, B.W.; Gullberg, G.T.

    1998-01-01

    Conventional analysis of dynamically acquired nuclear medicine data involves fitting kinetic models to time-activity curves generated from regions of interest defined on a temporal sequence of reconstructed images. However, images reconstructed from the inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system can contain artifacts that lead to biases in the estimated kinetic parameters. To overcome this problem the authors investigated the estimation of kinetic parameters directly from projection data by modeling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated transverse slice, kinetic parameters were estimated for simple one compartment models for three myocardial regions of interest, as well as for the liver. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated data had biases ranging between 1--63%. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Predicted uncertainties (standard deviations) of the parameters obtained for 500,000 detected events ranged between 2--31% for the myocardial uptake parameters and 2--23% for the myocardial washout parameters
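    As a much simpler illustration of kinetic parameter estimation from time-activity data (the paper's point being that fitting projections directly avoids reconstruction bias), a mono-exponential washout A(t) = A0·exp(−kt) can be fitted by log-linear least squares; the sample curve is hypothetical:

```python
import math

def fit_washout(times, activities):
    """Log-linear least-squares fit of A(t) = A0 * exp(-k*t).
    Returns (A0, k) from the slope/intercept of log(A) vs t."""
    n = len(times)
    ys = [math.log(a) for a in activities]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) / \
            sum((t - tbar) ** 2 for t in times)
    return math.exp(ybar - slope * tbar), -slope

# noiseless synthetic time-activity curve with A0 = 100, k = 0.1
ts = [0.0, 5.0, 10.0, 15.0, 20.0]
acts = [100.0 * math.exp(-0.1 * t) for t in ts]
A0, k = fit_washout(ts, acts)  # recovers A0 ≈ 100, k ≈ 0.1
```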

  6. Data-constrained reionization and its effects on cosmological parameters

    International Nuclear Information System (INIS)

    Pandolfi, S.; Ferrara, A.; Choudhury, T. Roy; Mitra, S.; Melchiorri, A.

    2011-01-01

    We perform an analysis of the recent WMAP7 data considering physically motivated and viable reionization scenarios with the aim of assessing their effects on cosmological parameter determinations. The main novelties are: (i) the combination of cosmic microwave background data with astrophysical results from quasar absorption line experiments; (ii) the joint variation of both the cosmological and the astrophysical parameters [the latter governing the evolution of the free electron fraction x_e(z)]. Including a realistic, data-constrained reionization history in the analysis induces appreciable changes in the cosmological parameter values deduced through a standard WMAP7 analysis. Particularly noteworthy are the variations in Ω_b h^2 = 0.02258 +0.00057/-0.00056 [WMAP7 (Sudden)] vs. Ω_b h^2 = 0.02183 ± 0.00054 [WMAP7+ASTRO (CF)] and the new constraints for the scalar spectral index, for which WMAP7+ASTRO (CF) excludes the Harrison-Zel'dovich value n_s = 1 at >3σ. Finally, the electron-scattering optical depth is considerably decreased with respect to the standard WMAP7 value, i.e. τ_e = 0.080 ± 0.012. We conclude that including astrophysical data sets, which allow the reionization history to be robustly constrained, in the extraction procedure of cosmological parameters leads to relatively important differences in the final determination of their values.

  7. Compatible topologies and parameters for NMR structure determination of carbohydrates by simulated annealing.

    Science.gov (United States)

    Feng, Yingang

    2017-01-01

    The use of NMR methods to determine the three-dimensional structures of carbohydrates and glycoproteins is still challenging, in part because of the lack of standard protocols. In order to increase the convenience of structure determination, the topology and parameter files for carbohydrates in the program Crystallography & NMR System (CNS) were investigated and new files were developed to be compatible with the standard simulated annealing protocols for proteins and nucleic acids. Recalculating the published structures of protein-carbohydrate complexes and glycosylated proteins demonstrates that the results are comparable to the published structures which employed more complex procedures for structure calculation. Integrating the new carbohydrate parameters into the standard structure calculation protocol will facilitate three-dimensional structural study of carbohydrates and glycosylated proteins by NMR spectroscopy.

  8. Determination of material parameters by comparison of 3D simulations and 3D experiments

    DEFF Research Database (Denmark)

    Zhang, Jin

    …microstructure and the measured microstructure in a global manner. The proposed method is demonstrated on a simple case, fitting two material parameters (the liquid diffusion coefficient and the capillary length of a hypoeutectic Al-Cu alloy), and a complicated case, fitting hundreds of material parameters (the reduced grain boundary mobilities of pure iron). Results show that the proposed method is capable of providing reliable measurements of material parameters that are difficult to measure in traditional ways and can determine many - possibly all relevant - values of material parameters simultaneously…

  9. Reconciling Planck with the local value of H0 in extended parameter space

    Directory of Open Access Journals (Sweden)

    Eleonora Di Valentino

    2016-10-01

    The recent determination of the local value of the Hubble constant by Riess et al., 2016 (hereafter R16) is now 3.3 sigma higher than the value derived from the most recent CMB anisotropy data provided by the Planck satellite in a ΛCDM model. Here we perform a combined analysis of the Planck and R16 results in an extended parameter space, varying simultaneously 12 cosmological parameters instead of the usual 6. We find that a phantom-like dark energy component, with effective equation of state w = -1.29 +0.15/-0.12 at 68% c.l., can solve the current tension between the Planck dataset and the R16 prior in an extended ΛCDM scenario. On the other hand, the neutrino effective number is fully compatible with standard expectations. This result is confirmed when including cosmic shear data from the CFHTLenS survey and CMB lensing constraints from Planck. However, when BAO measurements are included we find that some of the tension with R16 remains, as is also the case when we include the supernova type Ia luminosity distances from the JLA catalog.

  10. Evaluation of Neonatal Hemolytic Jaundice: Clinical and Laboratory Parameters

    Directory of Open Access Journals (Sweden)

    Anet Papazovska Cherepnalkovski

    2015-12-01

    CONCLUSIONS: The laboratory profile in ABO/Rh isoimmunisation cases depicts hemolytic mechanism of jaundice. These cases carry a significant risk for early and severe hyperbilirubinemia and are eligible for neurodevelopmental follow-up. Hematological parameters and blood grouping are simple diagnostic methods that assist the etiological diagnosis of neonatal hyperbilirubinemia.

  11. Distribution Development for STORM Ingestion Input Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Meanwhile, Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a standard deviation of 3.33.
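    Converting a constant input to a sampled one amounts to drawing from the fitted distribution at each STORM realization. A sketch using the report's consumption-rate and crop-uptake-factor distributions (interpreting the lognormal's dimensionless 3.33 as a geometric standard deviation, which is an assumption on my part):

```python
import math
import random

random.seed(42)  # reproducible draws for this illustration

def sample_consumption_rate():
    """Normal: mean 102.96 kg/yr, sd 2.65 kg/yr (values from the report)."""
    return random.normalvariate(102.96, 2.65)

def sample_crop_uptake_factor():
    """Lognormal: geometric mean 3.38e-4; the 3.33 is treated here as a
    geometric standard deviation (an assumption about the parameterization)."""
    return random.lognormvariate(math.log(3.38e-4), math.log(3.33))

draws = [sample_consumption_rate() for _ in range(10000)]
mean = sum(draws) / len(draws)  # close to the specified 102.96 kg/yr
```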

  12. Reexploration of interacting holographic dark energy model. Cases of interaction term excluding the Hubble parameter

    Energy Technology Data Exchange (ETDEWEB)

    Li, Hai-Li; Zhang, Jing-Fei; Feng, Lu [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Zhang, Xin [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Peking University, Center for High Energy Physics, Beijing (China)

    2017-12-15

    In this paper, we make a deep analysis of the five typical interacting holographic dark energy models with the interaction terms Q = 3βH_0 ρ_de, Q = 3βH_0 ρ_c, Q = 3βH_0 (ρ_de + ρ_c), Q = 3βH_0 √(ρ_de ρ_c), and Q = 3βH_0 ρ_de ρ_c/(ρ_de + ρ_c), respectively. We obtain observational constraints on these models by using the type Ia supernova data (the Joint Light-Curve Analysis sample), the cosmic microwave background data (Planck 2015 distance priors), the baryon acoustic oscillations data, and the direct measurement of the Hubble constant. We find that the values of χ²_min for all five models are almost equal (around 699), indicating that the current observational data equally favor these IHDE models. In addition, a comparison with the cases of an interaction term involving the Hubble parameter H is also made. (orig.)

  13. Precision Electroweak Measurements and Constraints on the Standard Model

    CERN Document Server


    2010-01-01

    This note presents constraints on Standard Model parameters using published and preliminary precision electroweak results measured at the electron-positron colliders LEP and SLC. The results are compared with precise electroweak measurements from other experiments, notably CDF and DØ at the Tevatron. Constraints on the input parameters of the Standard Model are derived from the combined set of results obtained in high-$Q^2$ interactions, and used to predict results in low-$Q^2$ experiments, such as atomic parity violation, Møller scattering, and neutrino-nucleon scattering. The main changes with respect to the experimental results presented in 2009 are new combinations of results on the width of the W boson and the mass of the top quark.

  14. Non-standard neutrino interactions in the mu–tau sector

    Directory of Open Access Journals (Sweden)

    Irina Mocioiu

    2015-04-01

    We discuss neutrino mass hierarchy implications arising from the effects of non-standard neutrino interactions on muon rates in high statistics atmospheric neutrino oscillation experiments like IceCube DeepCore. We concentrate on the mu–tau sector, which is presently the least constrained. It is shown that the magnitude of the effects depends strongly on the sign of the ϵ_μτ parameter describing this non-standard interaction. A simple analytic model is used to understand the parameter space where differences between the two signs are maximized. We discuss how this effect is partially degenerate with changing the neutrino mass hierarchy, as well as how this degeneracy could be lifted.

  15. The selection of Lorenz laser parameters for transmission in the SMF 3rd transmission window

    Science.gov (United States)

    Gajda, Jerzy K.; Niesterowicz, Andrzej; Zeglinski, Grzegorz

    2003-10-01

    The work presents simulated transmission-line results for the standard ITU-T G.652 fiber. The parameters of the Lorenz laser determine electrical signal parameters such as the eye pattern, jitter, BER, S/N, Q-factor, and scattering diagram. For a short line, lasers with a linewidth larger than 100 MHz can be used. In the paper, cases for 10 Gbit/s and 40 Gbit/s transmission and fiber lengths of 30 km, 50 km, and 70 km are calculated. The average eye openings were 1×10^-5 to 120×10^-5. The Q-factor was 10-23 dB. The calculated bit error rate (BER) was 10^-40 to 10^-4. If the linewidth of the Lorenz laser increases from 10 MHz to 500 MHz, the transmission distance decreases from 70 km to 30 km. The transmitter bit rate is also critical for the transmission distance: if the bit rate increases from 10 Gbit/s to 40 Gbit/s, the transmission distance over the single-mode fiber G.652 decreases from 70 km to 5 km.

  16. Arguing about climate change. Judging the handling of climate risk to future generations by comparison to the general standards of conduct in the case of risk to contemporaries

    International Nuclear Information System (INIS)

    Davidson, M.D.

    2009-01-01

    Intergenerational justice requires that climate risks to future generations be handled with the same reasonable care deemed acceptable by society in the case of risks to contemporaries. Such general standards of conduct are laid down in tort law, for example. Consequently, the validity of arguments for or against more stringent climate policy can be judged by comparison to the general standards of conduct applying in the case of risk to contemporaries. That this consistency test is able to disqualify certain arguments in the climate debate is illustrated by a further investigation of the debate on the social discount rate, used in cost-benefit analysis of climate policy

  17. Estimation of octanol/water partition coefficients using LSER parameters

    Science.gov (United States)

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

    The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
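    An LSER fit is ordinary least-squares regression of logKow on solvation descriptors. A one-descriptor sketch with hypothetical training pairs (the real model regresses on several solvatochromic parameters simultaneously):

```python
def ols(xs, ys):
    """Least-squares intercept and slope for a single descriptor."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

# hypothetical training pairs: (descriptor value, logKow)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.4, 1.1, 1.6, 2.3, 2.9]
a, b = ols(xs, ys)          # intercept ≈ 0.42, slope ≈ 1.24 for these data
pred = a + b * 1.2          # estimated logKow for a new descriptor value
```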

  18. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    Science.gov (United States)

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
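    For a one-compartment IV-bolus model, the conversion from non-compartmental variables (dose, extrapolated C0, terminal half-life) to model parameters is closed-form; this is the kind of mapping the BA spreadsheet automates. A sketch with hypothetical values (not the paper's data or code):

```python
import math

def back_analysis_1cpt(dose, c0, t_half):
    """One-compartment IV bolus: non-compartmental variables -> model parameters."""
    k = math.log(2) / t_half   # elimination rate constant (1/h)
    v = dose / c0              # volume of distribution (L)
    cl = k * v                 # clearance (L/h)
    auc = c0 / k               # AUC from 0 to infinity (equals dose / CL)
    return k, v, cl, auc

# hypothetical example: 100 mg dose, C0 = 10 mg/L, half-life 4 h
k, v, cl, auc = back_analysis_1cpt(dose=100.0, c0=10.0, t_half=4.0)
```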

  19. Parameter Optimisation for the Behaviour of Elastic Models over Time

    DEFF Research Database (Denmark)

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or for finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method that…

  20. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    Science.gov (United States)

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations; in particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying to some real data and using simulations, we demonstrate that conditional Poisson models are simpler to code and faster to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine…
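    Conditioning on the stratum total turns the Poisson likelihood into a multinomial one whose only unknowns are the regression coefficients. A one-covariate sketch, maximized by Newton's method on hypothetical stratified counts (an illustration of the idea, not the Stata/R implementations):

```python
import math

def cond_poisson_fit(strata, iters=50):
    """strata: list of (counts, xs) pairs per stratum, one covariate.
    Maximizes the conditional (multinomial) Poisson log-likelihood
    sum_s [ sum_i y_si*x_si*beta - n_s*log(sum_i exp(beta*x_si)) ] by Newton."""
    beta = 0.0
    for _ in range(iters):
        score, info = 0.0, 0.0
        for counts, xs in strata:
            n = sum(counts)
            ws = [math.exp(beta * x) for x in xs]
            wsum = sum(ws)
            mean = sum(w * x for w, x in zip(ws, xs)) / wsum
            var = sum(w * (x - mean) ** 2 for w, x in zip(ws, xs)) / wsum
            score += sum(c * x for c, x in zip(counts, xs)) - n * mean
            info += n * var
        beta += score / info  # Newton step on the concave log-likelihood
    return beta

# hypothetical stratified counts: higher exposure x goes with higher counts
strata = [([8, 5, 3], [2.0, 1.0, 0.0]),
          ([6, 4, 2], [2.0, 1.0, 0.0])]
beta_hat = cond_poisson_fit(strata)  # positive log-rate-ratio per unit x
```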

  1. Maximum standard uptake value on pre-chemotherapeutic FDG-PET is a significant parameter for disease progression of newly diagnosed lymphoma

    International Nuclear Information System (INIS)

    Eo, Jae Seon; Lee, Won Woo; Chung, June Key; Lee, Myung Chul; Kim, Sang Eun

    2005-01-01

    F-18 FDG-PET is useful for detection and staging of lymphoma. We investigated the prognostic significance of the maximum standard uptake value (maxSUV) of FDG-PET for newly diagnosed lymphoma patients before chemotherapy. Twenty-seven patients (male:female = 17:10; age 49±19 years) with newly diagnosed lymphoma were enrolled. Nineteen patients had B cell lymphoma, 6 Hodgkin's disease and 2 T cell lymphoma. One patient was stage I, 9 stage II, 3 stage III, 1 stage IV and 13 others. All patients underwent FDG-PET before initiation of chemotherapy. MaxSUV values using lean body weight were obtained for the main and largest lesion to represent the maxSUV of each patient. Disease progression was defined as total change of the chemotherapeutic regimen or addition of a new chemotherapeutic agent during the follow-up period. The observation period was 389±224 days. The value of maxSUV ranged from 3 to 18 (mean±SD = 10.6±4.4). Disease progression occurred in 6 patients. Using Cox proportional-hazard regression analysis, maxSUV was identified as a significant parameter for disease progression free survival (p=0.044). Kaplan-Meier survival curve analysis revealed that the group with higher maxSUV (≥10.6, n=5) suffered shorter disease progression free survival (median 299 days) than the group with lower maxSUV (<10.6, n=22) (median 378 days, p=0.0146). We found that maxSUV on pre-chemotherapeutic F-18 FDG-PET for newly diagnosed lymphoma patients is a significant parameter for disease progression. Lymphoma patients can be stratified before initiation of chemotherapy in terms of disease progression using a maxSUV cutoff of 10.6

  2. On the evaluation of micromatter thin standards by RBS

    International Nuclear Information System (INIS)

    Ionescu, M.; Stelcer, E.; Hawas, O.; Siegele, R.; Cohen, D.; Linch, D.; Sarbutt, A.; Garton, D.

    2005-01-01

    Thin film standards are routinely used in PIXE and PIGE techniques for elemental analysis of particulates present in air samples, collected on Teflon filters. A number of parameters such as thickness, homogeneity and the type and amount of impurities present in the standards are crucial in order to perform high accuracy measurements. In this paper we report the use of RBS on the new STAR 2MV accelerator for characterisation of thin film standards obtained commercially. All standards were produced by MicroMatter Co. on polymer substrates, using a room temperature evaporation method. (author). 4 refs., 5 figs., 1 tab

  3. Cosmological-model-parameter determination from satellite-acquired type Ia and IIP Supernova Data

    International Nuclear Information System (INIS)

    Podariu, Silviu; Nugent, Peter; Ratra, Bharat

    2000-01-01

    We examine the constraints that satellite-acquired Type Ia and IIP supernova apparent magnitude versus redshift data will place on cosmological model parameters in models with and without a constant or time-variable cosmological constant lambda. High-quality data which could be acquired in the near future will result in tight constraints on these parameters. For example, if all other parameters of a spatially-flat model with a constant lambda are known, the supernova data should constrain the non-relativistic matter density parameter omega to better than 1 (2, 0.5) at 1 sigma with neutral (worst case, best case) assumptions about data quality

  4. Relationship of image magnification between periapical standard film and orthopantomogram

    International Nuclear Information System (INIS)

    Kim, Young Tae; Park, Tae Won

    1986-01-01

    The author studied the magnification ratio of tooth length in orthopantomograms against intraoral films taken by a standardized paralleling technique. In this study, intraoral radiographs and orthopantomograms were taken of 2 dry skulls and 36 adults (504 teeth). The obtained results were as follows: 1. In the case of the dry skulls, the magnification ratio of the standard films was 4.6% to 5.9% and that of Orthopantomograph 5 was 15.1% to 33.1%. The magnification ratio of Orthopantomograph 5 relative to the standard films was 9.2% to 26.5%. 2. In the case of adults, the magnification ratio of Orthopantomograph 5 relative to the standard films was 9.5% to 24.6%. 3. There was no significant difference in magnification between left and right. 4. Anterior teeth had lesser magnification than posterior teeth. 5. It was considered that tooth length shown in Orthopantomograph 5 was magnified 15.4% to 31.3% over the actual tooth length.
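    The magnification ratios quoted above follow from simple arithmetic on radiographic versus actual lengths; a minimal sketch with hypothetical measurements:

```python
def magnification_pct(radiographic_len, actual_len):
    """Percent image magnification relative to the actual tooth length."""
    return 100.0 * (radiographic_len - actual_len) / actual_len

# hypothetical example: a 25.0 mm tooth imaged at 28.8 mm is magnified 15.2%
m = magnification_pct(28.8, 25.0)
```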

  5. Symplectic Synchronization of Lorenz-Stenflo System with Uncertain Chaotic Parameters via Adaptive Control

    Directory of Open Access Journals (Sweden)

    Cheng-Hsiung Yang

    2013-01-01

    A new symplectic chaos synchronization of chaotic systems with uncertain chaotic parameters is studied. The traditional chaos synchronizations are special cases of the symplectic chaos synchronization. A sufficient condition is given for the asymptotic stability of the null solution of the error dynamics and a parameter difference. The symplectic chaos synchronization with uncertain chaotic parameters may be applied to the design of secure communication systems. Finally, numerical results are presented for the symplectic chaos synchronization of two identical Lorenz-Stenflo systems in three different cases.

  6. Aerodynamic Parameters of a UK City Derived from Morphological Data

    Science.gov (United States)

    Millward-Hopkins, J. T.; Tomlin, A. S.; Ma, L.; Ingham, D. B.; Pourkashanian, M.

    2013-03-01

    Detailed three-dimensional building data and a morphometric model are used to estimate the aerodynamic roughness length z0 and displacement height d over a major UK city (Leeds). Firstly, using an adaptive grid, the city is divided into neighbourhood regions that are each of a relatively consistent geometry throughout. Secondly, for each neighbourhood, a number of geometric parameters are calculated. Finally, these are used as input into a morphometric model that considers the influence of height variability to predict aerodynamic roughness length and displacement height. Predictions are compared with estimations made using standard tables of aerodynamic parameters. The comparison suggests that the accuracy of plan-area-density based tables is likely to be limited, and that height-based tables of aerodynamic parameters may be more accurate for UK cities. The displacement heights in the standard tables are shown to be lower than the current predictions. The importance of geometric details in determining z0 and d is then explored. Height variability is observed to greatly increase the predicted values. However, building footprint shape only has a significant influence upon the predictions when height variability is not considered. Finally, we develop simple relations to quantify the influence of height variation upon predicted z0 and d via the standard deviation of building heights. The difference in these predictions compared to the more complex approach highlights the importance of considering the specific shape of the building-height distributions. Collectively, these results suggest that to accurately predict aerodynamic parameters of real urban areas, height variability must be considered in detail, but it may be acceptable to make simple assumptions about building layout and footprint shape.
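
    The morphometric approach described above can be illustrated with the widely used formulation of Macdonald et al. (1998), which maps plan-area density and frontal-area density to d and z0. The sketch below implements that standard model as context; it is not the height-variability model developed in the paper, and the input values are invented:

```python
import math

def macdonald_z0_d(lambda_p, lambda_f, H, alpha=4.43, beta=1.0, Cd=1.2, kappa=0.4):
    """Morphometric estimate of displacement height d and roughness length z0.

    lambda_p : plan-area density (building plan area / total lot area)
    lambda_f : frontal-area density (windward facade area / total lot area)
    H        : average building height in metres
    """
    # Displacement height grows with plan-area density.
    d = H * (1.0 + alpha ** (-lambda_p) * (lambda_p - 1.0))
    # Roughness length depends on the frontal-area density via a drag balance.
    term = 0.5 * beta * (Cd / kappa ** 2) * (1.0 - d / H) * lambda_f
    z0 = H * (1.0 - d / H) * math.exp(-(term ** -0.5))
    return z0, d

# A moderately dense neighbourhood with 10 m mean building height:
z0, d = macdonald_z0_d(lambda_p=0.3, lambda_f=0.2, H=10.0)
print(round(z0, 2), round(d, 2))  # -> 0.8 5.52
```

    Note that this baseline formulation uses a single mean height H; the paper's point is precisely that adding the standard deviation of building heights changes the predicted z0 and d substantially.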

  7. Estimating RASATI scores using acoustical parameters

    International Nuclear Information System (INIS)

    Agüero, P D; Tulli, J C; Moscardi, G; Gonzalez, E L; Uriz, A J

    2011-01-01

    Computer-based acoustical analysis of speech has advanced considerably in recent years. The subjective evaluation of a clinician is complemented with objective measures of relevant voice parameters. Praat, MDVP (Multi Dimensional Voice Program) and SAV (Software for Voice Analysis) are some examples of software for speech analysis. This paper describes an approach to estimating the subjective ratings of the RASATI scale from objective acoustical parameters. Two approaches were used: linear regression with non-negativity constraints, and neural networks. The experiments show that this approach gives correct evaluations, to within ±1, in 80% of the cases.
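
    The first of the two estimators above, linear regression with non-negativity constraints, can be sketched as follows. The feature matrix and targets here are synthetic, and the solver is a simple projected-gradient iteration for illustration, not the implementation the authors used:

```python
import numpy as np

def nnls_projected_gradient(A, b, n_iter=2000):
    """Minimize ||A w - b||^2 subject to w >= 0 via projected gradient descent."""
    # Step size from the largest eigenvalue of A^T A (curvature bound of the objective).
    eta = 1.0 / np.linalg.eigvalsh(A.T @ A).max()
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the least-squares objective, then project onto w >= 0.
        w = np.maximum(0.0, w - eta * A.T @ (A @ w - b))
    return w

# Toy example: two acoustic features -> perceptual score, known non-negative solution.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([1.0, 2.0])
w = nnls_projected_gradient(A, b)
print(np.round(w, 3))  # close to [1. 2.]
```

    The non-negativity constraint keeps each acoustical parameter's contribution to the perceptual score interpretable as purely additive.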

  8. Gauge-invariant formulation of the S, T, and U parameters

    International Nuclear Information System (INIS)

    Degrassi, G.; Kniehl, B.A.; Sirlin, A.

    1993-06-01

    It is shown that the bosonic contributions to the S, T, and U parameters, defined in terms of conventional self-energies, are gauge dependent in the Standard Model (SM). Moreover, T and U are divergent unless a constraint is imposed among the gauge parameters. Implications of this result for renormalization schemes of the SM are discussed. A gauge-invariant formulation of S, T, and U is proposed in the pinch-technique framework. The modified S, T, and U parameters provide a gauge-invariant parametrization of leading electroweak radiative corrections in the SM and some of its extensions. (orig.)

  9. Catalogue of HI PArameters (CHIPA)

    Science.gov (United States)

    Saponara, J.; Benaglia, P.; Koribalski, B.; Andruchow, I.

    2015-08-01

    The Catalogue of HI PArameters (CHIPA) is the natural continuation of the compilation by M.C. Martin in 1998. CHIPA provides the most important parameters of nearby galaxies derived from observations of the neutral hydrogen line. The catalogue contains information on 1400 galaxies across the sky and of different morphological types. Parameters such as the optical diameter of the galaxy, the blue magnitude, the distance, the morphological type and the HI extension are listed, among others. Maps of the HI distribution, velocity and velocity dispersion can also be displayed in some cases. The main objective of this catalogue is to facilitate bibliographic queries through a database searchable over the internet, to become available in 2015 (the website is under construction). The database was built using MySQL, an open-source relational database management system based on SQL (Structured Query Language), while the website was built with HTML (Hypertext Markup Language) and PHP (Hypertext Preprocessor).

  10. Analysis of groundwater discharge with a lumped-parameter model, using a case study from Tajikistan

    Science.gov (United States)

    Pozdniakov, S. P.; Shestakov, V. M.

    A lumped-parameter model of groundwater balance is proposed that permits an estimate of discharge variability in comparison with the variability of recharge, by taking into account the influence of aquifer parameters. Recharge-discharge relationships are analysed with the model for cases of deterministic and stochastic recharge time-series variations. The model is applied to study the temporal variability of groundwater discharge in a river valley in the territory of Tajikistan, an independent republic in Central Asia. (The abstract is also provided in French and Spanish; both restate the English text.)

  11. Feasibility of using training cases from International Spinal Cord Injury Core Data Set for testing of International Standards for Neurological Classification of Spinal Cord Injury items

    DEFF Research Database (Denmark)

    Liu, N; Hu, Z W; Zhou, M W

    2014-01-01

    STUDY DESIGN: Descriptive comparison analysis. OBJECTIVE: To evaluate whether five training cases of International Spinal Cord Injury Core Data Set (ISCICDS) are appropriate for testing the facts within the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI...... include information about zone of partial preservation, sensory score or motor score. CONCLUSION: The majority of the facts related to SL, ML and AIS are included in the five training cases of ISCICDS. Thus, using these training cases, it is feasible to test the above facts within the ISNCSCI. It is suggested...

  12. Global Standards of Market Civilization

    DEFF Research Database (Denmark)

    Global Standards of Market Civilization brings together leading scholars, representing a range of political views, to investigate how global 'standards of market civilization' have emerged, their justification, and their political, economic and social impact. Key chapters show how as the modern...... thought, as well as its historical application part II presents original case studies that demonstrate the emergence of such standards and explore the diffusion of liberal capitalist ideas through the global political economy and the consequences for development and governance; the International Monetary...... Fund's capacity to formulate a global standard of civilization in its reform programs; and problems in the development of the global trade, including the issue of intellectual property rights. This book will be of strong interest to students and scholars in wide range of fields relating to the study...

  13. One-parameter families of supersymmetric isospectral potentials from Riccati solutions in function composition form

    Energy Technology Data Exchange (ETDEWEB)

    Rosu, Haret C., E-mail: hcr@ipicyt.edu.mx [IPICYT, Instituto Potosino de Investigacion Cientifica y Tecnologica, Camino a la presa San José 2055, Col. Lomas 4a Sección, 78216 San Luis Potosí, S.L.P. (Mexico); Mancas, Stefan C., E-mail: mancass@erau.edu [Department of Mathematics, Embry–Riddle Aeronautical University, Daytona Beach, FL 32114-3900 (United States); Chen, Pisin, E-mail: pisinchen@phys.ntu.edu.tw [Leung Center for Cosmology and Particle Astrophysics (LeCosPA) and Department of Physics, National Taiwan University, Taipei 10617, Taiwan (China)

    2014-04-15

    In the context of supersymmetric quantum mechanics, we define a potential through a particular Riccati solution of the composition form (F∘f)(x)=F(f(x)) and obtain a generalized Mielnik construction of one-parameter isospectral potentials when we use the general Riccati solution. Some examples for special cases of F and f are given to illustrate the method. An interesting result is obtained in the case of a parametric double well potential generated by this method, for which it is shown that the parameter of the potential controls the heights of the localization probability in the two wells, and for certain values of the parameter the height of the localization probability can be higher in the smaller well. -- Highlights: •Function-composition generalization of parametric isospectral potentials is presented. •Mielnik one-parameter family of harmonic potentials is obtained as a particular case. •Graphical discussion of regular and singular regions in the parameter space is given.
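
    For reference, the one-parameter (Mielnik-type) isospectral family mentioned above has a standard closed form. The expression below is quoted from the general supersymmetric quantum mechanics literature as context; it is not the paper's specific F∘f parametrization:

```latex
% Let V(x) have normalized ground state \psi_0(x), and define
% I(x) = \int_{-\infty}^{x} \psi_0^2(t)\,dt .
% For every real \lambda outside the interval [-1, 0], the family
\widetilde{V}(x;\lambda) = V(x) - 2\,\frac{d^{2}}{dx^{2}}\,\ln\!\bigl(I(x) + \lambda\bigr)
% is strictly isospectral to V(x), with normalizable ground state
\widetilde{\psi}_0(x;\lambda) = \sqrt{\lambda(\lambda+1)}\,\frac{\psi_0(x)}{I(x) + \lambda}.
```

    Varying λ deforms the shape of the potential (and hence the localization probability in each well of a double well) while leaving the entire spectrum unchanged.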

  14. Optimum parameters controlling distortion and noise of ...

    Indian Academy of Sciences (India)

    ALAA MAHMOUD

    2018-04-03

    Apr 3, 2018 ... modulation conditions and design parameters of the .... First the SL is modulated under the simple case of two- channel .... spans (channel 2: 55.25 ∼ channel 80: 559.25 MHz) at ..... bridge University Press, New York, 2004).

  15. RESEND, Infinitely Dilute Point Cross-Sections Calculation from ENDF/B Resonance Parameter. ADLER, ENDF/B Adler-Adler Resonance Parameter to Point Cross-Sections with Doppler Broadening

    International Nuclear Information System (INIS)

    Bhat, M.R.; Ozer, O.

    1982-01-01

    1 - Description of problem or function: RESEND generates infinitely-dilute, un-broadened, point cross sections in the ENDF format by combining ENDF File 3 background cross sections with points calculated from ENDF File 2 resonance parameter data. ADLER calculates total, capture, and fission cross sections from the corresponding Adler-Adler parameters in the ENDF/B File 2 Version II data and also Doppler-broadens cross sections. 2 - Method of solution: RESEND calculations are done in two steps by two separate sections of the program. The first section does the resonance calculation and stores the results on a scratch file. The second section combines the data from the scratch file with background cross sections and prints the results. ADLER uses the Adler-Adler formalism. 3 - Restrictions on the complexity of the problem: RESEND expects its input to be a standard mode BCD ENDF file (Version II/III). Since the output is also a standard mode BCD ENDF file, the program is limited by the six significant figure accuracy inherent in the ENDF formats. (If the cross section has been calculated at two points so close in energy that only their least significant figures differ, that interval is assumed to have converged, even if other convergence criteria may not be satisfied.) In the unresolved range the cross sections have been averaged over a Porter-Thomas distribution. In some regions the calculated resonance cross sections may be negative. In such cases the standard convergence criterion would cause an unnecessarily large number of points to be produced in the region where the cross section becomes zero. For this reason an additional input convergence criterion (AVERR) may be used. If the absolute value of the cross section at both ends of an interval is determined to be less than AVERR then the interval is assumed to have converged. There are no limitations on the total number of points generated. The present ENDF (Version II/III) formats restrict the total number of
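
    The adaptive pointwise reconstruction with the AVERR rule described above can be sketched as a recursive subdivision loop. This is a hypothetical reconstruction for illustration; the function, tolerance test and variable names are assumptions, not the RESEND source:

```python
def linearize(f, e1, e2, tol=0.01, averr=0.0, depth=0, max_depth=30):
    """Adaptively subdivide [e1, e2] until linear interpolation of f is adequate.

    An interval is accepted when the midpoint value agrees with linear
    interpolation to within `tol`, or (the AVERR rule) when |f| at both
    endpoints is below `averr`, avoiding needless points where the
    cross section is effectively zero.
    """
    if (abs(f(e1)) < averr and abs(f(e2)) < averr) or depth >= max_depth:
        return [(e1, f(e1)), (e2, f(e2))]
    em = 0.5 * (e1 + e2)
    if abs(f(em) - 0.5 * (f(e1) + f(e2))) <= tol:   # midpoint vs. linear interpolation
        return [(e1, f(e1)), (e2, f(e2))]
    left = linearize(f, e1, em, tol, averr, depth + 1, max_depth)
    right = linearize(f, em, e2, tol, averr, depth + 1, max_depth)
    return left + right[1:]                          # drop the duplicated midpoint

pts = linearize(lambda e: e * e, 0.0, 1.0, tol=0.01)
print(len(pts))  # -> 9: the quadratic is split into 8 panels for 0.01 accuracy
```

    With `averr` set, an interval whose endpoint values are both below the threshold is accepted immediately, which is exactly the behaviour the abstract motivates for regions where the resonance cross section oscillates around zero.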

  16. Mechanical Testing of Polymeric Composites for Aircraft Applications: Standards, Requirements and Limitations

    Science.gov (United States)

    Chinchan, Levon; Shevtsov, Sergey; Soloviev, Arcady; Shevtsova, Varvara; Huang, Jiun-Ping

    The highly loaded parts of modern aircraft and helicopters are often produced from polymeric composite materials. Such materials consist of reinforcing fibers, packed in layers at different angles, and resin, which distributes the structural stresses uniformly between fibers. These composites should have an orthotropic symmetry of mechanical properties to obtain the desired spatial distribution of elastic moduli, consistent with the external loading pattern. The main requirements for aircraft composite materials are the specified elastic properties (9 for an orthotropic composite), long-term strength parameters, high resistance against environmental influences, and low thermal expansion to maintain shape stability. These properties are ensured by exact implementation of the technological conditions and by many testing procedures performed on the fibers, resin, prepregs and finished components. The most important mechanical testing procedures are defined by ASTM, SACMA and other standards. However, in each case the wide diversity of components (dimensions and lay-up of fibers, rheological properties of thermosetting resins) requires a specific approach to sample preparation, testing, and numerical processing of the test results to obtain the true values of the tested parameters. We pay special attention to cases where the tested specimens are cut not from the plates recommended by standards, but from the finished part manufactured with its specific lay-up, tension forces on the reinforcing fiber at filament winding, and curing schedule. These tests can provide the most useful information both for composite structural design and for estimating the quality of the finished parts. We consider the influence of the relation between specimen dimensions and the pattern of fiber winding (or lay-up) on the results of mechanical testing for determination of the longitudinal, transverse and in-plane shear moduli, and an original numerical scheme for reconstruction of in-plane shear

  17. A Comparison of Higher-Order Thinking between the Common Core State Standards and the 2009 New Jersey Content Standards in High School

    Science.gov (United States)

    Sforza, Dario; Tienken, Christopher H.; Kim, Eunyoung

    2016-01-01

    The creators and supporters of the Common Core State Standards claim that the Standards require greater emphasis on higher-order thinking than previous state standards in mathematics and English language arts. We used a qualitative case study design with content analysis methods to test the claim. We compared the levels of thinking required by the…

  18. American College of Radiology-American Brachytherapy Society practice parameter for electronically generated low-energy radiation sources.

    Science.gov (United States)

    Devlin, Phillip M; Gaspar, Laurie E; Buzurovic, Ivan; Demanes, D Jeffrey; Kasper, Michael E; Nag, Subir; Ouhib, Zoubir; Petit, Joshua H; Rosenthal, Seth A; Small, William; Wallner, Paul E; Hartford, Alan C

    This collaborative practice parameter technical standard has been created between the American College of Radiology and American Brachytherapy Society to guide the usage of electronically generated low energy radiation sources (ELSs). It refers to the use of electronic X-ray sources with peak voltages up to 120 kVp to deliver therapeutic radiation therapy. The parameter provides a guideline for utilizing ELS, including patient selection and consent, treatment planning, and delivery processes. The parameter reviews the published clinical data with regard to ELS results in skin, breast, and other cancers. This technical standard recommends appropriate qualifications of the involved personnel. The parameter reviews the technical issues relating to equipment specifications as well as patient and personnel safety. Regarding educational programs, the parameter suggests that the training level for clinicians be equivalent to that for other radiation therapies. It also suggests that ELS treatment must be delivered using the same standards of quality and safety as those in place for other forms of radiation therapy. Copyright © 2017 American Brachytherapy Society and American College of Radiology. Published by Elsevier Inc. All rights reserved.

  19. Work Incapacity and Treatment Costs After Severe Accidents: Standard Versus Intensive Case Management in a 6-Year Randomized Controlled Trial.

    Science.gov (United States)

    Scholz, Stefan M; Andermatt, Peter; Tobler, Benno L; Spinnler, Dieter

    2016-09-01

    Purpose: Case management is widely accepted as an effective method to support medical rehabilitation and vocational reintegration of accident victims with musculoskeletal injuries. This study investigates whether more intensive case management improves outcomes such as work incapacity and treatment costs for severely injured patients. Methods: 8,050 patients were randomly allocated either to standard case management (SCM, administered by claims specialists) or intensive case management (ICM, administered by case managers). These study groups differ mainly by caseload, which was approximately 100 cases in SCM and 35 in ICM. The setting is equivalent to a prospective randomized controlled trial. A 6-year follow-up period was chosen in order to encompass both short-term insurance benefits and permanent disability costs. All data were extracted from administrative insurance databases. Results: Average work incapacity over the 6-year follow-up, including contributions from daily allowances and permanent losses from disability, was slightly but insignificantly higher under ICM than under SCM (21.6 vs. 21.3 % of pre-accident work capacity). Remaining work incapacity after 6 years of follow-up showed no difference between ICM and SCM (8.9 vs. 8.8 % of pre-accident work incapacity). Treatment costs were 43,500 Swiss Francs (CHF) in ICM compared to 39,800 in SCM (+9.4 %, p = 0.01). The number of care providers involved was 10.5 in ICM compared to 10.0 in SCM (+5.0 %, p accident victims.

  20. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables, and the problems of approximating the binomial distribution to normality (the limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in terms of contingency-table cells, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information from the computed confidence interval for a specified method (the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation for graphical representation, producing quadrilateral rather than triangular surface maps. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All the implementations described above were used to compute the confidence intervals and to estimate their performance across binomial distribution sample sizes and variables.
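
    As an illustration of the kind of computation involved, here is a generic sketch using the textbook Wald and Wilson intervals for a binomial proportion. These are standard formulas, not the paper's original PHP algorithms:

```python
import math

def wald_ci(x, n, z=1.96):
    """Wald (normal-approximation) confidence interval for a binomial proportion."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(x, n, z=1.96):
    """Wilson score interval: better behaved near 0, near 1, and for small n."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 20 successes in 100 trials, 95% confidence:
print([round(v, 3) for v in wald_ci(20, 100)])    # -> [0.122, 0.278]
print([round(v, 3) for v in wilson_ci(20, 100)])  # -> [0.133, 0.289]
```

    The divergence between the two intervals grows near the boundaries of [0, 1], which is exactly where the normal approximation the paper discusses breaks down.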

  1. Halogens determination in vegetable NBS standard reference materials

    International Nuclear Information System (INIS)

    Stella, R.; Genova, N.; Di Casa, M.

    1977-01-01

    Levels of all four halogens in the Orchard Leaves, Pine Needles and Tomato Leaves NBS reference standards were determined. For fluorine, a spiking isotope dilution method was used, followed by HF absorption on glass beads. Instrumental nuclear activation analysis was adopted for the chlorine and bromine determinations. Radiochemical separation by a distillation procedure was necessary for the iodine nuclear activation analysis after irradiation. Activation parameters of Cl, Br and I are reported. Results of five determinations for each halogen in the Orchard Leaves, Pine Needles and Tomato Leaves NBS standard materials, and the standard deviations of the mean, are reported. (T.I.)

  2. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    Science.gov (United States)

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important, but highly complicated and laborious task. To compare and evaluate different strategies a framework with a standardized web-interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. Accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web-API is feasible. This framework can be easily enhanced due to its modular design.
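
    To illustrate the standardized interface described above, a hypothetical exchange might look as follows. The path, parameter names and response fields are invented for illustration (the paper does not publish its exact API schema); the concept shown uses the well-known UMLS CUI for myocardial infarction. A request such as `GET /map?term=myocardial+infarction&mapper=similarity` might return:

```json
{
  "term": "myocardial infarction",
  "mapper": "similarity",
  "results": [
    { "cui": "C0027051", "label": "Myocardial Infarction", "score": 0.97 }
  ]
}
```

    Because every mapper behind the framework answers over the same HTTP-plus-JSON contract, strategies can be swapped or compared without changing the client.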

  3. On entanglement of light and Stokes parameters

    International Nuclear Information System (INIS)

    Żukowski, Marek; Laskowski, Wiesław; Wieśniak, Marcin

    2016-01-01

    We present a new approach to Stokes parameters, which enables one to see better the non-classical properties of bright quantum light, and of undefined overall photon numbers. The crucial difference is as follows. The standard quantum optical Stokes parameters are averages of differences of intensities of light registered at the two exits of polarization analyzers, and one gets their normalized version by dividing them by the average total intensity. The new ones are averages of the registered normalized Stokes parameters, for the duration of the experiment. That is, we redefine each Stokes observable as the difference of photon number operators at the two exits of a polarizing beam splitter multiplied by the inverse of their sum. The vacuum eigenvalue of the operator is defined as zero. We show that with such an approach one can obtain more sensitive entanglement indicators based on polarization measurements. (paper)

  4. On entanglement of light and Stokes parameters

    Science.gov (United States)

    Żukowski, Marek; Laskowski, Wiesław; Wieśniak, Marcin

    2016-08-01

    We present a new approach to Stokes parameters, which enables one to see better the non-classical properties of bright quantum light, and of undefined overall photon numbers. The crucial difference is as follows. The standard quantum optical Stokes parameters are averages of differences of intensities of light registered at the two exits of polarization analyzers, and one gets their normalized version by dividing them by the average total intensity. The new ones are averages of the registered normalized Stokes parameters, for the duration of the experiment. That is, we redefine each Stokes observable as the difference of photon number operators at the two exits of a polarizing beam splitter multiplied by the inverse of their sum. The vacuum eigenvalue of the operator is defined as zero. We show that with such an approach one can obtain more sensitive entanglement indicators based on polarization measurements.
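
    The redefinition described above amounts to averaging per-run ratios instead of taking a ratio of summed counts. A minimal numerical sketch with synthetic photon counts at the two analyzer exits (purely illustrative):

```python
def standard_stokes(counts):
    """Ratio of averages: sum of (n1 - n2) divided by sum of (n1 + n2) over runs."""
    num = sum(n1 - n2 for n1, n2 in counts)
    den = sum(n1 + n2 for n1, n2 in counts)
    return num / den

def normalized_stokes(counts):
    """Average of per-run ratios (n1 - n2)/(n1 + n2); vacuum events count as 0."""
    vals = [(n1 - n2) / (n1 + n2) if n1 + n2 > 0 else 0.0 for n1, n2 in counts]
    return sum(vals) / len(vals)

# Two runs with very different total intensity at the polarizing beam splitter:
counts = [(90, 10), (1, 3)]
print(round(standard_stokes(counts), 3))    # -> 0.75, dominated by the bright run
print(round(normalized_stokes(counts), 3))  # -> 0.15, each run weighted equally
```

    The contrast between the two values shows why the per-run normalization is sensitive to correlations that the intensity-weighted average washes out.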

  5. Beyond the Standard Model

    International Nuclear Information System (INIS)

    Lykken, Joseph D.

    2010-01-01

    'BSM physics' is a phrase used in several ways. It can refer to physical phenomena established experimentally but not accommodated by the Standard Model, in particular dark matter and neutrino oscillations (technically also anything that has to do with gravity, since gravity is not part of the Standard Model). 'Beyond the Standard Model' can also refer to possible deeper explanations of phenomena that are accommodated by the Standard Model but only with ad hoc parameterizations, such as Yukawa couplings and the strong CP angle. More generally, BSM can be taken to refer to any possible extension of the Standard Model, whether or not the extension solves any particular set of puzzles left unresolved in the SM. In this general sense one sees reference to the BSM 'theory space' of all possible SM extensions, this being a parameter space of coupling constants for new interactions, new charges or other quantum numbers, and parameters describing possible new degrees of freedom or new symmetries. Despite decades of model-building it seems unlikely that we have mapped out most of, or even the most interesting parts of, this theory space. Indeed we do not even know what is the dimensionality of this parameter space, or what fraction of it is already ruled out by experiment. Since Nature is only implementing at most one point in this BSM theory space (at least in our neighborhood of space and time), it might seem an impossible task to map back from a finite number of experimental discoveries and measurements to a unique BSM explanation. Fortunately for theorists the inevitable limitations of experiments themselves, in terms of resolutions, rates, and energy scales, means that in practice there are only a finite number of BSM model 'equivalence classes' competing at any given time to explain any given set of results. 
BSM phenomenology is a two-way street: not only do experimental results test or constrain BSM models, they also suggest - to those who get close enough to listen

  6. Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Lykken, Joseph D.; /Fermilab

    2010-05-01

    'BSM physics' is a phrase used in several ways. It can refer to physical phenomena established experimentally but not accommodated by the Standard Model, in particular dark matter and neutrino oscillations (technically also anything that has to do with gravity, since gravity is not part of the Standard Model). 'Beyond the Standard Model' can also refer to possible deeper explanations of phenomena that are accommodated by the Standard Model but only with ad hoc parameterizations, such as Yukawa couplings and the strong CP angle. More generally, BSM can be taken to refer to any possible extension of the Standard Model, whether or not the extension solves any particular set of puzzles left unresolved in the SM. In this general sense one sees reference to the BSM 'theory space' of all possible SM extensions, this being a parameter space of coupling constants for new interactions, new charges or other quantum numbers, and parameters describing possible new degrees of freedom or new symmetries. Despite decades of model-building it seems unlikely that we have mapped out most of, or even the most interesting parts of, this theory space. Indeed we do not even know what is the dimensionality of this parameter space, or what fraction of it is already ruled out by experiment. Since Nature is only implementing at most one point in this BSM theory space (at least in our neighborhood of space and time), it might seem an impossible task to map back from a finite number of experimental discoveries and measurements to a unique BSM explanation. Fortunately for theorists the inevitable limitations of experiments themselves, in terms of resolutions, rates, and energy scales, means that in practice there are only a finite number of BSM model 'equivalence classes' competing at any given time to explain any given set of results. BSM phenomenology is a two-way street: not only do experimental results test or constrain BSM models, they also suggest

  7. Standardized Interpretation of Chest Radiographs in Cases of Pediatric Pneumonia From the PERCH Study.

    Science.gov (United States)

    Fancourt, Nicholas; Deloria Knoll, Maria; Barger-Kamate, Breanna; de Campo, John; de Campo, Margaret; Diallo, Mahamadou; Ebruke, Bernard E; Feikin, Daniel R; Gleeson, Fergus; Gong, Wenfeng; Hammitt, Laura L; Izadnegahdar, Rasa; Kruatrachue, Anchalee; Madhi, Shabir A; Manduku, Veronica; Matin, Fariha Bushra; Mahomed, Nasreen; Moore, David P; Mwenechanya, Musaku; Nahar, Kamrun; Oluwalana, Claire; Ominde, Micah Silaba; Prosperi, Christine; Sande, Joyce; Suntarattiwong, Piyarat; O'Brien, Katherine L

    2017-06-15

    Chest radiographs (CXRs) are a valuable diagnostic tool in epidemiologic studies of pneumonia. The World Health Organization (WHO) methodology for the interpretation of pediatric CXRs has not been evaluated beyond its intended application as an endpoint measure for bacterial vaccine trials. The Pneumonia Etiology Research for Child Health (PERCH) study enrolled children aged 1-59 months hospitalized with WHO-defined severe and very severe pneumonia from 7 low- and middle-income countries. An interpretation process categorized each CXR into 1 of 5 conclusions: consolidation, other infiltrate, both consolidation and other infiltrate, normal, or uninterpretable. Two members of a 14-person reading panel, who had undertaken training and standardization in CXR interpretation, interpreted each CXR. Two members of an arbitration panel provided additional independent reviews of CXRs with discordant interpretations at the primary reading, blinded to previous reports. Further discordance was resolved with consensus discussion. A total of 4172 CXRs were obtained from 4232 cases. Observed agreement for detecting consolidation (with or without other infiltrate) between primary readers was 78% (κ = 0.50) and between arbitrators was 84% (κ = 0.61); agreement for primary readers and arbitrators across 5 conclusion categories was 43.5% (κ = 0.25) and 48.5% (κ = 0.32), respectively. Disagreement was most frequent between conclusions of other infiltrate and normal for both the reading panel and the arbitration panel (32% and 30% of discordant CXRs, respectively). Agreement was similar to that of previous evaluations using the WHO methodology for detecting consolidation, but poor for other infiltrates despite attempts at a rigorous standardization process. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
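
    The agreement figures quoted above use Cohen's kappa, defined as κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A minimal sketch with a hypothetical 2x2 reader-agreement table (illustrative counts, not the PERCH data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square inter-rater agreement table of counts."""
    k = len(table)
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(k)) / n                 # observed agreement
    row = [sum(r) for r in table]
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row[i] * col[i] for i in range(k)) / n ** 2        # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts: rows = reader 1, columns = reader 2 (consolidation yes/no).
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 2))  # -> 0.4
```

    Kappa discounts the agreement two readers would reach by chance alone, which is why the abstract can report high raw agreement (78%) alongside only moderate kappa (0.50).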

  8. Structural parameter identifiability analysis for dynamic reaction networks

    DEFF Research Database (Denmark)

    Davidescu, Florin Paul; Jørgensen, Sten Bay

    2008-01-01

    This contribution addresses the structural parameter identifiability problem for the typical case of reaction network models, where for a given set of measured variables it is desirable to investigate which parameters may be estimated prior to spending computational effort on the actual estimation. The proposed analysis is performed in two phases. The first phase determines the structurally identifiable reaction rates based on reaction network stoichiometry. The second phase assesses the structural parameter identifiability of the specific kinetic rate expressions using a generating series expansion method based on Lie derivatives. The proposed systematic two-phase methodology is illustrated on a mass action based model for an enzymatically catalyzed reaction pathway network where only a limited set of variables is measured. The methodology clearly pinpoints the structurally identifiable parameters.

  9. Application of function-oriented roughness parameters using confocal microscopy

    Directory of Open Access Journals (Sweden)

    K. Klauer

    2018-06-01

    Full Text Available Optical measuring instruments are widely used for the functional characterization of surface topography. However, due to the interaction of the surface with the incident light, effects occur that can influence the measured topography height values and the obtained surface texture parameters. Therefore, we describe a systematic investigation of the influences of optical surface topography measurement on the acquisition of function-oriented roughness parameters. The same evaluation areas of varying cylinder liners which represent a typical application of function-oriented roughness parameters were measured with a confocal microscope and a stylus instrument. Functional surface texture parameters as given in the standards ISO 13565–2, ISO 13565–3 and ISO 25178–2 were evaluated for both measurement methods and compared. The transmission of specific surface features was described and a correlation analysis for the surface topographies obtained with the different measurement methods and their resulting functional roughness parameters was carried out. Keywords: Functional surface characterization, Optical metrology, Topography measurement, Roughness

  10. The generalized cosmic equation of state. A revised study with cosmological standard rulers

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Yubo [Shanxi Datong University, School of Physics, Datong (China); Zhang, Jia [Weinan Normal University, Department of Physics, School of Mathematics and Physics, Weinan, Shanxi (China); Cao, Shuo; Zheng, Xiaogang; Xu, Tengpeng; Qi, Jingzhao [Beijing Normal University, Department of Astronomy, Beijing (China)

    2017-12-15

    In this paper, the generalized equation of state (GEoS) for dark energy, w(z) = w0 - wβ[(1+z)^(-β) - 1]/β, is investigated with the combined standard ruler data from observations of intermediate-luminosity radio quasars, galaxy clusters, BAO and CMB. The constraint results show that the best-fit EoS parameters are w0 = -0.94 (+0.57/-0.41), wβ = -0.17 (+2.45/-4.81) and β = -1.42 (with a lower limit of β > -2.70 at 68.3% C.L.), which implies that at early times the dark energy vanishes. In the framework of nine truncated GEoS models with different β parameters, our findings present very clear evidence disfavoring the case that dark energy always dominates over the other material components in the early universe. Moreover, more stringent constraints can be obtained in combination with the latest measurements of the Hubble parameter at different redshifts: w0 = -1.01 (+0.56/-0.31), wβ = 0.01 (+2.33/-4.52) and β = -0.42 (with a lower limit of β > -2.40 at 68.3% C.L.). Finally, the results obtained from the transition redshift (z_t) and the Om(z) diagnostic indicate that: (1) the above constraints on the GEoS model agree very well with the transition redshift interval 0.49 ≤ z_t ≤ 0.88 within the 1σ error region; (2) at the current observational level, the GEoS model is practically indistinguishable from ΛCDM, although a small deviation from ΛCDM cosmology is present in the combined standard ruler data. (orig.)
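
The GEoS parametrization quoted in the abstract is straightforward to evaluate numerically. A small sketch using the best-fit values as defaults; the explicit β → 0 branch (which recovers a logarithmic EoS) is my addition for numerical safety, not part of the abstract:

```python
import math

def w_geos(z, w0=-0.94, wb=-0.17, beta=-1.42):
    """Generalized dark-energy EoS w(z) = w0 - wb*((1+z)**(-beta) - 1)/beta.

    Defaults are the best-fit values quoted in the abstract. Note w(0) = w0,
    and LambdaCDM corresponds to w(z) = -1 for all z.
    """
    if abs(beta) < 1e-12:
        # Limiting form as beta -> 0: w0 + wb*ln(1+z)
        return w0 + wb * math.log(1.0 + z)
    return w0 - wb * ((1.0 + z) ** (-beta) - 1.0) / beta
```

At z = 0 this returns w0 = -0.94 for the quoted best fit, and the behavior at large z (for β < 0) drives w upward, consistent with the abstract's statement that dark energy vanishes at early times.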

  11. Quantitative x-ray fluorescent analysis using fundamental parameters

    International Nuclear Information System (INIS)

    Sparks, C.J. Jr.

    1976-01-01

    A monochromatic source of x-rays for sample excitation permits the use of pure elemental standards and relatively simple calculations to convert the measured fluorescent intensities to an absolute basis of weight per unit weight of sample. Only the mass absorption coefficients of the sample for the exciting and the fluorescent radiation need be determined. Besides the direct measurement of these absorption coefficients in the sample, other techniques are considered which require fewer sample manipulations and measurements. These fundamental parameters methods permit quantitative analysis without recourse to the time-consuming process of preparing nearly identical standards

  12. [Cardiac safety of electroconvulsive therapy in an elderly patient--a case report].

    Science.gov (United States)

    Karakuła-Juchnowicz, Hanna; Próchnicki, Michał; Kiciński, Paweł; Olajossy, Marcin; Pelczarska-Jamroga, Agnieszka; Dzikowski, Michał; Jaroszyński, Andrzej

    2015-10-01

    Since electroconvulsive therapy (ECT) was introduced as treatment for psychiatric disorders in 1938, it has remained one of the most effective therapeutic methods. ECT is often used as a "treatment of last resort" when other methods fail, and a life-saving procedure in acute clinical states when a rapid therapeutic effect is needed. Mortality associated with ECT is lower, compared to the treatment with tricyclic antidepressants, and comparable to that observed in so-called minor surgery. In the literature, cases of effective and safe electroconvulsive therapy have been described in patients of advanced age, with a burden of many somatic disorders. However, cases of acute cardiac episodes have also been reported during ECT. The qualification of patients for ECT and the selection of a group of patients at the highest risk of cardiovascular complications remains a serious clinical problem. An assessment of the predictive value of parameters of standard electrocardiogram (ECG), which is a simple, cheap and easily available procedure, deserves special attention. This paper reports a case of a 74-year-old male patient treated with ECT for a severe depressive episode, in the context of cardiologic safety. Both every single ECT session and the full course were assessed to examine their impact on levels of troponin T, which is a basic marker of cardiac damage, and selected ECG parameters (QTc, QRS). In the presented case ECT demonstrated its high general and cardiac safety with no negative effect on cardiac troponin (TnT) levels, corrected QT interval (QTc) duration, or other measured ECG parameters despite initially increased troponin levels, the patient's advanced age, the burden of a severe somatic disease and its treatment (anticancer therapy). © 2015 MEDPRESS.

  13. Analysis the Appropriate using Standard Costing Applying in Land Cost Component of Real Estate Development Activities: A Case Study of PT Subur Agung

    Directory of Open Access Journals (Sweden)

    Elfrida Yanti

    2011-05-01

    Full Text Available Standard costing is generally used in manufacturing businesses, where direct material, direct labor, and factory overhead are clearly allocated. In the real estate business, PT Subur Agung in this case uses standard costs based on three cost categories, raw land, land improvement and interest expense, instead of direct material, direct labor and overhead. The developer uses these costs to predict the project cost and estimate the pre-selling price; in accordance with the cost estimation classification matrix, the variance falls within the expected accuracy range, as tested by the variance percentage between standard cost and actual cost. Additional similar projects in PT Subur Agung also follow the same scope. All this evidence supports the appropriateness of using standard costing for the land cost component of real estate development activities; how it is applied is analyzed for this particular project using descriptive and exploratory methods. The analysis starts by establishing the conceptual situation of PT Subur Agung, and the data are presented in tables and calculations with detailed explanation.

  14. Determination of a PWR key neutron parameters uncertainties and conformity studies applications

    International Nuclear Information System (INIS)

    Bernard, D.

    2002-01-01

    The aim of this thesis was to evaluate uncertainties of key neutron parameters of slab reactors. These uncertainties have several origins: technological, for fabrication parameters, and physical, for nuclear data. First, each contribution to the uncertainty is calculated; finally, an uncertainty factor is associated with each key slab parameter, such as reactivity, isothermal reactivity coefficient, control rod efficiency, power form factor before irradiation, and lifetime. These uncertainty factors were computed by Generalized Perturbation Theory at step 0 and by direct calculations for irradiation problems. One application of neutronic conformity concerned the adjustment of precision targets for fabrication and nuclear data. Statistical (uncertainties) and deterministic (deviations) approaches were studied. The uncertainties of the key slab neutron parameters were thereby reduced, and nuclear performances were optimised. (author)

  15. An overview of reactor physics standards: Past, present and future

    International Nuclear Information System (INIS)

    Cokinos, D.M.

    1992-07-01

    This report discusses standards for determining key static reactor physics parameters which have been developed by groups of experts (working groups) under the aegis of ANS-19, the ANS Reactor Physics Standards Committee. Following a series of sequential reviews, augmented by feedback from potential users, a proposed standard is brought into final form by the working group before it is adopted as a formal standard by the American National Standards Institute (ANSI). Reactor physics standards are intended to provide guidance in the performance and qualification of complex sequences of reactor calculations and/or measurements and are regularly reviewed for possible updates and/or revisions. The reactor physics standards developed to date are listed, and standards now being developed by the respective working groups are also provided.

  16. Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case

    OpenAIRE

    Koike, Ken-ichi

    2007-01-01

    For a location-scale parameter family of distributions with a finite support, a sequential confidence interval with a fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also done.

  17. ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION

    International Nuclear Information System (INIS)

    Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki

    2017-01-01

    The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.

  18. ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION

    Energy Technology Data Exchange (ETDEWEB)

    Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208 (United States)

    2017-01-10

    The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.

  19. Standardization and optimization of arthropod inventories-the case of Iberian spiders

    DEFF Research Database (Denmark)

    Bondoso Cardoso, Pedro Miguel

    2009-01-01

    Standardization and optimization of sampling protocols are especially needed for mega-diverse arthropod taxa. This study had two objectives: (1) to propose guidelines and statistical methods to improve the standardization and optimization of arthropod inventories, and (2) to propose a standardized and optimized protocol for Iberian spiders by finding common results between the optimal options for the different sites. The steps listed were successfully followed in the determination of a sampling protocol for Iberian spiders. A protocol with three sub-protocols of varying degrees of effort (24, 96 and 320 h of sampling) is proposed.

  20. Two-loop corrections to the ρ parameter in Two-Higgs-Doublet models

    Energy Technology Data Exchange (ETDEWEB)

    Hessenberger, Stephan; Hollik, Wolfgang [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Muenchen (Germany)

    2017-03-15

    Models with two scalar doublets are among the simplest extensions of the Standard Model which fulfill the relation ρ = 1 at lowest order for the ρ parameter, as favored by experimental data for electroweak observables allowing only small deviations from unity. Such small deviations Δρ originate exclusively from quantum effects, with special sensitivity to mass splittings between different isospin components of fermions and scalars. In this paper the dominant two-loop electroweak corrections to Δρ are calculated in the CP-conserving THDM, resulting from the top-Yukawa coupling and the self-couplings of the Higgs bosons in the gauge-less limit. The on-shell renormalization scheme is applied. With the assumption that one of the CP-even neutral scalars represents the scalar boson observed by the LHC experiments, with standard properties, the two-loop non-standard contributions in Δρ can be separated from the standard ones. These contributions are of particular interest since they increase with mass splittings between non-standard Higgs bosons and can be additionally enhanced by tanβ and λ_5, an additional free coefficient of the Higgs potential, and can thus modify the one-loop result substantially. Numerical results are given for the dependence on the various non-standard parameters, and the influence on the calculation of electroweak precision observables is discussed. (orig.)

  1. Sexual Dimorphism and Estimation of Height from Body Length Anthropometric Parameters among the Hausa Ethnic Group of Nigeria

    Directory of Open Access Journals (Sweden)

    Jaafar Aliyu

    2018-01-01

    Full Text Available This study was carried out to investigate sexual dimorphism in body length and other anthropometric parameters, and to generate formulae for height estimation using anthropometric measurements of some length parameters among the Hausa ethnic group of Kaduna State, Nigeria. A cross-sectional study was conducted with a total of 500 subjects, mainly secondary school students between the ages of 16 and 27 years; anthropometric measurements were obtained using standard protocols. Significant sexual dimorphism was observed in all parameters except body mass index. For all parameters, males tended to have significantly (P < 0.05) higher mean values, except biaxillary distance. Height showed the strongest positive correlation with demispan length, followed by knee height, thigh length, sitting height, hand length, foot length, humeral length, forearm length and weight, respectively. There were weak positive correlations between height and neck length as well as biaxillary length. Demispan length showed the strongest correlation coefficient and the lowest standard error of estimate, indicating stronger estimation ability than the other parameters. Combining two parameters tended to give better estimates with lower standard errors of estimate, and combining three parameters gave still better estimates with a lower standard error of estimate; better correlation coefficients were also observed with the double and triple parameters, respectively. Male Hausa tend to have larger body proportions than females. Height showed the strongest positive correlation with demispan length. Body length anthropometry proved useful for the estimation of stature among the Hausa ethnic group of Kaduna State, Nigeria.
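
Height-estimation formulae of this kind are typically simple least-squares regressions of height on a length parameter. The abstract does not give the fitted coefficients, so the sketch below shows only the method, with wholly hypothetical demispan/height pairs; the reported standard error of estimate corresponds to `see` here:

```python
def fit_simple_ols(x, y):
    """Least-squares fit y = a + b*x; returns intercept a, slope b, and the
    standard error of estimate (root mean square residual with n-2 d.o.f.)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    see = (sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)) ** 0.5
    return a, b, see

# Hypothetical demispan (cm) and height (cm) pairs, for illustration only
demispan = [78.0, 80.0, 82.0, 84.0, 86.0]
height = [160.0, 164.0, 168.0, 172.0, 176.0]
a, b, see = fit_simple_ols(demispan, height)
```

With the synthetic data above, the fit recovers height = 4 + 2 x demispan exactly; on real data a nonzero `see` quantifies the estimation error the abstract refers to.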

  2. Calibration of discrete element model parameters: soybeans

    Science.gov (United States)

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

    Discrete element method (DEM) simulations are broadly used to get an insight of flow characteristics of granular materials in complex particulate systems. DEM input parameters for a model are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for Hertz-Mindlin model using soybeans as a granular material. To achieve this aim, widely acceptable calibration approach was used having standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retaining the acrylic wall, and angle of repose of experiments and numerical simulations were compared to get the parameters. The calibrated set of DEM input parameters includes the following (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3} ), and (b) interaction parameters such as particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08), and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle scale mechanics (grain commingling, flow/motion, forces, etc) of soybeans in post-harvest machinery and devices.
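
For reference, the calibrated parameter set reported above can be collected into a single configuration mapping of the kind one might feed to a DEM setup script. The values are those quoted in the abstract; the dictionary layout and key names are illustrative, not from any particular DEM package:

```python
# Calibrated Hertz-Mindlin DEM input parameters for soybeans (from the abstract).
soybean_dem_params = {
    "material": {
        "geometric_mean_diameter_mm": 6.24,
        "shape": "spherical",
        "particle_density_kg_m3": 1220,
    },
    "particle_particle": {
        "restitution": 0.17,
        "static_friction": 0.26,
        "rolling_friction": 0.08,
    },
    "particle_wall": {
        "restitution": 0.35,
        "static_friction": 0.30,
        "rolling_friction": 0.08,
    },
}
```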

  3. The determinants of voluntary traceability standards. The case of the wine sector

    Directory of Open Access Journals (Sweden)

    Stefanella Stranieri

    2018-06-01

    Full Text Available The aim of this paper is to study the determinants leading firms to choose among different kinds of voluntary traceability standards in the wine sector. To achieve this goal, we referred to both individual- and institutional-level determinants, which the literature on the implementation of quality and safety standards identifies as playing an important role. Specifically, we drew on two theoretical approaches to better understand industry behaviour towards the adoption of voluntary traceability, i.e. the Theory of Reasoned Action and the Institutional Theory. We administered a questionnaire in face-to-face interviews with a sample of Italian wineries approached during the most important Italian wine exhibitions in 2016. The results suggest that when wineries show positive cognitive beliefs towards voluntary traceability standards, they will probably implement complex traceability systems, which require high investments and efforts for their management. On the contrary, when the institutional environment plays a key role in the perception of wine processors, a simple and flexible traceability system seems to be preferred. Keywords: Voluntary traceability standards, Institutional determinants, Cognitive determinants

  4. Use of Statistical Estimators as Virtual Observatory Search Parameters

    Science.gov (United States)

    Merka, J.; Dolan, C. F.

    2015-12-01

    Finding and retrieving space physics data is often a complicated task even for publicly available data sets: thousands of relatively small and many large data sets are stored in various formats and, in the better case, accompanied by at least some documentation. The Virtual Heliospheric and Magnetospheric Observatories (VHO and VMO) help researchers by creating a single point of uniform discovery, access, and use of heliospheric (VHO) and magnetospheric (VMO) data. The VMO and VHO functionality relies on metadata expressed using the SPASE data model. This data model is developed by the SPASE Working Group, which is currently the only international group supporting global data management for solar and space physics. The two Virtual Observatories (VxOs) have initiated and led the development of a SPASE-related standard named SPASE Query Language (SPASEQL), to provide a standard way of submitting queries and receiving results. The VMO and VHO use SPASE and SPASEQL for searches based on various criteria such as, for example, spatial location, time of observation, measurement type, and parameter values. The parameter values are represented by their statistical estimators, calculated typically over 10-minute intervals: mean, median, standard deviation, minimum, and maximum. The use of statistical estimators enables science-driven data queries that simplify and shorten the effort to find where and/or how often a sought phenomenon is observed, as we will present.
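
The five interval estimators described above are simple to compute from a raw time series. A minimal sketch of binning a series into fixed 10-minute windows; the function name and output layout are illustrative, not the VMO/VHO implementation:

```python
import statistics

def interval_estimators(timestamps_s, values, window_s=600):
    """Summarize a time series into fixed windows (default 10 minutes) using
    the five estimators named in the abstract: mean, median, standard
    deviation, minimum, and maximum. Returns {window_start_s: estimators}."""
    buckets = {}
    for t, v in zip(timestamps_s, values):
        buckets.setdefault(int(t // window_s), []).append(v)
    out = {}
    for k, vals in sorted(buckets.items()):
        out[k * window_s] = {
            "mean": statistics.fmean(vals),
            "median": statistics.median(vals),
            "stdev": statistics.stdev(vals) if len(vals) > 1 else 0.0,
            "min": min(vals),
            "max": max(vals),
        }
    return out
```

A query engine can then match conditions like "maximum exceeds a threshold in any window" against these summaries instead of scanning full-resolution data.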

  5. Orbital parameters of extrasolar planets derived from polarimetry

    Science.gov (United States)

    Fluri, D. M.; Berdyugina, S. V.

    2010-03-01

    Context. Polarimetry of extrasolar planets becomes a new tool for their investigation, which requires the development of diagnostic techniques and parameter case studies. Aims: Our goal is to develop a theoretical model which can be applied to interpret polarimetric observations of extrasolar planets. Here we present a theoretical parameter study that shows the influence of the various involved parameters on the polarization curves. Furthermore, we investigate the robustness of the fitting procedure. We focus on the diagnostics of orbital parameters and the estimation of the scattering radius of the planet. Methods: We employ the physics of Rayleigh scattering to obtain polarization curves of an unresolved extrasolar planet. Calculations are made for two cases: (i) assuming an angular distribution for the intensity of the scattered light as from a Lambert sphere and for polarization as from a Rayleigh-type scatterer; and (ii) assuming that both the intensity and polarization of the scattered light are distributed according to the Rayleigh law. We show that the difference between these two cases is negligible for the shapes of the polarization curves. In addition, we take the size of the host star into account, which is relevant for hot Jupiters orbiting giant stars. Results: We discuss the influence of the inclination of the planetary orbit, the position angle of the ascending node, and the eccentricity on the linearly polarized light curves both in Stokes Q/I and U/I. We also analyze errors that arise from the assumption of a point-like star in numerical modeling of polarization as compared to consistent calculations accounting for the finite size of the host star. We find that errors due to the point-like star approximation are reduced with the size of the orbit, but still amount to about 5% for known hot Jupiters. Recovering orbital parameters from simulated data is shown to be very robust even for very noisy data because the polarization curves react
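
The Rayleigh law underlying these polarization curves has a simple single-scattering form. As a minimal, hedged helper (it omits the mapping from orbital phase and inclination to scattering angle, which is the substance of the paper's model), the degree of linear polarization at scattering angle θ is:

```python
import math

def rayleigh_polarization_degree(theta):
    """Degree of linear polarization for single Rayleigh scattering at
    scattering angle theta (radians): p = sin^2(theta) / (1 + cos^2(theta)).
    Maximal (p = 1) at 90 degrees, zero for forward/backward scattering."""
    c = math.cos(theta)
    return (1.0 - c * c) / (1.0 + c * c)
```

This is why the shapes of the Stokes Q/I and U/I curves carry orbital information: the scattering angle, and hence p, is modulated along the orbit by the inclination and other orbital elements discussed above.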

  6. Prognostic value of tumor-to-blood standardized uptake ratio in patients with resectable non-small-cell lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Seung Hyeon; Pak, Kyoung June; Kim, In Joo [Dept. of Nuclear Medicine and Biomedical Research Institute, Pusan National University Hospital, Busan(Korea, Republic of); Kim, Bum Soo; Kim, Seong Jang [Dept. of Nuclear Medicine and Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan (Korea, Republic of)

    2017-09-15

    Previously published studies showed that the standard tumor-to-blood standardized uptake value (SUV) ratio (SUR) was a more accurate prognostic method than tumor maximum standardized uptake value (SUVmax). This study evaluated and compared prognostic value of positron emission tomography (PET) parameters and normalized value of PET parameters by blood pool SUV in non-small-cell lung cancer (NSCLC) patients who received curative surgery.

  7. Prognostic value of tumor-to-blood standardized uptake ratio in patients with resectable non-small-cell lung cancer

    International Nuclear Information System (INIS)

    Shin, Seung Hyeon; Pak, Kyoung June; Kim, In Joo; Kim, Bum Soo; Kim, Seong Jang

    2017-01-01

    Previously published studies showed that the standard tumor-to-blood standardized uptake value (SUV) ratio (SUR) was a more accurate prognostic method than tumor maximum standardized uptake value (SUVmax). This study evaluated and compared prognostic value of positron emission tomography (PET) parameters and normalized value of PET parameters by blood pool SUV in non-small-cell lung cancer (NSCLC) patients who received curative surgery
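
The standardized uptake ratio compared in these two records is a simple normalization of the tumor SUVmax by the blood-pool SUV. A one-line sketch (the example values in the test are hypothetical):

```python
def tumor_to_blood_suv_ratio(tumor_suv_max, blood_pool_suv):
    """SUR: tumor SUVmax normalized by the blood-pool SUV, intended to
    reduce inter-scan variability relative to raw SUVmax."""
    if blood_pool_suv <= 0:
        raise ValueError("blood-pool SUV must be positive")
    return tumor_suv_max / blood_pool_suv
```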

  8. The reasonable woman standard: effects on sexual harassment court decisions.

    Science.gov (United States)

    Perry, Elissa L; Kulik, Carol T; Bourhis, Anne C

    2004-02-01

    Some federal courts have used a reasonable woman standard rather than the traditional reasonable man or reasonable person standard to determine whether hostile environment sexual harassment has occurred. The current research examined the impact of the reasonable woman standard on federal district court decisions, controlling for other factors found to affect sexual harassment court decisions. Results indicated that there was a weak relationship between whether a case followed a reasonable woman precedent-setting case and the likelihood that the court decision favored the plaintiff. The implications of our findings for individuals and organizations involved in sexual harassment claims are discussed.

  9. Standardless quantification approach of TXRF analysis using fundamental parameter method

    International Nuclear Information System (INIS)

    Szaloki, I.; Taniguchi, K.

    2000-01-01

    A new standardless evaluation procedure based on the fundamental parameter method (FPM) has been developed for TXRF analysis. The theoretical calculation describes the relationship between the characteristic intensities and the geometrical parameters of the excitation and detection system and the specimen parameters: size, thickness, angle of the excitation beam to the surface, and the optical properties of the specimen holder. Most TXRF methods apply empirical calibration, which requires the application of a special preparation technique. However, the characteristic lines of the specimen holder (Si Kα,β) carry information on the local excitation and geometrical conditions at the substrate surface. On the basis of the theoretically calculated substrate characteristic intensity, the excitation beam flux can be approximated. Taking into consideration the elements present in the specimen material, a system of non-linear equations can be set up involving the unknown concentration values and the geometrical and detection parameters. To solve this mathematical problem, PASCAL software was written which calculates the sample composition and the average sample thickness by a gradient algorithm. This quantitative estimation of the specimen composition therefore requires neither an external nor an internal standard sample. For verification of the theoretical calculation and the numerical procedure, several experiments were carried out using a mixed standard solution containing the elements K, Sc, V, Mn, Co and Cu in the 0.1-10 ppm concentration range. (author)

  10. Compensation for the Effects of Ambient Conditions on the Calibration of Multi-Capillary Pressure Drop Standards

    Directory of Open Access Journals (Sweden)

    Colard S

    2014-12-01

    Full Text Available Cigarette draw resistance and filter pressure drop (PD) are both major physical parameters for the tobacco industry. Therefore these parameters must be measured reliably. For these measurements, specific equipment calibrated with PD transfer standards is used. Each transfer standard must have a known and stable PD value, such standards usually being composed of several capillary tubes associated in parallel. However, PD values are modified by the ambient conditions during calibration of such standards, i.e. by the temperature and relative humidity (RH) of the air, and by the atmospheric pressure. In order to reduce the influence of these ambient factors, a simplified model was developed for compensating the effects of ambient conditions on the calibration of multi-capillary PD standards.
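
The abstract does not give the compensation model itself. As a hedged first-order illustration of why such a correction is needed: for laminar flow through a capillary (Hagen-Poiseuille), PD at a fixed volumetric flow scales with the dynamic viscosity of air, which depends on temperature; Sutherland's law gives that dependence. RH and atmospheric-pressure effects, which the actual model also covers, are neglected in this sketch:

```python
def air_viscosity_sutherland(T_k):
    """Dynamic viscosity of air (Pa*s) from Sutherland's law
    (mu0 = 1.716e-5 Pa*s at T0 = 273.15 K, S = 110.4 K)."""
    mu0, T0, S = 1.716e-5, 273.15, 110.4
    return mu0 * (T0 + S) / (T_k + S) * (T_k / T0) ** 1.5

def pd_at_reference(pd_measured, T_meas_k, T_ref_k=295.15):
    """First-order temperature correction of a capillary PD reading:
    laminar PD at fixed flow is proportional to air viscosity, so scale
    the measurement by the viscosity ratio (illustrative model only)."""
    return pd_measured * air_viscosity_sutherland(T_ref_k) / air_viscosity_sutherland(T_meas_k)
```

For example, a reading taken in a cooler lab than the reference condition is scaled up slightly, since air viscosity (and hence laminar PD) rises with temperature.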

  11. Robustness of Adaptive Survey Designs to Inaccuracy of Design Parameters

    Directory of Open Access Journals (Sweden)

    Burger Joep

    2017-09-01

    Full Text Available Adaptive survey designs (ASDs) optimize design features, given (1) the interactions between the design features and characteristics of sampling units and (2) a set of constraints, such as a budget and a minimum number of respondents. Estimation of the interactions is subject to both random and systematic error. In this article, we propose and evaluate four viewpoints to assess the robustness of ASDs to inaccuracy of design parameter estimates: the effect of both imprecision and bias on both ASD structure and ASD performance. We additionally propose three distance measures to compare the structure of ASDs. The methodology is illustrated using a simple simulation study and a more complex but realistic case study on the Dutch Travel Survey. The proposed methodology can be applied to other ASD optimization problems. In our simulation study and case study, the ASD was fairly robust to imprecision, but not to realistic dynamics in the design parameters. To deal with the sensitivity of ASDs to changing design parameters, we recommend learning and updating the design parameters.

  12. Evaluation of physico-chemical parameters of agricultural soils ...

    African Journals Online (AJOL)

    Evaluation of physico-chemical parameters of agricultural soils irrigated by the waters of the hydraulic basin of the Sebou River and their influence on the transfer of trace elements into sugar crops (the case of sugar cane)

  13. Assessment of hospital performance with a case-mix standardized mortality model using an existing administrative database in Japan.

    Science.gov (United States)

    Miyata, Hiroaki; Hashimoto, Hideki; Horiguchi, Hiromasa; Fushimi, Kiyohide; Matsuda, Shinya

    2010-05-19

    Few studies have examined whether risk adjustment is evenly applicable to hospitals with various characteristics and case-mix. In this study, we applied a generic prediction model to nationwide discharge data from hospitals with various characteristics. We used standardized data of 1,878,767 discharged patients provided by 469 hospitals from July 1 to October 31, 2006. We generated and validated a case-mix in-hospital mortality prediction model using 50/50 split-sample validation. We classified hospitals into two groups based on c-index value: hospitals with c-index ≥ 0.8 were classified as the higher c-index group, and hospitals with c-index < 0.8 as the lower c-index group. A significantly higher proportion of hospitals in the lower c-index group were specialized hospitals and hospitals with convalescent wards. The model fits well for a group of hospitals with a wide variety of acute care events, though model fit is less satisfactory for specialized hospitals and those with convalescent wards. Further sophistication of the generic prediction model would be recommended to obtain optimal indices for region-specific conditions.
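
    The c-index used to group hospitals is the probability that a patient who died was assigned a higher predicted risk than a patient who survived. A minimal pure-Python sketch, with an entirely hypothetical risk score and outcome rule, shows how it can be computed on the held-out half of a 50/50 split:

```python
import random

def c_index(risks, outcomes):
    """Concordance index: fraction of (died, survived) pairs in which the
    patient who died had the higher predicted risk (ties count half)."""
    pairs = concordant = ties = 0
    for i in range(len(risks)):
        for j in range(i + 1, len(risks)):
            if outcomes[i] == outcomes[j]:
                continue  # only mixed-outcome pairs are comparable
            pairs += 1
            hi, lo = (risks[i], risks[j]) if outcomes[i] == 1 else (risks[j], risks[i])
            if hi > lo:
                concordant += 1
            elif hi == lo:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

# Hypothetical severity score and outcome; 50/50 split-sample validation.
random.seed(0)
severity = [random.random() for _ in range(200)]
died = [1 if s + random.gauss(0, 0.25) > 0.7 else 0 for s in severity]
validation = severity[100:]          # held-out half of the 50/50 split
c = c_index(validation, died[100:])
print(round(c, 3))
```

    A c-index of 0.5 means no discrimination and 1.0 perfect discrimination, which is why ≥ 0.8 is a reasonable cutoff for "good" model fit.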

  14. The negotiation of fair trade standards within Ecuadorian flower plantations

    Directory of Open Access Journals (Sweden)

    Angus Lyall

    2014-03-01

    Fairtrade International (FLO) applies requirements or “standards” for certification in agroindustries in order to channel resources to workers, improve their conditions, and “empower” them, even requiring freedom of association. Researchers have signaled a “dilution” (Jaffee, 2012) of these standards in recent years. I take up the case of the Ecuadorian cut-flower industry to show that FLO’s impacts on power relations ought to be analyzed not in terms of the standards themselves, but rather in terms of the local negotiation of standards within territorial power relations. For this case, I point out that, in the face of FLO’s criticisms of conventional market mechanisms, standards generate better conditions, but also facilitate the re-consolidation of labor control.

  15. Determination of parameters in elasto-plastic models of aluminium.

    NARCIS (Netherlands)

    Meuwissen, M.H.H.; Oomens, C.W.J.; Baaijens, F.P.T.; Petterson, R.; Janssen, J.D.; Sol, H.; Oomens, C.W.J.

    1997-01-01

    A mixed numerical-experimental method is used to determine parameters in elasto-plastic constitutive models. An aluminium plate of non-standard geometry is mounted in a uniaxial tensile testing machine, to which some adjustments are made to carry out shear tests. The sample is loaded and the total

  16. From conservation laws to port-Hamiltonian representations of distributed-parameter systems

    NARCIS (Netherlands)

    Maschke, B.M.; van der Schaft, Arjan; Piztek, P.

    Abstract: In this paper it is shown how the port-Hamiltonian formulation of distributed-parameter systems is closely related to the general thermodynamic framework of systems of conservation laws and closure equations. The situation turns out to be similar to the lumped-parameter case where the

  17. Exploring Parameter Tuning for Analysis and Optimization of a Computational Model

    NARCIS (Netherlands)

    Mollee, J.S.; Fernandes de Mello Araujo, E.; Klein, M.C.A.

    2017-01-01

    Computational models of human processes are used for many different purposes and in many different types of applications. A common challenge in using such models is to find suitable parameter values. In many cases, the ideal parameter values are those that yield the most realistic simulation

  18. Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions

    Science.gov (United States)

    Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.

    2017-01-01

    Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reduction of NO with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of the three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
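
    The reduction of the multi-criterion problem to a single criterion by weighted summation can be sketched as follows. The one-parameter "kinetic model" and all three criterion functions below are toy stand-ins, not the ones used in the paper:

```python
# Toy one-parameter 'kinetic model': rate constant k -> predicted conversion.
def conversion(k):
    return k / (1.0 + k)

def fit_error(k, data):
    """Criterion 1: sum of squared deviations from 'experimental' conversions."""
    return sum((conversion(k) - x) ** 2 for x in data)

def thermo_inconsistency(k, k_eq=2.0):
    """Criterion 2 (hypothetical form): penalty for violating a known K_eq."""
    return (k - k_eq) ** 2

def entropy_inconsistency(k):
    """Criterion 3 (hypothetical form): stand-in for the entropy-production check."""
    return abs(conversion(k) - 0.65)

def total_objective(k, data, w=(1.0, 0.1, 0.1)):
    """Weighted sum turns the three criteria into a single objective."""
    return (w[0] * fit_error(k, data)
            + w[1] * thermo_inconsistency(k)
            + w[2] * entropy_inconsistency(k))

data = [0.66, 0.64, 0.67]              # toy 'experimental' conversions
best_k = min((0.01 * i for i in range(1, 500)),
             key=lambda k: total_objective(k, data))
print(f"best k on grid: {best_k:.2f}")
```

    The weights control the trade-off: raising the thermodynamic weights pulls the fitted parameter toward thermodynamically consistent values even at some cost in conversion fit.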

  19. What Clinical and Laboratory Parameters Distinguish Between ...

    African Journals Online (AJOL)

    Introduction: In developing countries, a large number of patients presenting acutely in renal failure are indeed cases of advanced chronic renal failure. In this study, we compared clinical and laboratory parameters between patients with acute renal failure (ARF) and chronic renal failure (CRF), to identify discriminatory ...

  20. Normal form of linear systems depending on parameters

    International Nuclear Information System (INIS)

    Nguyen Huynh Phan.

    1995-12-01

    In this paper we completely resolve the problem of finding normal forms of linear systems depending on parameters under the feedback action, which we have previously studied for the special case of controllable linear systems. (author). 24 refs

  1. Transuranic biokinetic parameters for marine invertebrates--a review.

    Science.gov (United States)

    Ryan, T P

    2002-04-01

    A catalogue of biokinetic parameters for the transuranic elements plutonium, americium, curium, neptunium, and californium in marine invertebrates is presented. The parameters considered are: the seawater-animal concentration factor (CF); the sediment-animal concentration ratio (CR); transuranic assimilation efficiency; transuranic tissue distribution; and transuranic elimination rates. With respect to the seawater-animal CF, authors differ considerably on how they define this parameter, and a seven-point reporting system is suggested. Transuranic uptake from sediment by animals is characterised by low CRs. The assimilation efficiencies of transuranic elements in marine invertebrates are high compared to vertebrates and mammals in general, and the distribution of transuranics within the body tissue of an animal depends on the uptake path. The elimination of transuranics from most species examined conformed to a standard biphasic exponential model, though some examples with three elimination phases were identified.
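
    The biphasic exponential elimination model mentioned above has the form R(t) = A1·2^(−t/T1) + A2·2^(−t/T2), a fast and a slow compartment. The sketch below evaluates it with purely hypothetical amplitudes and biological half-lives, not values from the review:

```python
import math

# Biphasic retention: fast and slow elimination compartments. The amplitudes
# and half-lives below are hypothetical, not values from the review.
def retained_fraction(t_days, a1=0.6, half1=5.0, a2=0.4, half2=80.0):
    l1 = math.log(2) / half1      # fast-phase elimination rate constant
    l2 = math.log(2) / half2      # slow-phase elimination rate constant
    return a1 * math.exp(-l1 * t_days) + a2 * math.exp(-l2 * t_days)

for t in (0, 10, 50, 200):
    print(f"day {t:>3}: fraction retained = {retained_fraction(t):.3f}")
```

    The fast phase dominates early loss; at long times the retained fraction is governed almost entirely by the slow compartment, which is why the slow half-life matters most for dose assessment.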

  2. Biochemical and neurophysiological parameters in hemodialyzed patients with chronic renal failure

    NARCIS (Netherlands)

    Schoots, A.C.; Vries, de P.M.J.M.; Thiemann, R.C.J.; Hazejager, W.A.; Visser, S.L.; Oe, P.L.

    1989-01-01

    Serum concentrations of accumulated solutes, standard clinical biochemistry, and parameters of clinical neuropathy, were determined in hemodialyzed patients with chronic renal failure. Analyses by high-performance liquid chromatography included creatinine, pseudouridine, urate, p-hydroxyhippuric

  3. An alternative to the standard model

    International Nuclear Information System (INIS)

    Baek, Seungwon; Ko, Pyungwon; Park, Wan-Il

    2014-01-01

    We present an extension of the standard model to a dark sector with an unbroken local dark U(1) X symmetry. Including various singlet portal interactions provided by the standard model Higgs, right-handed neutrinos and kinetic mixing, we show that the model can address most phenomenological issues (inflation, neutrino mass and mixing, baryon number asymmetry, dark matter, direct/indirect dark matter searches, some small-scale puzzles of standard collisionless cold dark matter, vacuum stability of the standard model Higgs potential, dark radiation) and can be regarded as an alternative to the standard model. The Higgs signal strength is equal to one, as in the standard model, for the unbroken U(1) X case with a scalar dark matter, but it could be less than one independent of decay channels if the dark matter is a dark sector fermion or if U(1) X is spontaneously broken, because of mixing with a new neutral scalar boson in the models

  4. Nationwide Standards Eyed Anew

    Science.gov (United States)

    Olson, Lynn

    2005-01-01

    With the federal No Child Left Behind Act underscoring the wide variation in what states demand of their students, people on both sides of the political aisle are again making the case for national standards, curricula, and tests. It wasn't so long ago--during the Clinton and George H.W. Bush administrations--that similar proposals went down in…

  5. Improvement of Diagnostic Accuracy by Standardization in Diuretic Renal Scan

    International Nuclear Information System (INIS)

    Hyun, In Young; Lee, Dong Soo; Lee, Kyung Han; Chung, June Key; Lee, Myung Chul; Koh, Chang Soon; Kim, Kwang Myung; Choi, Hwang; Choi, Yong

    1995-01-01

    We evaluated the diagnostic accuracy of the diuretic renal scan with standardization in 45 children (107 hydronephrotic kidneys) with 91 diuretic assessments. With standardization, sensitivity was 100%, specificity was 78%, and accuracy was 84% in 49 hydronephrotic kidneys. Without standardization, sensitivity was 100%, specificity was 38%, and accuracy was 57% in 58 hydronephrotic kidneys. False-positive results were observed in 25 cases without standardization and in 8 cases with standardization. In diuretic renal scans without standardization, the causes of false-positive results were early injection of Lasix before mixing of the radioactivity (10 cases), extrarenal pelvis (6 cases), and immature kidneys (3 cases). With standardization, the causes of false-positive results were markedly dilated systems post-pyeloplasty (2 cases), extrarenal pelvis (2 cases), immature kidney of a neonate (1 case), severe renal dysfunction (2 cases), and vesicoureteral reflux (1 case). Without standardization, false-positive results due to inadequate studies were common; after standardization, no false-positive results due to inadequate studies were found. False-positive results due to dilated pelvo-calyceal systems post-pyeloplasty, extrarenal pelvis, and immature kidneys of neonates were not resolved by standardization. In conclusion, standardization of the diuretic renal scan was useful in children with renal outflow tract obstruction, improving diagnostic accuracy by significantly improving specificity.

  6. Several organic parameters on the underlying hazardous constituents list cannot be measured at the universal treatment standards

    International Nuclear Information System (INIS)

    Johnson, H.C.

    1998-01-01

    The Idaho National Engineering and Environmental Laboratory (INEEL) has several permitted treatment, storage and disposal facilities. The INEEL Sample Management Office (SMO) conducts all analysis subcontracting activities for Department of Energy Environmental Management programs at the INEEL. In this role, the INEEL SMO has had the opportunity to subcontract the analyses of various wastes (including ash from an interim status incinerator) requesting a target analyte list equivalent to the constituents listed in 40 Code of Federal Regulations. These analyses are required to ensure that treated wastes do not contain underlying hazardous constituents (UHC) at concentrations greater than the universal treatment standards (UTS) prior to land disposal. The INEEL SMO has conducted a good-faith effort by negotiating with several commercial laboratories to identify the lowest possible quantitation and detection limits that can be achieved for the organic UHC analytes. The result of this negotiating effort has been the discovery that no single laboratory (currently under subcontract with the INEEL SMO) can achieve a detection level that is within an order of magnitude of the UTS for all organic parameters on a clean sample matrix (e.g., sand). This does not mean that no laboratory exists that can achieve the order-of-magnitude requirements for all organic UHCs on a clean sample matrix. However, the negotiations held to date indicate that it is likely that no laboratory can achieve the order-of-magnitude requirements for a difficult sample matrix (e.g., an incinerator ash). The authors suggest that the regulation needs to be revised to address the disparity between what is achievable in the laboratory and the regulatory levels required by the UTS.

  7. DBCG hypo trial validation of radiotherapy parameters from a national data bank versus manual reporting.

    Science.gov (United States)

    Brink, Carsten; Lorenzen, Ebbe L; Krogh, Simon Long; Westberg, Jonas; Berg, Martin; Jensen, Ingelise; Thomsen, Mette Skovhus; Yates, Esben Svitzer; Offersen, Birgitte Vrou

    2018-01-01

    The current study evaluates the data quality achievable using a national data bank for reporting radiotherapy parameters, relative to the classical manual reporting method of selected parameters. The data comparison is based on 1522 Danish patients of the DBCG hypo trial with data stored in the Danish national radiotherapy data bank. In line with standard DBCG trial practice, selected parameters were also reported manually to the DBCG database. Categorical variables are compared using contingency tables, and comparison of continuous parameters is presented in scatter plots. For categorical variables, 25 differences between the data bank and manual values were located. Of these, 23 were related to mistakes in the manually reported value, whilst the remaining two were wrong classifications in the data bank. The wrong classifications in the data bank were related to a lack of dose information: the two patients had been treated with an electron boost based on a manual calculation, so the data were not exported to the data bank, and this was not detected prior to comparison with the manual data. For a few database fields in the manual data, an ambiguity in the definition of the specific field is seen in the data. This was not the case for the data bank, which extracts all data consistently. In terms of data quality, the data bank is superior to manually reported values. However, resources need to be allocated for checking the validity of the available data and for ensuring that all relevant data are present. The data bank contains more detailed information, and thus facilitates research related to the actual dose distribution in the patients.
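
    The categorical comparison described above amounts to cross-tabulating data-bank values against manually reported values and flagging the off-diagonal (discordant) records for source-data review. A sketch with invented counts for a single hypothetical yes/no field:

```python
from collections import Counter

# Invented paired records: (data-bank value, manually reported value) for a
# single categorical field, e.g. whether a boost was delivered.
pairs = ([("yes", "yes")] * 140 + [("no", "no")] * 55
         + [("yes", "no")] * 4 + [("no", "yes")] * 1)

table = Counter(pairs)
categories = ["yes", "no"]

# Print the contingency table: rows = data bank, columns = manual report.
print("bank\\manual", *categories)
for bank_val in categories:
    print(f"{bank_val:>11}", *[table[(bank_val, m)] for m in categories])

# Off-diagonal cells are the discordant records to investigate at source.
discordant = sum(n for (b, m), n in table.items() if b != m)
print("discordant:", discordant)
```

    In the study itself, chasing each discordant cell back to the source records is what revealed that most disagreements originated in the manual reporting.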

  8. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection among several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  9. Standard sonography and arthrosonography in the study of rotator cuff tears

    International Nuclear Information System (INIS)

    El-Dalati, Ghassan; Martone, Enrico; Caffarri, Sabrina; Fusaro, Michele; Pozzi Mucelli, Roberto; Castellarin, Gianluca; Ricci, Matteo; Vecchini, Eugenio

    2005-01-01

    Purpose. The aim of this study was to evaluate the sensitivity of ultrasonography, integrating standard ultrasound and arthrosonography after injecting a saline solution into the glenohumeral cavity, in cases of suspected rotator cuff tears. Materials and methods. We retrospectively examined 40 patients awaiting shoulder arthroscopy for suspected or diagnosed tears of the rotator cuff. A radiologist, unaware of the pre-operative diagnosis, performed an ultrasound scan on all the patients before and after the injection of saline solution into the glenohumeral cavity. The parameters considered were: presence or absence of a rotator cuff injury; type of injury according to Snyder and its extent along the longitudinal and transverse planes; presence or absence of effusion into the articular cavity; and subacromial/subdeltoid bursal distension. All the patients underwent arthroscopy either the same day or the day after the ultrasound examination. Results. Standard sonography showed 26 complete rotator cuff tears (type C according to Snyder), 2 partial tears (type B according to Snyder) and 12 intact rotator cuffs. Arthrosonography detected 31 complete rotator cuff tears (type C), 1 partial tear (type B) and 8 intact rotator cuffs. Arthroscopy identified 32 complete rotator cuff tears (type C), 1 partial tear (type B) and 8 intact rotator cuffs. Analysis of the results shows that, taking arthroscopy as the gold standard, the sensitivity of standard sonography is 81.2%, whereas that of arthrosonography is 96.8%.

  10. LHC Higgs physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Spannowsky, M.

    2007-01-01

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that, for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has a strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, which would yield a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV), no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is motivated only by the experimental agreement of results from flavor physics with Standard Model predictions, not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which most enhance charged-Higgs boson production are strongly constrained, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  11. LHC Higgs physics beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Spannowsky, M.

    2007-09-22

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that, for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has a strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, which would yield a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV), no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is motivated only by the experimental agreement of results from flavor physics with Standard Model predictions, not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which most enhance charged-Higgs boson production are strongly constrained, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  12. Case studies on the physical-chemical parameters' variation during three different purification approaches destined to treat wastewaters from food industry.

    Science.gov (United States)

    Ghimpusan, Marieta; Nechifor, Gheorghe; Nechifor, Aurelia-Cristina; Dima, Stefan-Ovidiu; Passeri, Piero

    2017-12-01

    The paper presents a set of three interconnected case studies on the depuration of food processing wastewaters using aeration & ozonation and two types of hollow-fiber membrane bioreactor (MBR) approaches. A secondary and more extensive objective derived from the first is to draw a clearer, broader picture of the variation of physical-chemical parameters during the purification of wastewaters from the food industry under different operating modes, with the aim of improving the management of the water purification process. Chemical oxygen demand (COD), pH, mixed liquor suspended solids (MLSS), total nitrogen, specific nitrogen species (NH4+, NO2-, NO3-), total phosphorus, and total surfactants were the measured parameters, and their influence was discussed in order to establish the best operating mode to achieve the purification performance. The integrated air-ozone aeration process applied in the second operating mode led to a COD decrease of up to 90%, compared to only 75% obtained in a conventional biological activated sludge process. The combined purification process of MBR and ozonation produced an additional COD decrease of 10-15% and brought the total surfactant values into compliance with the specific legislation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Influence of resonance parameters' correlations on the resonance integral uncertainty; 55Mn case

    International Nuclear Information System (INIS)

    Zerovnik, Gasper; Trkov, Andrej; Capote, Roberto; Rochman, Dimitri

    2011-01-01

    For nuclides with a large number of resonances, the covariance matrix of resonance parameters can become very large and expensive to process in terms of computation time. By converting the covariance matrix of resonance parameters into covariance matrices of background cross-sections in a more or less coarse group structure, a considerable amount of computer time and memory can be saved. The question is how important the information discarded in the process is. First, the uncertainty of the 55 Mn resonance integral was estimated in the narrow resonance approximation for different levels of self-shielding using the Bondarenko method, by random sampling of resonance parameters according to their covariance matrices from two different 55 Mn evaluations: one from the Nuclear Research and Consultancy Group (NRG; with large uncertainties but no correlations between resonances), the other from Oak Ridge National Laboratory (with smaller uncertainties but a full covariance matrix). We found that if all (or at least a significant part of) the resonance parameters are correlated, the resonance integral uncertainty depends greatly on the level of self-shielding. Second, it was shown that the commonly used 640-group SAND-II representation cannot describe the increase of the resonance integral uncertainty. A much finer energy mesh for the background covariance matrix would have to be used to take the resonance structure into account explicitly, but then the objective of a more compact data representation is lost.
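
    The effect of resonance-parameter correlations on the integral uncertainty can be illustrated with a toy Monte Carlo: sample a few "resonance contributions" either independently or fully correlated and compare the spread of their sum. All numbers below are hypothetical, chosen only to show the mechanism:

```python
import random
import statistics

random.seed(1)
N = 5000
sigma = 0.05                  # 5% relative uncertainty per resonance (made up)
means = [1.0, 0.8, 0.6]       # hypothetical contributions of three resonances

def sample_integral(rho):
    """One sample of the 'resonance integral' (= sum of contributions),
    with pairwise correlation rho between the parameter errors."""
    common = random.gauss(0, 1)                      # shared error component
    total = 0.0
    for mu in means:
        z = rho ** 0.5 * common + (1 - rho) ** 0.5 * random.gauss(0, 1)
        total += mu * (1 + sigma * z)
    return total

results = {}
for rho in (0.0, 1.0):
    samples = [sample_integral(rho) for _ in range(N)]
    results[rho] = statistics.stdev(samples) / statistics.mean(samples)
    print(f"rho={rho}: relative uncertainty {results[rho]:.4f}")
```

    With independent errors the contributions partially cancel (uncertainty grows like the root of the sum of squares); with fully correlated errors they add linearly, giving a larger integral uncertainty, which is the sensitivity the abstract describes.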

  14. Systematic parameter inference in stochastic mesoscopic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Huan; Yang, Xiu [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Li, Zhen [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy-conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion, using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms, given the prior knowledge that the coefficients are “sparse”. The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulations cannot be derived from the microscopic level in a straightforward way.
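
    A heavily simplified sketch of the response-surface idea: fit a quadratic surrogate of one target property against one model parameter from a handful of "simulation" samples, then invert it for the parameter matching a target value. The paper uses multi-dimensional gPC expansions with compressive sensing; here ordinary least squares on a 1-D quadratic basis stands in for both, and the "simulator" is a made-up smooth function:

```python
def simulate_viscosity(a):
    """Stand-in for a DPD run: hypothetical smooth response to parameter a."""
    return 0.5 + 0.3 * a + 0.05 * a * a

samples = [0.5, 1.0, 1.5, 2.0, 2.5]     # sampling points in parameter space
obs = [simulate_viscosity(a) for a in samples]

def lstsq_quadratic(xs, ys):
    """Least-squares fit of c0 + c1*x + c2*x^2 via the normal equations."""
    basis = [[1.0, x, x * x] for x in xs]
    ata = [[sum(r[i] * r[j] for r in basis) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * y for r, y in zip(basis, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, 3):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 3):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    coef = [0.0] * 3
    for r in (2, 1, 0):
        coef[r] = (atb[r] - sum(ata[r][c] * coef[c] for c in range(r + 1, 3))) / ata[r][r]
    return coef

c0, c1, c2 = lstsq_quadratic(samples, obs)

# Invert the surrogate: which parameter value yields the target viscosity?
target = 1.0
a_opt = (-c1 + (c1 * c1 - 4 * c2 * (c0 - target)) ** 0.5) / (2 * c2)
print(f"parameter matching target viscosity: {a_opt:.3f}")
```

    Once the surrogate is fitted, inverting it (or searching over it) is essentially free compared with rerunning the mesoscopic simulation, which is the point of the response-surface approach.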

  15. Darboux invariants of integrable equations with variable spectral parameters

    International Nuclear Information System (INIS)

    Shin, H J

    2008-01-01

    The Darboux transformation for integrable equations with variable spectral parameters is introduced. Darboux invariant quantities are calculated, which are used in constructing the Lax pair of integrable equations. This approach serves as a systematic method for constructing inhomogeneous integrable equations and their soliton solutions. The structure functions of variable spectral parameters determine the integrability and nonlinear coupling terms. Three cases of integrable equations are treated as examples of this approach

  16. Computational parameters in thermal recovery of crude oil

    Energy Technology Data Exchange (ETDEWEB)

    Kashai, L; Geineman, Z

    1965-12-01

    In this mathematical simulation of the in-situ combustion process, the effect of various parameters on the temperature distribution within the combustion zone is calculated. Among the parameters included in the mathematical analysis are (1) the quantity of residual coke, oil, and oxidizer, (2) formation thickness, (3) heat conductivity and heat capacity of the formation, and (4) the degree of rock heterogeneity. The problem is solved for the linear flow case with the use of a computer. Five temperature profiles for various conditions are illustrated.

  17. What to use to express the variability of data: Standard deviation or standard error of mean?

    Science.gov (United States)

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data around the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals (CIs), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
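
    The distinction can be made concrete in a few lines: the SD describes the spread of the sample, the SEM (SD/√n) describes the uncertainty of the mean, and the SEM feeds the confidence interval. The data are hypothetical, and a normal-approximation CI is used for brevity:

```python
import math
import statistics

data = [4.2, 4.8, 5.1, 3.9, 4.6, 5.0, 4.4, 4.7]   # hypothetical measurements

mean = statistics.mean(data)
sd = statistics.stdev(data)            # dispersion of the data around the mean
sem = sd / math.sqrt(len(data))        # uncertainty of the mean estimate

# 95% CI of the mean from the SEM (normal approximation; for n = 8 a
# t-quantile would be more appropriate).
ci = (mean - 1.96 * sem, mean + 1.96 * sem)

print(f"mean = {mean:.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
print(f"95% CI of the mean = ({ci[0]:.2f}, {ci[1]:.2f})")
```

    Because the SEM shrinks with sample size while the SD does not, quoting the SEM as a measure of data spread makes the data look artificially less variable, which is exactly the misuse the abstract warns against.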

  18. Standardized X-ray reports of the spine in osteogenesis imperfecta; Standard zur Befundung von Roentgenaufnahmen der Wirbelsaeule bei Patienten mit Osteogenesis imperfecta

    Energy Technology Data Exchange (ETDEWEB)

    Koerber, Friederike; Demant, A.W.; Koerber, S. [Universitaetsklinikum Koeln (Germany). Kinderradiologie, Inst. und Poliklinik fuer Radiologische Diagnostik; Semler, O.; Schoenau, E. [Universitaetsklinikum Koeln (Germany). Osteologie, Klinik und Poliklinik fuer Allgemeine Kinderheilkunde; Lackner, K.J. [Universitaetsklinikum Koeln (Germany). Inst. und Poliklinik fuer Radiologische Diagnostik

    2011-05-15

    Purpose: In this study we present a standard for radiological reports in patients with osteogenesis imperfecta (OI). The parameters can be used to describe X-rays of the lateral spine and give an impartial description of anatomical structures during a treatment with bisphosphonates. Material and Methods: In this retrospective analysis we included 48 patients with OI (31 female, 17 male [1.5 months - 19 years, mean age 9.0 years]). Lateral spine X-rays were analyzed by 2 radiologists before and during treatment. The parameters of the standardized report are degree of kyphoscoliosis, compression of single vertebrae, predominant type of vertebral deformities and extent of vertebral compression (score 1 - 5). Results: There was no clear trend in the change of compression of single vertebrae. Some vertebrae with ventral compression showed an upgrowth to vertebrae with harmon