WorldWideScience

Sample records for simplified estimation method

  1. A simplified dynamic method for field capacity estimation and its parameter analysis

    Institute of Scientific and Technical Information of China (English)

    Zhen-tao CONG; Hua-fang LÜ; Guang-heng NI

    2014-01-01

    This paper presents a simplified dynamic method based on the definition of field capacity. Two soil hydraulic characteristic models, the Brooks-Corey (BC) model and the van Genuchten (vG) model, and four soil data groups were used in this study. The relative drainage rate, which is a unique parameter and independent of the soil type in the simplified dynamic method, was analyzed using the pressure-based method with a matric potential of −1/3 bar and the flux-based method with a drainage flux of 0.005 cm/d. As a result, the relative drainage rate of the simplified dynamic method was determined to be 3% per day. This was verified by the similar field capacity results estimated with the three methods for most soils suitable for cultivating plants. In addition, the drainage time calculated with the simplified dynamic method was two to three days, which agrees with the classical definition of field capacity. We recommend the simplified dynamic method with a relative drainage rate of 3% per day because of its simple application and clear physical basis.
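
    The two reference criteria named above can be written directly in terms of a soil hydraulic model. The sketch below is a hypothetical illustration (not code from the paper): it evaluates the pressure-based (−1/3 bar) and flux-based (0.005 cm/d) field-capacity estimates for a van Genuchten-Mualem soil with illustrative loam-like parameters; the dynamic criterion would instead track when the relative drainage rate falls to 3% per day.

```python
# Hypothetical sketch: evaluating the pressure-based (-1/3 bar) and flux-based
# (0.005 cm/d) field-capacity criteria for a van Genuchten (vG) soil.
# The vG parameters below are illustrative loam-like values, not from the paper.
import numpy as np
from scipy.optimize import brentq

theta_r, theta_s = 0.078, 0.43     # residual / saturated water content (-)
alpha, n, Ks = 0.036, 1.56, 24.96  # alpha (1/cm), n (-), Ks (cm/d)
m = 1.0 - 1.0 / n

def theta_of_h(h):
    """vG retention curve: water content at matric head h (cm, negative)."""
    Se = (1.0 + (alpha * abs(h)) ** n) ** (-m)
    return theta_r + Se * (theta_s - theta_r)

def K_of_theta(theta):
    """Mualem-vG hydraulic conductivity (cm/d) at water content theta."""
    Se = (theta - theta_r) / (theta_s - theta_r)
    return Ks * Se ** 0.5 * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Pressure-based criterion: theta at h = -1/3 bar (about -340 cm of water).
theta_fc_pressure = theta_of_h(-340.0)

# Flux-based criterion: theta at which the unit-gradient drainage flux
# K(theta) has decayed to 0.005 cm/d.
theta_fc_flux = brentq(lambda th: K_of_theta(th) - 0.005,
                       theta_r + 1e-6, theta_s - 1e-6)

print(f"pressure-based FC: {theta_fc_pressure:.3f}")
print(f"flux-based FC:     {theta_fc_flux:.3f}")
```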

  2. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    Science.gov (United States)

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Publications have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
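
    As a hypothetical illustration of why fixing the shape parameter simplifies the estimation (this is not the authors' procedure, and all numbers are invented): with a Weibull lifespan distribution and a constant shape k, a single observed survival fraction of a registration cohort suffices to recover the scale parameter and hence the average lifespan.

```python
# Hypothetical illustration of the simplification described above: if the
# Weibull shape parameter k is treated as a constant, the scale (and hence the
# mean lifespan) can be recovered from a single observed survival fraction of
# a registration cohort, without a detailed age profile. Values are made up.
import math

def mean_lifespan_from_survival(age, surviving_fraction, shape_k):
    """Invert the Weibull survival function S(t) = exp(-(t/lam)^k) for lam,
    then return the mean lifespan lam * Gamma(1 + 1/k)."""
    lam = age / (-math.log(surviving_fraction)) ** (1.0 / shape_k)
    return lam * math.gamma(1.0 + 1.0 / shape_k)

# Example: 60 % of cars registered 12 years ago are still in use; assume k = 2.5.
print(mean_lifespan_from_survival(age=12.0, surviving_fraction=0.60, shape_k=2.5))
```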

  3. Simplified Probabilistic Analysis of Settlement of Cyclically Loaded Soil Stratum by Point Estimate Method

    Science.gov (United States)

    Przewłócki, Jarosław; Górski, Jarosław; Świdziński, Waldemar

    2016-12-01

    The paper deals with the probabilistic analysis of the settlement of a non-cohesive soil layer subjected to cyclic loading. Originally, the settlement assessment is based on a deterministic compaction model, which requires integration of a set of differential equations. However, with the use of the Bessel functions, the settlement of a soil stratum can be calculated by a simplified algorithm. The compaction model parameters were determined for soil samples taken from subsoil near the Izmit Bay, Turkey. The computations were performed for various sets of random variables. The point estimate method was applied, and the results were verified by the Monte Carlo method. The outcome leads to a conclusion that can be useful in the prediction of soil settlement under seismic loading.
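
    For readers unfamiliar with the point estimate method, the sketch below shows the classic Rosenblueth two-point scheme checked against Monte Carlo on a deliberately simple, made-up settlement function of two uncorrelated variables; the paper itself uses a cyclic compaction model with a Bessel-function solution instead.

```python
# Sketch of the classic Rosenblueth two-point estimate method (PEM), checked
# against Monte Carlo, for a hypothetical settlement function of two
# uncorrelated random variables. The settlement model here is made up; it is
# not the compaction model used in the paper.
import itertools
import numpy as np

def settlement(e0, Cc):
    """Toy settlement model (m): not the cyclic compaction model of the paper."""
    return 2.0 * Cc / (1.0 + e0)

means = {"e0": 0.80, "Cc": 0.25}   # assumed means
stds  = {"e0": 0.08, "Cc": 0.05}   # assumed standard deviations

# Rosenblueth PEM: evaluate the model at all 2^n combinations of mean +/- std,
# each with weight 1/2^n (uncorrelated, symmetric case).
points = []
for signs in itertools.product((-1.0, 1.0), repeat=2):
    e0 = means["e0"] + signs[0] * stds["e0"]
    Cc = means["Cc"] + signs[1] * stds["Cc"]
    points.append(settlement(e0, Cc))
points = np.array(points)
pem_mean = points.mean()
pem_std = np.sqrt((points ** 2).mean() - pem_mean ** 2)

# Monte Carlo check with normally distributed inputs.
rng = np.random.default_rng(0)
s = settlement(rng.normal(means["e0"], stds["e0"], 100_000),
               rng.normal(means["Cc"], stds["Cc"], 100_000))
print(f"PEM : mean={pem_mean:.4f}  std={pem_std:.4f}")
print(f"MC  : mean={s.mean():.4f}  std={s.std():.4f}")
```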

  4. 3.6 simplified methods for design

    International Nuclear Information System (INIS)

    Nickell, R.E.; Yahr, G.T.

    1981-01-01

    Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material, or constitutive, model that is reduced, or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed

  5. Systematization of simplified J-integral evaluation method for flaw evaluation at high temperature

    International Nuclear Information System (INIS)

    Miura, Naoki; Takahashi, Yukio; Nakayama, Yasunari; Shimakawa, Takashi

    2000-01-01

    J-integral is an effective inelastic fracture parameter for the flaw evaluation of cracked components at high temperature. The evaluation of J-integral for an arbitrary crack configuration and an arbitrary loading condition can generally be accomplished by detailed numerical analysis such as finite element analysis; however, this is time-consuming and requires a high degree of expertise for its implementation. Therefore, it is important to develop simplified J-integral estimation techniques from the viewpoint of industrial requirements. In this study, a simplified J-integral evaluation method is proposed to estimate two types of J-integral parameters. One is the fatigue J-integral range to describe fatigue crack propagation behavior, and the other is the creep J-integral to describe creep crack propagation behavior. This paper presents the systematization of the simplified J-integral evaluation method incorporating the reference stress method and the concept of elastic follow-up, and proposes a comprehensive evaluation procedure. The verification of the proposed method is presented in Part II of this paper. (author)
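
    For orientation, the reference stress idea mentioned above is usually written in the following classical form (an R5/R6-style estimate with assumed notation, not necessarily the exact equations systematized by the authors), where P is the applied load, P_L(a) the limit load of the cracked body, K the stress intensity factor, and ε_ref the strain at σ_ref on the tensile or creep curve:

```latex
% Classical reference-stress estimates (R5/R6-style), with notation assumed here:
% P = applied load, P_L(a) = limit load of the cracked body, K = stress intensity
% factor, \varepsilon_{ref} = (creep) strain at \sigma_{ref} on the material curve.
\[
  \sigma_{\mathrm{ref}} = \frac{P}{P_L(a)}\,\sigma_y , \qquad
  J \;\approx\; \frac{E\,\varepsilon_{\mathrm{ref}}}{\sigma_{\mathrm{ref}}}\,\frac{K^2}{E}
    \;=\; \frac{\varepsilon_{\mathrm{ref}}}{\sigma_{\mathrm{ref}}}\,K^2 , \qquad
  C^{*} \;\approx\; \sigma_{\mathrm{ref}}\,\dot{\varepsilon}_{\mathrm{ref}}
    \left(\frac{K}{\sigma_{\mathrm{ref}}}\right)^{2}
\]
```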

  6. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices. Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  7. Simplified method of "push-pull" test data analysis for determining in situ reaction rate coefficients

    International Nuclear Information System (INIS)

    Haggerty, R.; Schroth, M.H.; Istok, J.D.

    1998-01-01

    The single-well "push-pull" test method is useful for obtaining information on a wide variety of aquifer physical, chemical, and microbiological characteristics. A push-pull test consists of the pulse-type injection of a prepared test solution into a single monitoring well followed by the extraction of the test solution/ground water mixture from the same well. The test solution contains a conservative tracer and one or more reactants selected to investigate a particular process. During the extraction phase, the concentrations of tracer, reactants, and possible reaction products are measured to obtain breakthrough curves for all solutes. This paper presents a simplified method of data analysis that can be used to estimate a first-order reaction rate coefficient from these breakthrough curves. Rate coefficients are obtained by fitting a regression line to a plot of normalized concentrations versus elapsed time, requiring no knowledge of aquifer porosity, dispersivity, or hydraulic conductivity. A semi-analytical solution to the advective-dispersion equation is derived and used in a sensitivity analysis to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a homogeneous, confined aquifer with a fully-penetrating injection/extraction well and varying porosity, dispersivity, test duration, and reaction rate. A numerical flow and transport code (SUTRA) is used to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a heterogeneous, unconfined aquifer with a partially penetrating well. In all cases the simplified method provides accurate estimates of reaction rate coefficients; estimation errors ranged from 0.1 to 8.9%, with most errors less than 5%.
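
    The regression step described above can be sketched as follows with synthetic breakthrough data (illustrative only; the published method also includes a correction for the finite injection duration, which is omitted here): the slope of ln(normalized reactant concentration divided by normalized tracer concentration) versus elapsed time gives −k.

```python
# Sketch of the regression step described above: a first-order rate coefficient
# k is taken from the slope of ln(normalized reactant / normalized tracer)
# versus elapsed time. Synthetic data; the published method also includes a
# correction for the finite injection duration, omitted here.
import numpy as np

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])           # elapsed time (h)
tracer = np.array([0.82, 0.70, 0.55, 0.38, 0.20, 0.12])  # C/C0, conservative tracer
reactant = np.array([0.74, 0.58, 0.39, 0.21, 0.07, 0.03])  # C/C0, reactant

y = np.log(reactant / tracer)          # dilution cancels; only reaction remains
slope, intercept = np.polyfit(t, y, 1)
k = -slope                             # first-order rate coefficient (1/h)
print(f"estimated k = {k:.3f} 1/h")
```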

  8. Simplified theory of plastic zones based on Zarka's method

    CERN Document Server

    Hübel, Hartwig

    2017-01-01

    The present book provides a new method to estimate elastic-plastic strains via a series of linear elastic analyses. For the life prediction of structures subjected to variable loads, frequently encountered in mechanical and civil engineering, the cyclically accumulated deformation and the elastic-plastic strain ranges are required. The Simplified Theory of Plastic Zones (STPZ) is a direct method which provides estimates of these and all other mechanical quantities in the state of elastic and plastic shakedown. The STPZ is described in detail, with emphasis on making its possibilities and limitations accessible not only to scientists but also to engineers working in applied fields and to advanced students. Numerous illustrations and examples are provided to support the reader's understanding.

  9. Simplified method evaluation for piping elastic follow-up

    International Nuclear Information System (INIS)

    Severud, L.K.

    1983-05-01

    A proposed simplified method for evaluating elastic follow-up effects in high temperature pipelines is presented. The method was evaluated by comparing the simplified analysis results with those obtained from detailed inelastic solutions. Nine different pipelines typical of a nuclear breeder reactor power plant were analyzed; the simplified method is attractive because it appears to give fairly accurate and conservative results. It is easy to apply and inexpensive since it employs iterative elastic solutions for the pipeline coupled with the readily available isochronous stress-strain data provided in the ASME Code

  10. A Novel Interference Detection Method of STAP Based on Simplified TT Transform

    Directory of Open Access Journals (Sweden)

    Qiang Wang

    2017-01-01

    Training samples contaminated by target-like signals are one of the major causes of an inhomogeneous clutter environment. In such an environment, the clutter covariance matrix in STAP (space-time adaptive processing) is estimated inaccurately, which ultimately degrades detection performance. To address this problem, a STAP interference detection method based on the simplified TT (time-time) transform is proposed in this letter. Considering the sparse physical property of clutter in the space-time plane, the data from each range cell are first converted into a discrete slow-time series. Then, the expression of the simplified TT transform of the sample data is derived step by step. Thirdly, the energy of each training sample is focalized and extracted by the simplified TT transform from the energy difference between the unpolluted and polluted stages, and the physical significance of discarding the contaminated samples is analyzed. Lastly, the contaminated samples are picked out in light of the simplified TT transform-spectrum difference. Monte Carlo simulation results indicate that when training samples are contaminated by high-power target-like signals, the proposed method is more effective at rejecting the contaminated samples, significantly reduces the computational complexity, and improves target detection performance compared with the GIP (generalized inner product) method.

  11. 20 CFR 404.241 - 1977 simplified old-start method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false 1977 simplified old-start method. 404.241... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Old-Start Method of Computing Primary Insurance Amounts § 404.241 1977 simplified old-start method. (a) Who is qualified. To qualify for the old...

  12. Use of simplified methods for predicting natural resource damages

    International Nuclear Information System (INIS)

    Loreti, C.P.; Boehm, P.D.; Gundlach, E.R.; Healy, E.A.; Rosenstein, A.B.; Tsomides, H.J.; Turton, D.J.; Webber, H.M.

    1995-01-01

    To reduce transaction costs and save time, the US Department of the Interior (DOI) and the National Oceanic and Atmospheric Administration (NOAA) have developed simplified methods for assessing natural resource damages from oil and chemical spills. DOI has proposed the use of two computer models, the Natural Resource Damage Assessment Model for Great Lakes Environments (NRDAM/GLE) and a revised Natural Resource Damage Assessment Model for Coastal and Marine Environments (NRDAM/CME) for predicting monetary damages for spills of oils and chemicals into the Great Lakes and coastal and marine environments. NOAA has used versions of these models to create Compensation Formulas, which it has proposed for calculating natural resource damages for oil spills of up to 50,000 gallons anywhere in the US. Based on a review of the documentation supporting the methods, the results of hundreds of sample runs of DOI's models, and the outputs of the thousands of model runs used to create NOAA's Compensation Formulas, this presentation discusses the ability of these simplified assessment procedures to make realistic damage estimates. The limitations of these procedures are described, and the need for validating the assumptions used in predicting natural resource injuries is discussed

  13. Fundamental characteristics and simplified evaluation method of dynamic earth pressure

    International Nuclear Information System (INIS)

    Nukui, Y.; Inagaki, Y.; Ohmiya, Y.

    1989-01-01

    In Japan, a method is commonly used in the evaluation of dynamic earth pressure acting on the underground walls of a deeply embedded nuclear reactor building. However, since this method was developed on the basis of the limit state of soil supported by retaining walls, the behavior of dynamic earth pressure acting on the embedded part of a nuclear reactor building may differ from that estimated by this method. This paper examines the fundamental characteristics of dynamic earth pressure through dynamic soil-structure interaction analysis. A simplified method to evaluate dynamic earth pressure for the design of underground walls of a nuclear reactor building is described. Here, the dynamic earth pressure refers to the fluctuating earth pressure during an earthquake.

  14. A simplified model for the estimation of energy production of PV systems

    International Nuclear Information System (INIS)

    Aste, Niccolò; Del Pero, Claudio; Leonforte, Fabrizio; Manfren, Massimiliano

    2013-01-01

    The potential of solar energy is far higher than that of any other renewable source, although several limits exist. In particular, the fundamental factors that must be analyzed by investors and policy makers are the cost-effectiveness and the production of PV power plants, for decisions on investment schemes and energy policy strategies, respectively. Tools suitable for use even by non-specialists are therefore becoming increasingly important, and much research and development effort has been devoted to this goal in recent years. In this study, a simplified model for estimating annual PV production is presented; it provides results with a level of accuracy comparable to that of the more sophisticated simulation tools from which it derives its fundamental data. The main advantage of the presented model is that it can be used by virtually anyone, without requiring specific field expertise. The inherent limits of the model are related to its empirical basis, but the methodology presented can be effectively reproduced in the future with a different spectrum of data in order to assess, for example, the effect of technological evolution on the overall performance of PV power generation, or to establish performance benchmarks for a much larger variety of PV plants and technologies. - Highlights: • We analyzed the main methods for estimating the electricity production of photovoltaic systems. • We simulated the same system with two different software packages in different European locations and estimated the electricity production. • We studied the main losses of a PV plant. • We provide a simplified model to estimate the electrical production of any well-designed PV system. • We validated the data obtained with the proposed model against experimental data from three PV systems
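
    A generic simplified annual-yield estimate of the kind this family of tools builds on is sketched below; it is a common rule-of-thumb form (nominal power × plane-of-array insolation × performance ratio), not the specific empirical model proposed in the paper, and the numbers are illustrative.

```python
# A generic simplified annual-yield estimate of the kind discussed above:
# E = P_nom * (H_poa / G_stc) * PR. This is a common rule-of-thumb form, not
# the specific empirical model proposed in the paper; numbers are illustrative.
def annual_pv_energy(p_nom_kw, h_poa_kwh_m2, performance_ratio):
    """Annual AC energy (kWh) from nominal DC power (kW), yearly plane-of-array
    insolation (kWh/m^2) and an overall performance ratio (-)."""
    g_stc = 1.0  # kW/m^2, standard test condition irradiance
    return p_nom_kw * (h_poa_kwh_m2 / g_stc) * performance_ratio

# Example: 5 kWp system, 1500 kWh/m^2/year on the array plane, PR = 0.78.
print(f"{annual_pv_energy(5.0, 1500.0, 0.78):.0f} kWh/year")  # about 5850 kWh
```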

  15. Application of a Simplified Method for Estimating Perfusion Derived from Diffusion-Weighted MR Imaging in Glioma Grading.

    Science.gov (United States)

    Cao, Mengqiu; Suo, Shiteng; Han, Xu; Jin, Ke; Sun, Yawen; Wang, Yao; Ding, Weina; Qu, Jianxun; Zhang, Xiaohua; Zhou, Yan

    2017-01-01

    Purpose: To evaluate the feasibility of a simplified method based on diffusion-weighted imaging (DWI) acquired with three b-values to measure tissue perfusion linked to microcirculation, to validate it against perfusion-related parameters derived from intravoxel incoherent motion (IVIM) and dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging, and to investigate its utility to differentiate low- from high-grade gliomas. Materials and Methods: The prospective study was approved by the local institutional review board and written informed consent was obtained from all patients. Between May 2016 and May 2017, 50 patients confirmed with glioma were assessed with multi-b-value DWI and DCE MR imaging at 3.0 T. Besides the conventional apparent diffusion coefficient (ADC0,1000) map, perfusion-related parametric maps for IVIM-derived perfusion fraction (f) and pseudodiffusion coefficient (D*), DCE MR imaging-derived pharmacokinetic metrics, including Ktrans, ve and vp, as well as a metric named simplified perfusion fraction (SPF), were generated. Correlation between perfusion-related parameters was analyzed by using the Spearman rank correlation. All imaging parameters were compared between the low-grade (n = 19) and high-grade (n = 31) groups by using the Mann-Whitney U test. The diagnostic performance for tumor grading was evaluated with receiver operating characteristic (ROC) analysis. Results: SPF showed strong correlation with IVIM-derived f and D* (ρ = 0.732 and 0.716, respectively; both P < 0.001). Compared with f, SPF was more correlated with DCE MR imaging-derived Ktrans (ρ = 0.607; P < 0.001) and vp (ρ = 0.397; P = 0.004). Among all parameters, SPF achieved the highest accuracy for differentiating low- from high-grade gliomas, with an area under the ROC curve of 0.942, which was significantly higher than that of ADC0,1000 (P = 0.004). Conclusion: The simplified method to measure tissue perfusion based on DWI by using three b-values may be helpful to differentiate low- from high-grade gliomas. SPF may serve as a valuable alternative to measure tumor perfusion in gliomas in a noninvasive, convenient and efficient way.

  16. The large break LOCA evaluation method with the simplified statistic approach

    International Nuclear Information System (INIS)

    Kamata, Shinya; Kubo, Kazuo

    2004-01-01

    In 1989, the USNRC published the Code Scaling, Applicability and Uncertainty (CSAU) evaluation methodology for large break LOCA, which supported the revised rule for Emergency Core Cooling System performance. USNRC Regulatory Guide 1.157 requires that the peak cladding temperature (PCT) not exceed 2200°F with high probability at the 95th percentile. In recent years, organizations overseas have developed, on the basis of the CSAU methodology, statistical methodologies and best estimate codes with models that provide more realistic simulation of the phenomena. To calculate the PCT probability distribution by Monte Carlo trials, approaches such as the response surface technique using polynomials and the order statistics method are available. For the purpose of performing rational statistical analysis, Mitsubishi Heavy Industries, Ltd. (MHI) developed a statistical LOCA method using the best estimate LOCA code MCOBRA/TRAC and the simplified code HOTSPOT. HOTSPOT is a Monte Carlo heat conduction solver used to evaluate the uncertainties of the significant fuel parameters at the PCT positions of the hot rod. Direct uncertainty sensitivity studies can be performed without a response surface because the Monte Carlo simulation of the key parameters can be performed in a short time using HOTSPOT. With regard to the parameter uncertainties, MHI established a treatment in which bounding conditions are assigned to the LOCA boundary and plant initial conditions, while the Monte Carlo simulation with HOTSPOT is applied to the significant fuel parameters. The paper describes the large break LOCA evaluation method with the simplified statistical approach and the results of applying the method to a representative four-loop nuclear power plant. (author)
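
    To illustrate only the statistical step described above (not the actual MCOBRA/TRAC plus HOTSPOT calculation), the sketch below propagates assumed fuel-parameter uncertainties through a made-up surrogate PCT response and reads off the empirical 95th percentile; an order-statistics (Wilks-type) treatment would instead bound that percentile with a small, fixed number of code runs.

```python
# Illustration of the statistical step only: propagate assumed fuel-parameter
# uncertainties through a *hypothetical* surrogate PCT response and read off
# the empirical 95th percentile. The real evaluation uses MCOBRA/TRAC + HOTSPOT.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical uncertain inputs (all made up for illustration):
gap_conductance = rng.normal(1.00, 0.10, n)    # relative gap conductance
fuel_conductivity = rng.normal(1.00, 0.08, n)  # relative fuel conductivity
peaking_factor = rng.normal(1.00, 0.05, n)     # local power peaking multiplier

def surrogate_pct(gap, cond, peak):
    """Made-up monotone response standing in for the hot-spot solver."""
    return 1100.0 + 1000.0 * (peak - 1.0) + 80.0 / gap + 60.0 / cond

pct = surrogate_pct(gap_conductance, fuel_conductivity, peaking_factor)
print(f"95th-percentile PCT estimate: {np.percentile(pct, 95):.1f} degC")
```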

  17. Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel Part II: Plate bending test and proposal of a simplified evaluation method

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Masanori, E-mail: ando.masanori@jaea.go.jp; Takaya, Shigeru, E-mail: takaya.shigeru@jaea.go.jp

    2016-12-15

    Highlights: • Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel is proposed. • A simplified evaluation method is also proposed for the codification. • Both proposed evaluation methods were validated by the plate bending test. • For codification, the local stress and strain behavior was analyzed. - Abstract: In the present study, to develop an evaluation procedure and design rules for Mod.9Cr-1Mo steel weld joints, a method for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints was proposed based on finite element analysis (FEA) and a series of cyclic plate bending tests of longitudinal and horizontal seamed plates. The strain concentration and redistribution behaviors were evaluated and the failure cycles were estimated using FEA by considering the test conditions and metallurgical discontinuities in the weld joints. Inelastic FEA models consisting of the base metal, heat-affected zone and weld metal were employed to estimate the elastic follow-up behavior caused by the metallurgical discontinuities. The elastic follow-up factors, determined by comparing the elastic and inelastic FEA results, were found to be less than 1.5. Based on the estimated elastic follow-up factors obtained via inelastic FEA, a simplified technique using elastic FEA was proposed for evaluating the creep-fatigue life in Mod.9Cr-1Mo steel weld joints. The creep-fatigue life obtained using the plate bending test was compared with those estimated from the results of inelastic FEA and by the simplified evaluation method.

  18. Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel Part II: Plate bending test and proposal of a simplified evaluation method

    International Nuclear Information System (INIS)

    Ando, Masanori; Takaya, Shigeru

    2016-01-01

    Highlights: • Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel is proposed. • A simplified evaluation method is also proposed for the codification. • Both proposed evaluation methods were validated by the plate bending test. • For codification, the local stress and strain behavior was analyzed. - Abstract: In the present study, to develop an evaluation procedure and design rules for Mod.9Cr-1Mo steel weld joints, a method for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints was proposed based on finite element analysis (FEA) and a series of cyclic plate bending tests of longitudinal and horizontal seamed plates. The strain concentration and redistribution behaviors were evaluated and the failure cycles were estimated using FEA by considering the test conditions and metallurgical discontinuities in the weld joints. Inelastic FEA models consisting of the base metal, heat-affected zone and weld metal were employed to estimate the elastic follow-up behavior caused by the metallurgical discontinuities. The elastic follow-up factors, determined by comparing the elastic and inelastic FEA results, were found to be less than 1.5. Based on the estimated elastic follow-up factors obtained via inelastic FEA, a simplified technique using elastic FEA was proposed for evaluating the creep-fatigue life in Mod.9Cr-1Mo steel weld joints. The creep-fatigue life obtained using the plate bending test was compared with those estimated from the results of inelastic FEA and by the simplified evaluation method.

  19. Simplified approach for estimating large early release frequency

    International Nuclear Information System (INIS)

    Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.

    1998-04-01

    The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB

  20. A simplified multisupport response spectrum method

    Science.gov (United States)

    Ye, Jihong; Zhang, Zhiqiang; Liu, Xianming

    2012-03-01

    A simplified multisupport response spectrum method is presented. For a structure with a first natural period of less than 2 s, the structural response is expressed as the sum of two components: the first is the pseudostatic response caused by the inconsistent motions of the structural supports, and the second is the structural dynamic response to the ground motion accelerations. This method is formally consistent with the classical response spectrum method, and the effects of multisupport excitation are considered for any modal response spectrum or modal superposition. If the seismic inputs at each support are the same, the support displacements caused by the pseudostatic response become rigid body displacements, and the response spectrum for multisupport excitation then reduces to that for uniform excitation. In other words, this multisupport response spectrum method is a modification and extension of the existing response spectrum method under uniform excitation. Moreover, most of the coherency coefficients in this formulation are simplified by approximating the ground motion excitation as white noise. The results indicate that this simplification can reduce the calculation time while maintaining accuracy. Furthermore, the internal forces obtained by the multisupport response spectrum method are compared with those produced by the traditional response spectrum method in two case studies of existing long-span structures. Because the effects of inconsistent support displacements are not considered in the traditional response spectrum method, the values of internal forces near the supports are underestimated. These regions are important potential failure points and deserve special attention in the seismic design of reticulated structures.
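
    The decomposition described above is usually written as follows (standard multisupport-excitation notation assumed here, not necessarily the authors' symbols), with u_b the prescribed support displacements and R the pseudostatic influence matrix:

```latex
% Standard multisupport-excitation decomposition (notation assumed here):
% u = superstructure DOFs, u_b = prescribed support displacements,
% K_ss, K_sb = stiffness partitions, R = pseudostatic influence matrix.
\[
  u(t) = u^{s}(t) + u^{d}(t), \qquad
  u^{s}(t) = R\,u_b(t), \qquad R = -K_{ss}^{-1} K_{sb},
\]
\[
  M_{ss}\,\ddot{u}^{d} + C_{ss}\,\dot{u}^{d} + K_{ss}\,u^{d}
    \;\simeq\; -M_{ss}\,R\,\ddot{u}_b(t)
\]
```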

  1. Study on simplified estimation of J-integral under thermal loading

    International Nuclear Information System (INIS)

    Takahashi, Y.

    1993-01-01

    For assessing the structural integrity or safety of nuclear power plants, the strength of structures in the presence of flaws sometimes needs to be evaluated. Because relatively large inelastic deformation is anticipated in liquid metal reactor components even without flaws, owing to the high operating temperature and large temperature gradients, inelastic effects should be properly taken into account in flaw assessment procedures. It is widely recognized that the J-integral and its variations - e.g. the fatigue J-integral range and the creep J-integral - play substantial roles in flaw assessment in the presence of large inelastic deformation. Therefore their use has been promoted in recent flaw assessment procedures for both low and high temperature plants. However, it is not very practical to conduct a detailed numerical computation for cracked structures to estimate the values of these parameters for the purpose of tracing the crack growth history. Thus the development of simplified estimation methods which do not require full numerical calculation for cracked structures is desirable. A method using normalized J-integral solutions tabulated in handbooks is a direct extension of its linear fracture mechanics counterpart and can be used for standard specimens and simple structural configurations subjected to specified loading types. The reference stress method has also been developed, but in this case limit load solutions, which are often difficult to obtain for general stress distributions, are necessary instead of nonlinear J-integral solutions. However, both methods have been developed mainly for mechanical loading, and applying these techniques to thermal stress problems is rather difficult except in cases where the thermal stress can be properly substituted by an equivalent mechanical loading, as with simple thermal expansion loading. Therefore an alternative approach should be pursued for estimating the J-integral and its variations in thermal stress problems.

  2. Simplified dose calculation method for mantle technique

    International Nuclear Information System (INIS)

    Scaff, L.A.M.

    1984-01-01

    A simplified dose calculation method for the mantle technique is described. In the routine treatment of lymphomas using this technique, the daily doses at the midpoints of five anatomical regions are different because the thicknesses are not equal. (Author) [pt

  3. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    Science.gov (United States)

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Given the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and an unordered multi-class logistic regression model. In a case study of Longchuan County, Guangdong Province, the learning sample size was determined for a given confidence level and estimation accuracy, a c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated-soil quality grade evaluation database of the study area, an unordered logistic classifier model was then built, and the calculation and analysis steps of intelligent soil quality grade classification were given. The results indicated that soil quality grade can be effectively learned and predicted from the extracted simplified dataset with this method, offering an alternative to the traditional approach to soil quality grade evaluation. © 2011 IEEE.
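
    A minimal sketch of such a pipeline is given below, assuming numeric soil attributes and integer grade labels; k-means stands in for the c-means step and scikit-learn's multinomial logistic regression plays the role of the unordered multi-class logistic model. Data, feature names and cluster counts are hypothetical, not taken from the Longchuan County database.

```python
# Sketch of the pipeline described above, assuming numeric soil attributes and
# integer quality grades. KMeans stands in for the c-means step, and
# scikit-learn's multinomial LogisticRegression plays the role of the unordered
# multi-class logistic model. Data and feature names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))  # e.g. pH, organic matter, N, P, K, depth
# Synthetic grade labels (1-4) loosely tied to the first two attributes.
grade = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=[-1.0, 0.0, 1.0]) + 1

# Step 1: extract a simplified learning sample set, one representative record
# nearest to each cluster centroid.
km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(X)
rep_idx = np.array([np.argmin(np.linalg.norm(X - c, axis=1))
                    for c in km.cluster_centers_])
X_small, y_small = X[rep_idx], grade[rep_idx]

# Step 2: fit the unordered multi-class logistic classifier on the reduced set.
clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
clf.fit(X_small, y_small)

# Step 3: predict soil quality grades for the full database.
print("accuracy on full set:", clf.score(X, grade))
```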

  4. Simplified propagation of standard uncertainties

    International Nuclear Information System (INIS)

    Shull, A.H.

    1997-01-01

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if a standard is adequate for its intended use, or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined, or is determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. The calculations are simplified by dividing the uncertainty components into subgroups of absolute and relative uncertainties. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper.
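
    The general idea of the shortcut, grouping components into absolute and relative standard uncertainties and combining each group in quadrature, can be sketched as follows (a generic illustration under that assumption, not the paper's exact worked procedure):

```python
# Illustration of the kind of spreadsheet-friendly shortcut described above:
# absolute standard uncertainties add in quadrature, relative ones add in
# quadrature, and the two groups are then combined. This is a generic sketch of
# the idea, not the paper's exact worked procedure.
import math

def combined_uncertainty(value, absolute_components, relative_components):
    """Combine absolute (same units as value) and relative (fractional)
    standard-uncertainty components in quadrature."""
    u_abs = math.sqrt(sum(u ** 2 for u in absolute_components))
    u_rel = math.sqrt(sum(r ** 2 for r in relative_components))
    return math.sqrt(u_abs ** 2 + (value * u_rel) ** 2)

# Example: a 10.00 g prepared standard with a 0.002 g weighing uncertainty
# (absolute) plus purity and volumetric uncertainties of 0.1 % and 0.05 %.
u = combined_uncertainty(10.00, [0.002], [0.001, 0.0005])
print(f"combined standard uncertainty: {u:.4f} g")
```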

  5. Development of simplified decommissioning cost estimation code for nuclear facilities

    International Nuclear Information System (INIS)

    Tachibana, Mitsuo; Shiraishi, Kunio; Ishigami, Tsutomu

    2010-01-01

    The simplified decommissioning cost estimation code for nuclear facilities (DECOST code) was developed taking into account the features and structures of nuclear facilities and the similarity of dismantling methods. The DECOST code calculates eight evaluation items of decommissioning cost. Actual dismantling projects at the Japan Atomic Energy Agency (JAEA) were analyzed to evaluate the unit conversion factors used to calculate the manpower of dismantling activities; consequently, the unit conversion factors of general components could be classified into three kinds. Because the weights of components and of the building structures are needed for the manpower calculation, methods for evaluating these weights were also studied. It was found that the weight of components in a facility is proportional to the weight of its building structures, which in turn is proportional to the total floor area of the facility. The decommissioning costs of seven nuclear facilities in the JAEA were calculated using the DECOST code. To verify the calculated results, the calculated manpower was compared with the manpower recorded in actual dismantling; the two were almost equal. The outline of the DECOST code, the evaluation results for the unit conversion factors, and the evaluation method for the weights of components and structures of the facility are described in this report. (author)

  6. Simple design of slanted grating with simplified modal method.

    Science.gov (United States)

    Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun

    2014-02-15

    A simplified modal method (SMM) is presented that offers a clear physical picture of subwavelength slanted gratings. The diffraction characteristic of the slanted grating under the Littrow configuration is revealed by the SMM as an equivalent rectangular grating, which is in good agreement with rigorous coupled-wave analysis. Based on this equivalence, we obtained an effective analytic solution for simplifying the design and optimization of a slanted grating. It offers a new approach for the design of slanted gratings; e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.

  7. Simplified MPN method for enumeration of soil naphthalene degraders using gaseous substrate.

    Science.gov (United States)

    Wallenius, Kaisa; Lappi, Kaisa; Mikkonen, Anu; Wickström, Annika; Vaalama, Anu; Lehtinen, Taru; Suominen, Leena

    2012-02-01

    We describe a simplified microplate most-probable-number (MPN) procedure to quantify the bacterial naphthalene degrader population in soil samples. In this method, the sole substrate naphthalene is dosed passively via the gaseous phase to the liquid medium, and the detection of growth is based on automated measurement of turbidity using an absorbance reader. The performance of the new method was evaluated by comparison with a recently introduced method in which the substrate is dissolved in inert silicone oil and added individually to each well, and the results are scored visually using a respiration indicator dye. Oil-contaminated industrial soil showed a slightly but significantly higher MPN estimate with our method than with the reference method. This suggests that gaseous naphthalene was dissolved at an adequate concentration to support the growth of naphthalene degraders without being too toxic. Dosing the substrate via the gaseous phase notably reduced the workload and the risk of contamination. The result scoring by absorbance measurement was objective and more reliable than measurement with an indicator dye, and it also enabled further analysis of the cultures. Several bacterial genera were identified by cloning and sequencing of 16S rRNA genes from the MPN wells incubated in the presence of gaseous naphthalene. In addition, the applicability of the simplified MPN method was demonstrated by a significant positive correlation between the level of oil contamination and the number of naphthalene degraders detected in soil.

  8. A simplified method of estimating noise power spectra

    International Nuclear Information System (INIS)

    Hanson, K.M.

    1998-01-01

    A technique to estimate the radial dependence of the noise power spectrum of images is proposed in which the calculations are conducted solely in the spatial domain of the noise image. The noise power spectrum averaged over a radial spatial-frequency interval is obtained from the variance of a noise image that has been convolved with a small kernel that approximates a Laplacian operator. Recursive consolidation of the image by factors of two in each dimension yields estimates of the noise power spectrum over the full range of spatial frequencies.
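
    A sketch of the procedure as described is given below; the normalization convention (dividing the filtered-image variance by the kernel's total spectral weight and scaling by the pixel area) is my assumption, and the test image is synthetic white noise.

```python
# Sketch of the procedure described above (normalization conventions are an
# assumption): convolve a noise image with a small Laplacian-like kernel, take
# the variance of the result as a band-averaged NPS estimate for the high
# radial frequencies, then consolidate the image by factors of two and repeat
# for successively lower frequency bands.
import numpy as np
from scipy.ndimage import convolve

laplacian = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)

def band_nps_estimates(noise, levels=3, pixel_size=1.0):
    """Return one band-averaged NPS estimate per consolidation level."""
    estimates = []
    img, px = noise.astype(float), pixel_size
    for _ in range(levels):
        filtered = convolve(img, laplacian, mode="reflect")
        # Var(filtered) = integral of NPS * |L(f)|^2; dividing by the kernel's
        # total spectral weight (sum of squared coefficients) gives a
        # band-averaged NPS value, scaled to area units by the pixel size.
        estimates.append(filtered.var() / (laplacian ** 2).sum() * px ** 2)
        # Consolidate by 2x2 averaging; the pixel size doubles.
        img = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                      + img[0::2, 1::2] + img[1::2, 1::2])
        px *= 2.0
    return estimates

rng = np.random.default_rng(1)
white_noise = rng.normal(0.0, 1.0, (512, 512))
print(band_nps_estimates(white_noise))  # roughly 1.0 in every band
```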

  9. Estimation of optimum time interval for neutron- γ discrimination by simplified digital charge collection method

    International Nuclear Information System (INIS)

    Singh, Harleen; Singh, Sarabjeet

    2014-01-01

    The discrimination of mixed radiation fields is of prime importance owing to its application in neutron detection, which supports radiation safety, nuclear material detection, etc. Liquid scintillators are among the most important radiation detectors for this purpose because neutron-induced pulses decay more slowly than gamma-induced pulses in these detectors. Techniques such as zero crossing and charge comparison are very popular and are implemented using analogue electronics. In recent years, owing to the availability of fast ADCs and FPGAs, digital methods for discriminating mixed-field radiation have been investigated. Some of the digital time-domain techniques developed are pulse gradient analysis (PGA), the simplified digital charge collection (SDCC) method, and the digital zero crossing method. The performance of these methods depends on the appropriate selection of the gate time over which the pulse is processed. In this paper, the SDCC method is investigated for a neutron-gamma mixed field. The main focus of the study is to determine the optimum gate time, which is very important for neutron-gamma discrimination analysis in a mixed radiation field. A comparison with the charge collection (CC) method is also investigated.
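
    The charge-integration ingredient that such digital methods share can be sketched as follows with entirely made-up pulse shapes and gate values (a generic tail-to-total charge ratio, not the exact SDCC formulation); the optimum gate time would be found by scanning the total-gate length and checking the separation between the two populations.

```python
# Generic digital charge-integration sketch of the kind the SDCC method builds
# on: integrate each digitized pulse over a short fast gate and a longer total
# gate, and use the tail-to-total charge ratio as the discrimination parameter.
# Pulse shapes, decay constants and gate values below are made up.
import numpy as np

dt = 2.0  # ns per sample
t = np.arange(0.0, 400.0, dt)

def pulse(fast_frac, tau_fast=7.0, tau_slow=60.0, noise=0.01, rng=None):
    """Toy scintillation pulse: two exponentials plus white noise."""
    rng = rng if rng is not None else np.random.default_rng()
    p = fast_frac * np.exp(-t / tau_fast) + (1.0 - fast_frac) * np.exp(-t / tau_slow)
    return p + rng.normal(0.0, noise, t.size)

def tail_total_ratio(p, gate_total_ns, gate_fast_ns=24.0):
    """Charge after the fast gate divided by total charge within the total gate."""
    i_fast, i_total = int(gate_fast_ns / dt), int(gate_total_ns / dt)
    return p[i_fast:i_total].sum() / p[:i_total].sum()

rng = np.random.default_rng(3)
gammas = [tail_total_ratio(pulse(0.95, rng=rng), 200.0) for _ in range(500)]
neutrons = [tail_total_ratio(pulse(0.80, rng=rng), 200.0) for _ in range(500)]
print(f"gamma mean ratio:   {np.mean(gammas):.3f}")    # smaller tail fraction
print(f"neutron mean ratio: {np.mean(neutrons):.3f}")  # larger tail fraction
```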

  10. Application of a Simplified Method for Estimating Perfusion Derived from Diffusion-Weighted MR Imaging in Glioma Grading

    Directory of Open Access Journals (Sweden)

    Mengqiu Cao

    2018-01-01

    Purpose: To evaluate the feasibility of a simplified method based on diffusion-weighted imaging (DWI) acquired with three b-values to measure tissue perfusion linked to microcirculation, to validate it against perfusion-related parameters derived from intravoxel incoherent motion (IVIM) and dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging, and to investigate its utility to differentiate low- from high-grade gliomas. Materials and Methods: The prospective study was approved by the local institutional review board and written informed consent was obtained from all patients. Between May 2016 and May 2017, 50 patients confirmed with glioma were assessed with multi-b-value DWI and DCE MR imaging at 3.0 T. Besides the conventional apparent diffusion coefficient (ADC0,1000) map, perfusion-related parametric maps for IVIM-derived perfusion fraction (f) and pseudodiffusion coefficient (D*), DCE MR imaging-derived pharmacokinetic metrics, including Ktrans, ve and vp, as well as a metric named simplified perfusion fraction (SPF), were generated. Correlation between perfusion-related parameters was analyzed by using the Spearman rank correlation. All imaging parameters were compared between the low-grade (n = 19) and high-grade (n = 31) groups by using the Mann-Whitney U test. The diagnostic performance for tumor grading was evaluated with receiver operating characteristic (ROC) analysis. Results: SPF showed strong correlation with IVIM-derived f and D* (ρ = 0.732 and 0.716, respectively; both P < 0.001). Compared with f, SPF was more correlated with DCE MR imaging-derived Ktrans (ρ = 0.607; P < 0.001) and vp (ρ = 0.397; P = 0.004). Among all parameters, SPF achieved the highest accuracy for differentiating low- from high-grade gliomas, with an area under the ROC curve value of 0.942, which was significantly higher than that of ADC0,1000 (P = 0.004). By using SPF as a discriminative index, the diagnostic sensitivity and specificity were 87.1% and 94

  11. Simplified Model for the Hybrid Method to Design Stabilising Piles Placed at the Toe of Slopes

    Directory of Open Access Journals (Sweden)

    Dib M.

    2018-01-01

    Stabilizing precarious slopes by installing piles has become a widespread technique for landslide prevention. The design of slope-stabilizing piles by the finite element method is more accurate than the conventional methods. This accuracy stems from the method's ability to simulate complex configurations and to analyze the soil-pile interaction effect. However, engineers often prefer simplified analytical techniques for designing slope-stabilizing piles because of the high computational resources required by the finite element method. Aiming to combine the accuracy of the finite element method with the simplicity of the analytical approaches, a hybrid methodology to design slope-stabilizing piles was proposed in 2012. It consists of two steps: (1) an analytical estimation of the resisting force needed to stabilize the precarious slope, and (2) a numerical analysis to define an adequate pile configuration that provides the required resisting force. The hybrid method is applicable only to the analysis and design of stabilizing piles placed in the middle of the slope; however, in certain cases, such as road construction, piles need to be placed at the toe of the slope. Therefore, in this paper a simplified model for the hybrid method is developed to analyze and design stabilizing piles placed at the toe of a precarious slope. The simplified model is validated by a comparative analysis with the fully coupled finite element model.

  12. Seismic analysis of long tunnels: A review of simplified and unified methods

    Directory of Open Access Journals (Sweden)

    Haitao Yu

    2017-06-01

    Seismic analysis of long tunnels is important for safety evaluation of the tunnel structure during earthquakes. Simplified models of long tunnels are commonly adopted in seismic design by practitioners, in which the tunnel is usually assumed to be a beam supported by the ground. These models can be conveniently used to obtain the overall response of the tunnel structure subjected to seismic loading. However, simplified methods are limited by the assumptions that must be made to reach a solution; e.g. shield tunnels are assembled from segments and bolts to form a lining ring, and such structural details may not be included in the simplified model. In most cases, the design will require a numerical method that does not have the shortcomings of the analytical solutions, as it can consider the structural details, non-linear behavior, etc. Furthermore, long tunnels have significant length and pass through different strata. All of this would require large-scale seismic analysis of long tunnels with three-dimensional models, which is difficult due to the limits of available computing power. This paper introduces two types of methods for seismic analysis of long tunnels, namely simplified and unified methods. Several models, including the mass-spring-beam model and the beam-spring model with its analytical solution, are presented as examples of the simplified method. The unified method is based on a multiscale framework for long tunnels, with coarse and refined finite element meshes, or with the discrete element method and the finite difference method, to compute the overall seismic response of the tunnel while including detailed dynamic response at positions of potential damage or of interest. A bridging scale term is introduced in the framework so that compatibility of dynamic behavior between the macro- and meso-scale subdomains is enforced. Examples are presented to demonstrate the applicability of the simplified and the unified methods.

  13. Implementation of a Simplified State Estimator for Wind Turbine Monitoring on an Embedded System

    DEFF Research Database (Denmark)

    Rasmussen, Theis Bo; Yang, Guangya; Nielsen, Arne Hejde

    2017-01-01

    The transition towards a cyber-physical energy system (CPES) entails an increased dependency on valid data. Simultaneously, an increasing implementation of renewable generation leads to possible control actions at individual distributed energy resources (DERs). A state estimation covering the whole system, including individual DERs, is time consuming and numerically challenging. This paper presents the approach and results of implementing a simplified state estimator onto an embedded system for improving DER monitoring. The implemented state estimator is based on numerically robust orthogonal...
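
    As background to the "numerically robust orthogonal" formulation mentioned above, the sketch below shows a generic weighted least-squares state-estimation step solved through a QR factorization rather than through the normal-equation gain matrix; the measurement model is a tiny invented linear(ized) system, not the project's distribution-grid model.

```python
# Generic weighted least-squares (WLS) state-estimation step solved via an
# orthogonal (QR) factorization, the kind of numerically robust formulation the
# abstract alludes to. The measurement model here is a tiny made-up linearized
# system, not the actual distribution-grid model.
import numpy as np

# Measurement model z = H x + e, with weights W = diag(1 / sigma_i^2).
H = np.array([[1.0, 0.0],
              [1.0, -1.0],
              [0.0, 1.0],
              [2.0, 1.0]])
z = np.array([1.02, 0.05, 0.97, 3.00])
sigma = np.array([0.01, 0.02, 0.01, 0.05])

# Scale rows by 1/sigma, then solve min ||W^(1/2)(z - Hx)|| via QR instead of
# forming the (potentially ill-conditioned) gain matrix H^T W H explicitly.
Hw = H / sigma[:, None]
zw = z / sigma
Q, R = np.linalg.qr(Hw)
x_hat = np.linalg.solve(R, Q.T @ zw)
print("estimated state:", x_hat)
```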

  14. Rules of thumb and simplified methods

    International Nuclear Information System (INIS)

    Lahti, G.P.

    1985-01-01

    The author points out the value of a thorough grounding in fundamental physics combined with experience of applied practice when using simplified methods and rules of thumb in shield engineering. Present-day quality assurance procedures and good engineering practices require careful documentation of all calculations. The aforementioned knowledge of rules of thumb and back-of-the-envelope calculations can assure both the preparer and the reviewer that the results in the quality assurance documentation are the physically correct ones

  15. Estimation of chromium-51 ethylene diamine tetra-acetic acid plasma clearance: A comparative assessment of simplified techniques

    International Nuclear Information System (INIS)

    Picciotto, G.; Cacace, G.; Mosso, R.; De Filippi, P.G.; Cesana, P.; Ropolo, R.

    1992-01-01

    Chromium-51 ethylene diamine tetra-acetic acid (51Cr-EDTA) total plasma clearance was evaluated using a multi-sample method (12 blood samples) as the reference and compared with several simplified methods that require only one or a few blood samples. The following five methods were evaluated: the terminal slope-intercept method with three blood samples, the simplified method of Broechner-Mortensen, and three single-sample methods (Constable, Christensen and Groth, Tauxe). Linear regression analysis was performed, and the standard error of estimate, bias and imprecision of the different methods were evaluated. For 51Cr-EDTA total plasma clearance greater than 30 ml/min, the results closest to the reference were obtained with the Christensen and Groth method at a sampling time of 300 min (inaccuracy of 4.9%). For clearances between 10 and 30 ml/min, the single-sample methods failed to give reliable results; the terminal slope-intercept and Broechner-Mortensen methods performed better, with inaccuracies of 17.7% and 16.9%, respectively. Although sampling times of 180, 240 and 300 min are time-consuming for patients, 51Cr-EDTA total plasma clearance can be accurately calculated for values greater than 10 ml/min using the Broechner-Mortensen method. In patients with clearance greater than 30 ml/min, single-sample techniques provide a good alternative to the multi-sample method; the choice of method depends on the degree of accuracy required. (orig.)
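
    For reference, the slope-intercept clearance and the Broechner-Mortensen-type correction used by such simplified methods are commonly written as below (the quadratic coefficients shown are the widely quoted adult values and may differ from those used in this study); Q is the injected activity and C(t) = C_0 e^{-λt} is the fitted terminal mono-exponential of the late plasma samples:

```latex
% One-compartment slope-intercept clearance from the terminal mono-exponential
% fit C(t) = C_0 exp(-lambda t) of the late plasma samples (Q = injected
% activity), followed by a Broechner-Mortensen-type quadratic correction for
% the missing fast exponential. The coefficients are the commonly quoted adult
% values and may differ from those used in this study.
\[
  Cl_1 = \frac{Q\,\lambda}{C_0}, \qquad
  Cl \;\approx\; 0.990778\,Cl_1 - 0.001218\,Cl_1^{2}
\]
```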

  16. Simplified Methodology to Estimate the Maximum Liquid Helium (LHe) Cryostat Pressure from a Vacuum Jacket Failure

    Science.gov (United States)

    Ungar, Eugene K.; Richards, W. Lance

    2015-01-01

    The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact

  17. Update and Improve Subsection NH - Alternative Simplified Creep-Fatigue Design Methods

    International Nuclear Information System (INIS)

    Asayama, Tai

    2009-01-01

    This report describes the results of the investigation of Task 10 of the DOE/ASME Materials NGNP/Generation IV Project, based on a contract between ASME Standards Technology, LLC (ASME ST-LLC) and the Japan Atomic Energy Agency (JAEA). Task 10 is to Update and Improve Subsection NH -- Alternative Simplified Creep-Fatigue Design Methods. Five newly proposed, promising creep-fatigue evaluation methods were investigated: (1) the modified ductility exhaustion method, (2) the strain range separation method, (3) the approach for pressure vessel application, (4) the hybrid method of time fraction and ductility exhaustion, and (5) the simplified model test approach. The outlines of these methods are presented first, and their ability to predict experimental results is demonstrated using the creep-fatigue data collected in previous Tasks 3 and 5. All the methods (except the simplified model test approach, which is not ready for application) predicted the experimental results fairly accurately. On the other hand, the creep-fatigue lives predicted in long-term regions showed considerable differences among the methodologies. These differences come from the concepts each method is based on. All the new methods investigated in this report have advantages over the currently employed time fraction rule and offer technical insights that deserve serious consideration in the improvement of creep-fatigue evaluation procedures. The main points of the modified ductility exhaustion method, the strain range separation method, the approach for pressure vessel application and the hybrid method can be reflected in the improvement of the current time fraction rule. The simplified model test approach would offer entirely new advantages, including robustness and simplicity, which are definitely attractive, but this approach has yet to be validated for implementation. Therefore, this report recommends the following two steps as a course of improvement of NH based on newly proposed creep-fatigue evaluation
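
    For orientation, the two damage-accumulation concepts compared in the report are commonly written in the generic forms below (not the specific modified equations proposed in Task 10): fatigue damage by cycle fractions, and creep damage either by the time-fraction rule or by ductility exhaustion, with the sum limited by an allowable damage envelope.

```latex
% Generic forms of the damage-accumulation concepts compared in the report
% (not the specific modified equations proposed in Task 10):
\[
  D_f = \sum_j \frac{n_j}{N_{f,j}}, \qquad
  D_c^{\mathrm{TF}} = \int \frac{\mathrm{d}t}{t_R(\sigma, T)}, \qquad
  D_c^{\mathrm{DE}} = \int \frac{\dot{\varepsilon}_c}
                               {\varepsilon_f(\dot{\varepsilon}_c, T)}\,\mathrm{d}t,
  \qquad D_f + D_c \le D_{\mathrm{allow}}
\]
```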

  18. Application of the simplified J-estimation scheme Aramis to mismatching welds in CCP

    International Nuclear Information System (INIS)

    Eripret, C.; Franco, C.; Gilles, P.

    1995-01-01

    The J-based criteria give reasonable predictions of the failure behaviour of ductile cracked metallic structures, even if the material characterization may be sensitive to the size of the specimens. However, in cracked welds this phenomenon could be enhanced by stress triaxiality effects. Furthermore, the application of conventional methods of toughness measurement (ESIS or ASTM standards) has evidenced a strong influence of the proportion of weld metal in the specimen. Several authors have shown the inadequacy of the simplified J-estimation methods developed for homogeneous materials. These heterogeneity effects are mainly related to the mismatch ratio (ratio of weld metal yield strength to base metal yield strength) as well as to the geometrical parameter h/(W-a) (weld width over ligament size). In order to make decisive progress in this field, the Atomic Energy Commission (CEA), the PWR manufacturer FRAMATOME, and the French utility (EDF) have launched a large research program on the behaviour of cracked piping welds. As part of this program, a new J-estimation scheme, called ARAMIS, has been developed to account for the influence of both materials, i.e. base metal and weld metal, on the structural resistance of cracked welds. It has been shown that, when the mismatch is high and the ligament size is small compared with the weld width, a classical J-based method using the softer material's properties is very conservative. In contrast, the ARAMIS method provides a good estimate of J, because it predicts fairly well the shift of the cracked-weld limit load due to the presence of the weld. The influence of geometrical parameters such as crack size, weld width, or specimen length is properly accounted for. (authors). 23 refs., 8 figs., 1 tab., 1 appendix

  19. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear

  20. Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows

    Science.gov (United States)

    Chen, Z.; Shu, C.; Tan, D.

    2018-05-01

    An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.

  1. A simplified parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for the model is given. Numerical experiments show its effectiveness.
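    For orientation, the sketch below shows the prediction step of a standard multivariate Markov chain model of the Ching-Ng type, in which the next-state distribution of each series is a weighted mixture of one-step transitions from all series. The parsimonious or simplified parameterisation of the paper reduces the number of weight and transition terms; the matrices and weights here are placeholders, not the model of the record above.

```python
import numpy as np

def multivariate_markov_step(x_t, P, lam):
    """One prediction step of a multivariate Markov chain model:
    x_{t+1}[j] = sum_k lam[j, k] * P[j][k] @ x_t[k]
    x_t : list of probability vectors, one per series
    P   : P[j][k] is the column-stochastic transition matrix from series k to series j
    lam : nonnegative mixture weights, each row summing to 1."""
    s = len(x_t)
    return [sum(lam[j, k] * (P[j][k] @ x_t[k]) for k in range(s)) for j in range(s)]

# Placeholder example with two series and two states each
P = [[np.array([[0.9, 0.3], [0.1, 0.7]]), np.array([[0.5, 0.4], [0.5, 0.6]])],
     [np.array([[0.6, 0.2], [0.4, 0.8]]), np.array([[0.8, 0.1], [0.2, 0.9]])]]
lam = np.array([[0.7, 0.3], [0.4, 0.6]])
x0 = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]
x1 = multivariate_markov_step(x0, P, lam)   # each entry remains a probability vector
```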

  2. Applicability of simplified methods to evaluate consequences of criticality accident using past accident data

    International Nuclear Information System (INIS)

    Nakajima, Ken

    2003-01-01

    The applicability of four simplified methods for evaluating the consequences of criticality accidents was investigated. Fissions in the initial burst and total fissions were evaluated using the simplified methods, and the results were compared with past accident data. The simplified methods give the number of fissions in the initial burst as a function of solution volume; however, the accident data did not show such a tendency. This may be caused by the lack of sufficiently accurate accident data for the initial burst. For total fissions, the simplified methods almost reproduced the upper envelope of the accidents. However, several accidents that were beyond the applicable conditions resulted in larger total fissions than the evaluations. In particular, the Tokai-mura accident in 1999 gave the largest total specific fissions, because the activation of the cooling system brought relatively high power for a long time. (author)

  3. RESEARCH ON A SIMPLIFIED MIXED MODEL VERSUS CONTEMPORARY COMPARISON USED IN BREEDING VALUE ESTIMATION AND BULLS CLASSIFICATION FOR MILK PRODUCTION CHARACTERS

    Directory of Open Access Journals (Sweden)

    Agatha POPESCU

    2014-10-01

    The goal of this paper was to set up a simplified BLUP model in order to estimate bulls' breeding value for milk production characters and establish their hierarchy. It also aimed to compare the bulls' hierarchy set up by means of the simplified BLUP model with the hierarchy established using the traditional contemporary comparison method. For this purpose, 51 Romanian Friesian bulls were evaluated for their breeding value for milk production characters (milk yield, fat percentage and fat yield) during the 305 days of the first lactation of 1,989 daughters in various dairy herds. The simplified BLUP model set up in this research demonstrated a high precision of breeding value estimation, which varied between 55 and 92, and moreover it showed that in some cases the positions occupied by bulls could be similar to those registered using the contemporary comparison. The higher precision assured by the simplified BLUP model is the guarantee that the bulls' hierarchy in catalogues is a correct one. In this way, farmers could choose the best bulls for improving milk yield in their dairy herds.

  4. Simplified inelastic analysis methods applied to fast breeder reactor core design

    International Nuclear Information System (INIS)

    Abo-El-Ata, M.M.

    1978-01-01

    The paper starts with a review of some currently available simplified inelastic analysis methods used in elevated temperature design for evaluating plastic and thermal creep strains. The primary purpose of the paper is to investigate how these simplified methods may be applied to fast breeder reactor core design, where neutron irradiation effects are significant. One of the problems discussed is irradiation-induced creep and its effect on shakedown, ratcheting, and plastic cycling. Another problem is the development of swelling-induced stress, which is an additional loading mechanism and must be taken into account. In this respect, an expression for swelling-induced stress in the presence of irradiation creep is derived and a model for simplifying the stress analysis under these conditions is proposed. As an example, the effects of irradiation creep and swelling-induced stress on the analysis of a thin-walled tube under constant internal pressure and intermittent heat fluxes, simulating a fuel pin, are presented.

  5. A simplified procedure for mass and stiffness estimation of existing structures

    Science.gov (United States)

    Nigro, Antonella; Ditommaso, Rocco; Carlo Ponzo, Felice; Salvatore Nigro, Domenico

    2016-04-01

    This work focuses on a parametric method for mass and stiffness identification of framed structures based on frequency evaluation. The assessment of real structures is greatly affected by the consistency of the information retrieved on materials and by the influence of both non-structural components and soil. One of the most important matters is the correct definition of the distribution, both in plan and in elevation, of mass and stiffness, which depends on concentrated and distributed loads, the presence of infill panels, and the distribution of structural elements. In this study, modal identification is performed under several mass-modified conditions and structural parameters consistent with the identified modal parameters are determined. Modal parameter identification of a structure is conducted before and after the introduction of additional masses. By considering the relationship between the additional masses and the modal properties before and after the mass modification, the structural parameters of a damped system, i.e. mass, stiffness and damping coefficient, are inversely estimated from these modal parameter variations. The accuracy of the method can be improved by using various mass-modified conditions. The proposed simplified procedure has been tested on both numerical and experimental models by means of linear numerical analyses and shaking table tests performed on scaled structures at the Seismic Laboratory of the University of Basilicata (SISLAB). Results confirm the effectiveness of the proposed procedure in estimating the masses and stiffnesses of existing real structures with a maximum error of 10% under the worst conditions. Acknowledgements: This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2015 - RS4 ''Seismic observatory of structures and health monitoring''.
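    To illustrate the underlying idea, the sketch below applies the added-mass relationship to a single-degree-of-freedom approximation: measuring the natural frequency before (f1) and after (f2) adding a known mass Δm yields the effective modal mass and stiffness. This is a minimal illustration of the mass-modification principle, not the authors' full multi-mode procedure, and the numerical values are hypothetical.

```python
import math

def mass_stiffness_from_added_mass(f1_hz, f2_hz, delta_m_kg):
    """Estimate effective mass and stiffness of an SDOF system from the natural
    frequency measured before (f1) and after (f2) adding a known mass delta_m.
    Uses f = (1/(2*pi)) * sqrt(k/m)."""
    w1 = 2.0 * math.pi * f1_hz                  # rad/s before mass addition
    w2 = 2.0 * math.pi * f2_hz                  # rad/s after mass addition
    m = delta_m_kg * w2**2 / (w1**2 - w2**2)    # effective modal mass
    k = m * w1**2                               # effective stiffness
    return m, k

# Hypothetical case: frequency drops from 2.50 Hz to 2.35 Hz after adding 500 kg
m_est, k_est = mass_stiffness_from_added_mass(2.50, 2.35, 500.0)
print(f"mass ~ {m_est:.0f} kg, stiffness ~ {k_est:.3e} N/m")
```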

  6. Simplified methods to assess thermal fatigue due to turbulent mixing

    International Nuclear Information System (INIS)

    Hannink, M.H.C.; Timperi, A.

    2011-01-01

    Thermal fatigue is a safety relevant damage mechanism in pipework of nuclear power plants. A well-known simplified method for the assessment of thermal fatigue due to turbulent mixing is the so-called sinusoidal method. Temperature fluctuations in the fluid are described by a sinusoidally varying signal at the inner wall of the pipe. Because of limited information on the thermal loading conditions, this approach generally leads to overconservative results. In this paper, a new assessment method is presented, which has the potential of reducing the overconservatism of existing procedures. Artificial fluid temperature signals are generated by superposition of harmonic components with different amplitudes and frequencies. The amplitude-frequency spectrum of the components is modelled by a formula obtained from turbulence theory, whereas the phase differences are assumed to be randomly distributed. Lifetime predictions generated with the new simplified method are compared with lifetime predictions based on real fluid temperature signals, measured in an experimental setup of a mixing tee. Also, preliminary steady-state Computational Fluid Dynamics (CFD) calculations of the total power of the fluctuations are presented. The total power is needed as an input parameter for the spectrum formula in a real-life application. Solution of the transport equation for the total power was included in a CFD code and comparisons with experiments were made. The newly developed simplified method for generating the temperature signal is shown to be adequate for the investigated geometry and flow conditions, and demonstrates possibilities of reducing the conservatism of the sinusoidal method. CFD calculations of the total power show promising results, but further work is needed to develop the approach. (author)
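    As an illustration of the signal-generation idea described above, the sketch below superposes harmonic components with prescribed amplitudes and uniformly random phases to produce an artificial fluid temperature signal. The amplitude spectrum used here is a placeholder power law, not the turbulence-theory formula of the paper, and all numbers are hypothetical.

```python
import numpy as np

def synthetic_temperature_signal(duration_s, dt, freqs_hz, amplitudes, mean_temp=0.0, seed=0):
    """Build an artificial temperature signal as a superposition of harmonic
    components with given amplitudes and random phases (uniform in [0, 2*pi))."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration_s, dt)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs_hz))
    signal = mean_temp + sum(a * np.sin(2.0 * np.pi * f * t + p)
                             for f, a, p in zip(freqs_hz, amplitudes, phases))
    return t, signal

# Placeholder spectrum: amplitudes decaying with frequency (assumed shape, deg C)
freqs = np.linspace(0.1, 10.0, 50)          # Hz
amps = 5.0 * freqs ** (-5.0 / 6.0)
t, temp = synthetic_temperature_signal(600.0, 0.01, freqs, amps, mean_temp=150.0)
```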

  7. CSA C873 Building Energy Estimation Methodology - A simplified monthly calculation for quick building optimization

    NARCIS (Netherlands)

    Legault, A.; Scott, L.; Rosemann, A.L.P.; Hopkins, M.

    2014-01-01

    CSA C873 Building Energy Estimation Methodology (BEEM) is a new series of (10) standards that is intended to simplify building energy calculations. The standard is based upon the German DIN Standard 18599 that has 8 years of proven track record and has been modified for the Canadian market. The BEEM

  8. Calculation methods for single-sided natural ventilation - simplified or detailed?

    DEFF Research Database (Denmark)

    Larsen, Tine Steen; Plesner, Christoffer; Leprince, Valérie

    2016-01-01

    A great energy saving potential lies within increased use of natural ventilation, not only during summer and midseason periods, where it is mainly used today, but also during winter periods, where the outdoor air holds a great cooling potential for ventilative cooling if draft problems can … be handled. This paper presents a newly developed simplified calculation method for single-sided natural ventilation, which is proposed for the revised standard FprEN 16798-7 (earlier EN 15242:2007) for design of ventilative cooling. The aim for predicting ventilative cooling is to find the most suitable …, while maintaining an acceptable correlation with measurements on average, and the authors consider the simplified calculation method well suited for use in standards such as FprEN 16798-7 for the ventilative cooling effects from single-sided natural ventilation. The comparison of different design …

  9. Evaluation of different methods to estimate daily reference evapotranspiration in ungauged basins in Southern Brazil

    Science.gov (United States)

    Ribeiro Fontoura, Jessica; Allasia, Daniel; Herbstrith Froemming, Gabriel; Freitas Ferreira, Pedro; Tassi, Rutineia

    2016-04-01

    Evapotranspiration is a key process of the hydrological cycle and the sole term that links the land surface water balance and the land surface energy balance. Due to the higher information requirements of the Penman-Monteith method and the existing data uncertainty, simplified empirical methods for calculating potential and actual evapotranspiration are widely used in hydrological models. This is especially important in Brazil, where the monitoring of meteorological data is precarious. In this study, different methods for estimating evapotranspiration were compared for Rio Grande do Sul, the southernmost state of Brazil, aiming to suggest alternatives to the recommended method (Penman-Monteith FAO-56) for estimating daily reference evapotranspiration (ETo) when meteorological data are missing or not available. The input dataset included daily and hourly observed data from conventional and automatic weather stations, respectively, maintained by the National Weather Institute of Brazil (INMET) for the period 1 January 2007 to 31 January 2010. The dataset included maximum temperature (Tmax, °C), minimum temperature (Tmin, °C), mean relative humidity (%), wind speed at 2 m height (u2, m s-1), daily solar radiation (Rs, MJ m-2) and atmospheric pressure (kPa), grouped at a daily time step. The Food and Agriculture Organization of the United Nations (FAO) Penman-Monteith method (PM) in its full form was tested against PM with several variables, not normally available in Brazil, assumed missing, in order to calculate daily reference ETo. Missing variables were estimated as suggested in the FAO-56 publication or from climatological means. Furthermore, PM was also compared against the following simplified empirical methods: Hargreaves-Samani, Priestley-Taylor, McCloud, McGuinness-Bordne, Romanenko, Radiation-Temperature, and Tanner-Pelton. The statistical analysis indicates that even if just Tmin and Tmax are available, it is better to use PM estimating missing variables from synthetic data than
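    For context, the sketch below evaluates one of the temperature-based alternatives mentioned above, the Hargreaves-Samani equation, which needs only Tmax, Tmin and extraterrestrial radiation Ra. This is the standard textbook form of the equation, shown as an illustration rather than a reproduction of the study's calculations; the inputs are hypothetical.

```python
def eto_hargreaves_samani(tmax_c, tmin_c, ra_mj_m2_day):
    """Daily reference evapotranspiration (mm/day) by Hargreaves-Samani (1985).
    Ra is extraterrestrial radiation in MJ m-2 day-1; 0.408 converts it to mm/day."""
    tmean = 0.5 * (tmax_c + tmin_c)
    return 0.0023 * 0.408 * ra_mj_m2_day * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5

# Hypothetical summer day in southern Brazil
print(eto_hargreaves_samani(tmax_c=31.0, tmin_c=19.0, ra_mj_m2_day=41.0))  # ~5.7 mm/day
```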

  10. A simplified method for processing dynamic images of gastric antrum

    DEFF Research Database (Denmark)

    Madsen, J L; Graff, J; Fugisang, S

    2000-01-01

    versus geometric centre curve. In all subjects, our technique gave unequivocal frequencies of antral contractions at each time point. Statistical analysis did not reveal any intraindividual variation in this frequency during gastric emptying. We believe that the simplified scintigraphic method...

  11. Simplified discrete ordinates method in spherical geometry

    International Nuclear Information System (INIS)

    Elsawi, M.A.; Abdurrahman, N.M.; Yavuz, M.

    1999-01-01

    The authors extend the method of simplified discrete ordinates (SSN) to spherical geometry. The motivation for such an extension is that the appearance of the angular derivative (redistribution) term in the spherical geometry transport equation makes it difficult to decide which differencing scheme best approximates this term. In the present method, the angular derivative term is treated implicitly and thus avoids the need for the approximation of such term. This method can be considered to be analytic in nature with the advantage of being free from spatial truncation errors from which most of the existing transport codes suffer. In addition, it treats the angular redistribution term implicitly with the advantage of avoiding approximations to that term. The method also can handle scattering in a very general manner with the advantage of spending almost the same computational effort for all scattering modes. Moreover, the method can easily be applied to higher-order SN calculations

  12. A successive over-relaxation for slab geometry Simplified SN method with interface flux iteration

    International Nuclear Information System (INIS)

    Yavuz, M.

    1995-01-01

    A Successive Over-Relaxation scheme is proposed for speeding up the solution of one-group slab geometry transport problems using a Simplified SN method. The solution of the Simplified SN method that is completely free from all spatial truncation errors is based on the expansion of the angular flux in spherical-harmonics solutions. One way to obtain the (numerical) solution of the Simplified SN method is to use Interface Flux Iteration, which can be considered as the Gauss-Seidel relaxation scheme; the new information is immediately used in the calculations. To accelerate the convergence, an over-relaxation parameter is employed in the solution algorithm. The over-relaxation parameters for a number of cases depending on scattering ratios and mesh sizes are determined by Fourier analyzing infinite-medium Simplified S2 equations. Using such over-relaxation parameters in the iterative scheme, a significant increase in the convergence of transport problems can be achieved for coarse spatial cells whose spatial widths are greater than one mean-free-path
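    As a generic illustration of the acceleration idea, the sketch below applies successive over-relaxation to a small linear system: each Gauss-Seidel sweep is extrapolated by a relaxation factor omega > 1. It is a textbook SOR loop, not the interface-flux iteration of the simplified SN solver itself; the test system is hypothetical.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Solve A x = b with successive over-relaxation (omega = 1 gives Gauss-Seidel)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sum using already-updated entries x[:i] and old entries beyond i
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Small diagonally dominant test system (hypothetical numbers)
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(sor_solve(A, b, omega=1.2))
```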

  13. Simplified methods and application to preliminary design of piping for elevated temperature service

    International Nuclear Information System (INIS)

    Severud, L.K.

    1975-01-01

    A number of simplified stress analysis methods and procedures that have been used on the FFTF project for preliminary design of piping operating at elevated temperatures are described. The rationale and considerations involved in developing the procedures and preliminary design guidelines are given. Applications of the simplified methods to a few FFTF pipelines are described, and the success of these guidelines is measured by means of comparisons to pipeline designs that have had detailed Code-type stress analyses. (U.S.)

  14. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  15. Simplified method for measuring the response time of scram release electromagnet in a nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Patri, Sudheer, E-mail: patri@igcar.gov.in; Mohana, M.; Kameswari, K.; Kumar, S. Suresh; Narmadha, S.; Vijayshree, R.; Meikandamurthy, C.; Venkatesan, A.; Palanisami, K.; Murthy, D. Thirugnana; Babu, B.; Prakash, V.; Rajan, K.K.

    2015-04-15

    Highlights: • An alternative method for estimating the electromagnet clutch release time. • A systematic approach to develop a computer-based measuring system. • Prototype tests on the measurement system. • Accuracy of the method is ±6% and repeatability error is within 2%. - Abstract: The delay time in electromagnet clutch release during a reactor trip (scram action) is an important safety parameter, having a bearing on plant safety during various design basis events. Generally, it is measured using the current decay characteristics of the electromagnet coil and its energising circuit. A simplified method of measuring it in sodium cooled fast reactors (SFRs) is proposed in this paper. The method utilises the position data of the control rod to estimate the delay time in electromagnet clutch release. A computer-based real-time measurement system for measuring the electromagnet clutch delay time has been developed and qualified for retrofitting in the prototype fast breeder reactor. The stages involved in the development of the system are principle demonstration, experimental verification of hardware capabilities, and prototype system testing. Tests on the prototype system have demonstrated satisfactory performance with the intended accuracy and repeatability.

  16. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    OpenAIRE

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    2010-01-01

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method.

  17. J evaluation by simplified method for cracked pipes under mechanical loading

    International Nuclear Information System (INIS)

    Lacire, M.H.; Michel, B.; Gilles, P.

    2001-01-01

    The integrity of cracked structures is an important subject for nuclear reactor safety. Most assessment methods for cracked components are based on the evaluation of the parameter J. To avoid complex elastic-plastic finite element calculations of J, a simplified method has been jointly developed by CEA, EDF and Framatome. This method, called Js, is based on the reference stress approach and a new KI handbook. To validate this method, a complete set of 2D and 3D elastic-plastic finite element calculations of J has been performed on pipes (more than 300 calculations are available) for different types of part-through wall cracks (circumferential or longitudinal); mechanical loadings (pressure, bending moment, axial load, torsion moment, and combinations of these loadings); and different kinds of materials (austenitic or ferritic steel). This paper presents a comparison between the simplified assessment of J and finite element results on these configurations for mechanical loading. The validity of the method is then discussed and an applicability domain is proposed. (author)

  18. On-line scheme for parameter estimation of nonlinear lithium ion battery equivalent circuit models using the simplified refined instrumental variable method for a modified Wiener continuous-time model

    International Nuclear Information System (INIS)

    Allafi, Walid; Uddin, Kotub; Zhang, Cheng; Mazuir Raja Ahsan Sha, Raja; Marco, James

    2017-01-01

    Highlights: • Off-line estimation approach for the continuous-time domain for a non-invertible function. • Model reformulated to multi-input-single-output; nonlinearity described by sigmoid. • Method directly estimates parameters of the nonlinear ECM from the measured data. • Iterative on-line technique leads to smoother convergence. • The model is validated off-line and on-line using an NCA battery. -- Abstract: The accuracy of identifying the parameters of models describing lithium ion batteries (LIBs) in typical battery management system (BMS) applications is critical to the estimation of key states such as the state of charge (SoC) and state of health (SoH). In applications such as electric vehicles (EVs), where LIBs are subjected to highly demanding cycles of operation and varying environmental conditions leading to non-trivial interactions of ageing stress factors, this identification is more challenging. This paper proposes an algorithm that directly estimates the parameters of a nonlinear battery model from measured input and output data in the continuous time-domain. The simplified refined instrumental variable method is extended to estimate the parameters of a Wiener model where there is no requirement for the nonlinear function to be invertible. To account for nonlinear battery dynamics, in this paper, the typical linear equivalent circuit model (ECM) is enhanced by a block-oriented Wiener configuration where the nonlinear memoryless block following the typical ECM is defined to be a sigmoid static nonlinearity. The nonlinear Wiener model is reformulated in the form of a multi-input, single-output linear model. This linear form allows the parameters of the nonlinear model to be estimated using any linear estimator such as the well-established least squares (LS) algorithm. In this paper, the recursive least squares (RLS) method is adopted for online parameter estimation. The approach was validated on experimental data measured from an 18650-type Graphite
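    Since the abstract adopts recursive least squares (RLS) for the online stage, a generic exponentially weighted RLS update is sketched below for a model that is linear in its parameters (as the reformulated Wiener model is). The regressor construction for the actual battery ECM is omitted; the regressor phi and output y below are placeholders.

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard exponentially weighted RLS for y = phi^T theta + noise."""
    def __init__(self, n_params, forgetting=0.999, p0=1e3):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = p0 * np.eye(n_params)       # covariance of the estimate
        self.lam = forgetting                # forgetting factor (<= 1)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)  # innovation correction
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# Toy usage with a hypothetical 3-parameter regressor stream
rls = RecursiveLeastSquares(n_params=3)
for phi_t, y_t in [([1.0, 0.2, -0.1], 0.9), ([1.0, 0.3, -0.2], 0.95)]:
    theta_hat = rls.update(phi_t, y_t)
```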

  19. Stress estimation in reservoirs using an integrated inverse method

    Science.gov (United States)

    Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre

    2018-05-01

    Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
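    The sketch below shows how boundary-condition parameters could be fitted with CMA-ES using the widely available `cma` Python package (an assumption; the paper's implementation is not specified). The forward stress model, the misfit function, and the wellbore data are placeholders standing in for the finite element model and leak-off/breakout constraints.

```python
import cma  # pip install cma

# Hypothetical wellbore observations: (depth_m, observed_stress_MPa)
observations = [(1500.0, 28.0), (2000.0, 39.0), (2500.0, 51.0)]

def forward_stress(params, depth_m):
    """Placeholder forward model: stress linear in depth.
    params = [gradient_MPa_per_km, surface_offset_MPa]."""
    gradient, offset = params
    return offset + gradient * depth_m / 1000.0

def misfit(params):
    """Least-squares mismatch between modelled and observed wellbore stresses."""
    return sum((forward_stress(params, z) - s) ** 2 for z, s in observations)

es = cma.CMAEvolutionStrategy(x0=[20.0, 0.0], sigma0=5.0)
while not es.stop():
    candidates = es.ask()                       # sample a population of parameter sets
    es.tell(candidates, [misfit(c) for c in candidates])  # rank them by misfit
best_params = es.result.xbest
```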

  20. 77 FR 54482 - Allocation of Costs Under the Simplified Methods

    Science.gov (United States)

    2012-09-05

    ... cost of goods sold cash or trade discounts that taxpayers do not capitalize for book purposes (and... to adjust additional section 263A costs for cash or trade discounts described in Sec. 1.471-3(b... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION...

  1. Simplified analytical methods and experimental correlations of damping in piping during dynamic high-level inelastic response

    International Nuclear Information System (INIS)

    Severud, L.K.

    1987-01-01

    Simplified methods for predicting equivalent viscous damping are used to assess damping contributions due to piping inelastic plastic hinge action and support snubbers. These increments are compared to experimental findings from shake and snap-back tests of several pipe systems. Good correlations were found, confirming the usefulness of the simplified methods.

  2. Technical note: Use of a simplified equation for estimating glomerular filtration rate in beef cattle.

    Science.gov (United States)

    Murayama, I; Miyano, A; Sasaki, Y; Hirata, T; Ichijo, T; Satoh, H; Sato, S; Furuhama, K

    2013-11-01

    This study was performed to clarify whether a formula (Holstein equation) based on a single blood sample and the isotonic, nonionic, iodine contrast medium iodixanol in Holstein dairy cows can be applied to the estimation of glomerular filtration rate (GFR) in beef cattle. To verify the application of iodixanol in beef cattle, instead of the standard tracer inulin, both agents were coadministered as a bolus intravenous injection to identical animals at doses of 10 mg of I/kg of BW and 30 mg/kg. Blood was collected 30, 60, 90, and 120 min after the injection, and the GFR was determined by the conventional multisample strategies. The GFR values from iodixanol were consistent with those from inulin, and no effects of BW, age, or parity on GFR estimates were noted. However, the GFR in cattle weighing less than 300 kg, aged <1 yr, fluctuated widely, presumably due to the rapid ruminal growth and dynamic changes in renal function at young adult ages. Using clinically healthy cattle and those with renal failure, the GFR values estimated from the Holstein equation were in good agreement with those obtained by the multisample method using iodixanol (r = 0.89, P = 0.01). The results indicate that the simplified Holstein equation using iodixanol can be used for estimating the GFR of beef cattle in the same dose regimen as Holstein dairy cows, and provides a practical and ethical alternative.

  3. Comments on Simplified Calculation Method for Fire Exposed Concrete Columns

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl

    1998-01-01

    The author has developed new simplified calculation methods for fire exposed columns, which are found in ENV 1992-1-2 chapter 4.3 and in the proposal for the Danish code of practice DS411 chapter 9. In the present supporting document the methods are derived, and 50 eccentrically loaded fire exposed columns are calculated and compared to the results of full-scale tests. Furthermore, 500 columns are calculated in order to present each test result related to a variation of the calculation in time of fire resistance.

  4. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    Directory of Open Access Journals (Sweden)

    Tatsuhiro Gotanda

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process of creating a density-absorbed dose calibration curve is time-consuming. The purpose of this study was the development of a simplified method for creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film, which has a low energy dependence, and a step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time than the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.

  5. Adjusting for treatment switching in randomised controlled trials - A simulation study and a simplified two-stage method.

    Science.gov (United States)

    Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J

    2017-04-01

    Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs) - whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of the true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.

  6. A simplified method for evaluating thermal performance of unglazed transpired solar collectors under steady state

    International Nuclear Information System (INIS)

    Wang, Xiaoliang; Lei, Bo; Bi, Haiquan; Yu, Tao

    2017-01-01

    Highlights: • A simplified method for evaluating thermal performance of UTC is developed. • Experiments, numerical simulations, dimensional analysis and data fitting are used. • The correlation of absorber plate temperature for UTC is established. • The empirical correlation of heat exchange effectiveness for UTC is proposed. - Abstract: Due to the advantages of low investment and high energy efficiency, unglazed transpired solar collectors (UTC) have been widely used for heating in buildings. However, it is difficult for designers to quickly evaluate the thermal performance of UTC based on the conventional methods such as experiments and numerical simulations. Therefore, a simple and fast method to determine the thermal performance of UTC is indispensable. The objective of this work is to provide a simplified calculation method to easily evaluate the thermal performance of UTC under steady state. Different parameters are considered in the simplified method, including pitch, perforation diameter, solar radiation, solar absorptivity, approach velocity, ambient air temperature, absorber plate temperature, and so on. Based on existing design parameters and operating conditions, correlations for the absorber plate temperature and the heat exchange effectiveness are developed using dimensional analysis and data fitting, respectively. Results show that the proposed simplified method has a high accuracy and can be employed to evaluate the collector efficiency, the heat exchange effectiveness and the air temperature rise. The proposed method in this paper is beneficial to directly determine design parameters and operating status for UTC.
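    To make the quantities in the abstract concrete, the sketch below evaluates the heat-exchange effectiveness definition and the steady-state collector efficiency of an unglazed transpired collector from a simple energy balance. These are the standard textbook definitions, not the paper's fitted correlations, and all numbers are hypothetical.

```python
def utc_outlet_temperature(t_amb_c, t_plate_c, effectiveness):
    """Collector outlet air temperature from the heat-exchange effectiveness
    definition: eps = (T_out - T_amb) / (T_plate - T_amb)."""
    return t_amb_c + effectiveness * (t_plate_c - t_amb_c)

def utc_efficiency(m_dot_kg_s, cp_j_kg_k, t_out_c, t_amb_c, g_w_m2, area_m2):
    """Steady-state collector efficiency: useful heat gain over incident solar power."""
    q_useful = m_dot_kg_s * cp_j_kg_k * (t_out_c - t_amb_c)
    return q_useful / (g_w_m2 * area_m2)

# Hypothetical operating point
t_out = utc_outlet_temperature(t_amb_c=5.0, t_plate_c=35.0, effectiveness=0.6)
eta = utc_efficiency(m_dot_kg_s=0.05, cp_j_kg_k=1006.0, t_out_c=t_out,
                     t_amb_c=5.0, g_w_m2=600.0, area_m2=2.0)
print(f"air temperature rise ~ {t_out - 5.0:.1f} K, efficiency ~ {eta:.2f}")
```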

  7. A Qualitative Method to Estimate HSI Display Complexity

    International Nuclear Information System (INIS)

    Hugo, Jacques; Gertman, David

    2013-01-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  8. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes

    Science.gov (United States)

    Jibson, Randall W.; Jibson, Matthew W.

    2003-01-01

    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
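    The core of the rigorous analysis can be written compactly: whenever ground acceleration exceeds the critical (yield) acceleration of the block, the excess is integrated twice to accumulate permanent displacement. The sketch below is a minimal Python version of that rigid-block algorithm (the USGS programs themselves are Java); the acceleration record here is a synthetic placeholder, not one of the 2160 bundled strong-motion records.

```python
import numpy as np

def newmark_displacement(accel_g, dt, ac_g, g=9.81):
    """Rigid-block Newmark analysis: integrate ground acceleration in excess of the
    critical acceleration ac while the block is sliding (relative velocity > 0).
    accel_g : ground acceleration time history in units of g
    Returns permanent downslope displacement in metres (one-way sliding assumed)."""
    v = 0.0      # relative sliding velocity (m/s)
    d = 0.0      # accumulated displacement (m)
    for a in accel_g:
        if v > 0.0 or a > ac_g:
            v += (a - ac_g) * g * dt    # excess acceleration drives sliding
            if v < 0.0:
                v = 0.0                 # block stops; it cannot slide back upslope
        d += v * dt
    return d

# Placeholder record: 10 s of decaying synthetic shaking sampled at 100 Hz
t = np.arange(0.0, 10.0, 0.01)
record = 0.3 * np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.2 * t)
print(f"Newmark displacement ~ {newmark_displacement(record, 0.01, ac_g=0.1):.3f} m")
```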

  9. Estimation of 131J-Jodohippurateclearance by a simplified method using a single plasma sample

    International Nuclear Information System (INIS)

    Botsch, H.; Golde, G.; Kampf, D.

    1980-01-01

    Theoretical volumes calculated from the reciprocal of the plasma concentration of 131I-iodohippurate were compared in 95 patients with clearance values calculated by the two-compartment method, and in 18 patients with conventional PAH clearance. For estimating hippurate clearance from a single blood sample, the most favourable time is 45 min after injection (r = 0.96; clearance 400 ml/min: r = 0.98). Clearance values may be derived from the formula C = 0.4 + 7.26 V − 0.021 × V², where V = injected activity/activity per litre of plasma taken 45 min after injection. The simplicity, precision and reproducibility of the above-mentioned clearance method is emphasized. (orig.)
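    The quadratic single-sample formula quoted above translates directly into code; the sketch below simply evaluates it for a hypothetical 45-minute sample (the activity values are placeholders).

```python
def hippurate_clearance_single_sample(injected_activity, plasma_activity_per_l):
    """Single-sample clearance estimate from the formula quoted above:
    C = 0.4 + 7.26*V - 0.021*V**2, with V = injected activity / activity per litre
    of plasma drawn 45 min after injection."""
    v = injected_activity / plasma_activity_per_l
    return 0.4 + 7.26 * v - 0.021 * v ** 2

# Hypothetical counts: 1.8e6 counts injected, 4.5e4 counts per litre of plasma
print(hippurate_clearance_single_sample(1.8e6, 4.5e4))   # ~257 (ml/min)
```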

  10. Simplified Analytical Method for Optimized Initial Shape Analysis of Self-Anchored Suspension Bridges and Its Verification

    Directory of Open Access Journals (Sweden)

    Myung-Rag Jung

    2015-01-01

    A simplified analytical method providing accurate unstrained lengths of all structural elements is proposed to find the optimized initial state of self-anchored suspension bridges under dead loads. For this, equilibrium equations of the main girder and the main cable system are derived and solved by evaluating the self-weights of cable members using unstrained cable lengths and iteratively updating both the horizontal tension component and the vertical profile of the main cable. Furthermore, to demonstrate the validity of the simplified analytical method, the unstrained element length method (ULM) is applied to suspension bridge models based on the unstressed lengths of both cable and frame members calculated from the analytical method. Through numerical examples, it is demonstrated that the proposed analytical method can indeed provide an optimized initial solution by showing that both the simplified method and the nonlinear FE procedure lead to practically identical initial configurations with only localized small bending moment distributions.

  11. The Numerical Welding Simulation - Developments and Validation of Simplified and Bead Lumping Methods

    International Nuclear Information System (INIS)

    Baup, Olivier

    2001-01-01

    The aim of this work was to study the TIG multipass welding process on stainless steel by means of numerical methods, and then to work out simplified and bead-lumping methods in order to reduce the set-up and run times of these calculations. A reference simulation was used for the validation of these methods; after presenting the test series that led to the modelling choices of this calculation (2D generalised plane strain, elastoplastic model with isotropic hardening, hardening restoration at high temperatures), various simplifications were tried on a plate geometry. These simplifications concerned various modelling points while keeping a correct representation of plastic flow in the plate. The use of a reduced number of thermal fields characterising the bead deposit and a low number of tensile curves allows interesting results to be obtained while significantly decreasing computing times. In addition, various bead-lumping methods have been studied, concerning both the shape and the thermal representation of the macro-deposits. The macro-deposit shapes studied are L-shaped, layer-shaped, or represent two beads one on top of the other. Among these methods, only those using a small number of lumped beads gave poor results, since the thermo-mechanical history was deeply modified near and inside the weld. Thereafter, the simplified methods were applied to a tubular geometry. On this new geometry, experimental measurements were made during welding, which allowed a validation of the reference calculation. The simplified and reference calculations gave approximately the same stress fields as found on the plate geometry. Finally, in the last part of this document, a procedure for automatic data setting permitting a significant reduction in calculation preparation time is presented. It has been applied to the calculation of thick pipe welding in 90 beads; the results are compared with a simplified simulation carried out by Framatome and with experimental measurements. A bead by

  12. Simplified solutions of the Cox-Thompson inverse scattering method at fixed energy

    International Nuclear Information System (INIS)

    Palmai, Tamas; Apagyi, Barnabas; Horvath, Miklos

    2008-01-01

    Simplified solutions of the Cox-Thompson inverse quantum scattering method at fixed energy are derived if a finite number of partial waves with only even or odd angular momenta contribute to the scattering process. Based on new formulae various approximate methods are introduced which also prove applicable to the generic scattering events

  13. A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities.

    Science.gov (United States)

    Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.

    This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…

  14. A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro

    Science.gov (United States)

    Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman

    1996-01-01

    Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.

  15. Investigation on method of estimating the excitation spectrum of vibration source

    International Nuclear Information System (INIS)

    Zhang Kun; Sun Lei; Lin Song

    2010-01-01

    In practical engineering, it is hard to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measurement points is obtained through experiment. This matrix, combined with the response spectrum at the measurement points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. Then a simplified method is proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. The method treats the centroid as the excitation point, and the dynamic transfer matrix is derived using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measurement points. Based on the above method, a computing example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation and the estimated excitation, the reliability of this method is verified. (authors)
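    The inversion step described above amounts, at each frequency line, to solving H(ω) f(ω) = x(ω) for the excitation f given the measured response x and the (possibly non-square) transfer matrix H. A least-squares sketch is shown below; the matrices and spectra are placeholders, not measured pump data.

```python
import numpy as np

def estimate_excitation_spectrum(H, X):
    """Estimate excitation spectra F from measured response spectra X, given the
    transfer matrices H, one frequency line at a time:  X = H @ F  =>  F = pinv(H) @ X.
    H : (n_freq, n_meas, n_exc) complex array of transfer functions
    X : (n_freq, n_meas) complex array of measured response spectra
    Returns an (n_freq, n_exc) array of estimated excitation spectra."""
    n_freq, _, n_exc = H.shape
    F = np.zeros((n_freq, n_exc), dtype=complex)
    for k in range(n_freq):
        # Least-squares (pseudo-inverse) solution handles more sensors than sources
        F[k], *_ = np.linalg.lstsq(H[k], X[k], rcond=None)
    return F

# Placeholder data: 4 measurement points, 2 virtual excitation points, 100 lines
rng = np.random.default_rng(1)
H = rng.standard_normal((100, 4, 2)) + 1j * rng.standard_normal((100, 4, 2))
F_true = rng.standard_normal((100, 2))
X = np.einsum('kme,ke->km', H, F_true)
F_est = estimate_excitation_spectrum(H, X)   # recovers F_true up to round-off
```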

  16. A Qualitative Method to Estimate HSI Display Complexity

    Energy Technology Data Exchange (ETDEWEB)

    Hugo, Jacques; Gertman, David [Idaho National Laboratory, Idaho (United States)

    2013-04-15

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  17. A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching

    OpenAIRE

    Mittag, Nikolas

    2016-01-01

    Models with high dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assum...

  18. Evaluation of single-sided natural ventilation using a simplified and fair calculation method

    DEFF Research Database (Denmark)

    Plesner, Christoffer; Larsen, Tine Steen; Leprince, Valérie

    2016-01-01

    the scope of standards and regulations in the best way. This has been done by comparing design expressions using parameter variations, comparison to wind-tunnel experiments, and full-scale outdoor measurements. A modified De Gids & Phaff method was shown to be a simplified and fair calculation method that would...

  19. RESEARCH ON THE BREEDING VALUE ESTIMATION FOR BEEF TRAITS BY A SIMPLIFIED MIXED MODEL

    Directory of Open Access Journals (Sweden)

    Agatha POPESCU

    2014-10-01

    The purpose of this paper was to apply a simplified BLUP mixed model for estimating bulls' breeding value for meat production in terms of daily weight gain and to establish their hierarchy. It also aimed to compare the bulls' ranking obtained by the simplified BLUP mixed model with the hierarchy set up by contemporary comparison. A sample of 1,705 half-sib steers, offspring of 106 Friesian bulls, was used as biological material. Bulls' breeding value varied between +244.5 g for the best bull and -204.7 g for the bull with the weakest records. A number of 57 bulls (53.77%) registered positive breeding values. The accuracy of the breeding value estimation varied between 80, the highest precision, in the case of bull number 21, and 53, the lowest precision, in the case of bull number 38. Seven of the 57 bulls with a positive breeding value occupied approximately the same positions, within 0 to 1 places, on the two lists established by BLUP and by contemporary comparison. In conclusion, BLUP could be widely and easily applied in bull evaluation for meat production traits in terms of daily weight gain, considered the key parameter during the fattening period, and its precision is very high, a guarantee that the bulls' hierarchy is a correct one. If a farmer chose a high breeding value bull from a catalogue, he could be sure of improving beef production through genetic gain.
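    For readers unfamiliar with BLUP, the sketch below solves Henderson's mixed model equations for a tiny sire model, y = Xb + Zu + e with var(u) = σu²I and var(e) = σe²I, so that the system is [[X'X, X'Z], [Z'X, Z'Z + kI]] [b; u] = [X'y; Z'y] with k = σe²/σu². The data and variance ratio are hypothetical, and this generic sire model is not the paper's exact specification.

```python
import numpy as np

def blup_sire_model(X, Z, y, k):
    """Solve Henderson's mixed model equations for fixed effects b and sire
    breeding values u, with variance ratio k = sigma_e^2 / sigma_u^2."""
    XtX, XtZ, Xty, Zty = X.T @ X, X.T @ Z, X.T @ y, Z.T @ y
    ZtZ = Z.T @ Z + k * np.eye(Z.shape[1])      # random-effect block plus ratio*I
    lhs = np.block([[XtX, XtZ], [XtZ.T, ZtZ]])
    rhs = np.concatenate([Xty, Zty])
    sol = np.linalg.solve(lhs, rhs)
    n_fixed = X.shape[1]
    return sol[:n_fixed], sol[n_fixed:]          # b_hat, u_hat (estimated breeding values)

# Hypothetical data: 6 daily-gain records (g), one overall mean, 3 sires
y = np.array([510.0, 480.0, 530.0, 470.0, 500.0, 520.0])
X = np.ones((6, 1))
Z = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]], float)
b_hat, ebv = blup_sire_model(X, Z, y, k=15.0)
```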

  20. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine the relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, with a substantial reduction in calculation time. The application of the simplified deformation simulation method was then further explored in the case of multiple rolling loading paths, and it was also utilized to calculate local line rolling forming for a typical complex curvature plate of a ship. The research findings indicate that the simplified deformation simulation method is an effective tool for rapidly obtaining the relationships between the forming shape, processing path, and process parameters.

  1. Simplified pressure method for respirator fit testing.

    Science.gov (United States)

    Han, D; Xu, M; Foo, S; Pilacinski, W; Willeke, K

    1991-08-01

    A simplified pressure method has been developed for fit testing air-purifying respirators. In this method, the air-purifying cartridges are replaced by a pressure-sensing attachment and a valve. While wearers hold their breath, a small pump extracts air from the respirator cavity until a steady-state pressure is reached in 1 to 2 sec. The flow rate through the face seal leak is a unique function of this pressure, which is determined once for all respirators, regardless of the respirator's cavity volume or deformation because of pliability. The contaminant concentration inside the respirator depends on the degree of dilution by the flow through the cartridges. The cartridge flow varies among different brands and is measured once for each brand. The ratio of cartridge to leakflow is a measure of fit. This flow ratio has been measured on human subjects and has been compared to fit factors determined on the same subjects by means of photometric and particle count tests. The aerosol tests gave higher values of fit.

  2. Simplified scheme for radioactive plume calculations

    International Nuclear Information System (INIS)

    Gibson, T.A.; Montan, D.N.

    1976-01-01

    A simplified mathematical scheme to estimate external whole-body γ radiation exposure rates from gaseous radioactive plumes was developed for the Rio Blanco Gas Field Nuclear Stimulation Experiment. The method enables one to calculate swiftly, in the field, downwind exposure rates knowing the meteorological conditions and γ radiation exposure rates measured by detectors positioned near the plume source. The method is straightforward and easy to use under field conditions without the help of mini-computers. It is applicable to a wide range of radioactive plume situations. It should be noted that the Rio Blanco experiment was detonated on May 17, 1973, and no seep or release of radioactive material occurred

  3. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.

    2016-10-20

    In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and the fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.

  4. Phase-Inductance-Based Position Estimation Method for Interior Permanent Magnet Synchronous Motors

    Directory of Open Access Journals (Sweden)

    Xin Qiu

    2017-12-01

    This paper presents a phase-inductance-based position estimation method for interior permanent magnet synchronous motors (IPMSMs). According to the characteristics of the phase inductance of IPMSMs, the corresponding relationship between the rotor position and the phase inductance is obtained. In order to eliminate the effect of the zero-sequence component of the phase inductance and reduce the rotor position estimation error, the phase inductance difference is employed. With the iterative computation of inductance vectors, the position plane is further subdivided, and the rotor position is extracted by comparing the amplitudes of the inductance vectors. To decrease the consumption of computing resources and increase practicability, a simplified implementation is also investigated. In this method, the rotor position information is obtained easily, with several basic math operations and logical comparisons of phase inductances, without any coordinate transformation or trigonometric function calculation. Based on this position estimation method, the field oriented control (FOC) strategy is established, and a detailed implementation is also provided. A series of experimental results from a prototype demonstrate the correctness and feasibility of the proposed method.
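    To show why phase-inductance differences carry position information, the sketch below assumes the textbook saliency model for the phase self-inductances and recovers the electrical angle (with the inherent 180-degree ambiguity) from differences only, so any zero-sequence offset cancels. This is a generic illustration of the principle, not the paper's iterative vector-comparison algorithm, and the inductance values are hypothetical.

```python
import math

def rotor_angle_from_phase_inductances(La, Lb, Lc):
    """Estimate the electrical rotor angle (modulo pi) from phase self-inductances,
    assuming the textbook saliency model
        La = Ls + L0 - L2*cos(2*theta)
        Lb = Ls + L0 - L2*cos(2*theta - 4*pi/3)
        Lc = Ls + L0 - L2*cos(2*theta + 4*pi/3).
    Only inductance differences are used, so any common (zero-sequence) term cancels."""
    x = 0.5 * ((La - Lb) + (La - Lc))           # = -1.5 * L2 * cos(2*theta)
    y = (math.sqrt(3.0) / 2.0) * (Lb - Lc)      # = +1.5 * L2 * sin(2*theta)
    return 0.5 * math.atan2(y, -x)              # electrical rad, modulo pi

# Self-check with synthetic inductances at theta = 0.7 rad (hypothetical L values, H)
theta, Ls_L0, L2 = 0.7, 5e-3, 1e-3
La = Ls_L0 - L2 * math.cos(2 * theta)
Lb = Ls_L0 - L2 * math.cos(2 * theta - 4 * math.pi / 3)
Lc = Ls_L0 - L2 * math.cos(2 * theta + 4 * math.pi / 3)
print(rotor_angle_from_phase_inductances(La, Lb, Lc))   # ~0.7
```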

  5. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.

  6. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    Science.gov (United States)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave in the 2GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  7. Evaluation of simplified dna extraction methods for EMM typing of group a streptococci

    Directory of Open Access Journals (Sweden)

    Jose JJM

    2006-01-01

    Simplified methods of DNA extraction for amplification and sequencing for emm typing of group A streptococci (GAS) can save valuable time and cost in resource-crunch situations. To evaluate this, we compared two methods of DNA extraction directly from colonies with the standard CDC cell lysate method for emm typing of 50 GAS strains isolated from children with pharyngitis and impetigo. For this, GAS colonies were transferred into two sets of PCR tubes. One set was preheated at 94°C for two minutes in the thermal cycler and cooled, while the other set was frozen overnight at -20°C and then thawed before adding the PCR mix. For the cell lysate method, cells were treated with mutanolysin and hyaluronidase before heating at 100°C for 10 minutes and cooling immediately, as recommended in the CDC method. All 50 strains could be typed by sequencing the hypervariable region of the emm gene after amplification. The quality of the sequences and the emm types identified were also identical. Our study shows that the two simplified DNA extraction methods directly from colonies can conveniently be used for typing a large number of GAS strains in a relatively short time.

  8. 3D Bearing Capacity of Structured Cells Supported on Cohesive Soil: Simplified Analysis Method

    Directory of Open Access Journals (Sweden)

    Martínez-Galván Sergio Antonio

    2013-06-01

    In this paper, a simplified analysis method is proposed to compute the bearing capacity of structured cell foundations subjected to vertical loading and supported in soft cohesive soil. A structured cell is comprised of a top concrete slab structurally connected to concrete external walls that enclose the natural soil. Contrary to a box foundation, it does not include a bottom slab and hence the soil within the walls becomes an important component of the structured cell. This simplified method considers the three-dimensional geometry of the cell, the undrained shear strength of cohesive soils, and the existence of structural continuity between the top concrete slab and the surrounding walls, along the walls themselves and the walls' structural joints. The method was developed from the results of numerical-parametric analyses, from which it was found that structured cells fail according to a punching-type mechanism.

  9. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.

    Science.gov (United States)

    Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M

    2015-11-13

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C⁻¹ on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach
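
    The three-pool, first-order decomposition structure described above can be illustrated with a short Python sketch. The pool fractions, rate constants, reference temperature and Q10 value below are placeholder values for illustration only, not the fitted PInc-PanTher parameters.

        import numpy as np

        def carbon_remaining(c0, years, temp_c, fractions=(0.02, 0.10, 0.88),
                             k_ref=(2.0, 0.1, 0.002), q10=2.5, t_ref=5.0):
            """Fraction-weighted, first-order decay of a thawed carbon stock c0 (Pg C).

            fractions: fast/slow/passive pool split (placeholder values)
            k_ref:     decomposition rates (1/yr) at reference temperature t_ref (deg C)
            q10:       temperature sensitivity of decomposition (placeholder)
            """
            scale = q10 ** ((temp_c - t_ref) / 10.0)       # Q10 temperature scaling
            pools = c0 * np.asarray(fractions)
            remaining = pools * np.exp(-np.asarray(k_ref) * scale * years)
            return remaining.sum()

        # Example: cumulative loss from a 100 Pg C thawed stock after 90 years at 5 deg C
        print(100.0 - carbon_remaining(100.0, years=90.0, temp_c=5.0))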

  10. Influencing Factors and Simplified Model of Film Hole Irrigation

    Directory of Open Access Journals (Sweden)

    Yi-Bo Li

    2017-07-01

    Full Text Available Film hole irrigation is an advanced low-cost and high-efficiency irrigation method, which can improve water conservation and water use efficiency. Given its various advantages and potential applications, we conducted a laboratory study to investigate the effects of soil texture, bulk density, initial soil moisture, irrigation depth, opening ratio (ρ), film hole diameter (D), and spacing on cumulative infiltration using SWMS-2D. We then proposed a simplified model based on the Kostiakov model for infiltration estimation. Error analyses indicated SWMS-2D to be suitable for infiltration simulation of film hole irrigation. Additional SWMS-2D-based investigations indicated that, for a certain soil, initial soil moisture and irrigation depth had the weakest effects on cumulative infiltration, whereas ρ and D had the strongest effects on cumulative infiltration. A simplified model with ρ and D was further established, and its use was then expanded to different soils. Verification based on seven soil types indicated that the established simplified double-factor model effectively estimates cumulative infiltration for film hole irrigation, with a small mean average error of 0.141–2.299 mm, a root mean square error of 0.177–2.722 mm, a percent bias of −2.131–1.479%, and a large Nash–Sutcliffe coefficient that is close to 1.0.
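
    As background, the Kostiakov equation expresses cumulative infiltration as a power law in time, Z = K t^a. The Python sketch below fits those two coefficients to observed data; it does not reproduce the double-factor dependence on ρ and D proposed in the paper, and the observations shown are synthetic.

        import numpy as np

        def fit_kostiakov(t_min, z_mm):
            """Fit the Kostiakov cumulative-infiltration law Z = K * t**a by
            linear regression in log-log space."""
            slope, intercept = np.polyfit(np.log(t_min), np.log(z_mm), 1)
            return np.exp(intercept), slope     # K, a

        # Synthetic observations roughly following Z = 3 * t**0.6 (illustrative only)
        t = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
        z = 3.0 * t ** 0.6
        print(fit_kostiakov(t, z))              # approximately (3.0, 0.6)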

  11. Simplified hourly method to calculate summer temperatures in dwellings

    DEFF Research Database (Denmark)

    Mortensen, Lone Hedegaard; Aggerholm, Søren

    2012-01-01

    The simplified method used Danish weather data and only needs information on transmission losses, thermal mass, surface contact, internal load, ventilation scheme and solar load… The results are based on one-year simulations of two cases carried out with a program for thermal simulations of buildings. The cases were based on a low-energy dwelling of 196 m². The transmission loss for the building envelope was 3.3 W/m², not including windows and doors. The dwelling was tested in two cases: a case with an ordinary distribution of windows and a “worst” case where the window area facing south and west was increased by more than 60%.

  12. A simplified 137Cs transport model for estimating erosion rates in undisturbed soil

    International Nuclear Information System (INIS)

    Zhang Xinbao; Long Yi; He Xiubin; Fu Jiexiong; Zhang Yunqi

    2008-01-01

    137Cs is an artificial radionuclide with a half-life of 30.12 years which was released into the environment as a result of atmospheric testing of thermo-nuclear weapons, primarily during the 1950s-1970s, with the maximum rate of 137Cs fallout from the atmosphere in 1963. 137Cs fallout is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil when it is deposited on the ground, mostly with precipitation. Its subsequent redistribution is associated with movements of the soil or sediment particles. The 137Cs nuclide tracing technique has been used for the assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the 137Cs depth distribution in the profile, where the maximum 137Cs occurs in the surface horizon and exponentially decreases with depth. The model implied that the total 137Cs fallout was deposited on the earth's surface in 1963 and that the 137Cs profile shape has not changed with time. The model has been widely used for the assessment of soil losses on undisturbed land. However, temporal variations of the 137Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the previous simple profile-shape model. Thus, soil losses are overestimated by the model. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the 137Cs transport process in the eroded soil profile, make some simplifications to the model, and develop a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses related to different 137Cs loss proportions of the reference inventory at the Kaixian site of the
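
    For reference, the widely used profile-shape model mentioned above converts a measured 137Cs inventory loss into an erosion rate through an exponential depth distribution. The sketch below follows the commonly cited form of that conversion; the relaxation mass depth and sampling year are illustrative inputs, and this is not the simplified transport model proposed by the authors.

        import math

        def erosion_rate_profile_shape(a_ref, a_meas, h0, sample_year):
            """Erosion rate (t ha^-1 yr^-1) from the exponential profile-shape model.

            a_ref:       local 137Cs reference inventory (Bq m^-2)
            a_meas:      measured inventory at the eroded point (Bq m^-2)
            h0:          profile relaxation mass depth (kg m^-2), site specific
            sample_year: year of sampling; peak fallout assumed in 1963
            """
            x = (a_ref - a_meas) / a_ref        # fractional inventory loss
            if not 0.0 < x < 1.0:
                raise ValueError("inventory loss fraction must lie between 0 and 1")
            return 10.0 / (sample_year - 1963) * h0 * math.log(1.0 / (1.0 - x))

        # Illustrative numbers: 25% inventory loss, h0 = 4 kg m^-2, sampled in 2005
        print(erosion_rate_profile_shape(2500.0, 1875.0, 4.0, 2005))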

  13. Simplified method for the transverse bending analysis of twin celled concrete box girder bridges

    Science.gov (United States)

    Chithra, J.; Nagarajan, Praveen; S, Sajith A.

    2018-03-01

    Box girder bridges are one of the best options for bridges with spans of more than 25 m. For the study of these bridges, three-dimensional finite element analysis is the best suited method. However, performing three-dimensional analysis for routine design is difficult as well as time consuming, and the software used for three-dimensional analysis is very expensive. Hence, designers resort to simplified analysis for predicting longitudinal and transverse bending moments. Among the many analytical methods used to find the transverse bending moments, simplified frame analysis (SFA) is the simplest and the most widely used in design offices. Results from SFA can be used for the preliminary analysis of concrete box girder bridges. From the review of the literature, it is found that the majority of the work done using SFA is restricted to the analysis of single cell box girder bridges; not much work has been done on the analysis of multi-cell concrete box girder bridges. In the present study, a double cell concrete box girder bridge is chosen. The bridge is modelled using three-dimensional finite element software and the results are then compared with the simplified frame analysis. The study mainly focuses on establishing correction factors for transverse bending moment values obtained from SFA.

  14. Appraisal of elastic follow-up for a generic mechanical structure through two simplified methods

    International Nuclear Information System (INIS)

    Gamboni, S.; Ravera, C.; Stretti, G.; Rebora, A.

    1989-01-01

    Elastic follow-up (EFU) is a complex phenomenon which affects the behaviour of some structural components, especially in high temperature operations. One of the major problems encountered by the designer is the quantitative evaluation of the amount of elastic follow-up that must be taken into account for the structures under examination. In the present paper, a review of the guidance furnished by the ASME Code regarding EFU is presented through an application concerning a structural problem in which EFU occurs. This has been carried out with the additional purpose of comparing the percentage EFU obtained by two simplified methods: an inelastic simplified method involving relaxation analysis, and the reduced elastic modulus procedure generally used for EFU problems in piping systems. The results obtained demonstrate a substantial agreement between the two methodologies when applied to a generic structure. (author)

  15. Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.

    1984-08-01

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in cities (21) covered by both the TRY and WYEC data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.

  16. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
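
    As a generic illustration of the control-variate idea described above (not the repository-specific implementation), the sketch below reduces the variance of a Monte Carlo mean estimate by exploiting a correlated quantity whose expectation is known, such as a coarse-mesh or analytical solution.

        import numpy as np

        def control_variate_mean(y, x, x_mean_exact):
            """Improved estimate of E[Y] using a control variate X with known mean.

            y: samples of the quantity of interest (e.g. fine-model PQI)
            x: correlated samples (e.g. cheap/coarse-model PQI) with known mean
            """
            cov = np.cov(y, x)
            c_opt = cov[0, 1] / cov[1, 1]       # variance-minimizing coefficient
            return np.mean(y) - c_opt * (np.mean(x) - x_mean_exact)

        # Toy example: Y = exp(U), control variate X = U with U ~ Uniform(0, 1), E[X] = 0.5
        rng = np.random.default_rng(0)
        u = rng.uniform(0.0, 1.0, size=2000)
        print(np.mean(np.exp(u)), control_variate_mean(np.exp(u), u, 0.5))  # both near e - 1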

  17. Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing.

    Science.gov (United States)

    Kim, YoungHwan; Reigeluth, Charles M.

    The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…

  18. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    Science.gov (United States)

    Koven, C.D.; Schuur, E.A.G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J.W.; Hayes, D.J.; Hugelius, G.; Jafarov, Elchin E.; Krinner, G.; Kuhry, P.; Lawrence, D.M.; MacDougall, A. H.; Marchenko, Sergey S.; McGuire, A. David; Natali, Susan M.; Nicolsky, D.J.; Olefeldt, David; Peng, S.; Romanovsky, V.E.; Schaefer, Kevin M.; Strauss, J.; Treat, C.C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The

  19. Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.

    1984-08-01

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.

  20. Non-intrusive speech quality assessment in simplified e-model

    OpenAIRE

    Vozňák, Miroslav

    2012-01-01

    The E-model brings a modern approach to the computation of estimated quality, allowing for easy implementation. One of its advantages is that it can be applied in real time. The method is based on a mathematical computation model evaluating transmission path impairments influencing the speech signal, especially delays and packet losses. These parameters, common in an IP network, can affect speech quality dramatically. The paper deals with a proposal for a simplified E-model and its pr...
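
    For orientation, the full E-model combines a base factor with impairment terms, R = R0 − Is − Id − Ie,eff + A, and the R-factor maps to a mean opinion score. The sketch below uses commonly cited ITU-T G.107-style delay and packet-loss approximations rather than the specific simplification proposed in the paper, and the codec constants are illustrative defaults.

        def r_factor(delay_ms, loss_pct, r0=93.2, ie=0.0, bpl=10.0, advantage=0.0):
            """Simplified R-factor using common G.107-style approximations.

            r0:  base signal-to-noise factor (a frequently quoted default)
            ie:  codec equipment impairment; bpl: packet-loss robustness
                 (both codec specific; values here are illustrative)
            """
            # Delay impairment (Cole-Rosenbluth style approximation of Id)
            i_d = 0.024 * delay_ms + 0.11 * max(delay_ms - 177.3, 0.0)
            # Effective equipment impairment under random packet loss
            ie_eff = ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)
            return r0 - i_d - ie_eff + advantage

        def mos_from_r(r):
            """Map an R-factor to a mean opinion score (standard G.107 mapping)."""
            if r <= 0.0:
                return 1.0
            if r >= 100.0:
                return 4.5
            return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7.0e-6

        print(mos_from_r(r_factor(delay_ms=150.0, loss_pct=1.0)))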

  1. Performance study of the simplified theory of plastic zones and the Twice-Yield method for the fatigue check

    International Nuclear Information System (INIS)

    Hübel, Hartwig; Willuweit, Adrian; Rudolph, Jürgen; Ziegler, Rainer; Lang, Hermann; Rother, Klemens; Deller, Simon

    2014-01-01

    As elastic–plastic fatigue analyses are still time consuming, the simplified elastic–plastic analysis (e.g. ASME Section III, NB 3228.5, the French RCC-M code, paragraphs B 3234.3, B 3234.5 and B 3234.6, and the German KTA rule 3201.2, paragraph 7.8.4) is often applied. Besides linearly elastic analyses and factorial plasticity correction (Ke factors), direct methods are an option. In fact, calculation effort and accuracy of results are growing in the following graded scheme: a) linearly elastic analysis along with Ke correction, b) direct methods for the determination of stabilized elastic–plastic strain ranges and c) incremental elastic–plastic methods for the determination of stabilized elastic–plastic strain ranges. The paper concentrates on option b) by substantiating the practical applicability of the simplified theory of plastic zones STPZ (based on Zarka's method) and – for comparison – the established Twice-Yield method. The Twice-Yield method is explicitly addressed in ASME Code, Section VIII, Div. 2. Application relevant aspects are particularly addressed. Furthermore, the applicability of the STPZ for arbitrary load time histories in connection with an appropriate cycle counting method is discussed. Note that the STPZ is applicable both for the determination of (fatigue relevant) elastic–plastic strain ranges and (ratcheting relevant) locally accumulated strains. This paper concentrates on the performance of the method in terms of the determination of elastic–plastic strain ranges and fatigue usage factors. The additional performance in terms of locally accumulated strains and ratcheting will be discussed in a future publication. - Highlights: • Simplified elastic–plastic fatigue analyses. • Simplified theory of plastic zones. • Thermal cyclic loading. • Twice-Yield method. • Practical application examples

  2. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2007-01-01

    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user-defined ranges of possible headings and speeds. For linear responses, standard frequency domain methods can be applied. For non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using the first-order reliability method (FORM), well known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...
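
    As a minimal illustration of FORM itself (not of the simplified roll-motion model used in the paper), the sketch below locates the design point of a limit-state function in standard normal space and converts the reliability index into an exceedance probability. The quadratic limit state is purely illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def form_exceedance(limit_state, n_dim, u_start=None):
            """FORM: beta is the distance from the origin to the surface
            limit_state(u) = 0 in standard normal space; P_exceed ~ Phi(-beta)."""
            u0 = np.full(n_dim, 0.5) if u_start is None else np.asarray(u_start, float)
            res = minimize(lambda u: float(np.dot(u, u)), u0,
                           constraints=[{"type": "eq", "fun": limit_state}])
            beta = np.sqrt(res.fun)
            return beta, norm.sf(beta)

        # Illustrative limit state: failure when 3 - u1 - 0.2*u2**2 <= 0
        g = lambda u: 3.0 - u[0] - 0.2 * u[1] ** 2
        print(form_exceedance(g, n_dim=2))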

  3. New methods for estimating follow-up rates in cohort studies

    Directory of Open Access Journals (Sweden)

    Xiaonan Xue

    2017-12-01

    Full Text Available Abstract Background The follow-up rate, a standard index of the completeness of follow-up, is important for assessing the validity of a cohort study. A common method for estimating the follow-up rate, the “Percentage Method”, defined as the fraction of all enrollees who developed the event of interest or had complete follow-up, can severely underestimate the degree of follow-up. Alternatively, the median follow-up time does not indicate the completeness of follow-up, and the reverse Kaplan-Meier based method and Clark’s Completeness Index (CCI) also have limitations. Methods We propose a new definition for the follow-up rate, the Person-Time Follow-up Rate (PTFR), which is the observed person-time divided by total person-time assuming no dropouts. The PTFR cannot be calculated directly since the event times for dropouts are not observed. Therefore, two estimation methods are proposed: a formal person-time method (FPT) in which the expected total follow-up time is calculated using the event rate estimated from the observed data, and a simplified person-time method (SPT) that avoids estimation of the event rate by assigning full follow-up time to all events. Simulations were conducted to measure the accuracy of each method, and each method was applied to a prostate cancer recurrence study dataset. Results Simulation results showed that the FPT has the highest accuracy overall. In most situations, the computationally simpler SPT and CCI methods are only slightly biased. When applied to a retrospective cohort study of cancer recurrence, the FPT, CCI and SPT showed substantially greater 5-year follow-up than the Percentage Method (92%, 92% and 93% vs 68%). Conclusions The Person-time methods correct a systematic error in the standard Percentage Method for calculating follow-up rates. The easy to use SPT and CCI methods can be used in tandem to obtain an accurate and tight interval for PTFR. However, the FPT is recommended when event rates and
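
    The person-time idea behind the SPT variant can be written down in a few lines. The sketch below assumes a single administrative follow-up horizon shared by all enrollees, which is a simplification of the setting considered in the paper.

        def spt_follow_up_rate(follow_up_years, had_event, max_follow_up):
            """Simplified person-time (SPT) estimate of the follow-up rate.

            follow_up_years: observed follow-up time for each enrollee
            had_event:       True if the enrollee developed the event of interest
            max_follow_up:   potential follow-up time assuming no dropout
                             (assumed equal for everyone in this sketch)
            """
            # Events are credited full follow-up time, avoiding event-rate estimation
            observed = sum(max_follow_up if e else t
                           for t, e in zip(follow_up_years, had_event))
            return observed / (len(follow_up_years) * max_follow_up)

        print(spt_follow_up_rate([5.0, 2.0, 1.5, 5.0], [False, True, False, False], 5.0))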

  4. A coupled remote sensing and simplified surface energy balance approach to estimate actual evapotranspiration from irrigated fields

    Science.gov (United States)

    Senay, G.B.; Budde, Michael; Verdin, J.P.; Melesse, Assefa M.

    2007-01-01

    Accurate crop performance monitoring and production estimation are critical for timely assessment of the food balance of several countries in the world. Since 2001, the Famine Early Warning Systems Network (FEWS NET) has been monitoring crop performance and relative production using satellite-derived data and simulation models in Africa, Central America, and Afghanistan where ground-based monitoring is limited because of a scarcity of weather stations. The commonly used crop monitoring models are based on a crop water-balance algorithm with inputs from satellite-derived rainfall estimates. These models are useful to monitor rainfed agriculture, but they are ineffective for irrigated areas. This study focused on Afghanistan, where over 80 percent of agricultural production comes from irrigated lands. We developed and implemented a Simplified Surface Energy Balance (SSEB) model to monitor and assess the performance of irrigated agriculture in Afghanistan using a combination of 1-km thermal data and 250m Normalized Difference Vegetation Index (NDVI) data, both from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. We estimated seasonal actual evapotranspiration (ETa) over a period of six years (2000-2005) for two major irrigated river basins in Afghanistan, the Kabul and the Helmand, by analyzing up to 19 cloud-free thermal and NDVI images from each year. These seasonal ETa estimates were used as relative indicators of year-to-year production magnitude differences. The temporal water-use pattern of the two irrigated basins was indicative of the cropping patterns specific to each region. Our results were comparable to field reports and to estimates based on watershed-wide crop water-balance model results. For example, both methods found that the 2003 seasonal ETa was the highest of all six years. The method also captured water management scenarios where a unique year-to-year variability was identified in addition to water-use differences between

  5. Decay ratio estimation in pressurized water reactor

    International Nuclear Information System (INIS)

    Por, G.; Runkel, J.

    1990-11-01

    The well-known decay ratio (DR) from stability analysis of boiling water reactors (BWRs) is estimated from the impulse response function, which was evaluated using a simplified univariate autoregression method. This simplified DR, called the modified DR (mDR), was applied to neutron noise measurements carried out during five fuel cycles of a 1300 MWe PWR. Results show that this fast evaluation method can be used for monitoring the growing oscillation of the neutron flux during the fuel cycles, which is a major concern of utilities operating PWRs, and thus for estimating safety margins. (author) 17 refs.; 10 figs
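
    The modified decay ratio rests on a standard recipe: fit a univariate autoregressive (AR) model to the detrended neutron noise, generate its impulse response, and take the ratio of successive oscillation peaks. The sketch below is a generic version of that recipe (Yule-Walker fit, illustrative model order), not the authors' exact implementation.

        import numpy as np

        def ar_coefficients(x, order=10):
            """Fit a univariate AR model by solving the Yule-Walker equations."""
            x = np.asarray(x, float) - np.mean(x)
            n = len(x)
            r = np.array([x[: n - k] @ x[k:] for k in range(order + 1)]) / n
            toeplitz = np.array([[r[abs(i - j)] for j in range(order)]
                                 for i in range(order)])
            return np.linalg.solve(toeplitz, r[1 : order + 1])

        def decay_ratio(a, n_impulse=400):
            """Decay ratio = second positive peak of the AR impulse response
            divided by the first."""
            h = np.zeros(n_impulse)
            h[0] = 1.0
            for i in range(1, n_impulse):
                h[i] = sum(a[k] * h[i - 1 - k] for k in range(len(a)) if i - 1 - k >= 0)
            peaks = [h[i] for i in range(1, n_impulse - 1)
                     if h[i] > h[i - 1] and h[i] >= h[i + 1] and h[i] > 0]
            return peaks[1] / peaks[0] if len(peaks) >= 2 else float("nan")

        # Usage with a neutron-noise signal sampled at a fixed rate:
        # dr = decay_ratio(ar_coefficients(signal, order=15))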

  6. Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods

    CSIR Research Space (South Africa)

    Snedden, Glen C

    2003-09-01

    Full Text Available (ISABE-2005-1130) In order to calculate the life degradation of gas turbine disc assemblies, it is necessary to model the transient thermal and mechanical...

  7. SIMPLIFIED MATHEMATICAL MODEL OF SMALL SIZED UNMANNED AIRCRAFT VEHICLE LAYOUT

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available A strong reduction of the design period for new aircraft, using new technology based on artificial intelligence, is a key problem mentioned in forecasts of leading aerospace industry research centers. This article covers an approach to the development of quick aerodynamic design methods based on an artificial neural network system. The problem is solved for the classical scheme of a small sized unmanned aircraft vehicle (UAV). The principal parts of the method are the mathematical model of the layout, a layout generator for this type of aircraft built on artificial neural networks, an automatic selection module for cleaning the variety of layouts generated in automatic mode, a robust direct computational fluid dynamics method, and aerodynamic characteristics approximators based on artificial neural networks. Methods based on artificial neural networks occupy an intermediate position between computational fluid dynamics methods or experiments and simplified engineering approaches. The use of ANN for estimating aerodynamic characteristics puts limitations on the input data. For this task the layout must be presented as a vector with dimension not exceeding several hundred. Vector components must include all main parameters conventionally used for layout description and completely replicate the most important aerodynamic and structural properties. The first stage of the work is presented in the paper. A simplified mathematical model of a small sized UAV was developed. To estimate the range of geometrical parameters of layouts, a review of existing vehicles was done. The result of the work is the algorithm and computer software for generating layouts based on ANN technology. 10000 samples were generated and a dataset containing the geometrical and aerodynamic characteristics of the layouts was created.

  8. The use of maturity method in estimating concrete strength

    International Nuclear Information System (INIS)

    Salama, A.E.; Abd El-Baky, S.M.; Ali, E.E.; Ghanem, G.M.

    2005-01-01

    Prediction of the early age strength of concrete is essential for modern concrete construction as well as for the manufacturing of structural parts. Safe and economic scheduling of such critical operations as form removal and re-shoring, application of post-tensioning or other mechanical treatment, and in-process transportation and rapid delivery of products should all be based upon a good grasp of the strength development of the concrete in use. For many years, it has been proposed that the strength of concrete can be related to a simple mathematical function of time and temperature, so that strength could be assessed by calculation without mechanical testing. Such functions are used to compute what is called the maturity of concrete, and the computed value is believed to correlate with the strength of concrete. With its simplicity and low cost, the application of the maturity concept as an in situ testing method has received wide attention and found its use in engineering practice. This research work investigates the use of the maturity method in estimating concrete strength. An experimental program is designed to estimate the concrete strength by using the maturity method, using different concrete mixes made with available local materials: ordinary Portland cement, crushed stone, silica fume, fly ash, and admixtures with different contents. All the specimens were exposed to different curing temperatures (10, 25 and 40 °C) in order to get a simplified expression of maturity that fits the influence of temperature. Mix designs and charts obtained from this research can be used as guide information for estimating concrete strength by using the maturity method
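
    One common way to put the time-temperature idea into numbers is the Nurse-Saul maturity function, M = Σ (T − T0) Δt. The datum temperature and the strength-maturity calibration in the sketch below are illustrative placeholders, not values from this study.

        import math

        def nurse_saul_maturity(temps_c, interval_h, datum_c=-10.0):
            """Nurse-Saul maturity index (deg C * h) from a curing temperature history."""
            return sum(max(t - datum_c, 0.0) * interval_h for t in temps_c)

        # Example: 48 h of curing logged hourly at 25 deg C, then 24 h at 10 deg C
        history = [25.0] * 48 + [10.0] * 24
        m = nurse_saul_maturity(history, interval_h=1.0)

        # Strength is then read off a mix-specific calibration curve, e.g. a fitted
        # logarithmic relation (coefficients below are purely illustrative)
        strength_mpa = -20.0 + 8.0 * math.log10(m)
        print(m, strength_mpa)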

  9. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, a modulating functions-based method is proposed for estimating the space-time dependent unknown velocity in the wave equation. The proposed method simplifies the identification problem into a system of linear algebraic equations. Numerical

  10. A simplified approach to the PROMETHEE method for priority setting in management of mine action projects

    Directory of Open Access Journals (Sweden)

    Marko Mladineo

    2016-12-01

    Full Text Available In the last 20 years, priority setting in mine actions, i.e. in humanitarian demining, has become an increasingly important topic. Given that mine action projects require management and decision-making based on a multi-criteria approach, multi-criteria decision-making methods like PROMETHEE and AHP have been used worldwide for priority setting. However, from the aspect of mine action, where stakeholders in the decision-making process for priority setting are project managers, local politicians, leaders of different humanitarian organizations, or similar, applying these methods can be difficult. Therefore, a specialized web-based decision support system (Web DSS) for priority setting, developed as part of the FP7 project TIRAMISU, has been extended using a module for developing custom priority setting scenarios in line with an exceptionally easy, user-friendly approach. The idea behind this research is to simplify the multi-criteria analysis based on the PROMETHEE method. Therefore, a simplified PROMETHEE method based on statistical analysis for automated suggestions of parameters such as preference function thresholds, interactive selection of criteria weights, and easy input of criteria evaluations is presented in this paper. The result is a web-based DSS that can be applied worldwide for priority setting in mine action. Additionally, the management of mine action projects is supported using modules for providing spatial data based on the geographic information system (GIS). In this paper, the benefits and limitations of a simplified PROMETHEE method are presented using a case study involving mine action projects, and subsequently, certain proposals are given for further research.
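
    For readers unfamiliar with the underlying outranking computation, the sketch below implements a bare-bones PROMETHEE II net-flow ranking with a linear preference function. The automated threshold suggestion and GIS integration described above are not part of the sketch, and the scores, weights and thresholds shown are placeholders.

        import numpy as np

        def promethee_ii(scores, weights, p_thresholds):
            """Net outranking flows for an alternatives x criteria matrix `scores`
            (higher is better), using a linear preference function with threshold p."""
            n, _ = scores.shape
            w = np.asarray(weights, float) / np.sum(weights)
            p = np.asarray(p_thresholds, float)
            phi = np.zeros(n)
            for a in range(n):
                for b in range(n):
                    if a == b:
                        continue
                    d = scores[a] - scores[b]               # criterion-wise difference
                    pref = np.clip(d / p, 0.0, 1.0)         # linear preference function
                    pi_ab = float(np.dot(w, pref))          # aggregated preference index
                    phi[a] += pi_ab / (n - 1)               # positive-flow contribution
                    phi[b] -= pi_ab / (n - 1)               # negative-flow contribution
            return phi                                      # rank by decreasing net flow

        # Three demining tasks scored on three criteria (placeholder data)
        scores = np.array([[0.8, 0.2, 0.5],
                           [0.4, 0.9, 0.6],
                           [0.6, 0.5, 0.9]])
        print(promethee_ii(scores, [0.5, 0.3, 0.2], [0.5, 0.5, 0.5]))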

  11. Simplified method for beatlength measurement in optical fibre

    International Nuclear Information System (INIS)

    Chu, R.; Town, G.

    2000-01-01

    Full text: A simplified technique for measuring beatlength in birefringent optical fibres using magnetic modulation was analysed and tested experimentally. By avoiding the use of unnecessary optical components and splicing to the fibre under test, the beatlength was measured accurately with a good signal-to-noise ratio.

  12. Simplified analysis method for vibration of fusion reactor components with magnetic damping

    International Nuclear Information System (INIS)

    Tanaka, Yoshikazu; Horie, Tomoyoshi; Niho, Tomoya

    2000-01-01

    This paper describes two simplified analysis methods for magnetically damped vibration. One method modifies the result of an uncoupled finite element analysis using a coupling intensity parameter, and the other uses the solution and coupled eigenvalues of a single-degree-of-freedom coupled model. To verify these methods, numerical analyses of a plate and a thin cylinder are performed. The comparison between the results of the former method and the tightly coupled finite element analysis shows almost satisfactory agreement. The results of the latter method agree very well with the tightly coupled finite element results because of the coupled eigenvalues. Since vibration with magnetic damping can be evaluated using these methods without a coupled finite element analysis, these approximate methods will be practical and useful for a wide range of design analyses that take the magnetic damping effect into account

  13. Performance Analyses of Counter-Flow Closed Wet Cooling Towers Based on a Simplified Calculation Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wei

    2017-02-01

    Full Text Available As one of the most widely used units in water cooling systems, the closed wet cooling towers (CWCTs) have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction vertically (parallel) or horizontally (cross), respectively. This study aims to present a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT) and the cross counter-flow CWCT (CCFCWCT). A simplified cooling capacity model that just includes two characteristic parameters is developed. The Levenberg–Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with the measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of thermal performance are similar to each other under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and the superiority would be amplified when the scale of water

  14. Prediction of the heat gain of external walls: An innovative approach for full-featured excitations based on the simplified method of Mackey-and-Wright

    International Nuclear Information System (INIS)

    Ruivo, C.R.; Vaz, D.C.

    2015-01-01

    Highlights: • The transient thermal behaviour of external multilayer walls of buildings is studied. • Reference results for four representative walls, obtained with a numerical model, are provided. • Shortcomings of approaches based on the Mackey-and-Wright method are identified. • Handling full-feature excitations with Fourier series decomposition improves accuracy. • A simpler, yet accurate, promising novel approach to predict heat gain is proposed. - Abstract: Nowadays, simulation tools are available for calculating the thermal loads of multiple rooms of buildings, for given inputs. However, due to inaccuracies or uncertainties in some of the input data (e.g., thermal properties, air infiltration flow rates, building occupancy), the evaluated thermal load may represent no more than just an estimate of the actual thermal load of the spaces. Accordingly, in certain practical situations, simplified methods may offer a more reasonable trade-off between effort and results accuracy than advanced software. Hence, despite the advances in computing power over the last decades, simplified methods for the evaluation of thermal loads are still of great interest nowadays, for both the practicing engineer and the graduating student, since these can be readily implemented or developed in common computational tools, like a spreadsheet. The method of Mackey and Wright (M&W) is a simplified method that, from values of the decrement factor and time lag of a wall (or roof), estimates the instantaneous rate of heat transfer through its indoor surface. It assumes cyclic behaviour and shows good accuracy when the excitation and response have matching shapes, but it involves non-negligible error otherwise, for example, in the case of walls of high thermal inertia. The aim of this study is to develop a simplified procedure that considerably improves the accuracy of the M&W method, particularly for excitations that noticeably depart from the sinusoidal shape, while not
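
    In its classical form the M&W relation estimates the indoor-surface heat gain from the daily mean sol-air temperature plus a damped, lagged fluctuation, roughly q(t) = U[(T̄sa − Ti) + λ(Tsa(t − φ) − T̄sa)]. The sketch below encodes that relation with illustrative inputs; it is an assumption about the classical form, not the enhanced procedure proposed in the paper.

        import math

        def mackey_wright_heat_gain(u_value, sol_air, t_indoor, decrement, lag_h, hour):
            """Hourly heat gain per unit wall area (W/m^2) from the classical
            Mackey-and-Wright relation, assuming a cyclic 24 h excitation."""
            t_mean = sum(sol_air) / len(sol_air)        # daily mean sol-air temperature
            t_lagged = sol_air[(hour - lag_h) % 24]     # sol-air temperature lag_h hours earlier
            return u_value * ((t_mean - t_indoor) + decrement * (t_lagged - t_mean))

        # Illustrative inputs: U = 0.6 W/m^2K, sinusoidal sol-air cycle, lambda = 0.3, phi = 8 h
        sol_air = [30.0 + 12.0 * math.sin(2.0 * math.pi * (h - 9) / 24.0) for h in range(24)]
        print([round(mackey_wright_heat_gain(0.6, sol_air, 24.0, 0.3, 8, h), 2)
               for h in (0, 12, 18)])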

  15. Simplified DFT methods for consistent structures and energies of large systems

    Science.gov (United States)

    Caldeweyher, Eike; Gerit Brandenburg, Jan

    2018-05-01

    Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview on the methods design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.

  16. Small-scale CDM projects in a competitive electricity industry: How good is a simplified baseline methodology?

    International Nuclear Information System (INIS)

    Shrestha, Ram M.; Abeygunawardana, A.M.A.K.

    2007-01-01

    Setting baseline emissions is one of the principal tasks involved in awarding credits for greenhouse gas emission (GHG) mitigation projects under the Clean Development Mechanism (CDM). An emission baseline has to be project-specific in order to be accurate. However, project-specific baseline calculations are subject to high transaction costs, which disadvantage small-scale projects. For this reason, the CDM-Executive Board (CDM-EB) has approved simplified baseline methodologies for selected small-scale CDM project categories. While the simplified methods help reduce the transaction cost, they may also result in inaccuracies in the estimation of emission reductions from CDM projects. The purpose of this paper is to present a rigorous economic scheduling method for calculating the GHG emission reduction in a hypothetical competitive electricity industry due to the operation of a renewable energy-based power plant under CDM and compare the GHG emission reduction derived from the rigorous method with that obtained from the use of a simplified (i.e., standardized) method approved by the CDM-EB. A key finding of the paper is that depending upon the level of power demand, prices of electricity and input fuels, the simplified method can lead to either significant overestimation or substantial underestimation of emission reduction due to the operation of renewable energy-based power projects in a competitive electricity industry

  17. Simplified analysis of laterally loaded pile groups

    Directory of Open Access Journals (Sweden)

    F.M. Abdrabbo

    2012-06-01

    Full Text Available The response of laterally loaded pile groups is a complicated soil–structure interaction problem. Although fairly reliable methods have been developed to predict the lateral behavior of single piles, the lateral response of pile groups has attracted less attention due to the high cost and complexity involved. This study presents a simplified method to analyze laterally loaded pile groups. The proposed method implements p-multiplier factors in combination with the horizontal modulus of subgrade reaction. Shadowing effects in closely spaced piles in a group were taken into consideration. It is proven that laterally loaded piles embedded in sand can be analyzed within the working load range assuming a linear relationship between lateral load and lateral displacement. The proposed method estimates the distribution of lateral loads among piles in a pile group and predicts the safe design lateral load of a pile group. The benefit of the proposed method is its simplicity for the preliminary design stage, with little computational effort.
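
    Within the linear working-load assumption stated above, distributing a group load among the piles reduces to a stiffness-weighted share, with each row's lateral stiffness scaled by its p-multiplier. The sketch below illustrates that bookkeeping with placeholder multipliers and stiffness, not the paper's calibrated values.

        def distribute_lateral_load(total_load_kn, p_multipliers, k_single_kn_per_mm):
            """Share of a group lateral load carried by each pile, assuming a rigid cap
            (equal head displacement) and linear load-displacement behaviour."""
            stiffnesses = [fm * k_single_kn_per_mm for fm in p_multipliers]
            displacement_mm = total_load_kn / sum(stiffnesses)   # common head displacement
            loads = [k * displacement_mm for k in stiffnesses]   # load taken by each pile
            return displacement_mm, loads

        # Three-row group: leading row fm = 0.8, middle 0.4, trailing 0.3 (illustrative)
        print(distribute_lateral_load(600.0, [0.8, 0.4, 0.3], k_single_kn_per_mm=25.0))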

  18. Improved Simplified Methods for Effective Seismic Analysis and Design of Isolated and Damped Bridges in Western and Eastern North America

    Science.gov (United States)

    Koval, Viacheslav

    The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. A novel Dual-Level Seismic

  19. Cask crush pad analysis using detailed and simplified analysis methods

    International Nuclear Information System (INIS)

    Uldrich, E.D.; Hawkes, B.D.

    1997-01-01

    A crush pad has been designed and analyzed to absorb the kinetic energy of a hypothetically dropped spent nuclear fuel shipping cask into a 44-ft-deep cask unloading pool at the Fluorinel and Storage Facility (FAST). This facility, located at the Idaho Chemical Processing Plant (ICPP) at the Idaho National Engineering and Environmental Laboratory (INEEL), is a US Department of Energy site. The basis for this study is an analysis by Uldrich and Hawkes. The purpose of this analysis was to evaluate various hypothetical cask drop orientations to ensure that the crush pad design was adequate and the cask deceleration at impact was less than 100 g. It is demonstrated herein that a large spent fuel shipping cask, when dropped onto a foam crush pad, can be analyzed by either hand methods or by sophisticated dynamic finite element analysis using computer codes such as ABAQUS. Results from the two methods are compared to evaluate the accuracy of the simplified hand analysis approach

  20. Development and validation of a simplified titration method for monitoring volatile fatty acids in anaerobic digestion.

    Science.gov (United States)

    Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie

    2017-09-01

    The volatile fatty acid (VFA) concentration has been considered one of the most sensitive process performance indicators in the anaerobic digestion (AD) process. However, the accurate determination of the VFA concentration in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of interfering ion and solid subsystems in titrated samples on the accuracy of results was discussed. The total solids content of the titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a high linear correlation was established between the total solids content and the difference in VFA measurements between the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment of chicken manure anaerobic digestion with various organic loading rates. The good fit of the results obtained by this method in comparison with GC results strongly supports the potential application of this method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.

  1. Simplified methods to the complete thermal and mechanical behavior of a pressure vessel during a severe accident

    International Nuclear Information System (INIS)

    Dupas, P.; Schneiter, J.R.

    1996-01-01

    EDF has developed a software package of simplified methods (proprietary ones or from the literature) in order to study the thermal and mechanical behavior of a PWR pressure vessel during a severe accident involving a corium localization in the vessel lower head. Using a part of this package, the authors can, for instance, successively evaluate: the heat flux at the inner surface of the vessel (conductive or convective pool of corium); the thermal exchange coefficient between the vessel and the outside (dry pit or flooded pit, watertight thermal insulation or not); the complete thermal evolution of the vessel (temperature profile, melting); the possible global plastic failure of the vessel; the creep behavior in the thickness of the vessel. These simplified methods are a cost-effective alternative to finite element calculations, which are nevertheless used to validate them while experimental results are awaited

  2. Identification of new biomarker of radiation exposure for establishing rapid, simplified biodosimetric method

    International Nuclear Information System (INIS)

    Iizuka, Daisuke; Kawai, Hidehiko; Kamiya, Kenji; Suzuki, Fumio; Izumi, Shunsuke

    2014-01-01

    To date, counting chromosome aberrations is the most accurate method for evaluating radiation doses. However, this method is time consuming and requires skill in evaluating chromosome aberrations, and it could be difficult to apply it to the majority of people expected to be exposed to ionizing radiation. From this viewpoint, the establishment of rapid, simplified biodosimetric methods for triage is anticipated. Owing to the development of mass spectrometry methods and the identification of new molecules such as microRNAs (miRNAs), it is conceivable that new molecular biomarkers of radiation exposure can be identified using newly developed mass spectrometry. In this review article, part of our results on the changes in proteins (including changes in glycosylation), peptides, metabolites, and miRNAs after radiation exposure are presented. (author)

  3. Simplified thermal fatigue evaluations using the GLOSS method

    International Nuclear Information System (INIS)

    Adinarayana, N.; Seshadri, R.

    1996-01-01

    The Generalized Local Stress Strain (GLOSS) method has been extended to include thermal effects in addition to mechanical loadings. The method, designated as Thermal-GLOSS, has been applied to several pressure component configurations of practical interest. The inelastic strains calculated by the Thermal-GLOSS method have been compared with those from the Molski-Glinka method, the Neuber formula and inelastic finite element analysis, and found to give consistently good estimates. This is pertinent to power plant equipment.

  4. Enhancing the Simplified Surface Energy Balance (SSEB) Approach for Estimating Landscape ET: Validation with the METRIC model

    Science.gov (United States)

    Senay, Gabriel B.; Budde, Michael E.; Verdin, James P.

    2011-01-01

    Evapotranspiration (ET) can be derived from satellite data using surface energy balance principles. METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is one of the most widely used models available in the literature to estimate ET from satellite imagery. The Simplified Surface Energy Balance (SSEB) model is much easier and less expensive to implement. The main purpose of this research was to present an enhanced version of the Simplified Surface Energy Balance (SSEB) model and to evaluate its performance using the established METRIC model. In this study, SSEB and METRIC ET fractions were compared using seven Landsat images acquired for south central Idaho during the 2003 growing season. The enhanced SSEB model compared well with the METRIC model output, exhibiting an r² improvement from 0.83 to 0.90 in less complex topography (elevation less than 2000 m) and an improvement of r² from 0.27 to 0.38 in more complex (mountain) areas with elevation greater than 2000 m. Independent evaluation showed that both models exhibited higher variation in complex topographic regions, although more with SSEB than with METRIC. The higher ET fraction variation in the complex mountainous regions highlighted the difficulty of capturing the radiation and heat transfer physics on steep slopes having variable aspect with the simple index model, and the need to conduct more research. However, the temporal consistency of the results suggests that the SSEB model can be used over a wide range of elevations (more successfully up to 2000 m) to detect anomalies in space and time for water resources management and monitoring, such as for drought early warning systems in data scarce regions. SSEB has potential for operational agro-hydrologic applications to estimate ET with inputs of surface temperature, NDVI, DEM and reference ET.
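
    The core of SSEB is a linear scaling of land-surface temperature between hot and cold reference pixels, ETf = (Th − Ts)/(Th − Tc), with actual ET obtained as ETf times a reference ET. The sketch below states that relation directly; the reference temperatures and ETo are illustrative inputs.

        import numpy as np

        def sseb_eta(ts, t_hot, t_cold, eto):
            """Actual ET from the Simplified Surface Energy Balance scaling.

            ts:     land-surface temperature of the pixel (K)
            t_hot:  temperature of dry/bare 'hot' reference pixels (K)
            t_cold: temperature of well-watered 'cold' reference pixels (K)
            eto:    reference ET for the same period (mm)
            """
            et_fraction = np.clip((t_hot - ts) / (t_hot - t_cold), 0.0, 1.05)
            return et_fraction * eto

        # Illustrative values: hot pixel 318 K, cold pixel 298 K, pixel at 305 K, ETo = 6 mm/day
        print(sseb_eta(305.0, 318.0, 298.0, 6.0))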

  5. Simplified methods applied to the complete thermal and mechanical behaviour of a pressure vessel during a severe accident

    International Nuclear Information System (INIS)

    Dupas, P.

    1996-01-01

    EDF has developed a software package of simplified methods (proprietary ones or from the literature) in order to study the thermal and mechanical behaviour of a PWR pressure vessel during a severe accident involving a corium localization in the vessel lower head. Using a part of this package, we can, for instance, successively evaluate: the heat flux at the inner surface of the vessel (conductive or convective pool of corium); the thermal exchange coefficient between the vessel and the outside (dry pit or flooded pit, watertight thermal insulation or not); the complete thermal evolution of the vessel (temperature profile, melting); the possible global plastic failure of the vessel; the creep behaviour in the vessel. These simplified methods are a low-cost alternative to finite element calculations, which are nevertheless used to validate them while experimental results are awaited. (authors)

  6. Method for estimating road salt contamination of Norwegian lakes

    Science.gov (United States)

    Kitterød, Nils-Otto; Wike Kronvall, Kjersti; Turtumøygaard, Stein; Haaland, Ståle

    2013-04-01

    Consumption of road salt in Norway, used to improve winter road conditions, has tripled during the last two decades, and there is a need to quantify limits for optimal use of road salt to avoid further environmental harm. The purpose of this study was to implement methodology to estimate the chloride concentration in any given water body in Norway. This goal is feasible to achieve if the complexity of solute transport in the landscape is simplified. The idea was to keep computations as simple as possible to be able to increase the spatial resolution of input functions. The first simplification we made was to treat all roads exposed to regular salt application as steady state sources of sodium chloride. This is valid if new road salt is applied before previous contamination is removed through precipitation. The main reasons for this assumption are the significant retention capacity of vegetation, organic matter, and soil. The second simplification we made was that the groundwater table is close to the surface. This assumption is valid for the major part of Norway, which means that topography is sufficient to delineate the catchment area at any location in the landscape. Given these two assumptions, we applied spatial functions of mass load (mass NaCl per time unit) and conditional estimates of normal water balance (volume of water per time unit) to calculate the steady state chloride concentration along the lake perimeter. The spatial resolution of mass load and estimated concentration along the lake perimeter was 25 m × 25 m, while the water balance had 1 km × 1 km resolution. The method was validated for a limited number of Norwegian lakes and estimation results have been compared to observations. Initial results indicate significant overlap between measurements and estimations, but only for lakes where road salt is the major contributor to chloride contamination. For lakes in catchments with high subsurface transmissivity, the groundwater table is not necessarily following the
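
    Under the two simplifications described above, the steady-state chloride concentration at a point on the lake perimeter is simply the upstream salt mass load divided by the upstream water flux. A minimal sketch of that bookkeeping, with made-up numbers, follows.

        def steady_state_chloride(salt_load_kg_per_yr, runoff_m3_per_yr, cl_fraction=0.61):
            """Steady-state chloride concentration (mg/L) in water draining a catchment.

            salt_load_kg_per_yr: NaCl applied to salted roads upstream of the point
            runoff_m3_per_yr:    annual runoff volume of the same catchment area
            cl_fraction:         mass fraction of chloride in NaCl (about 0.61)
            """
            cl_load_mg = salt_load_kg_per_yr * cl_fraction * 1.0e6   # kg -> mg
            volume_l = runoff_m3_per_yr * 1.0e3                      # m^3 -> L
            return cl_load_mg / volume_l

        # Example: 50 t NaCl per year spread upstream, 4 million m^3 annual runoff
        print(steady_state_chloride(50000.0, 4.0e6))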

  7. Simplified methods for evaluating road prism stability

    Science.gov (United States)

    William J. Elliot; Mark Ballerini; David Hall

    2003-01-01

    Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...

  8. A Combined Gravity Compensation Method for INS Using the Simplified Gravity Model and Gravity Database.

    Science.gov (United States)

    Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang

    2018-05-14

    In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially for a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first derives the INS solution error in the presence of gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. The combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 obtains the gravity disturbance along the trajectory of the carrier through ELM training on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences) and then compensates it into the INS error equations that account for gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions: one in flat terrain with mild gravity variation and the other in complex terrain with strong gravity variation. Over the 2 h vehicle tests, the positioning accuracy of the two tests improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method.
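
    Step 1 subtracts a closed-form "normal" gravity value. The abstract does not spell out the simplified model used, so the sketch below stands in with the standard WGS-84 Somigliana formula plus a simple free-air height correction; the residual is the gravity disturbance that step 2 estimates from the database and feeds back into the INS error equations.

    ```python
    import math

    # Stand-in for the "simplified gravity model" of step 1: WGS-84 Somigliana normal
    # gravity plus a free-air height correction (the paper's own model may differ).

    GAMMA_E = 9.7803253359       # normal gravity at the equator, m/s^2 (WGS-84)
    K       = 1.93185265241e-3   # Somigliana constant (WGS-84)
    E2      = 6.69437999014e-3   # first eccentricity squared (WGS-84)

    def normal_gravity(lat_rad, height_m):
        s2 = math.sin(lat_rad) ** 2
        gamma0 = GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)
        return gamma0 - 3.086e-6 * height_m      # free-air gradient, ~0.3086 mGal/m

    # Illustrative sample: latitude 40 deg, height 50 m, measured gravity 9.80172 m/s^2.
    disturbance = 9.80172 - normal_gravity(math.radians(40.0), 50.0)
    print(f"gravity disturbance ~ {disturbance * 1e5:.1f} mGal")
    ```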

  9. A simple in vitro test tube method for estimating the bioavailability of phosphorus in feed ingredients for swine.

    Science.gov (United States)

    Bollinger, David W; Tsunoda, Atsushi; Ledoux, David R; Ellersieck, Mark R; Veum, Trygve L

    2004-04-07

    A simplified in vitro test tube (TT) method was developed to estimate the percentage of available P in feed ingredients for swine. The entire digestion procedure with the TT method consists of three consecutive enzymatic digestions carried out in a 50-mL conical test tube: (1) Pre-digestion with endo-xylanase and beta-glucanase for 1 h, (2) peptic digestion for 2 h, and (3) pancreatic digestion for 2 or 4 h. The TT method is simpler and much easier to perform compared to the dialysis tubing (DT) method, because dialysis tubing is not used. Reducing sample size from 1.0 to 0.25 g for the TT method improved results. In conclusion, the accuracy and validity of the TT method is equal to that of our more complicated DT method (r = 0.97, P < 0.001), designed to mimic the digestive system of swine, for estimating the availability of P in plant-origin feed ingredients.

  10. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.

    2016-03-21

    In this paper, a modulating functions-based method is proposed for estimating the space-time-dependent unknown velocity in the wave equation. The proposed method simplifies the identification problem into a system of linear algebraic equations. Numerical simulations on noise-free and noisy cases are provided in order to show the effectiveness of the proposed method.
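
    To make the reduction to linear algebra concrete, here is a one-dimensional sketch of the modulating-function idea (the window, basis functions and symbols are illustrative, not the paper's exact formulation). Multiplying the wave equation by a modulating function \(\varphi_j\) that vanishes, together with its derivatives, on the boundary of the space-time window, expanding the unknown squared velocity as \(c^2(x,t)\approx\sum_k \theta_k B_k(x,t)\) in a known basis, and integrating by parts so that all derivatives act on the known functions gives

    \[
    \int_0^T\!\!\int_0^L u\,\partial_{tt}\varphi_j\,\mathrm{d}x\,\mathrm{d}t
    \;=\;\sum_k \theta_k \int_0^T\!\!\int_0^L u\,\partial_{xx}\!\big(B_k\,\varphi_j\big)\,\mathrm{d}x\,\mathrm{d}t,
    \qquad j=1,\dots,J,
    \]

    which is a linear system in the coefficients \(\theta_k\) and involves only the measured field \(u\), not its derivatives.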

  11. Simplified method for elastic plastic analysis of material presenting bilinear kinematic hardening

    International Nuclear Information System (INIS)

    Roche, R.

    1983-12-01

    A simplified method for elastic-plastic analysis is presented. Material behaviour is assumed to be elastic-plastic with bilinear kinematic hardening. The proposed method gives a stress-strain field fulfilling the material constitutive equations, the equations of equilibrium and the continuity conditions. This stress-strain field is obtained through two linear computations. The first one is the conventional elastic analysis of the body subjected to the applied load. The second one uses the tangent matrix (tangent Young's modulus and Poisson's ratio) to determine an additional stress due to an imposed initial strain. Such a method suits finite element computer codes, the most useful result being the plastic strains resulting from the applied loading (load control or deformation control). Obviously, there is no unique solution, since the stress-strain field depends not only on the applied load but also on the load history. Therefore, less pessimistic solutions can be obtained by one or two additional linear computations [fr

  12. Simplified Theory of Plastic Zones for cyclic loading and multilinear hardening

    International Nuclear Information System (INIS)

    Hübel, Hartwig

    2015-01-01

    The Simplified Theory of Plastic Zones (STPZ) is a direct method based on Zarka's method, developed primarily to estimate post-shakedown quantities of structures under cyclic loading while avoiding incremental analyses through a load histogram. In a previous paper the STPZ was shown to provide excellent estimates of the elastic–plastic strain ranges in the state of plastic shakedown, as required for fatigue analyses. In the present paper, it is described how the STPZ can be used to predict the strains accumulated over a number of loading cycles due to a ratcheting mechanism, until either elastic or plastic shakedown is achieved, so that strain limits can be satisfied. Thus, a consistent means of estimating both strain ranges and accumulated strains is provided for structural integrity assessment as required by pressure vessel codes. The computational costs involved typically consist of a few linear elastic analyses and some local calculations. Multilinear kinematic hardening and temperature-dependent yield stresses are accounted for. The quality of the results and the computational burden involved are demonstrated through four examples. - Highlights: • A method is provided to estimate accumulated elastic–plastic strains. • A consistent method is provided to estimate elastic–plastic strain ranges. • The effect of multilinear kinematic hardening is captured. • Temperature-dependent material properties are accounted for. • Only a few linear elastic analyses are required

  13. A comparative study between a simplified Kalman filter and Sliding Window Averaging for single trial dynamical estimation of event-related potentials

    DEFF Research Database (Denmark)

    Vedel-Larsen, Esben; Fuglø, Jacob; Channir, Fouad

    2010-01-01

    The latency and amplitude of the P300 are variable and depend on cognitive function. This study compares the performance of a simplified Kalman filter with Sliding Window Averaging in tracking dynamical changes in single-trial P300. The comparison is performed on simulated P300 data with added background noise consisting of both simulated and real background EEG at various input signal-to-noise ratios. While both methods can be applied to track dynamical changes, the simplified Kalman filter has an advantage over Sliding Window Averaging, most notably in better noise suppression when both are optimized for faster-changing latency and amplitude.

  14. Bias in estimating food consumption of fish from stomach-content analysis

    DEFF Research Database (Denmark)

    Rindorf, Anna; Lewy, Peter

    2004-01-01

    This study presents an analysis of the bias introduced by using simplified methods to calculate the food intake of fish from stomach contents. Three sources of bias were considered: (1) the effect of estimating consumption based on a limited number of stomach samples, (2) the effect of using average … A serious positive bias was introduced by estimating food intake from the contents of pooled stomach samples. An expression is given that can be used to correct analytically for this bias. A new method, which takes into account the distribution and evacuation of individual prey types as well as the effect of other food in the stomach on evacuation, is suggested for estimating the intake of separate prey types. Simplifying the estimation by ignoring these factors biased estimates of consumption of individual prey types by up to 150% in a data example.

  15. Simplified method for the determination of N-nitrosamines in rubber vulcanizates

    Energy Technology Data Exchange (ETDEWEB)

    Incavo, Joseph A [Goodyear Tire and Rubber Company, Akron, OH (United States); Schafer, Melvin A [Goodyear Tire and Rubber Company, Akron, OH (United States)

    2006-01-31

    A simplified method for the trace determination of N-nitrosamines in carbon black-loaded rubber compounds is described. The extraction of volatile nitrosamines is accomplished by thermal desorption rather than the traditional solvent extraction procedure. The analytes are trapped on Thermosorb/N sorbent and subsequently analyzed by gas chromatography with thermal energy analyzer detection (GC/TEA). Conditions that provide full extraction of nitrosamines from actual rubber compounds were determined to be 30 min at 150 °C in vessels dynamically purged with N₂. Method precision was found to be 10% for NDMA at 71 ng/g and 7.3% for NMOR at 248 ng/g. Recoveries for the seven common N-nitrosamines ranged from 94 to 117%. Limits of detection in the rubber matrix are 6.3-13 ng/g. The technique is found to offer improved recovery of lower molecular weight nitrosamines and it is shown to be simpler and faster than previous techniques.

  16. Simplified method for the determination of N-nitrosamines in rubber vulcanizates

    International Nuclear Information System (INIS)

    Incavo, Joseph A.; Schafer, Melvin A.

    2006-01-01

    A simplified method for the trace determination of N-nitrosamines in carbon black-loaded rubber compounds is described. The extraction of volatile nitrosamines is accomplished by thermal desorption rather than the traditional solvent extraction procedure. The analytes are trapped on Thermosorb/N sorbent and subsequently analyzed by gas chromatography with thermal energy analyzer detection (GC/TEA). Conditions that provide full extraction of nitrosamines from actual rubber compounds were determined to be 30 min at 150 °C in vessels dynamically purged with N₂. Method precision was found to be 10% for NDMA at 71 ng/g and 7.3% for NMOR at 248 ng/g. Recoveries for the seven common N-nitrosamines ranged from 94 to 117%. Limits of detection in the rubber matrix are 6.3-13 ng/g. The technique is found to offer improved recovery of lower molecular weight nitrosamines and it is shown to be simpler and faster than previous techniques.

  17. Photographic and drafting techniques simplify method of producing engineering drawings

    Science.gov (United States)

    Provisor, H.

    1968-01-01

    A combination of photographic and drafting techniques has been developed to simplify the preparation of three-dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high-contrast film.

  18. Simplified static method for determining seismic loads on equipment in moderate and high hazard facilities

    International Nuclear Information System (INIS)

    Scott, M.A.; Holmes, P.A.

    1991-01-01

    A simplified static analysis methodology is presented for qualifying equipment in moderate- and high-hazard facility-use category structures, where the facility use is defined in Design and Evaluation Guidelines for Department of Energy Facilities Subjected to Natural Phenomena Hazards, UCRL-15910. Currently there are no equivalent simplified static methods for determining seismic loads on equipment in these facility use categories without completing a dynamic analysis of the facility to obtain local floor accelerations or spectra. The requirements of UCRL-15910 specify that "dynamic" analysis methods, consistent with Seismic Design Guidelines for Essential Buildings, Chapter 6, "Nonstructural Elements," TM5-809-10-1, be used for determining seismic loads on mechanical equipment and components. Chapter 6 assumes that the dynamic analysis of the facility has generated either floor response spectra or modal floor accelerations. These in turn are utilized with the dynamic modification factor and the actual demand and capacity ratios to determine equipment loading. This complex methodology may be necessary to determine more exacting loads for hard-to-qualify equipment, but it does not provide a simple conservative loading methodology for equipment with ample structural capacity.

  19. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    International Nuclear Information System (INIS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-01-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system, using distance-weighting factors. The weighting factors are related to the projected centroids of the voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM-reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability in a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on the 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would additionally shorten the acquisition time by a factor of 8. -- Highlights: • A rapid interpolation method of system matrices (H) is proposed, named DW-GIMGPE. • Reduce H acquisition time by 15.2× with simplified grid scan and 2× interpolation. • Reconstructions of a hot-rod phantom with measured and DW-GIMGPE H were similar. • The imaging study of normal
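
    A much-reduced sketch of the interpolation step is given below: each measured PRF is summarized by 2D-Gaussian parameters, and a missing voxel's parameters are obtained as a distance-weighted blend of its measured neighbours, the distances being taken between projected centroids on the detector. The inverse-distance weighting and all numbers are illustrative stand-ins; the actual DW-GIMGPE relations additionally involve the estimated pinhole geometry.

    ```python
    import numpy as np

    # Each measured PRF is summarised by Gaussian parameters (A, x0, y0, sx, sy).
    # A missing voxel's parameters are taken as a distance-weighted average of its
    # measured neighbours; distances are evaluated between projected centroids.

    def interp_gaussian_params(target_centroid, neighbour_centroids, neighbour_params, p=2):
        d = np.linalg.norm(neighbour_centroids - target_centroid, axis=1)
        w = 1.0 / np.maximum(d, 1e-6) ** p        # distance-weighting factors
        w /= w.sum()
        return w @ neighbour_params               # weighted mix of (A, x0, y0, sx, sy)

    # Illustrative numbers: three measured voxels around the missing one (mm on detector).
    centroids = np.array([[10.0, 12.0], [14.0, 12.5], [12.0, 16.0]])
    params = np.array([[1.0, 10.0, 12.0, 1.1, 1.2],
                       [0.9, 14.0, 12.5, 1.2, 1.3],
                       [0.8, 12.0, 16.0, 1.3, 1.2]])
    print(interp_gaussian_params(np.array([12.0, 13.0]), centroids, params))
    ```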

  20. Spatial accuracy of a simplified disaggregation method for traffic emissions applied in seven mid-sized Chilean cities

    Science.gov (United States)

    Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans

    The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, for which it results in lower correlation values. The method may nevertheless be useful in a data-scarce situation to get an overview of the spatial distribution of the emissions generated by traffic activities.

  1. A non overlapping parallel domain decomposition method applied to the simplified transport equations

    International Nuclear Information System (INIS)

    Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.

    2009-01-01

    A reactivity computation requires computing the highest eigenvalue of a generalized eigenvalue problem. An inverse power algorithm is commonly used. Very fine models are difficult to tackle with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires a low development effort, as the inner multigroup solver can be re-used without modification, and it allows us to adapt the numerical resolution locally (mesh, finite element order). Numerical results are obtained from a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)

  2. Simplified Analytical Methods to Analyze Lock Gates Submitted to Ship Collisions and Earthquakes

    Directory of Open Access Journals (Sweden)

    Buldgen Loic

    2015-09-01

    This paper presents two simplified analytical methods to analyze lock gates subjected to two different accidental loads. The case of an impact involving a vessel is first investigated. In this situation, the resistance of the struck gate is evaluated by assuming a local and a global deforming mode. The super-element method is used in the first case, while an equivalent beam model is simultaneously introduced to capture the overall bending motion of the structure. The second accidental load considered in this paper is the seismic action, for which an analytical method is presented to evaluate the total hydrodynamic pressure applied on a lock gate during an earthquake, due account being taken of the fluid-structure interaction. For each of these two actions, numerical validations are presented and the analytical results are compared to finite-element solutions.

  3. A gravimetric simplified method for nucleated marrow cell counting using an injection needle.

    Science.gov (United States)

    Saitoh, Toshiki; Fang, Liu; Matsumoto, Kiyoshi

    2005-08-01

    A simplified gravimetric marrow cell counting method for rats is proposed as a routine screening method. After fresh bone marrow was aspirated with an injection needle, the marrow cells were suspended in carbonate-buffered saline. The nucleated marrow cell count (NMC) was measured with an automated multi-blood-cell analyzer. When this gravimetric method was applied to rats, the NMC of the left and right femurs had essentially identical values with careful handling. The NMC at 4 to 10 weeks of age in male and female Crj:CD(SD)IGS rats was 2.72 to 1.96 and 2.75 to 1.98 (×10⁶ counts/mg), respectively. More useful information for evaluation could be obtained by using this gravimetric method in addition to myelogram examination. However, some difficulties with this method include low NMC due to blood contamination and variation of NMC due to handling. Therefore, the utility of this gravimetric method for screening will be clarified by the accumulation of data from myelotoxicity studies using this method.

  4. A GOMS model applied to a simplified control panel design

    International Nuclear Information System (INIS)

    Chavez, C.; Edwards, R.M.

    1992-01-01

    The design of the user interface for a new system requires many decisions to be considered. Developing sensitivity to user needs requires understanding user behavior. The how-to-do-it knowledge is a mixture of task-related and interface-related components. A conscientious analysis of these components allows the designer to construct a model in terms of goals, operators, methods, and selection rules (GOMS model) that can be advantageously used in the design process and in the evaluation of a user interface. The emphasis of the present work is on describing the importance and use of a GOMS model as a formal user interface analysis tool in the development of a simplified panel for the control of a nuclear power plant. At Pennsylvania State University, a highly automated control system with a greatly simplified human interface has been proposed to improve power plant safety. Supervisory control is to be conducted with a simplified control panel with the following functions: startup, shutdown, increase power, decrease power, reset, and scram. Initial programming of the operator interface has begun within the framework of a U.S. Department of Energy funded university project for intelligent distributed control. A hypothesis to be tested is that this scheme can also be used to estimate mental workload and predict human performance.

  5. Simplified method of calculating residual stress in circumferential welding of piping

    International Nuclear Information System (INIS)

    Umemoto, Tadahiro

    1984-01-01

    Many circumferential joints of piping are used in the as-welded state, but in these welded joints a residual stress as high as the yield stress of the material arises, which accelerates stress corrosion cracking and corrosion fatigue. Experiments or finite element analyses to determine the welding residual stress require much time and labor and are expensive; therefore, the author proposed a simplified method of calculation. The heating and cooling process of welding is very complex and cannot be modeled as it is. It was therefore assumed that in multi-layer welding the welding condition of the last layer determines the residual stress, that the material constants are invariant with temperature, that the temperature distribution and residual stress are axisymmetric, and that a repeated stress-strain relation holds in the vicinity of the welded parts. The temperature distribution during welding, the thermal stress and the welding residual stress are analyzed, and the material constants used for the calculation of the residual stress are given. As an example of the calculation, the effect of welding heat input and material is shown. The extension of the method to a thick-walled pipe is discussed. (Kako, I.)

  6. Determination of the performance of the Kaplan hydraulic turbines through simplified procedure

    Science.gov (United States)

    Pădureanu, I.; Jurcu, M.; Campian, C. V.; Haţiegan, C.

    2018-01-01

    A simplified procedure has been developed, as an alternative to the complex one recommended by IEC 60041 (i.e. index tests), for measuring the performance of hydraulic turbines. The simplified procedure determines the minimum and maximum powers, the efficiency at maximum power and the evolution of power with head and flow, and it establishes the correct relationship between runner/impeller blade angle and guide vane opening for the most efficient operation of double-regulated machines. The simplified procedure can be used for a rapid and partial estimation of the performance of hydraulic turbines in repair and maintenance work.

  7. A simplified method for quantitative assessment of the relative health and safety risk of environmental management activities

    International Nuclear Information System (INIS)

    Eide, S.A.; Smith, T.H.; Peatross, R.G.; Stepan, I.E.

    1996-09-01

    This report presents a simplified method to assess the health and safety risk of Environmental Management activities of the US Department of Energy (DOE). The method applies to all types of Environmental Management activities including waste management, environmental restoration, and decontamination and decommissioning. The method is particularly useful for planning or tradeoff studies involving multiple conceptual options because it combines rapid evaluation with a quantitative approach. The method is also potentially applicable to risk assessments of activities other than DOE Environmental Management activities if rapid quantitative results are desired.

  8. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy

    International Nuclear Information System (INIS)

    Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T

    2011-01-01

    We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) architecture under the compute unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In both the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar within statistical errors. The GPU-based SMC showed 12.30–16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning. (note)

  9. Simplified Freeman-Tukey test statistics for testing probabilities in ...

    African Journals Online (AJOL)

    This paper presents a simplified version of the Freeman-Tukey test statistic for testing hypotheses about multinomial probabilities in one-, two- and multi-dimensional contingency tables that does not require calculating the expected cell frequencies before the test of significance. The simplified method established new criteria of ...
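
    For reference, the conventional (non-simplified) Freeman-Tukey statistic for observed counts \(O_i\) and expected counts \(E_i\) is

    \[
    T^{2} \;=\; \sum_{i}\Big(\sqrt{O_i}+\sqrt{O_i+1}-\sqrt{4E_i+1}\Big)^{2},
    \]

    which is asymptotically chi-square distributed with the usual degrees of freedom; the simplification described in the abstract removes the need to compute the \(E_i\) explicitly before the test.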

  10. Coach simplified structure modeling and optimization study based on the PBM method

    Science.gov (United States)

    Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian

    2016-09-01

    For the coach industry, rapid modeling and efficient optimization methods based on simplified structures are desirable for structure modeling and optimization, especially for use early in the concept phase, with the ability to express the mechanical properties of the structure accurately and with flexible section forms. However, the present dimension-based methods cannot easily meet these requirements. To achieve these goals, the property-based modeling (PBM) beam modeling method is studied on the basis of PBM theory and in conjunction with the characteristic of coach structures of using beams as the main components. For a beam component of given length, its mechanical characteristics are primarily governed by the section properties. Four section parameters are adopted to describe the mechanical properties of a beam: the section area, the principal moments of inertia about the two principal axes, and the torsion constant of the section. Based on the equivalent stiffness strategy, expressions for the above section parameters are derived, and the PBM beam element is implemented in the HyperMesh software. A case study is realized using this method, in which the structure of a passenger coach is simplified. The model precision is validated by comparing the basic performance of the simplified total structure with that of the original structure, including the bending and torsion stiffness and the first-order bending and torsional modal frequencies. Sensitivity analysis is conducted to choose the design variables. An optimal Latin hypercube experiment design is adopted to sample the test points, and polynomial response surfaces are used to fit these points. To improve the bending and torsion stiffness and the first-order torsional frequency, and taking the allowable maximum stresses of the braking and left-turning conditions as constraints, a multi-objective optimization of the structure is conducted using the NSGA-II genetic algorithm on the ISIGHT platform. The result of the
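
    The four section parameters named above can be illustrated for an idealized thin-walled rectangular hollow section (a common coach member shape); the dimensions and the Bredt thin-wall torsion formula below are illustrative choices, not the study's actual sections.

    ```python
    # Area, the two principal second moments of area, and the torsion constant for a
    # thin-walled rectangular hollow section of outer size b x h and uniform thickness t.
    # Dimensions are illustrative, not taken from the coach study.

    def rhs_section_properties(b, h, t):
        A  = b * h - (b - 2 * t) * (h - 2 * t)                  # cross-section area
        Iy = (b * h**3 - (b - 2 * t) * (h - 2 * t)**3) / 12.0   # bending about y
        Iz = (h * b**3 - (h - 2 * t) * (b - 2 * t)**3) / 12.0   # bending about z
        Am = (b - t) * (h - t)                                  # area enclosed by the midline
        p  = 2 * ((b - t) + (h - t))                            # midline perimeter
        J  = 4.0 * Am**2 * t / p                                # Bredt thin-wall torsion constant
        return A, Iy, Iz, J

    print(rhs_section_properties(b=0.05, h=0.05, t=0.003))      # 50x50x3 mm tube, SI units
    ```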

  11. A simplified technique for shakedown limit load determination

    International Nuclear Information System (INIS)

    Abdalla, Hany F.; Megahed, Mohammad M.; Younan, Maher Y.A.

    2007-01-01

    In this paper, a simplified technique is presented to determine the shakedown limit load of a structure using the finite element method. The simplified technique determines the shakedown limit load without performing lengthy, time-consuming full elastic-plastic cyclic loading simulations or conventional iterative elastic techniques. Instead, the shakedown limit load is determined by performing two analyses, namely an elastic analysis and an elastic-plastic analysis. From the results of the two analyses, the shakedown limit load is determined through the calculation of the residual stresses developed within the structure. The simplified technique is applied and verified using two benchmark shakedown problems, namely the two-bar structure subjected to constant axial force and cyclic thermal loading, and the Bree cylinder subjected to constant internal pressure and cyclic high-temperature variation across its wall. The results of the simplified technique showed very good correlation with the analytically determined Bree diagrams of both structures. In order to gain confidence in the simplified technique, the shakedown limit loads output by the simplified technique were used to perform full elastic-plastic cyclic loading simulations to check for shakedown behavior of both structures.
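
    The quantity extracted from the two analyses can be written generically (this is the standard Melan-type statement, not necessarily the paper's exact formulation) as

    \[
    \sigma^{r} \;=\; \sigma^{\mathrm{EP}} - \sigma^{\mathrm{E}},
    \]

    i.e. the residual stress field is the difference between the elastic-plastic and the purely elastic solutions at the same load level; shakedown is indicated when this self-equilibrating field, superposed on the cyclic elastic stresses, remains within the yield surface over the whole load cycle.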

  12. Using simplified peer review processes to fund research: a prospective study

    Science.gov (United States)

    Herbert, Danielle L; Graves, Nicholas; Clarke, Philip; Barnett, Adrian G

    2015-01-01

    Objective: To prospectively test two simplified peer review processes, estimate the agreement between the simplified and official processes, and compare the costs of peer review. Design, participants and setting: A prospective parallel study of Project Grant proposals submitted in 2013 to the National Health and Medical Research Council (NHMRC) of Australia. The official funding outcomes were compared with two simplified processes using proposals in Public Health and Basic Science. The two simplified processes were: panels of 7 reviewers who met face-to-face and reviewed only the nine-page research proposal and track record (simplified panel); and 2 reviewers who independently reviewed only the nine-page research proposal (journal panel). The official process used panels of 12 reviewers who met face-to-face and reviewed longer proposals of around 100 pages. We compared the funding outcomes of 72 proposals that were peer reviewed by the simplified and official processes. Main outcome measures: Agreement in funding outcomes; costs of peer review based on reviewers’ time and travel costs. Results: The agreement between the simplified and official panels (72%, 95% CI 61% to 82%), and the journal and official panels (74%, 62% to 83%), was just below the acceptable threshold of 75%. Using the simplified processes would save $A2.1–$A4.9 million per year in peer review costs. Conclusions: Using shorter applications and simpler peer review processes gave reasonable agreement with the more complex official process. Simplified processes save time and money that could be reallocated to actual research. Funding agencies should consider streamlining their application processes. PMID:26137884

  13. Simplified quantification of nicotinic receptors with 2[18F]F-A-85380 PET

    International Nuclear Information System (INIS)

    Mitkovski, Sascha; Villemagne, Victor L.; Novakovic, Kathy E.; O'Keefe, Graeme; Tochon-Danguy, Henri; Mulligan, Rachel S.; Dickinson, Kerryn L.; Saunder, Tim; Gregoire, Marie-Claude; Bottlaender, Michel; Dolle, Frederic; Rowe, Christopher C.

    2005-01-01

    Introduction: Neuronal nicotinic acetylcholine receptors (nAChRs), widely distributed in the human brain, are implicated in various neurophysiological processes and are particularly affected in neurodegenerative conditions such as Alzheimer's disease. We sought to evaluate a minimally invasive method for quantification of the nAChR distribution in the normal human brain, suitable for routine clinical application, using 2[¹⁸F]F-A-85380 and positron emission tomography (PET). Methods: Ten normal volunteers (four females and six males, aged 63.40±9.22 years) underwent a dynamic 120-min PET scan after injection of 226 MBq of 2[¹⁸F]F-A-85380, along with arterial blood sampling. Regional binding was assessed through the standardized uptake value (SUV) and distribution volumes (DV) obtained using both compartmental analysis (DV(2CM)) and graphical analysis (DV(Logan)). A simplified approach to the estimation of DV (DV(simplified)), defined as the region-to-plasma ratio at apparent steady state (90-120 min post injection), was compared with the other quantification approaches. Results: DV(Logan) values were higher than DV(2CM) values. A strong correlation was observed between DV(simplified) and both DV(Logan) (r=.94) and DV(2CM) (r=.90) in cortical regions, with lower correlations in the thalamus (r=.71 and .82, respectively). The SUV showed low correlation with DV(Logan) and DV(2CM). Conclusion: DV(simplified), determined as the ratio of tissue to metabolite-corrected plasma using a single 90- to 120-min PET acquisition, appears acceptable for quantification of cortical nAChR binding with 2[¹⁸F]F-A-85380 and suitable for clinical application.

  14. A Simplified Version of the Fuzzy Decision Method and its Comparison with the Paraconsistent Decision Method

    Science.gov (United States)

    de Carvalho, Fábio Romeu; Abe, Jair Minoro

    2010-11-01

    Two recent non-classical logics have been used for decision making: fuzzy logic and the paraconsistent annotated evidential logic Et. In this paper we present a simplified version of the fuzzy decision method and its comparison with the paraconsistent one. Paraconsistent annotated evidential logic Et, introduced by Da Costa, Vago and Subrahmanian (1991), is capable of handling uncertain and contradictory data without becoming trivial. It has been used in many applications, such as information technology, robotics, artificial intelligence, production engineering and decision making. Intuitively, an Et logic formula is of the type p(a, b), in which a and b belong to the real interval [0, 1] and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) in p. The set of all pairs (a, b), called annotations, when plotted, forms the Cartesian unit square (CUS). This set, equipped with an order relation similar to that of the real numbers, forms a lattice, called the lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It seeks to systematize the study of knowledge, aiming mainly at fuzzy knowledge (you do not know what it means) and at distinguishing it from imprecise knowledge (you know what it means, but you do not know its exact value). This logic is similar to the paraconsistent annotated one, since it attributes a numeric value (only one, not two) to each proposition (so it can be said to be a one-valued logic). This number expresses the intensity (the degree) with which the proposition is true. Let X be a set and A a subset of X, characterized by a membership function f(x). For each element x∈X, one has y = f(x)∈[0, 1]. The number y is called the degree of membership of x in A. Decision-making theories based on these logics have been shown to be powerful in many respects compared with more traditional methods, such as those based on statistics. In this paper we present a first study for a simplified
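
    As a hedged illustration of how the annotation pair (a, b) is typically turned into a decision in this framework (the threshold and naming are illustrative, and the paper's simplified fuzzy method is not reproduced here):

    ```python
    # Two quantities commonly derived from the annotation pair (a, b): the degree of
    # certainty (favourable minus contrary evidence) and the degree of contradiction
    # (how far (a, b) is from being complementary).  Threshold value is illustrative.

    def para_analyze(a, b, requirement=0.6):
        certainty     = a - b          # > 0 leans "true", < 0 leans "false"
        contradiction = a + b - 1.0    # > 0 inconsistent, < 0 under-informed
        if certainty >= requirement:
            decision = "accept"
        elif certainty <= -requirement:
            decision = "reject"
        else:
            decision = "insufficient evidence / analyse further"
        return certainty, contradiction, decision

    print(para_analyze(0.85, 0.15))    # (0.70, 0.0, 'accept')
    ```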

  15. A simplified Excel® algorithm for estimating the least limiting water range of soils

    Directory of Open Access Journals (Sweden)

    Leão Tairone Paiva

    2004-01-01

    The least limiting water range (LLWR) of soils has been employed as a methodological approach for the evaluation of soil physical quality in different agricultural systems, including forestry, grasslands and major crops. However, the absence of a simplified methodology for the quantification of the LLWR has hampered the popularization of its use among researchers and soil managers. Taking this into account, this work has the objective of proposing and describing a simplified algorithm developed in Excel® software for quantification of the LLWR, including the calculation of the critical bulk density at which the LLWR becomes zero. Despite the simplicity of the procedures and numerical optimization techniques used, the nonlinear regression produced reliable results when compared to those found in the literature.

  16. Simplified elastoplastic methods of analysing fatigue in notches

    International Nuclear Information System (INIS)

    Autrusson, B.

    1993-01-01

    The aim of this study is to review the state of the art of the mechanical analysis methods available in the literature for evaluating notch-root elastoplastic strain. The components of fast breeder reactors are subjected to numerous thermal transients, which can cause fatigue failure. To prevent this, it is necessary to know the local strain range and to use it to estimate the number of cycles to crack initiation. Practical methods have been developed for the calculation of the local strain range and have led to the drafting of design rules. Direct methods of determining the local strain range of the 'inelastic analysis' type have also been described. In conclusion, a series of recommendations is made on the applicability and the conservatism of these methods.
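
    As an illustration of this family of simplified methods (the review covers several rules, not necessarily this one), Neuber's rule links the elastically computed notch stress range to the local elastoplastic stress and strain ranges:

    \[
    \Delta\sigma\,\Delta\varepsilon \;=\; \frac{\big(K_t\,\Delta S\big)^{2}}{E},
    \]

    where \(\Delta S\) is the nominal (elastically computed) stress range, \(K_t\) the elastic stress concentration factor and \(E\) Young's modulus; solving this together with the cyclic stress-strain curve gives the local strain range \(\Delta\varepsilon\) used in the crack-initiation estimate.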

  17. Design of a Bidirectional Energy Storage System for a Vanadium Redox Flow Battery in a Microgrid with SOC Estimation

    Directory of Open Access Journals (Sweden)

    Qingwu Gong

    2017-03-01

    This paper used a Vanadium Redox flow Battery (VRB) as the storage battery and designed a two-stage topology for a VRB energy storage system (ESS) in which a phase-shifted full-bridge dc-dc converter and a three-phase inverter are used, considering the low terminal voltage of the VRB. Following this, a model of the VRB was simplified according to the operational characteristics of the VRB in the designed topology. Using the simplified equivalent model of the VRB, the control parameters of the ESS were designed. To estimate the state of charge (SOC) of the VRB effectively, a traditional SOC estimation method was simplified, and a simple and effective SOC estimation method is proposed in this paper. Finally, to illustrate the design of the VRB ESS and the proposed SOC estimation method, a corresponding simulation was built in Simulink. The test results demonstrate that the proposed SOC estimation method is feasible and effective for indicating the SOC of a VRB and that the design of this VRB ESS is reasonable for VRB applications.

  18. AgarTrap: a simplified Agrobacterium-mediated transformation method for sporelings of the liverwort Marchantia polymorpha L.

    Science.gov (United States)

    Tsuboyama, Shoko; Kodama, Yutaka

    2014-01-01

    The liverwort Marchantia polymorpha L. is being developed as an emerging model plant, and several transformation techniques have recently been reported, for example biolistic- and Agrobacterium-mediated transformation methods. Here, we report a simplified method for Agrobacterium-mediated transformation of sporelings, termed Agar-utilized Transformation with Pouring Solutions (AgarTrap). The AgarTrap procedure is carried out by simply exchanging appropriate solutions in a Petri dish and is completed within a week, successfully yielding sufficient numbers of independent transformants for molecular analysis (e.g. characterization of gene/protein function) in a single experiment. The AgarTrap method will promote future molecular biological studies in M. polymorpha.

  19. A simplified approach to evaluating severe accident source term for PWR

    International Nuclear Information System (INIS)

    Huang, Gaofeng; Tong, Lili; Cao, Xuewu

    2014-01-01

    Highlights: • Traditional source term evaluation approaches have been studied. • A simplified approach to source term evaluation for a 600 MW PWR is studied. • Five release categories are established. - Abstract: In early NPP designs, no plant-specific severe accident source term evaluation was considered, and some generic source terms have been used for a number of NPPs. In order to obtain a best estimate, a plant-specific source term evaluation should be carried out for each NPP. Traditional source term evaluation approaches (the mechanistic approach and the parametric approach) present some implementation difficulties and are not consistent with cost-benefit assessment. A simplified approach for evaluating the severe accident source term for a PWR is therefore studied. In the simplified approach, a simplified containment event tree is established. Through representative case selection, weighted coefficient evaluation, computation of the representative source term cases and weighted computation, five containment release categories are established, including containment bypass, containment isolation failure, containment early failure, containment late failure and intact containment.

  20. Analysis of simplified heat transfer models for thermal property determination of nano-film by TDTR method

    Science.gov (United States)

    Wang, Xinwei; Chen, Zhe; Sun, Fangyuan; Zhang, Hang; Jiang, Yuyan; Tang, Dawei

    2018-03-01

    Heat transfer in nanostructures is of critical importance for a wide range of applications such as functional materials and thermal management of electronics. Time-domain thermoreflectance (TDTR) has proved to be a reliable measurement technique for determining the thermal properties of nanoscale structures. However, it is difficult to determine more than three thermal properties at the same time. Simplifications of the heat transfer model can reduce the number of fitting variables and provide an alternative way to determine thermal properties. In this paper, two simplified models are investigated and analyzed by the transform matrix method and by simulations. TDTR measurements are performed on Al-SiO2-Si samples with different SiO2 thicknesses. Both theoretical and experimental results show that the simplified tri-layer model (STM) is reliable and suitable for thin-film samples over a wide range of thicknesses. Furthermore, the STM can also extract the intrinsic thermal conductivity and interfacial thermal resistance from serial samples of different thickness.

  1. Improved simplified scheme of atom equivalents to calculate enthalpies of formation of alkyl radicals

    International Nuclear Information System (INIS)

    Castro, Eduardo A.

    2002-01-01

    An improved simplified method of atom equivalents is applied to the calculation of the enthalpies of formation of several alkyl radicals. Some statistical mechanics and thermodynamic corrections are added in order to compare theoretical values with available experimental data. The estimation is quite satisfactory and the average error is similar to current experimental uncertainties, thus providing a direct and simple procedure for this sort of calculation when experimental results are unavailable and/or an independent check when experimental data are in doubt. (Author) [es

  2. Fuel Burn Estimation Using Real Track Data

    Science.gov (United States)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data and on drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel flow with most of the information derived from actual flight data. The procedure does not require an explicit thrust model or a calibrated airspeed/Mach profile, which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models; the results show a negligible difference with respect to the full model. An iterative takeoff weight estimation procedure is described for estimating fuel consumption when takeoff weight is unavailable and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
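
    A stripped-down version of the chain described above, for a single track sample, might look as follows; the drag-polar coefficients and the thrust-specific fuel consumption value are placeholders standing in for the BADA tables used in the paper.

    ```python
    import math

    # For one track sample: drag from a parabolic polar, thrust from the point-mass
    # longitudinal balance, fuel flow from an assumed thrust-specific fuel consumption.
    # CD0, k and the TSFC value are placeholders, not BADA coefficients.

    G = 9.81  # m/s^2

    def fuel_flow(mass, tas, rho, wing_area, accel, gamma,
                  cd0=0.025, k=0.045, tsfc=1.6e-5):   # tsfc in kg fuel per N per second (assumed)
        q = 0.5 * rho * tas ** 2 * wing_area          # dynamic pressure times wing area
        cl = mass * G * math.cos(gamma) / q           # lift coefficient, quasi-steady lift balance
        cd = cd0 + k * cl ** 2                        # parabolic drag polar
        drag = q * cd
        thrust = drag + mass * accel + mass * G * math.sin(gamma)
        return max(thrust, 0.0) * tsfc                # kg/s

    # Cruise-like sample: 60 t aircraft, 230 m/s TAS at rho = 0.41 kg/m^3, level flight.
    print(fuel_flow(mass=60_000, tas=230.0, rho=0.41, wing_area=122.6,
                    accel=0.0, gamma=0.0))
    ```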

  3. Regional Estimation of Remotely Sensed Evapotranspiration Using the Surface Energy Balance-Advection (SEB-A Method

    Directory of Open Access Journals (Sweden)

    Suhua Liu

    2016-08-01

    Evapotranspiration (ET) is an essential part of the hydrological cycle, and estimating it accurately plays a crucial role in water resource management. Surface energy balance (SEB) models are widely used to estimate regional ET with remote sensing. The presence of horizontal advection, however, perturbs the surface energy balance and contributes to the uncertainty of the energy fluxes. Thus, it is vital to consider horizontal advection when applying SEB models to estimate ET. This study proposes an innovative and simplified approach, the surface energy balance-advection (SEB-A) method, which is based on energy balance theory and also takes horizontal advection into account to determine ET by remote sensing. The SEB-A method considers that the actual ET consists of two parts: the local ET that is regulated by the energy balance system and the exotic ET that arises from horizontal advection. To evaluate the SEB-A method, it was applied to the middle reaches of the Heihe River in China. Instantaneous ET for three days was acquired and assessed against ET measurements from eddy covariance (EC) systems. The results demonstrated that the ET estimates had a high accuracy, with a correlation coefficient (R²) of 0.713, a mean absolute error (MAE) of 39.3 W/m² and a root mean square error (RMSE) of 54.6 W/m² between the estimates and the corresponding measurements. Percent error was calculated to assess the accuracy of these estimates more rigorously; it ranged from 0% to 35%, with over 80% of the locations within a 20% error. To better understand the SEB-A method, the relationship between the ET estimates and land use types was analyzed, and the results indicated that the ET estimates had spatial distributions that correlated with vegetation patterns and could well demonstrate the ET differences caused by different land use types. The sensitivity analysis suggested that the SEB-A method requires accurate estimation of the available energy, Rn − G
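
    The energy-balance bookkeeping that the local ET component relies on can be sketched as follows (bulk-aerodynamic sensible heat, latent heat as the residual of available energy); the SEB-A partition into local and advection-driven ET is not reproduced, and all numbers are illustrative.

    ```python
    # Sensible heat from a bulk aerodynamic expression; latent heat as the residual
    # of the available energy Rn - G.  Values below are illustrative only.

    RHO_AIR = 1.15      # air density, kg/m^3
    CP_AIR  = 1005.0    # specific heat of air, J/(kg K)

    def latent_heat_flux(rn, g, ts, ta, ra):
        """rn, g in W/m^2; ts, ta surface/air temperature in K; ra aerodynamic resistance in s/m."""
        h = RHO_AIR * CP_AIR * (ts - ta) / ra    # sensible heat flux, W/m^2
        le = (rn - g) - h                        # latent heat flux as the residual, W/m^2
        return h, le

    print(latent_heat_flux(rn=620.0, g=80.0, ts=303.2, ta=299.6, ra=45.0))
    ```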

  4. Identification of Clearance and Contact Stiffness in a Simplified Barrel-Cradle Structure of Artillery System

    Directory of Open Access Journals (Sweden)

    Bing Li

    2015-02-01

    In a gun barrel-cradle structure, the presence of clearance usually changes the dynamic response of the muzzle and results in shooting dispersion (under continuous firing conditions). The parameter estimation of such a clearance nonlinear system is a prerequisite for establishing a quantitative relation between the clearance and the muzzle disturbance. In this paper, the restoring force surface (RFS) method and the nonlinear identification through feedback of outputs (NIFO) method are first combined for parameter identification in a simplified barrel-cradle structure. With the RFS method, the clearance value can be obtained by analyzing the restoring force plot. The contact stiffness can then be identified using the NIFO method. This identification process is verified on a single-degree-of-freedom (SDOF) system with clearance. To adapt it to the rigid-flexible coupled beam system with clearances, which is simplified from the barrel-cradle structure, a modification of the combined method is proposed. The core idea of the modification is to reduce the continuous system to a multiple-degree-of-freedom (MDOF) system that preserves the nonlinear characteristics through the modal transformation matrix. The advantage of this transformation is that the linear parts of the MDOF system are decoupled, which greatly reduces the difficulty of identification. The simulation results show the effectiveness of the proposed method.
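
    The first step, recovering the restoring force from the measured excitation and acceleration so that the clearance can be read off the force-displacement plot, can be sketched as below; the synthetic signals, the bilinear contact law and the parameter values are illustrative, and the NIFO stage is not reproduced.

    ```python
    import numpy as np

    # Restoring-force-surface (RFS) sketch for a SDOF system with clearance: from the
    # excitation F(t) and the measured acceleration a(t), the restoring force is
    # recovered as f = F - m*a; plotted against displacement it shows a dead zone
    # whose half-width is the clearance.  Signals here are synthetic.

    def restoring_force(excitation, accel, mass):
        return excitation - mass * accel

    m, kc, delta = 2.0, 5.0e4, 1.0e-3             # mass, contact stiffness, clearance (assumed)
    t = np.linspace(0.0, 1.0, 2000)
    x = 2.5e-3 * np.sin(2.0 * np.pi * 5.0 * t)    # imposed displacement history
    f_true = np.where(np.abs(x) > delta, kc * (x - np.sign(x) * delta), 0.0)
    a = np.gradient(np.gradient(x, t), t)         # acceleration from the displacement
    F = m * a + f_true                            # excitation consistent with the model
    f_rec = restoring_force(F, a, m)              # recovered restoring force
    print(float(np.max(np.abs(f_rec - f_true))))  # zero here by construction; noisy in practice
    ```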

  5. Estimating the Capacity of Urban Transportation Networks with an Improved Sensitivity Based Method

    Directory of Open Access Journals (Sweden)

    Muqing Du

    2015-01-01

    The throughput of a given transportation network is always of interest to the traffic administration, so that the benefit of a transportation construction or expansion project can be evaluated before its implementation. The transportation network capacity model, formulated as a mathematical program with equilibrium constraints (MPEC), defines this problem well. For practical applications, a modified sensitivity-analysis-based (SAB) method is developed to estimate the solution of this bilevel model. The highly efficient origin-based (OB) algorithm is extended for the precise solution of the combined model that is integrated in the network capacity model. The sensitivity analysis approach is also modified to simplify the inversion of the Jacobian matrix in large-scale problems. The solution produced in every iteration of the SAB method is constrained to be feasible to guarantee the success of the heuristic search. The numerical experiments show that the accuracy of the derivatives used in the linear approximation can significantly affect the convergence of the SAB method. The results also show that the proposed method can obtain good suboptimal solutions from different starting points in the test examples.

  6. A simplified method of calculating heat flow through a two-phase heat exchanger

    Energy Technology Data Exchange (ETDEWEB)

    Yohanis, Y.G. [Thermal Systems Engineering Group, Faculty of Engineering, University of Ulster, Newtownabbey, Co Antrim, BT37 0QB Northern Ireland (United Kingdom)]. E-mail: yg.yohanis@ulster.ac.uk; Popel, O.S. [Non-traditional Renewable Energy Sources, Institute for High Temperatures, Russian Academy of Sciences, 13/19 Izhorskaya str., IVTAN, Moscow 125412 (Russian Federation); Frid, S.E. [Non-traditional Renewable Energy Sources, Institute for High Temperatures, Russian Academy of Sciences, 13/19 Izhorskaya str., IVTAN, Moscow 125412 (Russian Federation)

    2005-10-01

    A simplified method of calculating the heat flow through a heat exchanger in which one or both heat carrying media are undergoing a phase change is proposed. It is based on enthalpies of the heat carrying media rather than their temperatures. The method enables the determination of the maximum rate of heat flow provided the thermodynamic properties of both heat-carrying media are known. There will be no requirement to separately simulate each part of the system or introduce boundaries within the heat exchanger if one or both heat-carrying media undergo a phase change. The model can be used at the pre-design stage, when the parameters of the heat exchangers may not be known, i.e., to carry out an assessment of a complex energy scheme such as a steam power plant. One such application of this model is in thermal simulation exercises within the TRNSYS modeling environment.

  7. A simplified method of calculating heat flow through a two-phase heat exchanger

    International Nuclear Information System (INIS)

    Yohanis, Y.G.; Popel, O.S.; Frid, S.E.

    2005-01-01

    A simplified method of calculating the heat flow through a heat exchanger in which one or both heat carrying media are undergoing a phase change is proposed. It is based on enthalpies of the heat carrying media rather than their temperatures. The method enables the determination of the maximum rate of heat flow provided the thermodynamic properties of both heat-carrying media are known. There will be no requirement to separately simulate each part of the system or introduce boundaries within the heat exchanger if one or both heat-carrying media undergo a phase change. The model can be used at the pre-design stage, when the parameters of the heat exchangers may not be known, i.e., to carry out an assessment of a complex energy scheme such as a steam power plant. One such application of this model is in thermal simulation exercises within the TRNSYS modeling environment.
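
    The enthalpy-based bound on the transferable heat can be sketched as below, using the CoolProp property library purely for illustration (it is not part of the paper); each stream is hypothetically brought to the other stream's inlet temperature at its own pressure, so a phase change is absorbed into the enthalpy difference without locating it inside the exchanger.

    ```python
    from CoolProp.CoolProp import PropsSI   # illustrative property backend, not used in the paper

    # Enthalpy-based maximum transferable heat: limit each stream by the enthalpy
    # change it would undergo if it reached the other stream's inlet temperature.

    def q_max(fluid_h, m_h, t_h_in, p_h, fluid_c, m_c, t_c_in, p_c):
        # J/kg the hot stream could release if cooled to the cold inlet temperature
        dh_hot = (PropsSI('H', 'T', t_h_in, 'P', p_h, fluid_h)
                  - PropsSI('H', 'T', t_c_in, 'P', p_h, fluid_h))
        # J/kg the cold stream could absorb if heated to the hot inlet temperature
        dh_cold = (PropsSI('H', 'T', t_h_in, 'P', p_c, fluid_c)
                   - PropsSI('H', 'T', t_c_in, 'P', p_c, fluid_c))
        return min(m_h * dh_hot, m_c * dh_cold)   # W

    # Steam at 2 bar condensing against pressurised cooling water (illustrative point).
    print(q_max('Water', 0.5, 400.0, 2.0e5, 'Water', 10.0, 300.0, 3.0e5) / 1e3, 'kW')
    ```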

  8. Radioiodine Therapy of Hyperthyroidism. Simplified patient-specific absorbed dose planning

    Energy Technology Data Exchange (ETDEWEB)

    Joensson, Helene

    2003-10-01

    Radioiodine therapy of hyperthyroidism is the most frequently performed radiopharmaceutical therapy. To calculate the activity of ¹³¹I to be administered to deliver a certain absorbed dose to the thyroid, the mass of the thyroid and the individual biokinetic data, normally in the form of uptake and biological half-time, have to be determined. The biological half-time is estimated from several uptake measurements, the first of which is usually made 24 hours after intake of the test activity. However, many hospitals consider this time-consuming, since at least three visits by the patient to the hospital are required (administration of the test activity, first uptake measurement, second uptake measurement plus treatment). Instead, many hospitals use a fixed effective half-time or even a fixed administered activity, requiring only two visits. However, none of these methods considers the absorbed dose to the thyroid of the individual patient. In this work a simplified patient-specific method for treating hyperthyroidism is proposed, based on one single uptake measurement and thus requiring only two visits to the hospital. The calculation is as accurate as using the individual biokinetic data, and the simplified method is as patient-convenient and time-effective as using a fixed effective half-time or a fixed administered activity. The simplified method is based upon a linear relation between the late uptake measured 4-7 days after intake of the test activity and the product of the extrapolated initial uptake and the effective half-time. Treatments that do not consider the individual biokinetics in the thyroid result in a distribution of administered absorbed dose to the thyroid with a range of -50% to +160% compared with a protocol calculating the absorbed dose to the thyroid of the individual patient. Treatments with a fixed administered activity of 370 MBq will in general administer 250% higher activity to the patient, with a range of -30% to +770%. The absorbed dose to other
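
    The dosimetric relation behind all of these protocols can be stated generically as follows (calibration constants and the paper's specific linear late-uptake relation are not reproduced here):

    \[
    D_{\mathrm{thyroid}} \;\propto\; \frac{A_{0}\,U_{0}\,T_{\mathrm{eff}}}{m_{\mathrm{thyroid}}}
    \qquad\Longrightarrow\qquad
    A_{0} \;=\; k\,\frac{D_{\mathrm{prescribed}}\;m_{\mathrm{thyroid}}}{U_{0}\,T_{\mathrm{eff}}},
    \]

    so the administered activity \(A_0\) scales with the prescribed dose and the thyroid mass and inversely with the product of the initial uptake \(U_0\) and the effective half-time \(T_{\mathrm{eff}}\) (\(k\) bundles the decay-energy and unit-conversion constants); the simplified protocol replaces that product, which normally requires several uptake measurements, with a single late uptake value via the linear relation mentioned in the abstract.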

  9. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    Science.gov (United States)

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which it has an asymptotically equivalent convergence point of the estimated parameters, referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider the feature space that limits the length of the data and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    Keywords: unrecorded alcohol; methods of estimation. In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimation of the level of unrecorded alcohol consumption. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  11. Simplifying cardiovascular risk estimation using resting heart rate.

    LENUS (Irish Health Repository)

    Cooney, Marie Therese

    2010-09-01

    Elevated resting heart rate (RHR) is a known, independent cardiovascular (CV) risk factor, but is not included in risk estimation systems, including Systematic COronary Risk Evaluation (SCORE). We aimed to derive risk estimation systems including RHR as an extra variable and assess the value of this addition.

  12. Simplified method to solve sound transmission through structures lined with elastic porous material.

    Science.gov (United States)

    Lee, J H; Kim, J

    2001-11-01

    An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both the solid phase and fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea in developing the approximate method is very simple: modeling the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure has to be conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extents, which has the same cross sectional construction as the actual structure, is solved based on the full theory and the strongest wave component is identified. In the second step sound transmission through the actual structure is solved modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double walled cylindrical shells with a porous core is calculated utilizing the simplified method.

  13. Screening efficacy of a simplified logMAR chart

    Directory of Open Access Journals (Sweden)

    Naganathan Muthuramalingam

    2016-04-01

    Aim: This study evaluates the efficacy of a simplified logMAR chart, designed for VA testing over the conventional Snellen chart, in a school-based vision-screening programme. Methods: We designed a simplified logMAR chart by employing the principles of the Early Treatment Diabetic Retinopathy Study (ETDRS) chart in terms of logarithmic letter size progression, inter-letter spacing, and inter-line spacing. Once the simplified logMAR chart was validated by students in the Elite school vision-screening programme, we set out to test the chart in 88 primary and middle schools in the Tiruporur block of Kancheepuram district in Tamil Nadu. One school teacher in each school was trained to screen a cross-sectional population of 10 354 primary and secondary school children (girls: 5488; boys: 4866) for VA deficits using a new, simplified logMAR algorithm. An experienced paediatric optometrist was recruited to validate the screening methods and technique used by the teachers to collect the data. Results: The optometrist screened a subset of 1300 school children from the total sample. The optometrist provided the professional insights needed to validate the clinical efficacy of the simplified logMAR algorithm and verified the reliability of the data collected by the teachers. The mean age of children sampled for validation was 8.6 years (range: 9–14 years). The sensitivity and the specificity of the simplified logMAR chart when compared to the standard logMAR chart were found to be 95% and 98%, respectively. Kappa value was 0.97. Sensitivity of the teachers’ screening was 66.63% (95% confidence interval [CI]: 52.73–77.02) and the specificity was 98.33% (95% CI: 97.49–98.95). Testing of VA was done under substandard illumination levels in 87% of the population. A total of 10 354 children were screened, 425 of whom were found to have some form of visual and/or ocular defect that was identified by the teacher or optometrist. Conclusion: The simplified logMAR testing algorithm

  14. A hierarchical estimator development for estimation of tire-road friction coefficient.

    Directory of Open Access Journals (Sweden)

    Xudong Zhang

    Full Text Available The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.

  15. A hierarchical estimator development for estimation of tire-road friction coefficient.

    Science.gov (United States)

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.
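
    The lower estimator's GRNN component is essentially a kernel-weighted mapping from excitation features to a friction value. The following sketch is not the authors' trained network; it is a minimal general regression neural network (Nadaraya-Watson kernel regression) assuming hypothetical training pairs of (slip ratio, normalized longitudinal force) versus friction coefficient.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.05):
    """General regression neural network: Gaussian-kernel weighted average
    of the training targets (Nadaraya-Watson estimator)."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)        # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                 # kernel weights
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)

# Hypothetical training data: (slip ratio, Fx/Fz) -> road friction coefficient.
x_train = np.array([[0.02, 0.15], [0.04, 0.30], [0.06, 0.42],
                    [0.03, 0.10], [0.05, 0.20], [0.08, 0.28]])
y_train = np.array([0.9, 0.9, 0.9, 0.35, 0.35, 0.35])    # high-mu vs. low-mu surfaces

mu_hat = grnn_predict(x_train, y_train, np.array([0.05, 0.35]))
print(f"estimated friction coefficient: {mu_hat:.2f}")
```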

  16. A simplified procedure of linear regression in a preliminary analysis

    Directory of Open Access Journals (Sweden)

    Silvia Facchinetti

    2013-05-01

    Full Text Available The analysis of a large statistical data-set can be guided by the study of a particularly interesting variable Y – the regressed variable – and an explicative variable X, chosen among the remaining variables, observed jointly. The study gives a simplified procedure to obtain the functional link y = y(x) between the variables by partitioning the data-set into m subsets, in which the observations are synthesized by location indices (mean or median) of X and Y. Polynomial models for y(x) of order r are considered to verify the characteristics of the given procedure; in particular, we assume r = 1 and 2. The distributions of the parameter estimators are obtained by simulation when the fitting is done for m = r + 1. Comparisons of the results, in terms of distribution and efficiency, are made with the results obtained by the ordinary least squares method. The study also gives some considerations on the consistency of the estimated parameters obtained by the given procedure.
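
    A minimal sketch of this kind of procedure, under assumptions that are not necessarily the authors' exact choices (equal-size partition by sorted X, medians as location indices, r = 1 so m = 2):

```python
import numpy as np

def partition_fit(x, y, r=1):
    """Fit a degree-r polynomial through m = r + 1 location points obtained by
    partitioning the data (sorted by x) into m subsets and taking medians."""
    m = r + 1
    order = np.argsort(x)
    x_groups = np.array_split(x[order], m)
    y_groups = np.array_split(y[order], m)
    x_loc = np.array([np.median(g) for g in x_groups])
    y_loc = np.array([np.median(g) for g in y_groups])
    return np.polyfit(x_loc, y_loc, deg=r)       # exact fit through the m points

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, 500)

print("partition estimate :", partition_fit(x, y, r=1))   # ~[1.5, 2.0]
print("OLS estimate       :", np.polyfit(x, y, deg=1))
```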

  17. User's guide for simplified computer models for the estimation of long-term performance of cement-based materials

    International Nuclear Information System (INIS)

    Plansky, L.E.; Seitz, R.R.

    1994-02-01

    This report documents user instructions for several simplified subroutines and driver programs that can be used to estimate various aspects of the long-term performance of cement-based barriers used in low-level radioactive waste disposal facilities. The subroutines are prepared in a modular fashion to allow flexibility for a variety of applications. Three levels of codes are provided: the individual subroutines, interactive drivers for each of the subroutines, and an interactive main driver, CEMENT, that calls each of the individual drivers. The individual subroutines for the different models may be taken independently and used in larger programs, or the driver modules can be used to execute the subroutines separately or as part of the main driver routine. A brief program description is included and user-interface instructions for the individual subroutines are documented in the main report. These are intended to be used when the subroutines are used as subroutines in a larger computer code

  18. Simplified elastoplastic fatigue analysis

    International Nuclear Information System (INIS)

    Autrusson, B.; Acker, D.; Hoffmann, A.

    1987-01-01

    Oligocyclic fatigue behaviour is a function of the local strain range. The design codes ASME section III, RCC-M, Code Case N47, RCC-MR, and the Guide issued by PNC propose simplified methods to evaluate the local strain range. After briefly describing these simplified methods, we tested them by comparing experimentally measured strains with those predicted by the rules. The experiments conducted for this study involved perforated plates under tensile stress, notched or reinforced beams under four-point bending stress, grooved specimens under tensile-compressive stress, and embedded grooved beams under bending stress. The rules display a conservatism that varies from case to case: the evaluation of the strains is rather inaccurate and sometimes lacks conservatism. The proposal so far is to use finite element codes with a simple model. The isotropic model with the cyclic consolidation curve offers a good representation of the real equivalent strain. There is obviously no question of representing the cycles and the entire loading history, but merely of calculating the maximum variation in elastoplastic equivalent strain with a constant-rate loading. The results presented testify to the good prediction of the strains with this model. The maximum equivalent strain will be employed to evaluate fatigue damage

  19. Noninvasive and simple method for the estimation of myocardial metabolic rate of glucose by PET and 18F-FDG

    International Nuclear Information System (INIS)

    Takahashi, Norio; Tamaki, Nagara; Kawamoto, Masahide

    1994-01-01

    To estimate the regional myocardial metabolic rate of glucose (rMRGlu) with positron emission tomography (PET) and 2-[18F]fluoro-2-deoxy-D-glucose (FDG), a simple noninvasive method has been investigated using dynamic PET imaging in 14 patients with ischemic heart disease. This imaging approach uses a blood time-activity curve (TAC) derived from a region of interest (ROI) drawn over dynamic PET images of the left ventricle (LV), left atrium (LA) and aorta. Patlak graphic analysis was used to estimate k1k3/(k2+k3) from serial plasma and myocardial radioactivities. The FDG count ratio between whole blood and plasma was relatively constant (0.91±0.02) both over time and among different patients. Although TACs derived from dynamic PET images gradually increased at the later phase due to spillover from the myocardium into the cavity, there was good agreement between the estimated K complex values obtained from arterial blood sampling and from dynamic PET imaging (LV r=0.95, LA r=0.96, aorta r=0.98). These results demonstrate the practical usefulness of a simplified and noninvasive method for the estimation of rMRGlu in humans by PET. (author)
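
    Patlak graphical analysis reduces to a linear fit: plotting tissue activity normalized by plasma activity, C_T(t)/C_p(t), against the "stretched time" ∫C_p dτ / C_p(t) gives a slope equal to the influx constant K = k1k3/(k2+k3). A minimal sketch with synthetic curves (not the paper's data):

```python
import numpy as np

def patlak_slope(t, c_plasma, c_tissue, t_star=10.0):
    """Influx constant K from the linear part (t >= t_star) of the Patlak plot."""
    integral = np.concatenate(([0.0], np.cumsum(np.diff(t) *
                               0.5 * (c_plasma[1:] + c_plasma[:-1]))))
    x = integral / c_plasma          # "stretched time"
    y = c_tissue / c_plasma          # normalized tissue activity
    mask = t >= t_star
    slope, _ = np.polyfit(x[mask], y[mask], 1)
    return slope

# Synthetic example: plasma input and a tissue curve generated with K_true = 0.05
t = np.linspace(0.1, 60, 300)                       # minutes
c_plasma = 100.0 * np.exp(-0.1 * t) + 5.0
cum = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 *
                      (c_plasma[1:] + c_plasma[:-1]))))
c_tissue = 0.05 * cum + 0.2 * c_plasma              # irreversible trapping + vascular term
print(f"estimated K: {patlak_slope(t, c_plasma, c_tissue):.3f}")   # ~0.05
```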

  20. A modified variable-coefficient projective Riccati equation method and its application to (2 + 1)-dimensional simplified generalized Broer-Kaup system

    International Nuclear Information System (INIS)

    Liu Qing; Zhu Jiamin; Hong Bihai

    2008-01-01

    A modified variable-coefficient projective Riccati equation method is proposed and applied to a (2 + 1)-dimensional simplified and generalized Broer-Kaup system. It is shown that the method presented by Huang and Zhang [Huang DJ, Zhang HQ. Chaos, Solitons and Fractals 2005; 23:601] is a special case of our method. The results obtained in the paper include many new formal solutions besides the all solutions found by Huang and Zhang

  1. A Simplified Method for Stationary Heat Transfer of a Hollow Core Concrete Slab Used for TABS

    DEFF Research Database (Denmark)

    Yu, Tao; Heiselberg, Per Kvols; Lei, Bo

    2014-01-01

    Thermally activated building systems (TABS) have been an energy efficient way to improve indoor thermal comfort. Due to the complicated structure, heat transfer prediction for a hollow core concrete slab used for TABS is difficult. This paper proposes a simplified method using an equivalent thermal resistance for the stationary heat transfer of this kind of system. Numerical simulations are carried out to validate this method, and it shows very small deviations from the numerical simulations. Meanwhile, the method is used to investigate the influence of the insulation thickness on the heat transfer. Insulation with a thickness of more than 0.06 m can keep over 95 % of the heat transferred from the lower surface, which is beneficial to radiant ceiling cooling. Finally, the method is extended to include the effect of the pipe, and the numerical comparison results show that this method...
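
    The equivalent-resistance idea amounts to replacing the slab layers by thermal resistances combined in series (and, at the core/pipe level, partly in parallel). A minimal illustrative series-resistance calculation with made-up layer data, not the paper's validated model:

```python
# Steady-state heat flux through a layered slab treated as series thermal resistances.
layers = [                      # (name, thickness in m, conductivity in W/(m*K)) - assumed values
    ("screed",     0.05, 1.4),
    ("concrete",   0.20, 1.8),
    ("insulation", 0.06, 0.035),
]
r_surface_lower = 0.10          # combined surface resistances, m^2*K/W (assumed)
r_surface_upper = 0.17

r_total = r_surface_lower + r_surface_upper + sum(t / k for _, t, k in layers)
u_value = 1.0 / r_total                         # W/(m^2*K)
q = u_value * (26.0 - 20.0)                     # flux for a 6 K temperature difference
print(f"U = {u_value:.3f} W/m2K, q = {q:.1f} W/m2")
```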

  2. Internal Dosimetry Intake Estimation using Bayesian Methods

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.; Martz, H.F.

    1999-01-01

    New methods for the inverse problem of internal dosimetry are proposed based on evaluating expectations of the Bayesian posterior probability distribution of intake amounts, given bioassay measurements. These expectation integrals are normally of very high dimension and hence impractical to use. However, the expectations can be algebraically transformed into a sum of terms representing different numbers of intakes, with a Poisson distribution of the number of intakes. This sum often rapidly converges, when the average number of intakes for a population is small. A simplified algorithm using data unfolding is described (UF code). (author)

  3. A simplified method to measure choroidal thickness using adaptive compensation in enhanced depth imaging optical coherence tomography.

    Directory of Open Access Journals (Sweden)

    Preeti Gupta

    Full Text Available PURPOSE: To evaluate a simplified method to measure choroidal thickness (CT) using commercially available enhanced depth imaging (EDI) spectral domain optical coherence tomography (SD-OCT). METHODS: We measured CT in 31 subjects without ocular diseases using Spectralis EDI SD-OCT. The choroid-scleral interface of the acquired images was first enhanced using a post-processing compensation algorithm. The enhanced images were then analysed using Photoshop. Two graders independently graded the images to assess inter-grader reliability. One grader re-graded the images after 2 weeks to determine intra-grader reliability. Statistical analysis was performed using intra-class correlation coefficient (ICC) and Bland-Altman plot analyses. RESULTS: Using adaptive compensation both the intra-grader reliability (ICC: 0.95 to 0.97) and inter-grader reliability (ICC: 0.93 to 0.97) were perfect for all five locations of CT. However, with the conventional technique of manual CT measurements using built-in callipers provided with the Heidelberg explorer software, the intra- (ICC: 0.87 to 0.94) and inter-grader reliability (ICC: 0.90 to 0.93) for all the measured locations is lower. Using adaptive compensation, the mean differences (95% limits of agreement) for intra- and inter-grader sub-foveal CT measurements were -1.3 (-3.33 to 30.8) µm and -1.2 (-36.6 to 34.2) µm, respectively. CONCLUSIONS: The measurement of CT obtained from EDI SD-OCT using our simplified method was highly reliable and efficient. Our method is an easy and practical approach to improve the quality of choroidal images and the precision of CT measurement.

  4. Simplified large African carnivore density estimators from track indices

    Directory of Open Access Journals (Sweden)

    Christiaan W. Winterbach

    2016-12-01

    Full Text Available Background The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. Methods We did simple linear regression with intercept analysis and simple linear regression through the origin and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Squares Residual and the Akaike Information Criterion to evaluate the models. Results The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Discussion Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26
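
    The comparison between a linear model with intercept and one through the origin can be reproduced with ordinary least squares plus AIC; the sketch below uses made-up track-index/density pairs, not the study's survey data.

```python
import numpy as np

def fit_and_aic(x, y, through_origin):
    """Least-squares fit of y = a*x (+ b) and the corresponding AIC."""
    X = x[:, None] if through_origin else np.column_stack([x, np.ones_like(x)])
    coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(rss[0]) if rss.size else float(np.sum((y - X @ coef) ** 2))
    n, k = len(y), X.shape[1] + 1            # +1 for the error variance
    aic = n * np.log(rss / n) + 2 * k
    return coef, aic

# Hypothetical data: track density (tracks/km) vs. carnivore density (n/100 km^2)
track = np.array([0.5, 1.1, 2.0, 2.8, 3.5, 4.4, 5.2])
dens = np.array([1.7, 3.4, 6.8, 8.9, 11.6, 14.1, 17.3])

for origin in (False, True):
    coef, aic = fit_and_aic(track, dens, origin)
    label = "through origin" if origin else "with intercept"
    print(f"{label}: coef = {np.round(coef, 2)}, AIC = {aic:.1f}")
```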

  5. Study on simulation methods of atrium building cooling load in hot and humid regions

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Yiqun; Li, Yuming; Huang, Zhizhong [Institute of Building Performance and Technology, Sino-German College of Applied Sciences, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Wu, Gang [Weldtech Technology (Shanghai) Co. Ltd. (China)

    2010-10-15

    In recent years, highly glazed atria have become popular because of their architectural aesthetics and their advantage of introducing daylight indoors. However, cooling load estimation for such atrium buildings is difficult due to the complex thermal phenomena that occur in the atrium space. The study aims to find a simplified method of estimating cooling loads through simulations for various types of atria in hot and humid regions. Atrium buildings are divided into different types. For every type of atrium building, both CFD and energy models are developed. A standard method and a simplified one are proposed to simulate the cooling load of atria in EnergyPlus, based on the different room air temperature patterns obtained from CFD simulation. The standard method incorporates CFD results as input to non-dimensional height room air models in EnergyPlus, and its simulation results are defined as a baseline against which the results from the simplified method are compared for every category of atrium buildings. To further validate the simplified method, an actual atrium office building was tested on site on a typical summer day and the measured results were compared with simulation results using the simplified method. Finally, appropriate methods of simulating different types of atrium buildings are proposed. (author)

  6. Accuracy of height estimation and tidal volume setting using anthropometric formulas in an ICU Caucasian population.

    Science.gov (United States)

    L'her, Erwan; Martin-Babau, Jérôme; Lellouche, François

    2016-12-01

    Knowledge of patients' height is essential for daily practice in the intensive care unit. However, actual height measurements are not routinely available in the ICU, and height measured in the supine position and/or visual estimates may lack consistency. Clinicians need simple and rapid methods to estimate the patients' height, especially in short and/or obese patients. The objectives of the study were to evaluate several anthropometric formulas for height estimation in healthy volunteers and to test whether several of these estimates would help tidal volume setting in ICU patients. This was a prospective, observational study in a medical intensive care unit of a university hospital. During the first phase of the study, eight limb measurements were performed on 60 healthy volunteers and 18 height estimation formulas were tested. During the second phase, four height estimates were performed on 60 consecutive ICU patients under mechanical ventilation. In the 60 healthy volunteers, actual height was well correlated with the gold standard, measured height in the erect position. Correlation was low between actual and calculated height using the hand length and width, index finger, or foot equations. The Chumlea method and its simplified version, performed in the supine position, provided adequate estimates. In the 60 ICU patients, calculated height using the simplified Chumlea method was well correlated with measured height (r = 0.78). For patients under mechanical ventilation, alternative anthropometric methods to obtain the patient's height based on lower leg and forearm measurements could be useful to facilitate the application of protective mechanical ventilation in a Caucasian ICU population. The simplified Chumlea method is easy to perform in a bed-ridden patient and provides accurate height estimates, with a low bias.
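
    For illustration, a Chumlea-type knee-height equation chained with a predicted-body-weight tidal volume calculation looks like the following. The stature coefficients below are the commonly published values for Caucasian adults and are an assumption for this sketch; they are not necessarily those of the simplified method evaluated in this study.

```python
def chumlea_height_cm(knee_height_cm, age_years, sex):
    """Stature estimate from knee height (Chumlea-type equation).

    Coefficients are commonly published adult Caucasian values (assumed here
    for illustration; verify against the protocol actually in use).
    """
    if sex == "male":
        return 64.19 - 0.04 * age_years + 2.02 * knee_height_cm
    if sex == "female":
        return 84.88 - 0.24 * age_years + 1.83 * knee_height_cm
    raise ValueError("sex must be 'male' or 'female'")

def tidal_volume_ml(height_cm, sex, ml_per_kg=6.0):
    """Protective tidal volume from predicted body weight (ARDSNet-style PBW)."""
    pbw = (50.0 if sex == "male" else 45.5) + 0.91 * (height_cm - 152.4)
    return ml_per_kg * pbw

h = chumlea_height_cm(knee_height_cm=52.0, age_years=60, sex="male")
print(f"estimated height: {h:.0f} cm, VT ~ {tidal_volume_ml(h, 'male'):.0f} mL")
```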

  7. Short-Cut Estimators of Criterion-Referenced Test Consistency.

    Science.gov (United States)

    Brown, James Dean

    1990-01-01

    Presents simplified methods for deriving estimates of the consistency of criterion-referenced, English-as-a-Second-Language tests, including (1) the threshold loss agreement approach using agreement or kappa coefficients, (2) the squared-error loss agreement approach using the phi(lambda) dependability approach, and (3) the domain score…

  8. A simplified approach to estimating the distribution of occasionally-consumed dietary components, applied to alcohol intake

    Directory of Open Access Journals (Sweden)

    Julia Chernova

    2016-07-01

    Full Text Available Abstract Background Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess-zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming and the precision of MC estimates depends on the size of the simulated data which can hinder reproducibility of results. Methods We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out as non-significant at 5 % significance level (p-value 0.062) but it was estimated as significant in the correctly specified model (p-value 0.016). Conclusions The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
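
    The numerical-integration alternative to MC replaces draws of the random effect with a quadrature rule. A minimal sketch, assuming a two-part model with a single shared normal random effect and made-up coefficients (not the paper's fitted model):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def mean_intake(alpha, mu, sigma_b, sigma_e, n_nodes=30):
    """E[intake] for a two-part model with a shared random effect b ~ N(0, sigma_b^2):
       P(consume | b) = logistic(alpha + b),
       E[amount | consume, b] = exp(mu + b + sigma_e^2 / 2)   (log-normal amounts).
    Gauss-Hermite quadrature integrates over b instead of Monte Carlo draws."""
    nodes, weights = hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma_b * nodes                 # change of variables
    prob = 1.0 / (1.0 + np.exp(-(alpha + b)))
    amount = np.exp(mu + b + 0.5 * sigma_e ** 2)
    return np.sum(weights * prob * amount) / np.sqrt(np.pi)

print(f"mean intake: {mean_intake(alpha=-0.5, mu=2.0, sigma_b=0.8, sigma_e=0.6):.2f} g/day")
```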

  9. Simplified Method of Optimal Sizing of a Renewable Energy Hybrid System for Schools

    Directory of Open Access Journals (Sweden)

    Jiyeon Kim

    2016-11-01

    Full Text Available Schools are a suitable public building for renewable energy systems. Renewable energy hybrid systems (REHSs have recently been introduced in schools following a new national regulation that mandates renewable energy utilization. An REHS combines the common renewable-energy sources such as geothermal heat pumps, solar collectors for water heating, and photovoltaic systems with conventional energy systems (i.e., boilers and air-source heat pumps. Optimal design of an REHS by adequate sizing is not a trivial task because it usually requires intensive work including detailed simulation and demand/supply analysis. This type of simulation-based approach for optimization is difficult to implement in practice. To address this, this paper proposes simplified sizing equations for renewable-energy systems of REHSs. A conventional optimization process is used to calculate the optimal combinations of an REHS for cases of different numbers of classrooms and budgets. On the basis of the results, simplified sizing equations that use only the number of classrooms as the input are proposed by regression analysis. A verification test was carried out using an initial conventional optimization process. The results show that the simplified sizing equations predict similar sizing results to the initial process, consequently showing similar capital costs within a 2% error.

  10. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    International Nuclear Information System (INIS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-01-01

    Various kinds of fringe order errors may occur in the absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel-by-pixel with repeated searches, which is inefficient for applications. To improve the efficiency of correcting multiple successive fringe order errors, in this paper we propose a method that simplifies error detection and correction by exploiting the stepwise increasing property of the fringe order. In the proposed method, the numbers of pixels in each step are estimated to find the possible true fringe order values, so that the repeated searches involved in detecting multiple successive errors can be avoided and errors corrected efficiently. The effectiveness of our proposed method is validated by experimental results. (paper)

  11. A simplified spherical harmonic method for coupled electron-photon transport calculations

    International Nuclear Information System (INIS)

    Josef, J.A.

    1996-12-01

    In this thesis we have developed a simplified spherical harmonic method (SP_N method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP_N method has never before been applied to charged-particle transport. We have performed a first-time Fourier analysis of the source iteration scheme and the P_1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP_N equations. Our theoretical analyses indicate that the source iteration and P_1 DSA schemes are as effective for the 2-D SP_N equations as for the 1-D S_N equations. Previous analyses have indicated that the P_1 DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D S_N equations, yet is very effective for the 1-D S_N equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP_N equations as for the 1-D S_N equations. It has previously been shown for 1-D S_N calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SP_N approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SP_N method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SP_N method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations

  12. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to these techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
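
    As a point of reference for such comparisons, the MOG/AIC baseline mentioned above can be sketched with scikit-learn: fit mixtures of increasing order and keep the one with the lowest AIC. This illustrates the comparison baseline, not the Boundary Method itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic 1-D data with three modes
data = np.concatenate([rng.normal(-4, 0.6, 300),
                       rng.normal(0, 0.8, 300),
                       rng.normal(5, 0.7, 300)]).reshape(-1, 1)

aics = []
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(data)
    aics.append(gmm.aic(data))

best_k = int(np.argmin(aics)) + 1
print(f"estimated number of modes (AIC): {best_k}")   # expected: 3
```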

  13. Multi-Path OD-Matrix Estimation (MPME) based on Stochastic User Equilibrium Traffic Assignment

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    Most conventional methods for estimating trip matrices from traffic counts either assume that the counts are error-free, deterministic variables or use a simplified traffic assignment model. Without these assumptions, the methods often demand prohibitive calculation times. The paper presents...

  14. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 1. lithium concentration estimation

    Science.gov (United States)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi

    2017-06-01

    To guarantee safety, high efficiency and a long lifetime for a lithium-ion battery, an advanced battery management system requires a physics-meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computation burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced-order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced-order transfer function remains easy to compute and preserves physical meaning through the presence of parameters such as the solid/electrolyte diffusion coefficients (Ds & De) and the particle radius. The simulation illustrates that the proposed simplified model maintains high accuracy for electrolyte phase concentration (Ce) predictions, namely 0.8% and 0.24% modeling error respectively, when compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, this simplified model yields a significantly reduced computational burden, which benefits its real-time application.

  15. Development of a simplified statistical methodology for nuclear fuel rod internal pressure calculation

    International Nuclear Information System (INIS)

    Kim, Kyu Tae; Kim, Oh Hwan

    1999-01-01

    A simplified statistical methodology is developed in order to both reduce the over-conservatism of deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined as the sum of squares, over all input variables considered, of the maximum value of the mean RIP value times the corresponding RIP sensitivity factor. This approach makes the simplified statistical methodology much more efficient in routine reload core design analysis since it eliminates the numerous calculations required for the power-history-dependent RIP variance determination. This simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significance of each input variable to RIP indicates that the fission gas release model is the most significant input variable. (author). 11 refs., 6 figs., 2 tabs
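
    The system moment (first-order second-moment) idea reduces to propagating input uncertainties through sensitivity factors and summing in quadrature. A generic sketch with hypothetical sensitivities and uncertainties, not the methodology's licensed implementation:

```python
import numpy as np

# Hypothetical input variables: (RIP sensitivity factor, relative 1-sigma uncertainty)
inputs = {
    "fission_gas_release": (0.60, 0.15),
    "fuel_swelling":       (0.25, 0.10),
    "clad_creep":          (0.20, 0.08),
    "fill_gas_pressure":   (0.10, 0.05),
}

mean_rip = 8.0        # MPa, nominal (best-estimate) rod internal pressure
# System moment method: variance = sum of (sensitivity * mean * relative sigma)^2
variance = sum((s * mean_rip * u) ** 2 for s, u in inputs.values())
sigma = np.sqrt(variance)
print(f"RIP = {mean_rip:.1f} MPa, 1-sigma = {sigma:.2f} MPa, "
      f"95% upper bound ~ {mean_rip + 1.645 * sigma:.2f} MPa")
```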

  16. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    Science.gov (United States)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈×100 more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.

  17. Simplified and rapid method for extraction of ergosterol from natural samples and detection with quantitative and semi-quantitative methods using thin-layer chromatography

    OpenAIRE

    Larsen, Cand.scient Thomas; Ravn, Senior scientist Helle; Axelsen, Senior Scientist Jørgen

    2004-01-01

    A new and simplified method for extraction of ergosterol (ergoste-5,7,22-trien-3-beta-ol) from fungi in soil and litter was developed using pre-soaking extraction and paraffin oil for recovery. Recoveries of ergosterol were in the range of 94 - 100% depending on the solvent to oil ratio. Extraction efficiencies equal to heat-assisted extraction treatments were obtained with pre-soaked extraction. Ergosterol was detected with thin-layer chromatography (TLC) using fluorodensitometry with a quan...

  18. Simplified Method for Rapid Purification of Soluble Histones

    Directory of Open Access Journals (Sweden)

    Nives Ivić

    2016-06-01

    Full Text Available Functional and structural studies of histone-chaperone complexes, nucleosome modifications, and their interactions with remodelers and regulatory proteins rely on obtaining recombinant histones from bacteria. In the present study, we show that co-expression of Xenopus laevis histone pairs leads to production of a soluble H2A-H2B heterodimer and a (H3-H4)2 heterotetramer. The soluble histone complexes are purified by simple chromatographic techniques. The obtained H2A-H2B dimer and H3-H4 tetramer are proficient in histone chaperone binding and in histone octamer and nucleosome formation. Our optimized protocol enables rapid purification of multiple soluble histone variants with a remarkably high yield and simplifies histone octamer preparation. We expect that this simple approach will contribute to histone chaperone and chromatin research. This work is licensed under a Creative Commons Attribution 4.0 International License.

  19. Optical chirp z-transform processor with a simplified architecture.

    Science.gov (United States)

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings which make fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
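
    The discrete-time-convolution view of the chirp z-transform underlying the OCZT architecture is easy to state in code (Bluestein's identity: pre-multiply by a chirp, convolve with the conjugate chirp, post-multiply by the chirp). The sketch below is a generic numerical CZT, not a model of the optical processor:

```python
import numpy as np

def czt(x, m=None, w=None, a=1.0 + 0j):
    """Chirp z-transform of x at the points a * w**(-k), k = 0..m-1, evaluated
    via Bluestein's discrete-time convolution identity (FFT-based)."""
    n = len(x)
    m = n if m is None else m
    w = np.exp(-2j * np.pi / m) if w is None else w
    k = np.arange(max(m, n))
    chirp = w ** (k ** 2 / 2.0)
    nfft = int(2 ** np.ceil(np.log2(m + n - 1)))        # linear convolution length
    xp = np.zeros(nfft, dtype=complex)
    xp[:n] = x * a ** (-np.arange(n)) * chirp[:n]       # pre-multiply by the chirp
    v = np.zeros(nfft, dtype=complex)
    v[:m] = 1.0 / chirp[:m]                             # convolve with the conjugate chirp
    v[nfft - n + 1:] = 1.0 / chirp[n - 1:0:-1]
    y = np.fft.ifft(np.fft.fft(xp) * np.fft.fft(v))
    return y[:m] * chirp[:m]                            # post-multiply by the chirp

x = np.random.default_rng(0).standard_normal(16)
print(np.allclose(czt(x), np.fft.fft(x)))               # CZT with defaults == DFT
```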

  20. A Simplified Method to Measure Choroidal Thickness Using Adaptive Compensation in Enhanced Depth Imaging Optical Coherence Tomography

    Science.gov (United States)

    Gupta, Preeti; Sidhartha, Elizabeth; Girard, Michael J. A.; Mari, Jean Martial; Wong, Tien-Yin; Cheng, Ching-Yu

    2014-01-01

    Purpose To evaluate a simplified method to measure choroidal thickness (CT) using commercially available enhanced depth imaging (EDI) spectral domain optical coherence tomography (SD-OCT). Methods We measured CT in 31 subjects without ocular diseases using Spectralis EDI SD-OCT. The choroid-scleral interface of the acquired images was first enhanced using a post-processing compensation algorithm. The enhanced images were then analysed using Photoshop. Two graders independently graded the images to assess inter-grader reliability. One grader re-graded the images after 2 weeks to determine intra-grader reliability. Statistical analysis was performed using intra-class correlation coefficient (ICC) and Bland-Altman plot analyses. Results Using adaptive compensation both the intra-grader reliability (ICC: 0.95 to 0.97) and inter-grader reliability (ICC: 0.93 to 0.97) were perfect for all five locations of CT. However, with the conventional technique of manual CT measurements using built-in callipers provided with the Heidelberg explorer software, the intra- (ICC: 0.87 to 0.94) and inter-grader reliability (ICC: 0.90 to 0.93) for all the measured locations is lower. Using adaptive compensation, the mean differences (95% limits of agreement) for intra- and inter-grader sub-foveal CT measurements were −1.3 (−3.33 to 30.8) µm and −1.2 (−36.6 to 34.2) µm, respectively. Conclusions The measurement of CT obtained from EDI SD-OCT using our simplified method was highly reliable and efficient. Our method is an easy and practical approach to improve the quality of choroidal images and the precision of CT measurement. PMID:24797674

  1. Simplified elastic-plastic analysis of reinforced concrete structures - design method for self-restraining stress

    International Nuclear Information System (INIS)

    Aihara, S.; Atsumi, K.; Ujiie, K.; Satoh, S.

    1981-01-01

    Self-restraining stresses generate not only moments but also axial forces. Therefore the moment and force equilibria of the cross-section are considered simultaneously, in combination with other external forces. Two computer programs are prepared for this purpose. Using these programs, design procedures that take the reduction of self-restraining stress into account become easy if the elastic design stresses, separated into normal stresses and self-restraining stresses, are given. Numerical examples are given to illustrate the application of the simplified elastic-plastic analysis and to study its effectiveness. First, the method is applied to analyze an upper shielding wall of a MARK-II type reactor building. The results are compared with those obtained by an elastic-plastic finite element analysis. From this comparison it was confirmed that the method described had adequate accuracy for re-bar design. As a second example, the mat slab of a reactor building is analyzed. The quantity of re-bars calculated by this method comes to about two thirds of that required when the self-restraining stress is treated as a normal stress. Also, the self-restraining stress reduction factor is about 0.5. (orig./HP)

  2. A simplified method of evaluating the stress wave environment of internal equipment

    Science.gov (United States)

    Colton, J. D.; Desmond, T. P.

    1979-01-01

    A simplified method called the transfer function technique (TFT) was devised for evaluating the stress wave environment in a structure containing internal equipment. The TFT consists of following the initial in-plane stress wave that propagates through a structure subjected to a dynamic load and characterizing how the wave is altered as it is transmitted through intersections of structural members. As a basis for evaluating the TFT, impact experiments and detailed stress wave analyses were performed for structures with two or three, or more members. Transfer functions that relate the wave transmitted through an intersection to the incident wave were deduced from the predicted wave response. By sequentially applying these transfer functions to a structure with several intersections, it was found that the environment produced by the initial stress wave propagating through the structure can be approximated well. The TFT can be used as a design tool or as an analytical tool to determine whether a more detailed wave analysis is warranted.

  3. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  4. A simplified counter diffusion method combined with a 1D simulation program for optimizing crystallization conditions.

    Science.gov (United States)

    Tanaka, Hiroaki; Inaka, Koji; Sugiyama, Shigeru; Takahashi, Sachiko; Sano, Satoshi; Sato, Masaru; Yoshitomi, Susumu

    2004-01-01

    A new protein crystallization method has been developed using a simplified counter-diffusion technique for optimizing crystallization conditions. It is composed of only a single capillary, a gel in a silicone tube and a screw-top test tube, which are readily available in the laboratory. The single capillary can continuously scan a wide range of crystallization conditions (combinations of the precipitant and protein concentrations) until crystallization occurs, which means that it corresponds to many drops in the vapor-diffusion method. The amounts of the precipitant and protein solutions can be much smaller than in conventional methods. In this study, lysozyme and alpha-amylase were used as model proteins to demonstrate the efficiency of this method. In addition, one-dimensional (1-D) simulations of the crystal growth were performed based on the 1-D diffusion model. The optimized conditions can be applied as initial crystallization conditions both for other counter-diffusion methods with the Granada Crystallization Box (GCB) and, after some modification, for the vapor-diffusion method.

  5. Estimation of contact resistance in proton exchange membrane fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lianhong; Liu, Ying; Song, Haimin; Wang, Shuxin [School of Mechanical Engineering, Tianjin University, 92 Weijin Road, Nankai District, Tianjin 300072 (China); Zhou, Yuanyuan; Hu, S. Jack [Department of Mechanical Engineering, The University of Michigan, Ann Arbor, MI 48109-2125 (United States)

    2006-11-22

    The contact resistance between the bipolar plate (BPP) and the gas diffusion layer (GDL) is an important factor contributing to the power loss in proton exchange membrane (PEM) fuel cells. At present there is still not a well-developed method to estimate such contact resistance. This paper proposes two effective methods for estimating the contact resistance between the BPP and the GDL based on an experimental contact resistance-pressure constitutive relation. The constitutive relation was obtained by experimentally measuring the contact resistance between the GDL and a flat plate of the same material and processing conditions as the BPP under stated contact pressure. In the first method, which was a simplified prediction, the contact area and contact pressure between the BPP and the GDL were analyzed with a simple geometrical relation and the contact resistance was obtained by the contact resistance-pressure constitutive relation. In the second method, the contact area and contact pressure between the BPP and GDL were analyzed using FEM and the contact resistance was computed for each contact element according to the constitutive relation. The total contact resistance was then calculated by considering all contact elements in parallel. The influence of load distribution on contact resistance was also investigated. Good agreement was demonstrated between experimental results and predictions by both methods. The simplified prediction method provides an efficient approach to estimating the contact resistance in PEM fuel cells. The proposed methods for estimating the contact resistance can be useful in modeling and optimizing the assembly process to improve the performance of PEM fuel cells. (author)
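
    The second method's bookkeeping (per-element contact resistance from a pressure-dependent constitutive relation, then all elements combined in parallel) can be sketched as follows; the power-law constitutive relation and the element data are assumptions for illustration, not the measured BPP/GDL correlation.

```python
import numpy as np

def contact_resistivity(pressure_mpa, c=2.5, n=-0.8):
    """Assumed constitutive relation: area-specific contact resistance
    (mOhm*cm^2) as a power law of the contact pressure (MPa)."""
    return c * pressure_mpa ** n

# Hypothetical FEM contact elements: (contact area in cm^2, contact pressure in MPa)
elements = np.array([[0.02, 0.8], [0.02, 1.2], [0.015, 1.5],
                     [0.025, 0.6], [0.02, 1.0]])

# Each element is a resistor r_i = rho(p_i) / A_i; the total is the parallel sum.
r_elem = contact_resistivity(elements[:, 1]) / elements[:, 0]     # mOhm per element
r_total = 1.0 / np.sum(1.0 / r_elem)
print(f"total BPP/GDL contact resistance: {r_total:.1f} mOhm")
```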

  6. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems

  7. A simplified presentation of the multigroup analytic nodal method in 2-D Cartesian geometry

    International Nuclear Information System (INIS)

    Hebert, Alain

    2008-01-01

    The nodal diffusion algorithms used in many production reactor simulation codes are originating from a common ancestry developed in the 1970s, the analytic nodal method (ANM) of the QUANDRY code. However, this original presentation of the ANM is complex and makes difficult the calculation of the nodal coupling matrices. Moreover, QUANDRY is limited to two-energy groups and its generalization to more groups appears laborious. We are presenting a simplified implementation of the ANM requiring only limited programming work. This formulation is consistent with the initial QUANDRY implementation and is easily generalizable to arbitrary G-group problems. A Matlab script is provided to highlight the simplicity of our presentation. For the sake of clarity, our implementation is limited to G-group, 2-D Cartesian geometry

  8. Disturbance estimation of nuclear power plant by using reduced-order model

    International Nuclear Information System (INIS)

    Tashima, Shin-ichi; Wakabayashi, Jiro

    1983-01-01

    An estimation method is proposed for the multiplex disturbances which occur in a nuclear power plant. The method is composed of two parts: (i) the identification of a simplified multi-input, multi-output model to describe the related system response, and (ii) the design of a Kalman filter to estimate the multiplex disturbance. Concerning the simplified model, several observed signals are first selected as output variables which can well represent the system response caused by the disturbances. A reduced-order model is utilized for designing the disturbance estimator. This is based on the following two considerations. The first is that the disturbance is assumed to be of a quasistatic nature. The other is based on the intuition that there exist a few dominant modes between the disturbances and the selected observed signals and that most of the remaining non-dominant modes may not affect the accuracy of the disturbance estimator. The reduced-order model is further transformed to a single-output model using a linear combination of the output signals, whereby the standard procedure of structural identification is avoided. The parameters of the model thus transformed are calculated by the generalized least squares method. As for the multiplex disturbance estimator, the Kalman filtering method is applied by balancing the following three items: (a) quick response to disturbances, (b) reduction of estimation error in the presence of observation noise, and (c) elimination of cross-interference between the disturbances to the plant and the estimates from the Kalman filter. The effectiveness of the proposed method is verified through computer experiments using a BWR plant simulator. (author)
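
    The estimator structure described (quasistatic disturbances appended to a reduced-order state and tracked by a Kalman filter) is the standard augmented-state disturbance observer. A generic sketch with an arbitrary second-order model, not the plant model identified in the paper:

```python
import numpy as np

# Reduced-order plant x+ = A x + B d (d = slowly varying disturbance), y = C x + noise.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])

# Augment the state with the disturbance, modeled as a random walk (quasistatic).
Aa = np.block([[A, B], [np.zeros((1, 2)), np.eye(1)]])
Ca = np.hstack([C, np.zeros((1, 1))])
Q = np.diag([1e-4, 1e-4, 1e-3])     # process noise (last entry drives the disturbance)
R = np.array([[1e-2]])              # measurement noise

def kalman_disturbance(y_seq):
    """Estimate the disturbance from output measurements with an augmented-state KF."""
    x, P, est = np.zeros(3), np.eye(3), []
    for y in y_seq:
        x, P = Aa @ x, Aa @ P @ Aa.T + Q                      # predict
        S = Ca @ P @ Ca.T + R
        K = P @ Ca.T @ np.linalg.inv(S)                       # Kalman gain
        x = x + (K @ (y - Ca @ x)).ravel()                    # update
        P = (np.eye(3) - K @ Ca) @ P
        est.append(x[2])                                      # disturbance estimate
    return np.array(est)

rng = np.random.default_rng(0)
d_true, x = 0.5, np.zeros(2)
ys = []
for _ in range(200):
    x = A @ x + (B * d_true).ravel()
    ys.append((C @ x).item() + 0.1 * rng.standard_normal())
print(f"final disturbance estimate: {kalman_disturbance(ys)[-1]:.2f} (true 0.5)")
```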

  9. Simplified Qualitative Discrete Numerical Model to Determine Cracking Pattern in Brittle Materials by Means of Finite Element Method

    OpenAIRE

    Ochoa-Avendaño, J.; Garzon-Alvarado, D. A.; Linero, Dorian L.; Cerrolaza, M.

    2017-01-01

    This paper presents the formulation, implementation, and validation of a simplified qualitative model to determine the crack path of solids considering static loads, infinitesimal strain, and plane stress condition. This model is based on finite element method with a special meshing technique, where nonlinear link elements are included between the faces of the linear triangular elements. The stiffness loss of some link elements represents the crack opening. Three experimental tests of bending...

  10. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate the radiation dose during transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN and J-TRAN. Because these codes include functions for estimating doses not only under normal conditions but also in the case of accidents, when radionuclides may leak and spread into the environment by atmospheric dispersion, the user needs special knowledge and experience. In this presentation, we describe how, with a view to providing a method by which a person in charge of transportation can calculate doses under normal conditions, the main parameters on which the dose values depend were extracted and the dose for a unit of transportation was estimated. (J.P.N.)

  11. Flood risk assessment in France: comparison of extreme flood estimation methods (EXTRAFLO project, Task 7)

    Science.gov (United States)

    Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.

    2013-12-01

    In flood risk assessment the methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community the probabilistic methods are historically preferred to the deterministic ones. Presently a French research project named EXTRAFLO (RiskNat Program of the French National Research Agency, https://extraflo.cemagref.fr) deals with design values for extreme rainfall and floods. The objective of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, to obtain a better grasp of their respective fields of application. In this framework we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), the Agregee method (Margoum, 1992) and the Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013, Garavaglia et al., 2010) and the Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than the standard flood frequency analysis. Another interesting result is that the differences between the extreme flood quantile estimates of the compared methods increase with the return period, while staying relatively moderate up to 100-year return levels. Results and discussions are here illustrated throughout with the example

  12. Simplified tritium permeation model

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1993-01-01

    In this model I seek to provide a simplified approach to solving permeation problems addressed by TMAP4. I will assume that there are m one-dimensional segments with thicknesses Li, i = 1, 2, …, m, joined in series, with an implantation flux, Ji, implanting at a single depth, δ, in the first segment. From material properties and heat transfer considerations, I calculate temperatures at each face of each segment, and from those temperatures I find local diffusivities and solubilities. I assume recombination coefficients Kr1 and Kr2 are known at the upstream and downstream faces, respectively, but the model will generate Baskes recombination coefficient values on demand. Here I first develop the steady-state concentration equations and then show how trapping considerations can lead to good estimates of permeation transient times.
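
    For orientation, the sketch below evaluates only the simplest limit of such a model: steady-state, diffusion-limited permeation through segments in series with Sieverts'-law solubility. The layer properties and pressures are illustrative assumptions, and the recombination-limited surfaces, implantation source and trapping treated in the model above are not reproduced.

        import numpy as np

        L = np.array([1.0e-3, 0.5e-3])    # segment thicknesses (m), assumed values
        D = np.array([1.0e-9, 4.0e-10])   # diffusivities (m^2/s), assumed values
        S = np.array([1.0e-3, 2.0e-3])    # Sieverts' solubilities (mol m^-3 Pa^-0.5), assumed values
        p_up, p_down = 1.0e3, 0.0         # upstream and downstream tritium pressures (Pa)

        resistance = np.sum(L / (D * S))  # series permeation resistance of the stacked segments
        flux = (np.sqrt(p_up) - np.sqrt(p_down)) / resistance
        print(f"steady-state permeation flux: {flux:.3e} mol/(m^2 s)")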

  13. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. The method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects from a limited amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is shown that this method is capable of accurately estimating the remaining number of software defects of the on-demand type that directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining it is proposed.
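
    The sketch below fits the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)), a common NHPP-based SRGM, to a hypothetical cumulative failure record by ordinary least squares; the Bayesian treatment and the several modeling schemes of the abstract are not reproduced, and all data are made up.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([10., 20., 30., 40., 50., 60., 70., 80.])   # cumulative test time (h)
        n = np.array([ 5.,  9., 12., 14., 15., 16., 16., 17.])   # cumulative failures observed

        def mean_value(t, a, b):
            # expected cumulative number of failures by time t (Goel-Okumoto NHPP)
            return a * (1.0 - np.exp(-b * t))

        (a, b), _ = curve_fit(mean_value, t, n, p0=(20.0, 0.05))
        print(f"estimated total defects a = {a:.1f}, detection rate b = {b:.3f}/h")
        print(f"expected residual defects: {a - n[-1]:.1f}")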

  14. Reactor core performance estimating device

    International Nuclear Information System (INIS)

    Tanabe, Akira; Yamamoto, Toru; Shinpuku, Kimihiro; Chuzen, Takuji; Nishide, Fusayo.

    1995-01-01

    The present invention autonomously simplifies a neural network model, thereby making it possible to conveniently estimate the various quantities that represent reactor core performance with a simple calculation in a short period of time. Namely, a reactor core performance estimation device comprises a neural network which divides the reactor core into a large number of spatial regions, receives various physical quantities for each region as input signals to the input neurons, and outputs estimates of each quantity representing reactor core performance as output signals of the output neurons. In this case, the network (1) has the structure of an extended multi-layered model with direct couplings from an upstream layer to each of the downstream layers, (2) includes a forgetting constant q in the update equation for each connection weight ω used in the error back-propagation method, (3) learns the various quantities representing reactor core performance determined from physical models as teacher signals, (4) sets a connection weight ω to 0 when it has decreased below a predetermined value during the learning described above, and (5) eliminates those elements of the network whose connection weights have all decreased to 0. As a result, the neural network model incorporates a means of autonomous simplification. (I.S.)
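
    A minimal sketch of steps (2), (4) and (5) for a single weight matrix is given below; the learning rate, forgetting constant q and pruning threshold are illustrative assumptions, and the extended multi-layer model with direct upstream-to-downstream couplings is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.5, size=(8, 4))      # connection weights of one layer
        grad = rng.normal(scale=0.1, size=W.shape)  # stand-in for a back-propagated gradient

        lr, q, threshold = 0.1, 0.01, 0.1
        W = W - lr * grad - q * W                   # back-propagation step with forgetting term q*W
        W[np.abs(W) < threshold] = 0.0              # weights that decayed below the threshold are set to 0
        dead_rows = np.all(W == 0.0, axis=1)        # elements whose connections have all vanished
        print(f"pruned weights: {int((W == 0.0).sum())}, removable elements: {int(dead_rows.sum())}")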

  15. Method-related estimates of sperm vitality.

    Science.gov (United States)

    Cooper, Trevor G; Hellenkemper, Barbara

    2009-01-01

    Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.

  16. A simplified method for low-level tritium measurement in the environmental water samples

    International Nuclear Information System (INIS)

    Sakuma, Yoichi; Yamanishi, Hirokuni; Ogata, Yoshimune

    2004-01-01

    Low-level liquid scintillation counting used to require much time and effort to distill off the impurities in the sample water before mixing the sample with the liquid scintillation cocktail. In light of this, we investigated the possibility of an alternative filtration method for sample purification. The tritium concentration in environmental water has become very low, and the samples have to be treated by electrolytic enrichment before counting with a liquid scintillation analyzer. Using a solid polymer electrolyte enrichment device, there is no need to add any electrolyte or to neutralize the sample after the concentration step. If the distillation process could be replaced with filtration, the procedure would be greatly simplified. We investigated the procedure and were able to show that reverse osmosis (RO) filtration is a workable alternative. Moreover, in order to streamline the measurement method as a whole, we examined the following: (1) improvement of the enrichment apparatus; (2) easier measurement of the heavy water concentration using a density meter instead of a mass spectrometer. The concentration of the water samples was measured to determine the tritium enrichment rate during the electrolytic enrichment. (author)

  17. Updated thermal model using simplified short-wave radiosity calculations

    International Nuclear Information System (INIS)

    Smith, J.A.; Goltz, S.M.

    1994-01-01

    An extension to a forest canopy thermal radiance model is described that computes the short-wave energy flux absorbed within the canopy by solving simplified radiosity equations describing flux transfers between canopy ensemble classes partitioned by vegetation layer and leaf slope. Integrated short-wave reflectance and transmittance-factors obtained from measured leaf optical properties were found to be nearly equal for the canopy studied. Short-wave view factor matrices were approximated by combining the average leaf scattering coefficient with the long-wave view factor matrices already incorporated in the model. Both the updated and original models were evaluated for a dense spruce fir forest study site in Central Maine. Canopy short-wave absorption coefficients estimated from detailed Monte Carlo ray tracing calculations were 0.60, 0.04, and 0.03 for the top, middle, and lower canopy layers corresponding to leaf area indices of 4.0, 1.05, and 0.25. The simplified radiosity technique yielded analogous absorption values of 0.55, 0.03, and 0.01. The resulting root mean square error in modeled versus measured canopy temperatures for all layers was less than 1°C with either technique. Maximum error in predicted temperature using the simplified radiosity technique was approximately 2°C during peak solar heating. (author)
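
    The core of such a simplified radiosity calculation is a small linear system. The sketch below solves a generic short-wave balance B = ρ(G_dir + F·B) for three surface classes and then forms the absorbed flux; the view factors, scattering coefficient and direct fluxes are illustrative assumptions, not the canopy-layer values reported above.

        import numpy as np

        F = np.array([[0.0, 0.3, 0.1],            # assumed view factors between three ensemble classes
                      [0.3, 0.0, 0.2],
                      [0.1, 0.2, 0.0]])
        rho = 0.45                                # assumed leaf scattering (reflectance plus transmittance) coefficient
        G_dir = np.array([500.0, 150.0, 40.0])    # assumed directly intercepted short-wave flux (W/m^2)

        # radiosity of each class: B = rho*(G_dir + F @ B)  ->  (I - rho*F) B = rho*G_dir
        B = np.linalg.solve(np.eye(3) - rho * F, rho * G_dir)
        G_tot = G_dir + F @ B                     # total incident short-wave flux on each class
        absorbed = (1.0 - rho) * G_tot            # absorbed short-wave flux per class
        print("absorbed flux (W/m^2):", np.round(absorbed, 1))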

  19. An estimation method for echo signal energy of pipe inner surface longitudinal crack detection by 2-D energy coefficients integration

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shiyuan; Sun, Haoyu; Xu, Chunguang; Cao, Xiandong; Cui, Liming; Xiao, Dingguo, E-mail: redaple@bit.edu.cn [School of Mechanical Engineering, Beijing Institute of Technology, No. 5 Zhongguancun South Street, Haidian District, Beijing 100081 (China)]

    2015-03-31

    For the detection of inner-surface longitudinal cracks in thick-walled pipes, the echo signal energy is directly affected by the eccentricity or angle of the incident sound beam. A method for analyzing the relationship between the echo signal energy and the incident eccentricity is proposed. It can be used to estimate the echo signal energy when testing inner-wall longitudinal cracks of a pipe by the water-immersion method, using the shear wave produced by mode conversion of the compression wave, by performing a two-dimensional integration of the “energy coefficient” in both the circumferential and axial directions. The calculation model is formulated for a cylindrical sound beam, in which the refraction and reflection energy coefficients of the different rays within the beam are allowed to differ. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and a one-dimensional (circumferential) integration are listed, and only the former agrees well with the experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam to a single ray when estimating the echo signal energy and choosing the optimal incident eccentricity is not appropriate.
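
    Structurally, the estimate amounts to integrating an energy-coefficient weighting over the circumferential and axial extent of the beam. The sketch below shows that two-dimensional integration with a smooth placeholder coefficient; the actual refraction and reflection coefficients and the beam geometry of the paper are not reproduced.

        import numpy as np

        theta_c = np.linspace(-0.25, 0.25, 201)   # circumferential angular extent of the beam (rad), assumed
        theta_a = np.linspace(-0.25, 0.25, 201)   # axial angular extent of the beam (rad), assumed
        TC, TA = np.meshgrid(theta_c, theta_a, indexing="ij")

        def energy_coefficient(tc, ta):
            # placeholder weighting: rays near the beam axis contribute most to the returned echo
            return np.exp(-(tc / 0.1) ** 2 - (ta / 0.1) ** 2)

        inner = np.trapz(energy_coefficient(TC, TA), theta_a, axis=1)  # integrate in the axial direction
        echo_energy = np.trapz(inner, theta_c)                         # then in the circumferential direction
        print(f"relative echo signal energy: {echo_energy:.4f}")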

  20. Development of a simplified method for intelligent glazed façade design under different control strategies and verified by building simulation tool BSim

    DEFF Research Database (Denmark)

    Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per

    2014-01-01

    The research aims to develop a simplified calculation method for an intelligent glazed façade under different control conditions (night shutter, solar shading and natural ventilation) to simulate the energy performance and indoor environment of an office room installed with the intelligent façade… It is possible to calculate the whole-year performance of a room or building with an intelligent glazed façade, which makes it a less time-consuming tool for investigating the performance of the intelligent façade under different control strategies at the design stage with acceptable accuracy. Results showed good… The method takes the angle dependence of the solar characteristics into account, including the simplified hourly building model developed according to EN 13790, to evaluate the influence of the controlled façade on both the indoor environment (indoor air temperature, solar transmittance through the façade…

  1. Appraisal of allowable loads by simplified rules

    International Nuclear Information System (INIS)

    Moulin, D.; Roche, R.L.

    1984-06-01

    This paper presents a simplified method for the buckling analysis of thin structures such as those of LMFBRs. The construction of the method is very similar to methods used for the buckling of beams and columns having initial geometric imperfections and buckling in the plastic range. Particular attention is paid to the strain hardening of the material involved (austenitic steel) and to possible unstable post-buckling of thin structures. The analysis method is based on elastic calculations and on diagrams that take various initial geometric defects into account.

  2. The Fast Simulation of Scattering Characteristics from a Simplified Time Varying Sea Surface

    Directory of Open Access Journals (Sweden)

    Yiwen Wei

    2015-01-01

    This paper aims at applying a simplified sea surface model in the physical optics (PO) method to accelerate the scattering calculation from a 1D time-varying sea surface. To reduce the number of segments and further improve the efficiency of the PO method, a simplified sea surface is proposed. In this simplified surface, the geometry of the long waves is locally approximated by tilted facets that are much longer than the electromagnetic wavelength. The capillary waves are modeled as sinusoidal ripples superimposed on the long waves. The wavenumber of the sinusoidal waves is assumed to satisfy the resonance condition of the Bragg waves, which are dominant among all the scattered short-wave components. Since the capillary wave is periodic within one facet, an analytical integration of the PO term can be performed. The backscattering coefficient obtained from the simplified sea surface model agrees well with that obtained from a realistic sea surface. The Doppler shifts and widths also agree well with the realistic model, since the capillary waves are taken into consideration. The good agreement indicates that the simplified model is reasonable and valid for predicting both the scattering coefficients and the Doppler spectra.

  3. Erosion estimation of guide vane end clearance in hydraulic turbines with sediment water flow

    Science.gov (United States)

    Han, Wei; Kang, Jingbo; Wang, Jie; Peng, Guoyi; Li, Lianyuan; Su, Min

    2018-04-01

    The end surfaces of the guide vanes and head cover are among the parts most seriously affected by sediment erosion in high-head hydraulic turbines. In order to investigate the relationship between the erosion depth of the wall surface and the characteristic erosion parameter, an estimation method comprising a simplified flow model and a modified erosion calculation function is proposed in this paper. The flow between the end surfaces of the guide vane and head cover is simplified as a clearance flow around a circular cylinder with a backward-facing step. The erosion characteristic parameter csws3 is calculated with the mixture model for multiphase flow and the renormalization group (RNG) k-ε turbulence model under the actual working conditions; based on this, the erosion depths of the guide vane and head cover end surfaces are estimated with a modified erosion coefficient K. The estimation results agree well with the actual situation. It is shown that the method is suitable for predicting guide vane erosion and can provide a useful reference for determining the optimal maintenance cycle of hydraulic turbines in the future.
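
    As a rough illustration of how such a characteristic parameter translates into wear, the sketch below assumes an erosion-rate law proportional to csws3 (read here, as an assumption, as the sediment concentration times the cube of a near-wall velocity) scaled by a modified coefficient K; every number is made up, and the CFD-based evaluation of the parameter is not reproduced.

        K = 2.0e-12        # assumed modified erosion coefficient, mm per (kg/m^3)*(m/s)^3 per second
        c_s = 5.0          # assumed near-wall sediment concentration (kg/m^3)
        w_s = 20.0         # assumed near-wall relative velocity (m/s)
        hours = 4000.0     # assumed operating time between overhauls

        depth = K * c_s * w_s ** 3 * hours * 3600.0   # estimated end-clearance erosion depth (mm)
        print(f"estimated erosion depth: {depth:.2f} mm")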

  4. Simplified LCA and matrix methods in identifying the environmental aspects of a product system.

    Science.gov (United States)

    Hur, Tak; Lee, Jiyong; Ryu, Jiyeon; Kwon, Eunsun

    2005-05-01

    In order to effectively integrate environmental attributes into the product design and development processes, it is crucial to identify the significant environmental aspects related to a product system within a relatively short period of time. In this study, the usefulness of life cycle assessment (LCA) and a matrix method as tools for identifying the key environmental issues of a product system were examined. For this, a simplified LCA (SLCA) method that can be applied to Electrical and Electronic Equipment (EEE) was developed to efficiently identify their significant environmental aspects for eco-design, since a full scale LCA study is usually very detailed, expensive and time-consuming. The environmentally responsible product assessment (ERPA) method, which is one of the matrix methods, was also analyzed. Then, the usefulness of each method in eco-design processes was evaluated and compared using the case studies of the cellular phone and vacuum cleaner systems. It was found that the SLCA and the ERPA methods provided different information but they complemented each other to some extent. The SLCA method generated more information on the inherent environmental characteristics of a product system so that it might be useful for new design/eco-innovation when developing a completely new product or method where environmental considerations play a major role from the beginning. On the other hand, the ERPA method gave more information on the potential for improving a product so that it could be effectively used in eco-redesign which intends to alleviate environmental impacts of an existing product or process.
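
    For the ERPA side of the comparison, the assessment is commonly described as a 5x5 matrix of scores (life-cycle stages by environmental concerns, each scored from 0, worst, to 4, best) whose sum gives a product rating out of 100. The sketch below shows that arithmetic with purely illustrative scores, not the values of the cellular phone or vacuum cleaner case studies.

        import numpy as np

        stages = ["premanufacture", "manufacture", "delivery", "use", "end of life"]
        # columns: materials choice, energy use, solid, liquid, gaseous residues (assumed scores)
        scores = np.array([[3, 2, 3, 2, 3],
                           [2, 2, 3, 3, 2],
                           [3, 3, 4, 4, 3],
                           [3, 1, 4, 4, 3],
                           [2, 2, 2, 3, 3]])

        print(f"overall product rating: {scores.sum()} / 100")
        for name, row in zip(stages, scores):
            print(f"  {name:15s} {row.sum():2d} / 20")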

  5. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  6. River Discharge Estimation by Using Altimetry Data and Simplified Flood Routing Modeling

    Directory of Open Access Journals (Sweden)

    Tommaso Moramarco

    2013-08-01

    A methodology is proposed to estimate the discharge along rivers, even poorly gauged ones, taking advantage of water level measurements derived from satellite altimetry. The procedure is based on the application of the Rating Curve Model (RCM), a simple method allowing the estimation of the flow conditions in a river section using only the water levels recorded at that site and the discharges observed at another upstream section. Altimetry data from the European Remote-Sensing Satellite 2 (ERS-2) and the Environmental Satellite (ENVISAT) are used to provide the time series of water levels needed for the application of RCM. In order to evaluate the usefulness of the approach, the results are compared with those obtained by applying an empirical formula that allows discharge estimation from remotely sensed hydraulic information. To test the proposed procedure, a 236 km reach of the Po River is investigated, for which five in situ stations and four satellite tracks are available. Results show that RCM is able to represent the discharge appropriately, and its performance is better than that of the empirical formula, although the latter does not require upstream hydrometric data. Given its simple formal structure, the proposed approach can be conveniently applied at ungauged sites where only a survey of the cross-section is needed.
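
    To convey the kind of level-to-discharge relation involved, the sketch below fits a generic power-law rating curve Q = a(h - h0)^b to hypothetical altimetry levels and discharges; it is not the RCM formulation itself (which also uses the upstream discharge record), and all numbers are made up.

        import numpy as np
        from scipy.optimize import curve_fit

        h = np.array([2.1, 2.8, 3.4, 4.0, 4.9, 5.6])               # satellite water levels (m), assumed
        Q = np.array([450., 780., 1100., 1500., 2200., 2900.])     # discharges at the gauged section (m3/s), assumed

        def rating(h, a, h0, b):
            return a * (h - h0) ** b

        (a, h0, b), _ = curve_fit(rating, h, Q, p0=(200.0, 1.0, 1.7),
                                  bounds=([1.0, 0.0, 0.5], [1.0e4, 2.0, 3.0]))
        print(f"Q = {a:.0f}*(h - {h0:.2f})^{b:.2f};  Q(h=4.5 m) = {rating(4.5, a, h0, b):.0f} m3/s")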

  7. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) is proposed. The process of obtaining the marginal spectrum in the HHT method is given, and the linear property of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. The Hilbert spectrum estimation algorithm is then discussed in detail, and simulation results are given. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
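
    A minimal sketch of the marginal-spectrum step is given below: it assumes the intrinsic mode functions (IMFs) have already been obtained by empirical mode decomposition (the EMD step is not shown), derives instantaneous amplitude and frequency from the Hilbert transform, and integrates the Hilbert spectrum over time into frequency bins. The two pure tones used as stand-in IMFs are illustrative only.

        import numpy as np
        from scipy.signal import hilbert

        def marginal_spectrum(imfs, fs, n_bins=128):
            """imfs: array (n_imf, n_samples); returns (bin centres in Hz, marginal amplitude)."""
            edges = np.linspace(0.0, fs / 2.0, n_bins + 1)
            spec = np.zeros(n_bins)
            dt = 1.0 / fs
            for imf in imfs:
                analytic = hilbert(imf)
                amp = np.abs(analytic)                              # instantaneous amplitude
                phase = np.unwrap(np.angle(analytic))
                freq = np.diff(phase) / (2.0 * np.pi * dt)          # instantaneous frequency (Hz)
                idx = np.clip(np.digitize(freq, edges) - 1, 0, n_bins - 1)
                np.add.at(spec, idx, amp[:-1] * dt)                 # accumulate amplitude over time
            return 0.5 * (edges[:-1] + edges[1:]), spec

        fs = 1000.0
        t = np.arange(0.0, 1.0, 1.0 / fs)
        imfs = np.vstack([np.sin(2 * np.pi * 50 * t),               # stand-ins for IMFs from EMD
                          0.5 * np.sin(2 * np.pi * 120 * t)])
        f, ms = marginal_spectrum(imfs, fs)
        print("dominant bins (Hz):", np.round(f[np.argsort(ms)[-2:]], 1))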

  8. A numerical simulation of wheel spray for simplified vehicle model based on discrete phase method

    Directory of Open Access Journals (Sweden)

    Xingjun Hu

    2015-07-01

    Road spray greatly affects vehicle body soiling and driving safety, and its study has attracted increasing attention. In this article, computational fluid dynamics software based on the widely used finite volume method was employed to numerically simulate the spray induced by a simplified wheel model and a modified square-back model proposed by the Motor Industry Research Association. The shear stress transport k-omega turbulence model, the discrete phase model, and the Eulerian wall-film model were selected. In the simulation, the phenomena of droplet breakup and coalescence were considered, and the continuous and discrete phases were treated as two-way coupled in momentum and turbulent motion. The relationship between the external flow structure of the vehicle and body soiling is also discussed.

  9. Interpretation of searches for supersymmetry with simplified models

    Energy Technology Data Exchange (ETDEWEB)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Aguilo, E.; Bergauer, T.; Dragicevic, M.; Erö, J.; Fabjan, C.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Kiesenhofer, W.; Knünz, V.; Krammer, M.; Krätschmer, I.; Liko, D.; Mikulec, I.; Pernicka, M.; Rabady, D.; Rahbaran, B.; Rohringer, C.; Rohringer, H.; Schöfbeck, R.; Strauss, J.; Taurok, A.; Waltenberger, W.; Wulz, C. -E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Bansal, M.; Bansal, S.; Cornelis, T.; De Wolf, E. A.; Janssen, X.; Luyckx, S.; Mucibello, L.; Ochesanu, S.; Roland, B.; Rougny, R.; Selvaggi, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Blekman, F.; Blyweert, S.; D’Hondt, J.; Gonzalez Suarez, R.; Kalogeropoulos, A.; Maes, M.; Olbrechts, A.; Van Doninck, W.; Van Mulders, P.; Van Onsem, G. P.; Villella, I.; Clerbaux, B.; De Lentdecker, G.; Dero, V.; Gay, A. P. R.; Hreus, T.; Léonard, A.; Marage, P. E.; Mohammadi, A.; Reis, T.; Thomas, L.; Vander Velde, C.; Vanlaer, P.; Wang, J.; Adler, V.; Beernaert, K.; Cimmino, A.; Costantini, S.; Garcia, G.; Grunewald, M.; Klein, B.; Lellouch, J.; Marinov, A.; Mccartin, J.; Ocampo Rios, A. A.; Ryckbosch, D.; Strobbe, N.; Thyssen, F.; Tytgat, M.; Walsh, S.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Bruno, G.; Castello, R.; Ceard, L.; Delaere, C.; du Pree, T.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Lemaitre, V.; Liao, J.; Militaru, O.; Nuttens, C.; Pagano, D.; Pin, A.; Piotrzkowski, K.; Vizan Garcia, J. M.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Alves, G. A.; Correa Martins Junior, M.; Martins, T.; Pol, M. E.; Souza, M. H. G.; Aldá Júnior, W. L.; Carvalho, W.; Custódio, A.; Da Costa, E. M.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Malbouisson, H.; Malek, M.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Soares Jorge, L.; Sznajder, A.; Vilela Pereira, A.; Anjos, T. S.; Bernardes, C. A.; Dias, F. A.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Lagana, C.; Marinho, F.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Genchev, V.; Iaydjiev, P.; Piperov, S.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Tcholakov, V.; Trayanov, R.; Vutova, M.; Dimitrov, A.; Hadjiiska, R.; Kozhuharov, V.; Litov, L.; Pavlov, B.; Petkov, P.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Jiang, C. H.; Liang, D.; Liang, S.; Meng, X.; Tao, J.; Wang, J.; Wang, X.; Wang, Z.; Xiao, H.; Xu, M.; Zang, J.; Zhang, Z.; Asawatangtrakuldee, C.; Ban, Y.; Guo, Y.; Li, W.; Liu, S.; Mao, Y.; Qian, S. J.; Teng, H.; Wang, D.; Zhang, L.; Zou, W.; Avila, C.; Gomez, J. P.; Gomez Moreno, B.; Osorio Oliveros, A. F.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Plestina, R.; Polic, D.; Puljak, I.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Duric, S.; Kadija, K.; Luetic, J.; Mekterovic, D.; Morovic, S.; Attikis, A.; Galanti, M.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Finger, M.; Finger, M.; Assran, Y.; Elgammal, S.; Ellithi Kamel, A.; Mahmoud, M. A.; Mahrous, A.; Radi, A.; Kadastik, M.; Müntel, M.; Raidal, M.; Rebane, L.; Tiko, A.; Eerola, P.; Fedi, G.; Voutilainen, M.; Härkönen, J.; Heikkinen, A.; Karimäki, V.; Kinnunen, R.; Kortelainen, M. 
J.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Ungaro, D.; Wendland, L.; Banzuzi, K.; Karjalainen, A.; Korpela, A.; Tuuva, T.; Besancon, M.; Choudhury, S.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Malcles, J.; Millischer, L.; Nayak, A.; Rander, J.; Rosowsky, A.; Titov, M.; Baffioni, S.; Beaudette, F.; Benhabib, L.; Bianchini, L.; Bluj, M.; Busson, P.; Charlot, C.; Daci, N.; Dahms, T.; Dalchenko, M.; Dobrzynski, L.; Florent, A.; Granier de Cassagnac, R.; Haguenauer, M.; Miné, P.; Mironov, C.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Paganini, P.; Sabes, D.; Salerno, R.; Sirois, Y.; Veelken, C.; Zabi, A.; Agram, J. -L.; Andrea, J.; Bloch, D.; Bodin, D.; Brom, J. -M.; Cardaci, M.; Chabert, E. C.; Collard, C.; Conte, E.; Drouhin, F.; Fontaine, J. -C.; Gelé, D.; Goerlach, U.; Juillot, P.; Le Bihan, A. -C.; Van Hove, P.; Fassi, F.; Mercier, D.; Beauceron, S.; Beaupere, N.; Bondu, O.; Boudoul, G.; Brochet, S.; Chasserat, J.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Kurca, T.; Lethuillier, M.; Mirabito, L.; Perries, S.; Sgandurra, L.; Sordini, V.; Tschudi, Y.; Verdier, P.; Viret, S.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Calpas, B.; Edelhoff, M.; Feld, L.; Heracleous, N.; Hindrichs, O.; Jussen, R.; Klein, K.; Merz, J.; Ostapchuk, A.; Perieanu, A.; Raupach, F.; Sammet, J.; Schael, S.; Sprenger, D.; Weber, H.; Wittmer, B.; Zhukov, V.; Ata, M.; Caudron, J.; Dietz-Laursonn, E.; Duchardt, D.; Erdmann, M.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Klingebiel, D.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Olschewski, M.; Papacz, P.; Pieta, H.; Reithler, H.; Schmitz, S. A.; Sonnenschein, L.; Steggemann, J.; Teyssier, D.; Thüer, S.; Weber, M.; Bontenackels, M.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Haj Ahmad, W.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Lingemann, J.; Nowack, A.; Perchalla, L.; Pooth, O.; Sauerland, P.; Stahl, A.; Aldaya Martin, M.; Behr, J.; Behrenhoff, W.; Behrens, U.; Bergholz, M.; Bethani, A.; Borras, K.; Burgmeier, A.; Cakir, A.; Calligaris, L.; Campbell, A.; Castro, E.; Costanza, F.; Dammann, D.; Diez Pardos, C.; Eckerlin, G.; Eckstein, D.; Flucke, G.; Geiser, A.; Glushkov, I.; Gunnellini, P.; Habib, S.; Hauk, J.; Hellwig, G.; Jung, H.; Kasemann, M.; Katsas, P.; Kleinwort, C.; Kluge, H.; Knutsson, A.; Krämer, M.; Krücker, D.; Kuznetsova, E.; Lange, W.; Leonard, J.; Lohmann, W.; Lutz, B.; Mankel, R.; Marfin, I.; Marienfeld, M.; Melzer-Pellmann, I. -A.; Meyer, A. B.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Novgorodova, O.; Olzem, J.; Perrey, H.; Petrukhin, A.; Pitzl, D.; Raspereza, A.; Ribeiro Cipriano, P. M.; Riedl, C.; Ron, E.; Rosin, M.; Salfeld-Nebgen, J.; Schmidt, R.; Schoerner-Sadenius, T.; Sen, N.; Spiridonov, A.; Stein, M.; Walsh, R.; Wissing, C.; Blobel, V.; Enderle, H.; Erfle, J.; Gebbert, U.; Görner, M.; Gosselink, M.; Haller, J.; Hermanns, T.; Höing, R. 
S.; Kaschube, K.; Kaussen, G.; Kirschenmann, H.; Klanner, R.; Lange, J.; Nowak, F.; Peiffer, T.; Pietsch, N.; Rathjens, D.; Sander, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Schröder, M.; Schum, T.; Seidel, M.; Sibille, J.; Sola, V.; Stadie, H.; Steinbrück, G.; Thomsen, J.; Vanelderen, L.; Barth, C.; Berger, J.; Böser, C.; Chwalek, T.; De Boer, W.; Descroix, A.; Dierlamm, A.; Feindt, M.; Guthoff, M.; Hackstein, C.; Hartmann, F.; Hauth, T.; Heinrich, M.; Held, H.; Hoffmann, K. H.; Husemann, U.; Katkov, I.; Komaragiri, J. R.; Lobelle Pardo, P.; Martschei, D.; Mueller, S.; Müller, Th.; Niegel, M.; Nürnberg, A.; Oberst, O.; Oehler, A.; Ott, J.; Quast, G.; Rabbertz, K.; Ratnikov, F.; Ratnikova, N.; Röcker, S.; Schilling, F. -P.; Schott, G.; Simonis, H. J.; Stober, F. M.; Troendle, D.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weiler, T.; Zeise, M.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Kesisoglou, S.; Kyriakis, A.; Loukas, D.; Manolakos, I.; Markou, A.; Markou, C.; Ntomari, E.; Gouskos, L.; Mertzimekis, T. J.; Panagiotou, A.; Saoulidou, N.; Evangelou, I.; Foudas, C.; Kokkas, P.; Manthos, N.; Papadopoulos, I.; Patras, V.; Bencze, G.; Hajdu, C.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Beni, N.; Czellar, S.; Molnar, J.; Palinkas, J.; Szillasi, Z.; Karancsi, J.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Beri, S. B.; Bhatnagar, V.; Dhingra, N.; Gupta, R.; Kaur, M.; Mehta, M. Z.; Nishu, N.; Saini, L. K.; Sharma, A.; Singh, J. B.; Kumar, Ashok; Kumar, Arun; Ahuja, S.; Bhardwaj, A.; Choudhary, B. C.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, V.; Shivpuri, R. K.; Banerjee, S.; Bhattacharya, S.; Dutta, S.; Gomber, B.; Jain, Sa.; Jain, Sh.; Khurana, R.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Dutta, D.; Kailas, S.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Aziz, T.; Ganguly, S.; Guchait, M.; Gurtu, A.; Maity, M.; Majumder, G.; Mazumdar, K.; Mohanty, G. B.; Parida, B.; Sudhakar, K.; Wickramage, N.; Banerjee, S.; Dugad, S.; Arfaei, H.; Bakhshiansohi, H.; Etesami, S. M.; Fahim, A.; Hashemi, M.; Hesari, H.; Jafari, A.; Khakzad, M.; Mohammadi Najafabadi, M.; Paktinat Mehdiabadi, S.; Safarzadeh, B.; Zeinali, M.; Abbrescia, M.; Barbone, L.; Calabria, C.; Chhibra, S. S.; Colaleo, A.; Creanza, D.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Marangelli, B.; My, S.; Nuzzo, S.; Pacifico, N.; Pompili, A.; Pugliese, G.; Selvaggi, G.; Silvestris, L.; Singh, G.; Venditti, R.; Verwilligen, P.; Zito, G.; Abbiendi, G.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Meneghelli, M.; Montanari, A.; Navarria, F. L.; Odorici, F.; Perrotta, A.; Primavera, F.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Travaglini, R.; Albergo, S.; Cappello, G.; Chiorboli, M.; Costa, S.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D’Alessandro, R.; Focardi, E.; Frosali, S.; Gallo, E.; Gonzi, S.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Tropiano, A.; Benussi, L.; Bianco, S.; Colafranceschi, S.; Fabbri, F.; Piccolo, D.; Fabbricatore, P.; Musenich, R.; Tosi, S.; Benaglia, A.; De Guio, F.; Di Matteo, L.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Malvezzi, S.; Manzoni, R. 
A.; Martelli, A.; Massironi, A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Sala, S.; Tabarelli de Fatis, T.; Buontempo, S.; Carrillo Montoya, C. A.; Cavallo, N.; De Cosa, A.; Dogangun, O.; Fabozzi, F.; Iorio, A. O. M.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Azzi, P.; Bacchetta, N.; Bisello, D.; Branca, A.; Carlin, R.; Checchia, P.; Dorigo, T.; Gasparini, F.; Gozzelino, A.; Kanishchev, K.; Lacaprara, S.; Lazzizzera, I.; Margoni, M.; Meneguzzo, A. T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Triossi, A.; Vanini, S.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Gabusi, M.; Ratti, S. P.; Riccardi, C.; Torre, P.; Vitulo, P.; Biasini, M.; Bilei, G. M.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Nappi, A.; Romeo, F.; Saha, A.; Santocchia, A.; Spiezia, A.; Taroni, S.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Broccolo, G.; Castaldi, R.; D’Agnolo, R. T.; Dell’Orso, R.; Fiori, F.; Foà, L.; Giassi, A.; Kraan, A.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Serban, A. T.; Spagnolo, P.; Squillacioti, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Del Re, D.; Diemoz, M.; Fanelli, C.; Grassi, M.; Longo, E.; Meridiani, P.; Micheli, F.; Nourbakhsh, S.; Organtini, G.; Paramatti, R.; Rahatlou, S.; Sigamani, M.; Soffi, L.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Biino, C.; Cartiglia, N.; Casasso, S.; Costa, M.; Demaria, N.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Musich, M.; Obertino, M. M.; Pastrone, N.; Pelliccioni, M.; Potenza, A.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; Marone, M.; Montanino, D.; Penzo, A.; Schizzi, A.; Kim, T. Y.; Nam, S. K.; Chang, S.; Kim, D. H.; Kim, G. N.; Kong, D. J.; Park, H.; Son, D. C.; Son, T.; Kim, J. Y.; Kim, Zero J.; Song, S.; Choi, S.; Gyun, D.; Hong, B.; Jo, M.; Kim, H.; Kim, T. J.; Lee, K. S.; Moon, D. H.; Park, S. K.; Roh, Y.; Choi, M.; Kim, J. H.; Park, C.; Park, I. C.; Park, S.; Ryu, G.; Choi, Y.; Choi, Y. K.; Goh, J.; Kim, M. S.; Kwon, E.; Lee, B.; Lee, J.; Lee, S.; Seo, H.; Yu, I.; Bilinskas, M. J.; Grigelionis, I.; Janulis, M.; Juodagalvis, A.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-de La Cruz, I.; Lopez-Fernandez, R.; Martínez-Ortega, J.; Sanchez-Hernandez, A.; Villasenor-Cendejas, L. M.; Carrillo Moreno, S.; Vazquez Valencia, F.; Salazar Ibarguen, H. A.; Casimiro Linares, E.; Morelos Pineda, A.; Reyes-Santos, M. A.; Krofcheck, D.; Bell, A. J.; Butler, P. H.; Doesburg, R.; Reucroft, S.; Silverwood, H.; Ahmad, M.; Asghar, M. I.; Butt, J.; Hoorani, H. R.; Khalid, S.; Khan, W. A.; Khurshid, T.; Qazi, S.; Shah, M. A.; Shoaib, M.; Bialkowska, H.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Wrochna, G.; Zalewski, P.; Brona, G.; Bunkowski, K.; Cwiok, M.; Dominik, W.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Almeida, N.; Bargassa, P.; David, A.; Faccioli, P.; Ferreira Parracho, P. 
G.; Gallinaro, M.; Seixas, J.; Varela, J.; Vischia, P.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Kozlov, G.; Lanev, A.; Malakhov, A.; Moisenz, P.; Palichik, V.; Perelygin, V.; Savina, M.; Shmatov, S.; Smirnov, V.; Volodko, A.; Zarubin, A.; Evstyukhin, S.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Vorobyev, An.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Kirsanov, M.; Krasnikov, N.; Matveev, V.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Erofeeva, M.; Gavrilov, V.; Kossov, M.; Lychkovskaya, N.; Popov, V.; Safronov, G.; Semenov, S.; Shreyber, I.; Stolin, V.; Vlasov, E.; Zhokin, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Markina, A.; Obraztsov, S.; Perfilov, M.; Petrushanko, S.; Popov, A.; Sarycheva, L.; Savrin, V.; Snigirev, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. V.; Vinogradov, A.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Grishin, V.; Kachanov, V.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Djordjevic, M.; Ekmedzic, M.; Krpic, D.; Milosevic, J.; Aguilar-Benitez, M.; Alcaraz Maestre, J.; Arce, P.; Battilana, C.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Domínguez Vázquez, D.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Ferrando, A.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Merino, G.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Santaolalla, J.; Soares, M. S.; Willmott, C.; Albajar, C.; Codispoti, G.; de Trocóniz, J. F.; Brun, H.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Lloret Iglesias, L.; Piedra Gomez, J.; Brochero Cifuentes, J. A.; Cabrillo, I. J.; Calderon, A.; Chuang, S. H.; Duarte Campderros, J.; Felcini, M.; Fernandez, M.; Gomez, G.; Gonzalez Sanchez, J.; Graziano, A.; Jorda, C.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Munoz Sanchez, F. J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benitez, J. F.; Bernet, C.; Bianchi, G.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Breuker, H.; Camporesi, T.; Cerminara, G.; Christiansen, T.; Coarasa Perez, J. A.; D’Enterria, D.; Dabrowski, A.; De Roeck, A.; Di Guida, S.; Dobson, M.; Dupont-Sagorin, N.; Elliott-Peisert, A.; Frisch, B.; Funk, W.; Georgiou, G.; Giffels, M.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Giunta, M.; Glege, F.; Gomez-Reino Garrido, R.; Govoni, P.; Gowdy, S.; Guida, R.; Gundacker, S.; Hammer, J.; Hansen, M.; Harris, P.; Hartl, C.; Harvey, J.; Hegner, B.; Hinzmann, A.; Innocente, V.; Janot, P.; Kaadze, K.; Karavakis, E.; Kousouris, K.; Lecoq, P.; Lee, Y. -J.; Lenzi, P.; Lourenço, C.; Magini, N.; Mäki, T.; Malberti, M.; Malgeri, L.; Mannelli, M.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moser, R.; Mozer, M. 
U.; Mulders, M.; Musella, P.; Nesvold, E.; Orsini, L.; Palencia Cortezon, E.; Perez, E.; Perrozzi, L.; Petrilli, A.; Pfeiffer, A.; Pierini, M.; Pimiä, M.; Piparo, D.; Polese, G.; Quertenmont, L.; Racz, A.; Reece, W.; Rodrigues Antunes, J.; Rolandi, G.; Rovelli, C.; Rovere, M.; Sakulin, H.; Santanastasio, F.; Schäfer, C.; Schwick, C.; Segoni, I.; Sekmen, S.; Sharma, A.; Siegrist, P.; Silva, P.; Simon, M.; Sphicas, P.; Spiga, D.; Tsirou, A.; Veres, G. I.; Vlimant, J. R.; Wöhri, H. K.; Worm, S. D.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Gabathuler, K.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; König, S.; Kotlinski, D.; Langenegger, U.; Meier, F.; Renker, D.; Rohe, T.; Bäni, L.; Bortignon, P.; Buchmann, M. A.; Casal, B.; Chanon, N.; Deisher, A.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dünser, M.; Eller, P.; Eugster, J.; Freudenreich, K.; Grab, C.; Hits, D.; Lecomte, P.; Lustermann, W.; Marini, A. C.; Martinez Ruiz del Arbol, P.; Mohr, N.; Moortgat, F.; Nägeli, C.; Nef, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pape, L.; Pauss, F.; Peruzzi, M.; Ronga, F. J.; Rossini, M.; Sala, L.; Sanchez, A. K.; Starodumov, A.; Stieger, B.; Takahashi, M.; Tauscher, L.; Thea, A.; Theofilatos, K.; Treille, D.; Urscheler, C.; Wallny, R.; Weber, H. A.; Wehrli, L.; Amsler, C.; Chiochia, V.; De Visscher, S.; Favaro, C.; Ivova Rikova, M.; Kilminster, B.; Millan Mejias, B.; Otiougova, P.; Robmann, P.; Snoek, H.; Tupputi, S.; Verzetti, M.; Chang, Y. H.; Chen, K. H.; Ferro, C.; Kuo, C. M.; Li, S. W.; Lin, W.; Lu, Y. J.; Singh, A. P.; Volpe, R.; Yu, S. S.; Bartalini, P.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Dietz, C.; Grundler, U.; Hou, W. -S.; Hsiung, Y.; Kao, K. Y.; Lei, Y. J.; Lu, R. -S.; Majumder, D.; Petrakou, E.; Shi, X.; Shiu, J. G.; Tzeng, Y. M.; Wan, X.; Wang, M.; Asavapibhop, B.; Srimanobhas, N.; Adiguzel, A.; Bakirci, M. N.; Cerci, S.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Karaman, T.; Karapinar, G.; Kayis Topaksu, A.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sogut, K.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Vergili, L. N.; Vergili, M.; Akin, I. V.; Aliev, T.; Bilin, B.; Bilmis, S.; Deniz, M.; Gamsizkan, H.; Guler, A. M.; Ocalan, K.; Ozpineci, A.; Serin, M.; Sever, R.; Surat, U. E.; Yalvac, M.; Yildirim, E.; Zeyrek, M.; Gülmez, E.; Isildak, B.; Kaya, M.; Kaya, O.; Ozkorucuklu, S.; Sonmez, N.; Cankocak, K.; Levchuk, L.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Frazier, R.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Kreczko, L.; Metson, S.; Newbold, D. M.; Nirunpong, K.; Poll, A.; Senkin, S.; Smith, V. J.; Williams, T.; Basso, L.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Jackson, J.; Kennedy, B. W.; Olaiya, E.; Petyt, D.; Radburn-Smith, B. C.; Shepherd-Themistocleous, C. H.; Tomalin, I. R.; Womersley, W. J.; Bainbridge, R.; Ball, G.; Beuselinck, R.; Buchmuller, O.; Colling, D.; Cripps, N.; Cutajar, M.; Dauncey, P.; Davies, G.; Della Negra, M.; Ferguson, W.; Fulcher, J.; Futyan, D.; Gilbert, A.; Guneratne Bryer, A.; Hall, G.; Hatherell, Z.; Hays, J.; Iles, G.; Jarvis, M.; Karapostoli, G.; Lyons, L.; Magnan, A. -M.; Marrouche, J.; Mathias, B.; Nandi, R.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Pioppi, M.; Raymond, D. M.; Rogerson, S.; Rose, A.; Ryan, M. 
J.; Seez, C.; Sharp, P.; Sparrow, A.; Stoye, M.; Tapper, A.; Vazquez Acosta, M.; Virdee, T.; Wakefield, S.; Wardle, N.; Whyntie, T.; Chadwick, M.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Martin, W.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Hatakeyama, K.; Liu, H.; Scarborough, T.; Charaf, O.; Henderson, C.; Rumerio, P.; Avetisyan, A.; Bose, T.; Fantasia, C.; Heister, A.; St. John, J.; Lawson, P.; Lazic, D.; Rohlf, J.; Sperka, D.; Sulak, L.; Alimena, J.; Bhattacharya, S.; Christopher, G.; Cutts, D.; Demiragli, Z.; Ferapontov, A.; Garabedian, A.; Heintz, U.; Jabeen, S.; Kukartsev, G.; Laird, E.; Landsberg, G.; Luk, M.; Narain, M.; Nguyen, D.; Segala, M.; Sinthuprasith, T.; Speer, T.; Breedon, R.; Breto, G.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Dolen, J.; Erbacher, R.; Gardner, M.; Houtz, R.; Ko, W.; Kopecky, A.; Lander, R.; Mall, O.; Miceli, T.; Pellett, D.; Ricci-Tam, F.; Rutherford, B.; Searle, M.; Smith, J.; Squires, M.; Tripathi, M.; Vasquez Sierra, R.; Yohay, R.; Andreev, V.; Cline, D.; Cousins, R.; Duris, J.; Erhan, S.; Everaerts, P.; Farrell, C.; Hauser, J.; Ignatenko, M.; Jarvis, C.; Rakness, G.; Schlein, P.; Traczyk, P.; Valuev, V.; Weber, M.; Babb, J.; Clare, R.; Dinardo, M. E.; Ellison, J.; Gary, J. W.; Giordano, F.; Hanson, G.; Liu, H.; Long, O. R.; Luthra, A.; Nguyen, H.; Paramesvaran, S.; Sturdy, J.; Sumowidagdo, S.; Wilken, R.; Wimpenny, S.; Andrews, W.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; Evans, D.; Holzner, A.; Kelley, R.; Lebourgeois, M.; Letts, J.; Macneill, I.; Mangano, B.; Padhi, S.; Palmer, C.; Petrucciani, G.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Sudano, E.; Tadel, M.; Tu, Y.; Vartak, A.; Wasserbaech, S.; Würthwein, F.; Yagil, A.; Yoo, J.; Barge, D.; Bellan, R.; Campagnari, C.; D’Alfonso, M.; Danielson, T.; Flowers, K.; Geffert, P.; Golf, F.; Incandela, J.; Justus, C.; Kalavase, P.; Kovalskyi, D.; Krutelyov, V.; Lowette, S.; Magaña Villalba, R.; Mccoll, N.; Pavlunin, V.; Ribnik, J.; Richman, J.; Rossin, R.; Stuart, D.; To, W.; West, C.; Apresyan, A.; Bornheim, A.; Chen, Y.; Di Marco, E.; Duarte, J.; Gataullin, M.; Ma, Y.; Mott, A.; Newman, H. B.; Rogan, C.; Spiropulu, M.; Timciuc, V.; Veverka, J.; Wilkinson, R.; Xie, S.; Yang, Y.; Zhu, R. Y.; Azzolini, V.; Calamba, A.; Carroll, R.; Ferguson, T.; Iiyama, Y.; Jang, D. W.; Liu, Y. F.; Paulini, M.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Drell, B. R.; Ford, W. T.; Gaz, A.; Luiggi Lopez, E.; Smith, J. G.; Stenson, K.; Ulmer, K. A.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Eggert, N.; Gibbons, L. K.; Heltsley, B.; Hopkins, W.; Khukhunaishvili, A.; Kreis, B.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Ryd, A.; Salvati, E.; Sun, W.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Vaughan, J.; Weng, Y.; Winstrom, L.; Wittich, P.; Winn, D.; Abdullin, S.; Albrow, M.; Anderson, J.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Burkett, K.; Butler, J. N.; Chetluru, V.; Cheung, H. W. K.; Chlebana, F.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gao, Y.; Green, D.; Gutsche, O.; Hanlon, J.; Harris, R. M.; Hirschauer, J.; Hooberman, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kunori, S.; Kwan, S.; Leonidopoulos, C.; Linacre, J.; Lincoln, D.; Lipton, R.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Mishra, K.; Mrenna, S.; Musienko, Y.; Newman-Holmes, C.; O’Dell, V.; Prokofyev, O.; Sexton-Kennedy, E.; Sharma, S.; Spalding, W. 
J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vidal, R.; Whitmore, J.; Wu, W.; Yang, F.; Yun, J. C.; Acosta, D.; Avery, P.; Bourilkov, D.; Chen, M.; Cheng, T.; Das, S.; De Gruttola, M.; Di Giovanni, G. P.; Dobur, D.; Drozdetskiy, A.; Field, R. D.; Fisher, M.; Fu, Y.; Furic, I. K.; Gartner, J.; Hugon, J.; Kim, B.; Konigsberg, J.; Korytov, A.; Kropivnitskaya, A.; Kypreos, T.; Low, J. F.; Matchev, K.; Milenovic, P.; Mitselmakher, G.; Muniz, L.; Park, M.; Remington, R.; Rinkevicius, A.; Sellers, P.; Skhirtladze, N.; Snowball, M.; Yelton, J.; Zakaria, M.; Gaultney, V.; Hewamanage, S.; Lebolo, L. M.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Adams, T.; Askew, A.; Bochenek, J.; Chen, J.; Diamond, B.; Gleyzer, S. V.; Haas, J.; Hagopian, S.; Hagopian, V.; Jenkins, M.; Johnson, K. F.; Prosper, H.; Veeraraghavan, V.; Weinberg, M.; Baarmand, M. M.; Dorney, B.; Hohlmann, M.; Kalakhety, H.; Vodopiyanov, I.; Yumiceva, F.; Adams, M. R.; Anghel, I. M.; Apanasevich, L.; Bai, Y.; Bazterra, V. E.; Betts, R. R.; Bucinskaite, I.; Callner, J.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Khalatyan, S.; Lacroix, F.; O’Brien, C.; Silkworth, C.; Strom, D.; Turner, P.; Varelas, N.; Akgun, U.; Albayrak, E. A.; Bilki, B.; Clarida, W.; Duru, F.; Griffiths, S.; Merlo, J. -P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Newsom, C. R.; Norbeck, E.; Onel, Y.; Ozok, F.; Sen, S.; Tan, P.; Tiras, E.; Wetzel, J.; Yetkin, T.; Yi, K.; Barnett, B. A.; Blumenfeld, B.; Bolognesi, S.; Fehling, D.; Giurgiu, G.; Gritsan, A. V.; Guo, Z. J.; Hu, G.; Maksimovic, P.; Swartz, M.; Whitbeck, A.; Baringer, P.; Bean, A.; Benelli, G.; Kenny, R. P.; Murray, M.; Noonan, D.; Sanders, S.; Stringer, R.; Tinti, G.; Wood, J. S.; Barfuss, A. F.; Bolton, T.; Chakaberia, I.; Ivanov, A.; Khalil, S.; Makouski, M.; Maravin, Y.; Shrestha, S.; Svintradze, I.; Gronberg, J.; Lange, D.; Rebassoo, F.; Wright, D.; Baden, A.; Calvert, B.; Eno, S. C.; Gomez, J. A.; Hadley, N. J.; Kellogg, R. G.; Kirn, M.; Kolberg, T.; Lu, Y.; Marionneau, M.; Mignerey, A. C.; Pedro, K.; Skuja, A.; Temple, J.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Bauer, G.; Bendavid, J.; Busza, W.; Butz, E.; Cali, I. A.; Chan, M.; Dutta, V.; Gomez Ceballos, G.; Goncharov, M.; Kim, Y.; Klute, M.; Krajczar, K.; Levin, A.; Luckey, P. D.; Ma, T.; Nahn, S.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Rudolph, M.; Stephans, G. S. F.; Stöckli, F.; Sumorok, K.; Sung, K.; Velicanu, D.; Wenger, E. A.; Wolf, R.; Wyslouch, B.; Yang, M.; Yilmaz, Y.; Yoon, A. S.; Zanetti, M.; Zhukova, V.; Cooper, S. I.; Dahmes, B.; De Benedetti, A.; Franzoni, G.; Gude, A.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Mans, J.; Pastika, N.; Rusack, R.; Sasseville, M.; Singovsky, A.; Tambe, N.; Turkewitz, J.; Cremaldi, L. M.; Kroeger, R.; Perera, L.; Rahmat, R.; Sanders, D. A.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Eads, M.; Keller, J.; Kravchenko, I.; Lazo-Flores, J.; Malik, S.; Snow, G. R.; Godshalk, A.; Iashvili, I.; Jain, S.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Haley, J.; Nash, D.; Orimoto, T.; Trocino, D.; Wood, D.; Zhang, J.; Anastassov, A.; Hahn, K. A.; Kubik, A.; Lusito, L.; Mucia, N.; Odell, N.; Ofierzynski, R. A.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Velasco, M.; Won, S.; Antonelli, L.; Berry, D.; Brinkerhoff, A.; Chan, K. M.; Hildreth, M.; Jessop, C.; Karmgard, D. 
J.; Kolb, J.; Lannon, K.; Luo, W.; Lynch, S.; Marinelli, N.; Morse, D. M.; Pearson, T.; Planer, M.; Ruchti, R.; Slaunwhite, J.; Valls, N.; Wayne, M.; Wolf, M.; Bylsma, B.; Durkin, L. S.; Hill, C.; Hughes, R.; Kotov, K.; Ling, T. Y.; Puigh, D.; Rodenburg, M.; Vuosalo, C.; Williams, G.; Winer, B. L.; Berry, E.; Elmer, P.; Halyo, V.; Hebda, P.; Hegeman, J.; Hunt, A.; Jindal, P.; Koay, S. A.; Lopes Pegna, D.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Piroué, P.; Quan, X.; Raval, A.; Saka, H.; Stickland, D.; Tully, C.; Werner, J. S.; Zuranski, A.; Brownson, E.; Lopez, A.; Mendez, H.; Ramirez Vargas, J. E.; Alagoz, E.; Barnes, V. E.; Benedetti, D.; Bolla, G.; Bortoletto, D.; De Mattia, M.; Everett, A.; Hu, Z.; Jones, M.; Koybasi, O.; Kress, M.; Laasanen, A. T.; Leonardo, N.; Maroussov, V.; Merkel, P.; Miller, D. H.; Neumeister, N.; Shipsey, I.; Silvers, D.; Svyatkovskiy, A.; Vidal Marono, M.; Yoo, H. D.; Zablocki, J.; Zheng, Y.; Guragain, S.; Parashar, N.; Adair, A.; Akgun, B.; Boulahouache, C.; Ecklund, K. M.; Geurts, F. J. M.; Li, W.; Padley, B. P.; Redjimi, R.; Roberts, J.; Zabel, J.; Betchart, B.; Bodek, A.; Chung, Y. S.; Covarelli, R.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Miner, D. C.; Vishnevskiy, D.; Zielinski, M.; Bhatti, A.; Ciesielski, R.; Demortier, L.; Goulianos, K.; Lungu, G.; Malik, S.; Mesropian, C.; Arora, S.; Barker, A.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Lath, A.; Panwalkar, S.; Park, M.; Patel, R.; Rekovic, V.; Robles, J.; Rose, K.; Salur, S.; Schnetzer, S.; Seitz, C.; Somalwar, S.; Stone, R.; Thomas, S.; Walker, M.; Cerizza, G.; Hollingsworth, M.; Spanier, S.; Yang, Z. C.; York, A.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Khotilovich, V.; Montalvo, R.; Osipenkov, I.; Pakhotin, Y.; Perloff, A.; Roe, J.; Safonov, A.; Sakuma, T.; Sengupta, S.; Suarez, I.; Tatarinov, A.; Toback, D.; Akchurin, N.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Jeong, C.; Kovitanggoon, K.; Lee, S. W.; Libeiro, T.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Florez, C.; Greene, S.; Gurrola, A.; Johns, W.; Kurt, P.; Maguire, C.; Melo, A.; Sharma, M.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Arenton, M. W.; Balazs, M.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Lin, C.; Neu, C.; Wood, J.; Gollapinni, S.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sakharov, A.; Anderson, M.; Belknap, D. A.; Borrello, L.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Friis, E.; Gray, L.; Grogg, K. S.; Grothe, M.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Klukas, J.; Lanaro, A.; Lazaridis, C.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Palmonari, F.; Pierro, G. A.; Ross, I.; Savin, A.; Smith, W. H.; Swanson, J.

    2013-09-01

    The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.

  10. Application of the simplified J-estimation scheme ARAMIS to mismatching welds in CCP specimens

    Energy Technology Data Exchange (ETDEWEB)

    Eripret, C.; Franco, C.; Gilles, P.

    1995-12-31

    J-based criteria give reasonable predictions of the failure behaviour of ductile cracked metallic structures, even if the material characterization may be sensitive to specimen size. In cracked welds, however, this phenomenon, due to stress triaxiality effects, can be enhanced. Furthermore, the application of conventional toughness measurement methods (ESIS or ASTM standards) has revealed a strong influence of the proportion of weld metal in the specimen. Several authors have shown the inadequacy of simplified J-estimation methods developed for homogeneous materials. These heterogeneity effects are mainly related to the mismatch ratio (the ratio of the weld metal yield strength to the base metal yield strength) as well as to the geometrical parameter h/(W-a) (weld width over ligament size). In order to make decisive progress in this field, the Atomic Energy Commission (CEA), the PWR manufacturer FRAMATOME, and the French utility (EDF) have launched a large research program on the behaviour of cracked piping welds. As part of this program, a new J-estimation scheme, called ARAMIS, has been developed to account for the influence of both materials, i.e. base metal and weld metal, on the structural resistance of cracked welds. It is shown that, when the mismatch is high and the ligament size is small compared with the weld width, a classical J-based method using the softer material properties is very conservative. By contrast, the ARAMIS method provides a good estimate of J, because it correctly predicts the shift of the cracked weld limit load due to the presence of the weld. The influence of geometrical parameters such as crack size, weld width, and specimen length is properly accounted for. (authors). 23 refs., 8 figs., 1 tab., 1 appendix.

  11. Extended Finite Element Method with Simplified Spherical Harmonics Approximation for the Forward Model of Optical Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Wei Li

    2012-01-01

    An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme for the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. The finite element calculation can therefore be carried out without the time-consuming generation of an internal boundary mesh. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, with its excess computational cost, can be avoided. XFEM thus facilitates application to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging.

  12. On simplified application of multidimensional Savitzky-Golay filters and differentiators

    Science.gov (United States)

    Shekhar, Chandra

    2016-02-01

    I propose a simplified approach to multidimensional Savitzky-Golay filtering, to enable its fast and easy implementation in scientific and engineering applications. The proposed method, which is derived from a generalized framework laid out by Thornley (D. J. Thornley, "Novel anisotropic multidimensional convolution filters for derivative estimation and reconstruction", in Proceedings of the International Conference on Signal Processing and Communications, November 2007), first transforms any given multidimensional problem into a unique one by transforming the coordinates of the sampled data nodes to unity-spaced, uniform data nodes, and then performs the filtering and calculates partial derivatives on the unity-spaced nodes. The calculated derivatives are then transported back onto the original data nodes using the chain rule of differentiation. The burden of the most cumbersome task, namely carrying out the filtering and obtaining the derivatives on the unity-spaced nodes, is almost eliminated by providing convolution coefficients for a number of convolution kernel sizes and polynomial orders, for up to four spatial dimensions. With these convolution coefficients available, the task of filtering at a data node reduces merely to the multiplication of two known matrices. Simplified strategies to adequately address near-boundary data nodes and to calculate partial derivatives there are also proposed. Finally, the proposed methodologies are applied to a three-dimensional experimentally obtained data set, which shows that multidimensional Savitzky-Golay filters and differentiators perform well in both the internal and the near-boundary regions of the domain.
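
    For readers unfamiliar with the underlying filter, the sketch below shows the one-dimensional case using SciPy's savgol_filter for smoothing and first-derivative estimation on uniformly spaced samples; the multidimensional convolution coefficients and coordinate transformations described above are not reproduced.

        import numpy as np
        from scipy.signal import savgol_filter

        x = np.linspace(0.0, 2.0 * np.pi, 200)                 # uniformly spaced sample nodes
        dx = x[1] - x[0]
        y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)   # noisy samples

        y_smooth = savgol_filter(y, window_length=21, polyorder=3)                   # smoothed signal
        dy_dx = savgol_filter(y, window_length=21, polyorder=3, deriv=1, delta=dx)   # derivative estimate
        print(f"max abs error of d/dx estimate vs cos(x): {np.max(np.abs(dy_dx - np.cos(x))):.3f}")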

  13. Simplified Methods Applied to Nonlinear Motion of Spar Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Haslum, Herbjoern Alf

    2000-07-01

    Simplified methods for prediction of motion response of spar platforms are presented. The methods are based on first and second order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large amplitude pitch motions coupled to extreme amplitude heave motions may arise when spar platforms are exposed to long period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability. It is caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher order pitch/heave coupling excites resonant heave response. This mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well-known Mathieu instability in pitch, which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur and also how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low-probability-of-occurrence phenomenon. Extreme wave periods are needed for the instability to be triggered, about 20 seconds for a typical 200m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft
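    The parametric (Mathieu-type) mechanism described above can be summarized, under strong simplifying assumptions, by a pitch equation whose restoring term is modulated by the heave motion; the notation below is assumed for illustration and is not taken from the thesis.

```latex
% Illustrative Mathieu-type pitch equation (notation assumed, not quoted from
% the thesis): the heave motion modulates the pitch restoring term.
\begin{equation}
  \ddot{\theta} + 2\zeta\omega_{\theta}\dot{\theta}
  + \omega_{\theta}^{2}\bigl[1 + \varepsilon\cos(\omega t)\bigr]\,\theta = 0
\end{equation}
% \omega_{\theta}: natural pitch frequency, \zeta: damping ratio,
% \varepsilon: modulation depth set by the heave amplitude. The principal
% parametric resonance occurs near \omega \approx 2\omega_{\theta}, i.e. when
% the excitation period is about half the natural pitch period, consistent
% with the classical pitch Mathieu instability mentioned in the abstract.
```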

  14. A Simplified Method for Laboratory Preparation of Organ Specific Indium 113m Compounds

    Energy Technology Data Exchange (ETDEWEB)

    Adatepe, M H; Potchen, E James [Washington University School of Medicine, St. Louis (United States)

    1969-03-15

    Generator systems producing short-lived nuclides from longer-lived parents have distinct clinical advantages. They are more economical, result in a lower radiation dose, and can make short-lived scanning readily available even in areas remote from rapid radiopharmaceutical delivery services. The {sup 113}Sn-{sup 113m}In generator has the additional advantage that, as a transition metal, indium can be readily complexed into organ-specific preparations. {sup 113}Sn, a reactor-produced nuclide with a 118 day half-life, is adsorbed on a zirconium or silica gel column. The generator is eluted with 5 to 8 ml of 0.05 N HCl solution at pH 1.3-1.4. The daughter nuclide, {sup 113m}In, has a half-life of 1.7 hours and emits a 393 keV monoenergetic gamma ray. Previous methods for labeling organ-specific complexes with {sup 113m}In required terminal autoclaving before injection. With the recent introduction of sterile, apyrogenic {sup 113}Sn-{sup 113m}In generators, we have developed a simplified technique for the laboratory preparation of indium-labeled compounds. This method eliminates autoclaving and titration, enabling us to pre-prepare organ-specific complexes for blood pool, liver, spleen, brain, kidney and lung scanning.

  15. An improved method for preparing Agrobacterium cells that simplifies the Arabidopsis transformation protocol

    Directory of Open Access Journals (Sweden)

    Ülker Bekir

    2006-10-01

    Full Text Available Abstract Background The Agrobacterium vacuum (Bechtold et al. 1993) and floral-dip (Clough and Bent 1998) methods are very efficient for generating transgenic Arabidopsis plants. These methods allow plant transformation without the need for tissue culture. Large volumes of bacterial cultures grown in liquid media are necessary for both of these transformation methods. This limits the number of transformations that can be done at a given time due to the need for expensive large shakers and limited space on them. Additionally, the bacterial colonies derived from solid media necessary for starting these liquid cultures often fail to grow in such large volumes. Therefore, the optimum stage of plant material for transformation is often missed and new plant material needs to be grown. Results To avoid problems associated with large bacterial liquid cultures, we investigated whether bacteria grown on plates are also suitable for plant transformation. We demonstrate here that bacteria grown on plates can be used with similar efficiency for transforming plants even after one week of storage at 4°C. This makes it much easier to synchronize Agrobacterium and plants for transformation. DNA gel blot analysis was carried out on the T1 plants surviving the herbicide selection and demonstrated that the surviving plants are indeed transgenic. Conclusion The simplified method works as efficiently as the previously reported protocols and significantly reduces the workload, cost and time. Additionally, the protocol reduces the risk of large scale contaminations involving GMOs. Most importantly, many more independent transformations per day can be performed using this modified protocol.

  16. Simplified Estimation of Tritium Inventory in Stainless Steel

    International Nuclear Information System (INIS)

    Willms, R. Scott

    2005-01-01

    An important part of tritium facility waste management is estimating the residual tritium inventory in stainless steel. This was needed as part of the decontamination and decommissioning associated with the Tritium Systems Test Assembly at Los Alamos National Laboratory. In particular, the disposal path for three large tanks would vary substantially depending on the tritium inventory in the stainless steel walls. For this purpose the time-dependent diffusion equation was solved using previously measured parameters. These results were compared to previous work that measured the tritium inventory in the stainless steel wall of a 50-L tritium container. Good agreement was observed. These results are reduced to a simple algebraic equation that can readily be used to estimate tritium inventories in room temperature stainless steel based on tritium partial pressure and exposure time. Results are available for both constant partial pressure exposures and for varying partial pressures. Movies of the time-dependent results were prepared, which are particularly helpful for interpreting results and drawing conclusions.
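    The record does not reproduce the simple algebraic equation, but the underlying physics can be sketched with a textbook one-dimensional model: a surface concentration set by Sieverts'-law solubility and uptake into an effectively semi-infinite wall. The parameter values below are placeholders, not the report's measured constants or its fitted expression.

```python
import numpy as np

# Hedged sketch: areal tritium uptake of a thick wall held at constant tritium
# partial pressure, using Sieverts' law for the surface concentration and the
# semi-infinite-slab diffusion result I(t) = 2 * Cs * sqrt(D * t / pi).
def tritium_inventory_per_area(pressure_pa, time_s, sieverts_k, diff_m2_s):
    c_surface = sieverts_k * np.sqrt(pressure_pa)   # dissolved concentration
    return 2.0 * c_surface * np.sqrt(diff_m2_s * time_s / np.pi)

# one year at 1 kPa with hypothetical room-temperature constants for steel
inventory = tritium_inventory_per_area(1.0e3, 3.15e7,
                                       sieverts_k=1.0e-3, diff_m2_s=2.0e-16)
```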

  17. Actual evapotranspiration modeling using the operational Simplified Surface Energy Balance (SSEBop) approach

    Science.gov (United States)

    Savoca, Mark E.; Senay, Gabriel B.; Maupin, Molly A.; Kenny, Joan F.; Perry, Charles A.

    2013-01-01

    Remote-sensing technology and surface-energy-balance methods can provide accurate and repeatable estimates of actual evapotranspiration (ETa) when used in combination with local weather datasets over irrigated lands. Estimates of ETa may be used to provide a consistent, accurate, and efficient approach for estimating regional water withdrawals for irrigation and associated consumptive use (CU), especially in arid cropland areas that require supplemental water due to insufficient natural supplies from rainfall, soil moisture, or groundwater. ETa in these areas is considered equivalent to CU, and represents the part of applied irrigation water that is evaporated and/or transpired, and is not available for immediate reuse. A recent U.S. Geological Survey study demonstrated the application of the remote-sensing-based Simplified Surface Energy Balance (SSEB) model to estimate 10-year average ETa at 1-kilometer resolution on national and regional scales, and compared those ETa values to the U.S. Geological Survey’s National Water-Use Information Program’s 1995 county estimates of CU. The operational version of the SSEB method (SSEBop) is now used to construct monthly, county-level ETa maps of the conterminous United States for the years 2000, 2005, and 2010. The performance of the SSEBop was evaluated using eddy covariance flux tower datasets compiled for 2005, and the results showed a strong linear relationship in different land cover types across diverse ecosystems in the conterminous United States (correlation coefficient [r] ranging from 0.75 to 0.95). For example, r was 0.75 for woody savannas, 0.75 for grassland, 0.82 for forest, 0.84 for cropland, 0.89 for shrubland, and 0.95 for urban areas. A comparison of the remote-sensing SSEBop method for estimating ETa and the Hamon temperature method for estimating potential ET (ETp) also was conducted, using regressions of all available county averages of ETa for 2005 and 2010, and yielded correlations of r = 0

  18. 29 CFR 2520.104-48 - Alternative method of compliance for model simplified employee pensions-IRS Form 5305-SEP.

    Science.gov (United States)

    2010-07-01

    ... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...

  19. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien; Claudel, Christian G.

    2015-01-01

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  20. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien

    2015-12-30

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
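    One way to picture the cycle-estimation step described in these two records is to score candidate cycle lengths by how tightly the probe-vehicle transition times cluster modulo the cycle. The sketch below is only an illustration of that idea, not the patented method; all names, the candidate range, and the circular-clustering score are assumptions.

```python
import numpy as np

# Hedged sketch: score candidate signal cycle lengths by the circular
# clustering of probe-vehicle transition times (starts after a green).
def score_cycle(transition_times, cycle_s):
    phases = 2 * np.pi * (np.asarray(transition_times) % cycle_s) / cycle_s
    # mean resultant length: 1.0 when all transitions share the same phase
    return np.abs(np.mean(np.exp(1j * phases)))

def estimate_cycle(transition_times, candidates=np.arange(30.0, 181.0, 0.5)):
    scores = [score_cycle(transition_times, c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# synthetic transitions roughly every 90 s, only to make the sketch runnable
times = 90.0 * np.arange(40) + np.random.default_rng(0).normal(0, 3.0, 40)
print(estimate_cycle(times))
```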

  1. A Simplified Control Method for Tie-Line Power of DC Micro-Grid

    Directory of Open Access Journals (Sweden)

    Yanbo Che

    2018-04-01

    Full Text Available Compared with the AC micro-grid, the DC micro-grid has low energy loss and no frequency-stability issues, which makes it better suited to integrating distributed energy; the DC micro-grid therefore has good potential for development. A variety of renewable energy sources are included in the DC micro-grid; these are easily affected by the environment, causing fluctuations of the DC voltage. For a grid-connected DC micro-grid with a droop control strategy, the tie-line power is affected by fluctuations in the DC voltage, which sets higher requirements for coordinated control of the DC micro-grid. This paper presents a simplified control method, suitable for a DC micro-grid with the droop control strategy, to maintain a constant tie-line power. By coordinating the designs of the droop control characteristics of the generators, energy storage units and grid-connected inverter, a dead band is introduced into the droop control to improve the system performance. The tie-line power in the steady state is constant. When a large disturbance occurs, the AC power grid can provide power support to the micro-grid in time. A simulation example verifies the effectiveness of the proposed control strategy.
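    A minimal sketch of a voltage-power droop with a dead band for the grid-connected converter is given below. The voltage level, dead-band width, gain, and sign convention are placeholders chosen for illustration, not the paper's design values.

```python
# Hedged sketch: V-P droop with a dead band for the grid-connected converter of
# a DC micro-grid. Inside the dead band the tie-line power stays at a constant
# set-point; outside it, the AC grid supports the DC bus in proportion to the
# voltage deviation beyond the band. All numbers are placeholders.
def tie_line_power_ref(v_dc, v_nom=700.0, dead_band=10.0,
                       droop_gain=5.0e3, p_tie_set=0.0):
    dv = v_dc - v_nom
    if abs(dv) <= dead_band:
        return p_tie_set                       # constant tie-line power
    edge = dead_band if dv > 0 else -dead_band
    # low DC voltage -> import power (positive into the DC bus), high -> export
    return p_tie_set - droop_gain * (dv - edge)

print(tie_line_power_ref(685.0))   # low bus voltage: grid injects ~25 kW
```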

  2. [Simplified laparoscopic gastric bypass. Initial experience].

    Science.gov (United States)

    Hernández-Miguelena, Luis; Maldonado-Vázquez, Angélica; Cortes-Romano, Pablo; Ríos-Cruz, Daniel; Marín-Domínguez, Raúl; Castillo-González, Armando

    2014-01-01

    Obesity surgery includes various gastrointestinal procedures. Roux-en-Y gastric bypass is the prototype of mixed procedures, being the most practiced worldwide. A similar and novel technique, called "simplified bypass," has been adopted by Dr. Almino Cardoso Ramos and Dr. Manoel Galvao; it has gained acceptance because it is easier to perform and gives results very similar to the conventional technique. The aim of this study is to describe the results of the simplified gastric bypass for treatment of morbid obesity in our institution. We performed a descriptive, retrospective study of all patients undergoing simplified gastric bypass from January 2008 to July 2012 in the obesity clinic of a private hospital in Mexico City. A total of 90 patients diagnosed with morbid obesity underwent simplified gastric bypass. Complications occurred in 10% of patients; the most frequent were bleeding and internal hernia. Mortality in the study period was 0%. The average weight loss at 12 months was 72.7%. Simplified gastric bypass surgery is safe, with good mid-term results and adequate weight loss in 71% of cases.

  3. Evaluation of selected static methods used to estimate element mobility, acid-generating and acid-neutralizing potentials associated with geologically diverse mining wastes

    Science.gov (United States)

    Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather

    2015-01-01

    A comparison study of selected static leaching and acid–base accounting (ABA) methods using a mineralogically diverse set of 12 modern-style, metal mine waste samples was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted in order to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests including paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), the U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and the USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared in order to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared to results produced from the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid–base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid generating, and acid neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid–base account of the samples. Overall, this study showed that while most of the well-established methods are useful and effective, the use of a simplified leaching test and the NABA acid–base accounting method provide investigators fast

  4. Reverse survival method of fertility estimation: An evaluation

    Directory of Open Access Journals (Sweden)

    Thomas Spoorenberg

    2014-07-01

    Full Text Available Background: For the most part, demographers have relied on the ever-growing body of sample surveys collecting full birth histories to derive total fertility estimates in less statistically developed countries. Yet alternative methods of fertility estimation can return very consistent total fertility estimates by using only basic demographic information. Objective: This paper evaluates the consistency and sensitivity of the reverse survival method -- a fertility estimation method based on population data by age and sex collected in one census or a single-round survey. Methods: A simulated population was first projected over 15 years using a set of fertility and mortality age and sex patterns. The projected population was then reverse survived using the Excel template FE_reverse_4.xlsx, provided with Timæus and Moultrie (2012). Reverse survival fertility estimates were then compared for consistency to the total fertility rates used to project the population. The sensitivity was assessed by introducing a series of distortions in the projection of the population and comparing the difference implied in the resulting fertility estimates. Results: The reverse survival method produces total fertility estimates that are very consistent and hardly affected by erroneous assumptions on the age distribution of fertility or by the use of incorrect mortality levels, trends, and age patterns. The quality of the age and sex population data that is 'reverse survived' determines the consistency of the estimates. The contribution of the method to the estimation of past and present trends in total fertility is illustrated through its application to the population data of five countries characterized by distinct fertility levels and data quality issues. Conclusions: Notwithstanding its simplicity, the reverse survival method of fertility estimation has seldom been applied. The method can be applied to a large body of existing and easily available population data
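    The core arithmetic of reverse survival can be sketched briefly: children enumerated at age x are divided by the probability of surviving from birth to age x to recover the births that occurred x years earlier. The sketch below is not the FE_reverse_4.xlsx template, and all inputs are hypothetical; turning births into rates would further require reverse-survived counts of women of reproductive age.

```python
import numpy as np

# Hedged sketch of the reverse-survival step: births x years before the census
# are estimated from the children currently aged x.
def reverse_survive_births(children_by_age, survival_to_age):
    """births[x] = children aged x / probability of surviving to age x."""
    c = np.asarray(children_by_age, dtype=float)
    l = np.asarray(survival_to_age, dtype=float)
    return c / l

births = reverse_survive_births([100_000, 98_500, 97_200],   # ages 0, 1, 2
                                [0.960, 0.950, 0.945])        # model life table
```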

  5. Simplified approach to dynamic process modelling. Annex 4

    International Nuclear Information System (INIS)

    Danilytchev, A.; Elistratov, D.; Stogov, V.

    2010-01-01

    This document presents the OKBM contribution to the analysis of a benchmark of the BN-600 reactor hybrid core with simultaneous loading of uranium fuel and MOX, within the framework of the international IAEA Co-ordinated Research Project 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. In accordance with Action 12 defined during the second RCM, the simplified transient analysis was carried out on the basis of the reactivity coefficient sets presented by all CRP participants. The purpose of the present comparison is to evaluate the spread in the basic transient parameters resulting from the spread in the reactivity coefficients used. The initial stage of a ULOF accident was calculated on the simplified model using the SAS4A code.

  6. Simplified Method for Preliminary EIA of WE Installations based on New Technology Classification

    DEFF Research Database (Denmark)

    Margheritini, Lucia

    2010-01-01

    The Environmental Impact Assessment (EIA) is an environmental management instrument implemented worldwide. Full-scale WECs are expected to be subject to EIA. The consent application process can be very demanding for Wave Energy Converter (WEC) developers. The process is possibly aggravated...... depending on a few strategic parameters to simplify and speed up the scoping procedure and to provide an easier understanding of the technologies to the authorities and bodies involved in the EIA of WECs....

  7. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods.

  8. Evaluation of Residual Stresses using Ring Core Method

    Directory of Open Access Journals (Sweden)

    Holý S.

    2010-06-01

    Full Text Available The method for measuring residual stresses using the ring-core method is described. Basic relations are given for residual stress measurement along the specimen depth, and a simplified method is described for estimating the average residual stress in the drilled layer when the principal stress directions are known. The determination of the calibration coefficients using FEM is described. The sensitivity of the method is compared with that of the hole-drilling method. The device for applying the method is described and an example experiment is presented. The accuracy of the method is discussed. The influence of strain gauge rosette misalignment on the evaluated residual stresses is assessed using FEM.

  9. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation-Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed using Mutual Information Theory. The analytical expression of the BER is then given simply in terms of the estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel-aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
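    A hedged sketch of the GM-based estimator described above: the soft samples for one transmitted symbol are modelled as a Gaussian mixture, and the BER follows in closed form from the fitted weights, means, and variances. The mutual-information model-order selection is replaced here by a fixed number of components, and the zero decision threshold is an assumption.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Sketch: fit a Gaussian mixture to soft samples conditioned on a transmitted
# "+1" bit and compute the error probability analytically as the mixture mass
# below the decision threshold (0). Illustration of the idea, not the paper's
# exact estimator or model-order selection.
def gm_ber_estimate(soft_samples_bit1, n_components=3):
    gm = GaussianMixture(n_components=n_components).fit(
        np.asarray(soft_samples_bit1).reshape(-1, 1))
    w = gm.weights_
    mu = gm.means_.ravel()
    sigma = np.sqrt(gm.covariances_.ravel())
    # P(error | bit = +1) = sum_k w_k * Phi((0 - mu_k) / sigma_k)
    return float(np.sum(w * norm.cdf(-mu / sigma)))

# synthetic soft samples, only to make the sketch runnable
samples = 1.0 + 0.4 * np.random.default_rng(0).standard_normal(10_000)
print(gm_ber_estimate(samples))
```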

  10. Coalescent methods for estimating phylogenetic trees.

    Science.gov (United States)

    Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V

    2009-10-01

    We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.

  11. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    The applicability of the bootstrap method to estimating the statistical error of the Feynman-α method, one of the subcritical measurement techniques based on reactor noise analysis, is investigated. In the Feynman-α method, the statistical error can be estimated simply from multiple measurements of reactor noise; however, this requires additional measurement time to repeat the measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate the proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in the nuclear fuel, at the Kyoto University Criticality Assembly. Through this measurement, it is confirmed that the bootstrap method can be applied to approximately estimate the statistical error of measurement results obtained by the Feynman-α method. (author)
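    A minimal sketch of the bootstrap idea applied to a Feynman-Y value (variance-to-mean ratio of gated counts minus one) is shown below. The gate counts are synthetic, and the plain nonparametric resampling shown here may be simpler than the scheme used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch: bootstrap error of the Feynman-Y value from a single set of
# gated neutron counts (no repeated measurements needed).
def feynman_y(counts):
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

def bootstrap_std(counts, n_boot=2000):
    counts = np.asarray(counts)
    ys = [feynman_y(rng.choice(counts, size=counts.size, replace=True))
          for _ in range(n_boot)]
    return float(np.std(ys, ddof=1))

# synthetic gate counts, only to make the sketch runnable
gates = rng.poisson(lam=12.0, size=5000)
y_hat, y_err = feynman_y(gates), bootstrap_std(gates)
```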

  12. Simplifying the audit of risk factor recording and control

    DEFF Research Database (Denmark)

    Zhao, Min; Cooney, Marie Therese; Klipstein-Grobusch, Kerstin

    2016-01-01

    BACKGROUND: To simplify the assessment of the recording and control of coronary heart disease risk factors in different countries and regions. DESIGN: The SUrvey of Risk Factors (SURF) is an international clinical audit. METHODS: Data on consecutive patients with established coronary heart disease...

  13. Online Internal Temperature Estimation for Lithium-Ion Batteries Based on Kalman Filter

    OpenAIRE

    Jinlei Sun; Guo Wei; Lei Pei; Rengui Lu; Kai Song; Chao Wu; Chunbo Zhu

    2015-01-01

    Estimation of the battery internal temperature is important for thermal safety in applications, because the internal temperature is hard to measure directly. In this work, an online internal temperature estimation method based on a simplified thermal model using a Kalman filter is proposed. As an improvement, the influences of entropy change and overpotential on heat generation are analyzed quantitatively. The model parameters are identified through a current pulse test. The charge/discharg...
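    The combination of a simplified thermal model with a Kalman filter can be pictured with the scalar sketch below: a first-order core-temperature model driven by heat generation and surface heat exchange, filtered against the measured surface temperature. The thermal parameters, noise levels, and the use of the surface temperature as the measurement are all assumptions for illustration, not the paper's identified model.

```python
import numpy as np

# Hedged sketch: scalar Kalman filter on a first-order thermal model
#   dT_core/dt = (q - (T_core - T_surf)/R) / C
# with placeholder parameters; not the paper's identified model.
def kf_core_temperature(t_surface, q_gen, dt=1.0,
                        c_th=50.0, r_th=2.0, q_proc=1e-3, r_meas=0.25):
    a = 1.0 - dt / (r_th * c_th)          # discrete state transition
    t_est, p = t_surface[0], 1.0          # initial state and covariance
    out = []
    for ts, q in zip(t_surface, q_gen):
        # predict from heat generation and exchange with the surface
        t_pred = a * t_est + dt * (q / c_th + ts / (r_th * c_th))
        p = a * p * a + q_proc
        # update, using the surface temperature as a crude measurement proxy
        k = p / (p + r_meas)
        t_est = t_pred + k * (ts - t_pred)
        p = (1.0 - k) * p
        out.append(t_est)
    return np.array(out)

core = kf_core_temperature(t_surface=[25.0, 25.4, 25.9], q_gen=[5.0, 5.0, 5.0])
```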

  14. Automatic estimation of pressure-dependent rate coefficients.

    Science.gov (United States)

    Allen, Joshua W; Goldsmith, C Franklin; Green, William H

    2012-01-21

    A general framework is presented for accurately and efficiently estimating the phenomenological pressure-dependent rate coefficients for reaction networks of arbitrary size and complexity using only high-pressure-limit information. Two aspects of this framework are discussed in detail. First, two methods of estimating the density of states of the species in the network are presented, including a new method based on characteristic functional group frequencies. Second, three methods of simplifying the full master equation model of the network to a single set of phenomenological rates are discussed, including a new method based on the reservoir state and pseudo-steady state approximations. Both sets of methods are evaluated in the context of the chemically-activated reaction of acetyl with oxygen. All three simplifications of the master equation are usually accurate, but each fails in certain situations, which are discussed. The new methods usually provide good accuracy at a computational cost appropriate for automated reaction mechanism generation.

  15. Automatic estimation of pressure-dependent rate coefficients

    KAUST Repository

    Allen, Joshua W.; Goldsmith, C. Franklin; Green, William H.

    2012-01-01

    A general framework is presented for accurately and efficiently estimating the phenomenological pressure-dependent rate coefficients for reaction networks of arbitrary size and complexity using only high-pressure-limit information. Two aspects of this framework are discussed in detail. First, two methods of estimating the density of states of the species in the network are presented, including a new method based on characteristic functional group frequencies. Second, three methods of simplifying the full master equation model of the network to a single set of phenomenological rates are discussed, including a new method based on the reservoir state and pseudo-steady state approximations. Both sets of methods are evaluated in the context of the chemically-activated reaction of acetyl with oxygen. All three simplifications of the master equation are usually accurate, but each fails in certain situations, which are discussed. The new methods usually provide good accuracy at a computational cost appropriate for automated reaction mechanism generation. This journal is © the Owner Societies.

  16. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    Science.gov (United States)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of
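    In the spirit of the covariance formulation summarized above, the epistemic contribution can be written as a prediction covariance added to the data covariance; the notation below is assumed for illustration and is not quoted from the paper.

```latex
% Hedged sketch of a misfit covariance that combines observation and
% prediction uncertainties; notation assumed, not quoted from the paper.
\begin{align}
  C_{\chi} &= C_{d} + C_{p}, &
  C_{p} &\approx K_{\mu}\, C_{\mu}\, K_{\mu}^{\mathsf{T}}, &
  K_{\mu} &= \frac{\partial \mathbf{g}(\mathbf{m},\mu)}{\partial \mu},
\end{align}
% where C_d is the observational covariance, C_mu the covariance of the
% uncertain fault-geometry parameters mu (e.g. dip, position), g the forward
% model, and K_mu its sensitivity to small geometry perturbations.
```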

  17. Simplified expressions of the T-matrix integrals for electromagnetic scattering.

    Science.gov (United States)

    Somerville, Walter R C; Auguié, Baptiste; Le Ru, Eric C

    2011-09-01

    The extended boundary condition method, also called the null-field method, provides a semianalytic solution to the problem of electromagnetic scattering by a particle by constructing a transition matrix (T-matrix) that links the scattered field to the incident field. This approach requires the computation of specific integrals over the particle surface, which are typically evaluated numerically. We introduce here a new set of simplified expressions for these integrals in the commonly studied case of axisymmetric particles. Simplifications are obtained using the differentiation properties of the radial functions (spherical Bessel) and angular functions (associated Legendre functions) and integrations by parts. The resulting simplified expressions not only lead to faster computations, but also reduce the risks of loss of precision and provide a simpler framework for further analytical work.

  18. Characteristics estimation of coal liquefaction residue; Sekitan ekika zansa seijo no suisan ni kansuru kento

    Energy Technology Data Exchange (ETDEWEB)

    Itonaga, M.; Imada, K. [Nippon Steel Corp., Tokyo (Japan); Okada, Y.; Inokuchi, K. [Mitsui SRC Development Co. Ltd., Tokyo (Japan)

    1996-10-28

    The paper studied the possibility of estimating the characteristics of coal liquefaction residue from the liquefaction conditions for a fixed coal type in the NEDOL-process coal liquefaction PSU. Wyoming coal was used for the study, together with the previously proposed simplified liquefaction reaction models. Among the material balances described by the models, those of asphaltene, preasphaltene, and THF-insoluble matter determine the residue composition; the ash content is calculated separately from an ash balance. The reaction rate constants of the simplified liquefaction reaction models that influence the residue composition were obtained by multiple regression from past experimental results. An expression for estimating the residue viscosity was derived from the residue ash content and composition. When the residue composition is estimated by the model from the liquefaction conditions and the residue viscosity is then calculated from it, the residue viscosity increases with liquefaction temperature, in good agreement with the measured results. Once a simplified liquefaction model has been established for a given coal type, the residue characteristics can be estimated even for liquefaction conditions that have not previously been tested, provided a certain amount of data on residue composition and characteristics has been accumulated. 4 refs., 4 figs., 4 tabs.

  19. Simplified Fuzzy Control for Flux-Weakening Speed Control of IPMSM Drive

    Directory of Open Access Journals (Sweden)

    M. J. Hossain

    2011-01-01

    Full Text Available This paper presents a simplified fuzzy logic-based speed control scheme for an interior permanent magnet synchronous motor (IPMSM) above the base speed using a flux-weakening method. In this work, nonlinear expressions for the d-axis and q-axis currents of the IPMSM have been derived and subsequently incorporated in the control algorithm in order to implement a fuzzy-based flux-weakening strategy for operating the motor above the base speed. The fundamentals of fuzzy logic algorithms as related to motor control applications are also illustrated. A simplified fuzzy logic speed controller (FLC) for the IPMSM drive has been designed and incorporated in the drive system to maintain high performance standards. The efficacy of the proposed simplified FLC-based IPMSM drive is verified by simulation under various dynamic operating conditions. The simplified FLC is found to be robust and efficient. Laboratory test results of a proportional-integral (PI) controller-based IPMSM drive have been compared with the simulated results of the fuzzy controller-based flux-weakening IPMSM drive system.
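    The record does not give the nonlinear current expressions, but a common steady-state form, neglecting stator resistance, limits the stator voltage and yields the d-axis current sketched below. Parameter names and values are illustrative and are not taken from the paper.

```python
import math

# Hedged sketch of a steady-state flux-weakening d-axis current reference for
# an IPMSM: with stator resistance neglected, the voltage-limit ellipse gives
#   (psi_m + Ld*id)^2 + (Lq*iq)^2 <= (Vmax / omega_e)^2.
# All parameters are placeholders.
def id_flux_weakening(omega_e, iq, v_max=300.0, psi_m=0.3, l_d=5e-3, l_q=9e-3):
    v_margin = (v_max / omega_e) ** 2 - (l_q * iq) ** 2
    if v_margin <= 0.0:
        raise ValueError("operating point outside the voltage limit")
    return (-psi_m + math.sqrt(v_margin)) / l_d   # negative above base speed

print(id_flux_weakening(omega_e=1500.0, iq=20.0))
```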

  20. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
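    For reference, the classical Hill estimator that the abstract's bootstrap estimator builds on is sketched below; the Monte Carlo/bootstrap layer itself is not reproduced, and the test data are synthetic.

```python
import numpy as np

# Sketch of the classical Hill estimator of a tail index from the k largest
# order statistics (the abstract's bootstrap version is not reproduced here).
def hill_estimator(sample, k):
    x = np.sort(np.asarray(sample, dtype=float))
    top = x[-k:]                       # k largest observations
    gamma = np.mean(np.log(top / x[-k - 1]))
    return 1.0 / gamma                 # tail-index estimate

rng = np.random.default_rng(1)
data = rng.pareto(a=2.5, size=10_000) + 1.0   # Pareto tail with index 2.5
alpha_hat = hill_estimator(data, k=200)
```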

  1. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  2. Finite element method solution of simplified P3 equation for flexible geometry handling

    International Nuclear Information System (INIS)

    Ryu, Eun Hyun; Joo, Han Gyu

    2011-01-01

    In order to efficiently obtain core flux solutions that are much closer to the transport solution than the diffusion solution, without being limited by the core geometry, the simplified P3 (SP3) equation is solved with the finite element method (FEM). A generic mesh generator, GMSH, is used to generate linear and quadratic mesh data. The linear system resulting from the SP3 FEM discretization is solved by Krylov subspace methods (KSM). A symmetric form of the SP3 equation is derived so that the conjugate gradient method can be applied instead of the KSMs for nonsymmetric linear systems. An optional iso-parametric quadratic mapping scheme, which selectively models nonlinear shapes with a quadratic mapping to prevent significant mismatch in local domain volume, is also implemented for efficient handling of arbitrary geometries. The gain in accuracy attainable by the SP3 solution over the diffusion solution is assessed by solving numerous benchmark problems with various core geometries, including the IAEA PWR problems involving rectangular fuels and the Takeda fast reactor problems involving hexagonal fuels. The reference transport solution is produced by the McCARD Monte Carlo code, and the multiplication factor and power distribution errors are assessed. In addition, the effect of quadratic mapping is examined for circular cell problems. It is shown that a significant accuracy gain is possible with the SP3 solution for the fast reactor problems, whereas only marginal improvement is noted for thermal reactor problems. The quadratic mapping is also quite effective in handling geometries with curvature. (author)

  3. Simplified approach to MR image quantification of the rheumatoid wrist: a pilot study

    International Nuclear Information System (INIS)

    Kamishima, Tamotsu; Terae, Satoshi; Shirato, Hiroki; Tanimura, Kazuhide; Aoki, Yuko; Shimizu, Masato; Matsuhashi, Megumi; Fukae, Jun; Kosaka, Naoki; Kon, Yujiro

    2011-01-01

    To determine an optimal threshold in a simplified 3D-based volumetry of abnormal signals in rheumatoid wrists utilizing contrast and non-contrast MR data, and investigate the feasibility and reliability of this method. MR images of bilateral hands of 15 active rheumatoid patients were assessed before and 5 months after the initiation of tocilizumab infusion protocol. The volumes of abnormal signals were measured on STIR and post-contrast fat-suppressed T1-weighted images. Three-dimensional volume rendering of the images was used for segmentation of the wrist by an MR technologist and a radiologist. Volumetric data were obtained with variable thresholding (1, 1.25, 1.5, 1.75, and 2 times the muscle signal), and were compared to clinical data and semiquantitative MR scoring (RAMRIS) of the wrist. Intra- and interobserver variability and time needed for volumetry measurements were assessed. The volumetric data correlated favorably with clinical parameters almost throughout the pre-determined thresholds. Interval differences in volumetric data correlated favorably with those of RAMRIS when the threshold was set at more than 1.5 times the muscle signal. The repeatability index was lower than the average of the interval differences in volumetric data when the threshold was set at 1.5-1.75 for STIR data. Intra- and interobserver variability for volumetry was 0.79-0.84. The time required for volumetry was shorter than that for RAMRIS. These results suggest that a simplified MR volumetric data acquisition may provide gross estimates of disease activity when the threshold is set properly. Such estimation can be achieved quickly by non-imaging specialists and without contrast administration. (orig.)

  4. Simplified approach to MR image quantification of the rheumatoid wrist: a pilot study

    Energy Technology Data Exchange (ETDEWEB)

    Kamishima, Tamotsu; Terae, Satoshi; Shirato, Hiroki [Hokkaido University Hospital, Department of Radiology, Sapporo City (Japan); Tanimura, Kazuhide; Aoki, Yuko; Shimizu, Masato; Matsuhashi, Megumi; Fukae, Jun [Hokkaido Medical Center for Rheumatic Diseases, Sapporo City, Hokkaido (Japan); Kosaka, Naoki [Tokeidai Memorial Hospital, Sapporo City, Hokkaido (Japan); Kon, Yujiro [St. Thomas' Hospital, Lupus Research Unit, The Rayne Institute, London (United Kingdom)

    2011-01-15

    To determine an optimal threshold in a simplified 3D-based volumetry of abnormal signals in rheumatoid wrists utilizing contrast and non-contrast MR data, and investigate the feasibility and reliability of this method. MR images of bilateral hands of 15 active rheumatoid patients were assessed before and 5 months after the initiation of tocilizumab infusion protocol. The volumes of abnormal signals were measured on STIR and post-contrast fat-suppressed T1-weighted images. Three-dimensional volume rendering of the images was used for segmentation of the wrist by an MR technologist and a radiologist. Volumetric data were obtained with variable thresholding (1, 1.25, 1.5, 1.75, and 2 times the muscle signal), and were compared to clinical data and semiquantitative MR scoring (RAMRIS) of the wrist. Intra- and interobserver variability and time needed for volumetry measurements were assessed. The volumetric data correlated favorably with clinical parameters almost throughout the pre-determined thresholds. Interval differences in volumetric data correlated favorably with those of RAMRIS when the threshold was set at more than 1.5 times the muscle signal. The repeatability index was lower than the average of the interval differences in volumetric data when the threshold was set at 1.5-1.75 for STIR data. Intra- and interobserver variability for volumetry was 0.79-0.84. The time required for volumetry was shorter than that for RAMRIS. These results suggest that a simplified MR volumetric data acquisition may provide gross estimates of disease activity when the threshold is set properly. Such estimation can be achieved quickly by non-imaging specialists and without contrast administration. (orig.)

  5. Simplification of an MCNP model designed for dose rate estimation

    Science.gov (United States)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  6. MINOS: A simplified Pn solver for core calculation

    International Nuclear Information System (INIS)

    Baudron, A.M.; Lautard, J.J.

    2007-01-01

    This paper describes a new generation of the neutronic core solver MINOS resulting from developments made in the DESCARTES project. For performance reasons, the numerical method of the existing MINOS solver in the SAPHYR system has been reused in the new system. It is based on the mixed-dual finite element approximation of the simplified transport equation. We have extended the previous method to the treatment of unstructured geometries composed of quadrilaterals, allowing us to treat geometries where fuel pins are exactly represented. For Cartesian geometries, the solver takes into account assembly discontinuity coefficients in the simplified Pn context. The solver has been rewritten in the C++ programming language using an object-oriented design. Its general architecture was reconsidered in order to improve its capability of evolution and its maintainability. Moreover, the performance of the previous version has been improved, mainly regarding the matrix construction time; this significantly improves the performance of the solver for industrial applications requiring thermal-hydraulic feedback and depletion calculations. (authors)

  7. Simplified analysis for liquid pathway studies

    International Nuclear Information System (INIS)

    Codell, R.B.

    1984-08-01

    The potential contamination of surface water via groundwater contamination from severe nuclear accidents is routinely analyzed during licensing reviews. This analysis is facilitated by the methods described in this report, which are codified into a BASIC-language computer program, SCREENLP. This program performs simplified calculations of groundwater and surface water transport and calculates population doses to potential users of the contaminated water, irrespective of possible mitigation methods. The results are then compared to similar analyses performed using data for the generic sites in NUREG-0440, the Liquid Pathway Generic Study, to determine whether the site being investigated would pose any unusual liquid pathway hazards

  8. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    Science.gov (United States)

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. Mathematically, the number of samples can be minimized by eliminating the redundant equations among those configured from the experimental data array. The section lengths of the artefact are measured at arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation; this can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.

  9. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to a pre-established sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to

  10. Lifetime estimates of a fusion reactor first wall by linear damage summation and strain range partitioning methods

    International Nuclear Information System (INIS)

    Liu, K.C.; Grossbeck, M.L.

    1979-01-01

    A generalized model of a first wall made of 20% cold-worked steel was examined for neutron wall loadings ranging from 2 to 5 MW/m2. A spectrum of simplified on-off duty cycles was assumed with a 95% burn time. Independent evaluations of cyclic lifetimes were based on two methods: the method of linear damage summation currently employed in ASME high-temperature design Code Case N-47 and that of strain range partitioning being studied for inclusion in the design code. An important point is that the latter method can incorporate a known decrease in ductility for materials subject to irradiation as a parameter, so low-cycle fatigue behavior can be estimated for irradiated material. Lifetimes predicted by the two methods agree reasonably well despite their diversity in concept. The lack of high-cycle fatigue data for the material tested at temperatures within the range of our interest precludes conclusions on the accuracy of the predicted results, but such data are forthcoming. The analysis includes stress relaxation due to thermal and irradiation-induced creep. Reduced ductility values from irradiations that simulate the environment of the first wall of a fusion reactor were used to estimate the lifetime of the first wall under irradiation. These results indicate that 20% cold-worked type 316 stainless steel could be used as a first-wall material meeting an 8 to 10 MW-year/m2 lifetime goal for a neutron wall loading of about 2 MW/m2 and a maximum temperature of about 500°C
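    The linear damage summation rule mentioned above can be written as a sum of cycle fractions and time fractions. The sketch below shows only this generic bookkeeping, with placeholder allowable-life data rather than the irradiated-material design curves used in the paper.

```python
# Hedged sketch of linear (cycle-fraction plus time-fraction) damage summation
# as used in high-temperature design practice; allowable-life values below are
# placeholders, not the paper's material curves.
def linear_damage(cycles_applied, cycles_allowed, hold_time_h, rupture_time_h):
    fatigue = sum(n / nf for n, nf in zip(cycles_applied, cycles_allowed))
    creep = sum(t / tr for t, tr in zip(hold_time_h, rupture_time_h))
    return fatigue + creep      # the total is typically compared to ~1

# one load level: 1e4 cycles of an allowable 5e4, 8e4 h hold of a 3e5 h rupture life
damage = linear_damage([1.0e4], [5.0e4], [8.0e4], [3.0e5])
```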

  11. On the Methods for Estimating the Corneoscleral Limbus.

    Science.gov (United States)

    Jesus, Danilo A; Iskander, D Robert

    2017-08-01

    The aim of this study was to develop computational methods for estimating the limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and against a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric ANOVA (Analysis of Variance) test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.

  12. Simplified Models for LHC New Physics Searches

    CERN Document Server

    Alves, Daniele; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R.Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; Freitas, Ayres; Gainer, James S.; Gershtein, Yuri; Gray, Richard; Gregoire, Thomas; Gripaios, Ben; Gunion, Jack; Han, Tao; Haas, Andy; Hansson, Per; Hewett, JoAnne; Hits, Dmitry; Hubisz, Jay; Izaguirre, Eder; Kaplan, Jared; Katz, Emanuel; Kilic, Can; Kim, Hyung-Do; Kitano, Ryuichiro; Koay, Sue Ann; Ko, Pyungwon; Krohn, David; Kuflik, Eric; Lewis, Ian; Lisanti, Mariangela; Liu, Tao; Liu, Zhen; Lu, Ran; Luty, Markus; Meade, Patrick; Morrissey, David; Mrenna, Stephen; Nojiri, Mihoko; Okui, Takemichi; Padhi, Sanjay; Papucci, Michele; Park, Michael; Park, Myeonghun; Perelstein, Maxim; Peskin, Michael; Phalen, Daniel; Rehermann, Keith; Rentala, Vikram; Roy, Tuhin; Ruderman, Joshua T.; Sanz, Veronica; Schmaltz, Martin; Schnetzer, Stephen; Schuster, Philip; Schwaller, Pedro; Schwartz, Matthew D.; Schwartzman, Ariel; Shao, Jing; Shelton, Jessie; Shih, David; Shu, Jing; Silverstein, Daniel; Simmons, Elizabeth; Somalwar, Sunil; Spannowsky, Michael; Spethmann, Christian; Strassler, Matthew; Su, Shufang; Tait, Tim; Thomas, Brooks; Thomas, Scott; Toro, Natalia; Volansky, Tomer; Wacker, Jay; Waltenberger, Wolfgang; Yavin, Itay; Yu, Felix; Zhao, Yue; Zurek, Kathryn

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the "Topologies for Early LHC Searches" workshop, held at SLAC in September of 2010, the purpose of which was to develop a...

  13. Comparison of methods for estimating carbon in harvested wood products

    International Nuclear Information System (INIS)

    Claudia Dias, Ana; Louro, Margarida; Arroja, Luis; Capela, Isabel

    2009-01-01

    There is a great diversity of methods for estimating carbon storage in harvested wood products (HWP) and, therefore, it is extremely important to agree internationally on the methods to be used in national greenhouse gas inventories. This study compares three methods for estimating carbon accumulation in HWP: the method suggested by Winjum et al. (Winjum method), the tier 2 method proposed by the IPCC Good Practice Guidance for Land Use, Land-Use Change and Forestry (GPG LULUCF) (GPG tier 2 method) and a method consistent with GPG LULUCF tier 3 methods (GPG tier 3 method). Carbon accumulation in HWP was estimated for Portugal under three accounting approaches: stock-change, production and atmospheric-flow. The uncertainty in the estimates was also evaluated using Monte Carlo simulation. The estimates of carbon accumulation in HWP obtained with the Winjum method differed substantially from the estimates obtained with the other methods, because this method tends to overestimate carbon accumulation with the stock-change and the production approaches and tends to underestimate carbon accumulation with the atmospheric-flow approach. The estimates of carbon accumulation provided by the GPG methods were similar, but the GPG tier 3 method reported the lowest uncertainties. For the GPG methods, the atmospheric-flow approach produced the largest estimates of carbon accumulation, followed by the production approach and the stock-change approach, in that order. A sensitivity analysis showed that using the 'best' available data on production and trade of HWP produces larger estimates of carbon accumulation than using data from the Food and Agriculture Organization. (author)

  14. Simplified method for preparation of concentrated exoproteins produced by Staphylococcus aureus grown on surface of cellophane bag containing liquid medium.

    Science.gov (United States)

    Ikigai, H; Seki, K; Nishihara, S; Masuda, S

    1988-01-01

    A simplified method for preparation of concentrated exoproteins including protein A and alpha-toxin produced by Staphylococcus aureus was successfully devised. The concentrated proteins were obtained by cultivating S. aureus organisms on the surface of a liquid medium-containing cellophane bag enclosed in a sterilized glass flask. With the same amount of medium, the total amount of proteins obtained by the method presented here was identical with that obtained by conventional liquid culture. The concentration of proteins obtained by the method, however, was high enough to observe their distinct bands stained on polyacrylamide gel electrophoresis. This method was considered quite useful not only for large-scale cultivation for the purification of staphylococcal proteins but also for small-scale study using the proteins. The precise description of the method was presented and its possible usefulness was discussed.

  15. Evaluation of non cyanide methods for hemoglobin estimation

    Directory of Open Access Journals (Sweden)

    Vinaya B Shah

    2011-01-01

Full Text Available Background: The hemiglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the waste disposal of large volumes of reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two other non-cyanide methods: alkaline hematin detergent (AHD-575) using Triton X-100 as lyser and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a colorimeter that is light emitting diode (LED) based. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. Statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ± 0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength instruments or with colorimeters at wavelengths of 530 nm and 580 nm. Correlation of these two methods was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous and did not affect the reliability of data determination, and the cost was less than that of the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe and quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated into hemoglobinometers

  16. FastCloning: a highly simplified, purification-free, sequence- and ligation-independent PCR cloning method

    Directory of Open Access Journals (Sweden)

    Lu Jia

    2011-10-01

Full Text Available Abstract Background Although a variety of methods and expensive kits are available, molecular cloning can be a time-consuming and frustrating process. Results Here we report a highly simplified, reliable, and efficient PCR-based cloning technique to insert any DNA fragment into a plasmid vector or into a gene (cDNA) in a vector at any desired position. With this method, the vector and insert are PCR amplified separately, with only 18 cycles, using a high fidelity DNA polymerase. The amplified insert has ends that overlap the ends of the amplified vector by ~16 bases. After DpnI digestion of the mixture of the amplified vector and insert to eliminate the DNA templates used in the PCR reactions, the mixture is directly transformed into competent E. coli cells to obtain the desired clones. This technique has many advantages over other cloning methods. First, it does not need gel purification of the PCR product or linearized vector. Second, there is no need for any cloning kit or specialized enzyme for cloning. Furthermore, with a reduced number of PCR cycles, it also decreases the chance of random mutations. In addition, this method is highly effective and reproducible. Finally, since this cloning method is also sequence independent, we demonstrated that it can be used for chimera construction, insertion, and multiple mutations spanning a stretch of DNA up to 120 bp. Conclusion Our FastCloning technique provides a very simple, effective, reliable, and versatile tool for molecular cloning, chimera construction, insertion of any DNA sequences of interest and also for multiple mutations in a short stretch of a cDNA.

  17. Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).

    Science.gov (United States)

    Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C

    2016-08-01

Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) and of low intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors that is approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy is dependent on the induced field intensity. In clinical practice, a software package called NovoTal™ uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when adapting transducer arrays to its location. Ideally, a personalized head model could be created for each patient, to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared in the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of electric field distribution.

  18. A simplified method for assessing cytotechnologist workload.

    Science.gov (United States)

    Vaickus, Louis J; Tambouret, Rosemary

    2014-01-01

    Examining cytotechnologist workflow and how it relates to job performance and patient safety is important in determining guidelines governing allowable workloads. This report discusses the development of a software tool that significantly simplifies the process of analyzing cytotechnologist workload while simultaneously increasing the quantity and resolution of the data collected. The program runs in Microsoft Excel and minimizes manual data entry and data transcription by automating as many tasks as is feasible. Data show the cytotechnologists tested were remarkably consistent in the amount of time it took them to screen a cervical cytology (Gyn) or a nongynecologic cytology (Non-Gyn) case and that this amount of time was directly proportional to the number of slides per case. Namely, the time spent per slide did not differ significantly in Gyn versus Non-Gyn cases (216 ± 3.4 seconds and 235 ± 24.6 seconds, respectively; P=.16). There was no significant difference in the amount of time needed to complete a Gyn case between the morning and the evening (314 ± 4.7 seconds and 312 ± 7.1 seconds; P=.39), but a significantly increased time spent screening Non-Gyn cases (slide-adjusted) in the afternoon hours (323 ± 20.1 seconds and 454 ± 67.6 seconds; P=.027), which was largely the result of significantly increased time spent on prescreening activities such as checking the electronic medical record (62 ± 6.9 seconds and 145 ± 36 seconds; P=.006). This Excel-based data collection tool generates highly detailed data in an unobtrusive manner and is highly customizable to the individual working environment and clinical climate. © 2013 American Cancer Society.

  19. Simplification of an MCNP model designed for dose rate estimation

    Directory of Open Access Journals (Sweden)

    Laptev Alexander

    2017-01-01

    Full Text Available A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  20. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

… In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation … maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions …
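
    As a concrete illustration of the least-squares branch of the comparison above, the hedged sketch below fits an exponential semivariogram model to a hypothetical empirical semivariogram with SciPy; the lag distances, gamma values, model form and starting values are assumptions for illustration, not data or code from the study.

```python
# Hedged sketch: ordinary least-squares fit of an exponential semivariogram
# model to a hypothetical empirical semivariogram (not the study's data).
import numpy as np
from scipy.optimize import curve_fit

def exponential_model(h, nugget, psill, rng):
    """gamma(h) = nugget + partial sill * (1 - exp(-h / range))"""
    return nugget + psill * (1.0 - np.exp(-h / rng))

lags = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 16.0, 20.0])            # lag distances
gamma_hat = np.array([0.22, 0.35, 0.55, 0.68, 0.75, 0.82, 0.84, 0.85])  # empirical gamma

(nugget, psill, rng), _ = curve_fit(exponential_model, lags, gamma_hat,
                                    p0=[0.1, 0.7, 5.0], bounds=(0.0, np.inf))
print(f"nugget = {nugget:.3f}, partial sill = {psill:.3f}, range = {rng:.2f}")
```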

  1. SPEAK YOUR MIND: SIMPLIFIED DEBATES AS A LEARNING TOOL AT THE UNIVERSITY LEVEL

    Directory of Open Access Journals (Sweden)

    LUSTIGOVÁ, Lenka

    2011-03-01

Full Text Available This study focuses on the development of speaking skills in intermediate and lower level university classes through the simplified format of debates. The aim of this paper is to describe teaching observations with special attention given to the preparatory stages, strengths and challenges of simplified debate faced by both the teacher and the students. Observations were made while teaching speaking through simple debate to 19- to 20-year-old students of general English at the Czech University of Life Sciences Prague in intermediate and lower level classes. By describing the methods and procedures used to engage in debates, this paper aims to enrich pedagogical methods for effectively teaching speaking skills and thus serve ESL teachers at large. By contextualizing debate within a milieu larger than the ESL classroom, this study also assesses possibilities for further application of simplified debate to heighten training for other subjects, while drawing upon the democratic context supported by debate.

  2. A simplified model of aerosol removal by natural processes in reactor containments

    Energy Technology Data Exchange (ETDEWEB)

    Powers, D.A.; Washington, K.E.; Sprung, J.L. [Sandia National Labs., Albuquerque, NM (United States); Burson, S.B. [Nuclear Regulatory Commission, Washington, DC (United States)

    1996-07-01

Simplified formulae are developed for estimating the aerosol decontamination that can be achieved by natural processes in the containments of pressurized water reactors and in the drywells of boiling water reactors under severe accident conditions. These simplified formulae were derived by correlation of results of Monte Carlo uncertainty analyses of detailed models of aerosol behavior under accident conditions. Monte Carlo uncertainty analyses of decontamination by natural aerosol processes are reported for 1,000, 2,000, 3,000, and 4,000 MW(th) pressurized water reactors and for 1,500, 2,500, and 3,500 MW(th) boiling water reactors. Uncertainty distributions for the decontamination factors and decontamination coefficients as functions of time were developed in the Monte Carlo analyses by considering uncertainties in aerosol processes, material properties, reactor geometry and severe accident progression. Phenomenological uncertainties examined in this work included uncertainties in aerosol coagulation by gravitational collision, Brownian diffusion, turbulent diffusion and turbulent inertia. Uncertainties in aerosol deposition by gravitational settling, thermophoresis, diffusiophoresis, and turbulent diffusion were examined. Electrostatic charging of aerosol particles in severe accidents is discussed. Such charging could affect both the coagulation and deposition of aerosol particles. Electrostatic effects are not considered in most available models of aerosol behavior during severe accidents and cause uncertainties in predicted natural decontamination processes that could not be taken into account in this work. Median (50%), 90 and 10% values of the uncertainty distributions for effective decontamination coefficients were correlated with time and reactor thermal power. These correlations constitute a simplified model that can be used to estimate the decontamination by natural aerosol processes at 3 levels of conservatism. Applications of the model are described.
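
    The sketch below is only a toy illustration of the Monte Carlo uncertainty-propagation idea described in this record: an uncertain effective decontamination coefficient is sampled, the decontamination factor is evaluated over time, and 10%, 50% and 90% values are extracted. The lognormal parameters and times are hypothetical and are not the correlations reported by the authors.

```python
# Toy Monte Carlo propagation of an uncertain decontamination coefficient
# lambda (1/h); DF(t) = exp(lambda * t). All distribution parameters are
# hypothetical placeholders, not values from the report.
import numpy as np

rng = np.random.default_rng(42)
lam = rng.lognormal(mean=np.log(0.4), sigma=0.5, size=10_000)   # 1/h, assumed

for t in (2.0, 6.0, 12.0):                                      # hours
    df = np.exp(lam * t)                                        # decontamination factor
    p10, p50, p90 = np.percentile(df, [10, 50, 90])
    print(f"t = {t:4.1f} h: DF 10/50/90% = {p10:7.1f} / {p50:7.1f} / {p90:9.1f}")
```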

  3. The simplified interaction tool for efficient and accurate underwater shock analysis of naval ships

    NARCIS (Netherlands)

    Aanhold, J.E. van; Trouwborst, W.; Vaders, J.A.A.

    2014-01-01

    In order to satisfy the need for good quality UNDEX response estimates of naval ships, TNO developed the Simplified Interaction Tool (SIT) for underwater shock analysis. The SIT is a module of user routines linked to LS-DYNA, which generates the UNDEX loading on the wet hull of a 3D finite element

  4. Estimation of effective population size in continuously distributed populations: There goes the neighborhood

    Science.gov (United States)

    M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples

    2013-01-01

    Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...

  5. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development … of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation … analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed …

  6. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co…

  7. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

There have been lots of trials to estimate a reliability function. In the ESReDA 20th seminar, a new method in a nonparametric way was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. The above three methods have some limitations. So we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. So, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator. The difference between the two is that in the new method the mass (or weight) of each observation has an influence on the others, whereas in the PL estimator it does not
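
    For reference, a minimal sketch of the standard Product-Limit (Kaplan-Meier) estimator that the record takes as its starting point is given below; the paper's modified weighting of censored observations is not reproduced, and the data are made up.

```python
# Minimal Product-Limit (Kaplan-Meier) estimator of the reliability function
# from right-censored data; ties are handled by processing one record at a time.
import numpy as np

def product_limit(times, failed):
    """times: event/censoring times; failed: True for failure, False for censoring.
    Returns failure times and the reliability estimate just after each failure."""
    times, failed = np.asarray(times, float), np.asarray(failed, bool)
    order = np.argsort(times)
    times, failed = times[order], failed[order]
    n, r = len(times), 1.0
    t_out, r_out = [], []
    for i, (t, f) in enumerate(zip(times, failed)):
        if f:                        # failure: multiply by (1 - 1/number still at risk)
            r *= 1.0 - 1.0 / (n - i)
            t_out.append(t)
            r_out.append(r)
    return np.array(t_out), np.array(r_out)

t = [2.0, 3.0, 5.0, 5.5, 7.0, 8.0, 10.0]
failed = [True, False, True, True, False, True, True]
print(product_limit(t, failed))
```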

  8. Methods to estimate the genetic risk

    International Nuclear Information System (INIS)

    Ehling, U.H.

    1989-01-01

The estimation of the radiation-induced genetic risk to human populations is based on the extrapolation of results from animal experiments. Radiation-induced mutations are stochastic events. The probability of the event depends on the dose; the degree of the damage does not. There are two main approaches to making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of expected frequencies of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The advantage of the indirect method is that not only can Mendelian mutations be quantified, but also other types of genetic disorders. The disadvantages of the method are the uncertainties in determining the current incidence of genetic disorders in humans and, in addition, the estimation of the genetic component of congenital anomalies, anomalies expressed later and constitutional and degenerative diseases. Using the direct method we estimated that 20-50 dominant radiation-induced mutations would be expected in 19 000 offspring born to parents exposed in Hiroshima and Nagasaki, but only a small proportion of these mutants would have been detected with the techniques used for the population study. These methods were used to predict the genetic damage from the fallout of the reactor accident at Chernobyl in the vicinity of Southern Germany. The lack of knowledge of the interaction of chemicals with ionizing radiation and the discrepancy between the high safety standards for radiation protection and the low level of knowledge for the toxicological evaluation of chemical mutagens will be emphasized. (author)
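
    A small worked example of the doubling-dose (indirect) arithmetic may help fix ideas; every number below is hypothetical and chosen only to show how the quantities combine, not taken from the record.

```python
# Hypothetical doubling-dose (indirect) calculation: expected induced incidence
# = baseline incidence x mutation component x (dose / doubling dose).
baseline_incidence = 0.01     # assumed current incidence of a disorder class
mutation_component = 0.3      # assumed fraction responsive to mutation-rate change
doubling_dose_gy = 1.0        # assumed dose that doubles the mutation rate (Gy)
dose_gy = 0.05                # assumed exposure per generation (Gy)

induced_risk = baseline_incidence * mutation_component * (dose_gy / doubling_dose_gy)
print(f"expected induced cases per live birth: {induced_risk:.2e}")
```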

  9. A method of estimating log weights.

    Science.gov (United States)

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
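
    The arithmetic behind the density-index approach is simple enough to show directly; the weights and volumes below are hypothetical, not the paper's data.

```python
# Hypothetical density-index calculation: weigh and measure a truckload to get
# pounds per cubic foot, then apply the index to individual log volumes.
truckload_weight_lb = 48_000.0
truckload_volume_ft3 = 1_050.0
density_index = truckload_weight_lb / truckload_volume_ft3      # lb per cubic foot

log_volumes_ft3 = [38.0, 52.5, 61.0]
estimated_weights_lb = [round(density_index * v) for v in log_volumes_ft3]
print(density_index, estimated_weights_lb)
```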

  10. Simplified proceeding as a civil procedure model

    Directory of Open Access Journals (Sweden)

    Олексій Юрійович Зуб

    2016-01-01

Full Text Available Currently, directions for the development of modern civil procedural law such as optimization, facilitation, and the forwarding of proceedings, which promote an increase in the efficiency of civil procedure, are of peculiar importance. Their results are the occurrence and functionality of a simplified proceedings system designed to significantly facilitate the hearing of some categories of cases, promote their consideration within a reasonable time and reduce legal expenses so far as possible. The category "simplified proceedings" is underexamined in the native science of procedural law. A good deal of scientists-processualists have limited themselves to studying summary procedure (in the context of optimization as a way to improve the civil procedural form), summary proceedings and procedures functioning within the mentioned proceedings, and consideration of cases in absentia, as well as their modification. Among the Ukrainian scientists who studied some aspects of simplified proceedings are: E. A. Belyanevych, V. I. Bobrik, S. V. Vasilyev, M. V. Verbitska, S. I. Zapara, A. A. Zgama, V. V. Komarov, D. D. Luspenuk, U. V. Navrotska, V. V. Protsenko, T. V. Stepanova, E. A. Talukin, S. Y. Fursa, M. Y. Shtefan and others. The problems of simplified proceedings were studied by foreign scientists as well, such as: N. Andrews, Y. Y. Grubanon, N. A. Gromoshina, E. P. Kochanenko, J. Kohler, D. I. Krumskiy, E. M. Muradjan, I. V. Reshetnikova, U. Seidel, N. V. Sivak, M. Z. Shvarts, V. V. Yarkov and others. The paper's objective is to develop a theoretically supported, practically reasonable notion of simplified proceedings in the civil process and, basing on that notion, on the international experience of legislative regulation of simplified proceedings, and on native and foreign doctrine, to distinguish the essential features of simplified proceedings in the civil process and to describe them. In the paper we generated the notion of simplified proceedings that

  11. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen

    2009-01-01

Full Text Available A fast linear minimum mean square error (LMMSE) channel estimation method has been proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with the conventional LMMSE channel estimation, the proposed channel estimation method does not require statistical knowledge of the channel in advance and avoids the inverse operation of a large dimension matrix by using the fast Fourier transform (FFT) operation. Therefore, the computational complexity can be reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and the conventional LMMSE estimation have been derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
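
    To make the baseline concrete, the sketch below implements conventional LMMSE smoothing of a least-squares OFDM channel estimate for a synthetic channel with a uniform power-delay profile; it is the reference method the record improves on, not the proposed FFT-based fast algorithm, and all parameters are assumptions.

```python
# Conventional LMMSE channel estimation for OFDM pilots (synthetic example);
# the LS estimate is smoothed with the channel frequency-correlation matrix.
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 8                                   # subcarriers, channel taps (assumed)
snr = 10 ** (15 / 10)                          # 15 dB pilot SNR (assumed)

h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
H = np.fft.fft(h, N)                           # true channel frequency response
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * snr)
H_ls = H + noise                               # LS estimate with unit-modulus pilots

# Frequency correlation R_hh[i, j] = E[H_i H_j*] for a uniform power-delay profile
k = np.arange(N)
dk = k[:, None] - k[None, :]
R_hh = np.exp(-2j * np.pi * dk[..., None] * np.arange(L) / N).mean(axis=-1)

H_lmmse = R_hh @ np.linalg.solve(R_hh + np.eye(N) / snr, H_ls)
print(f"LS MSE    : {np.mean(np.abs(H_ls - H) ** 2):.4f}")
print(f"LMMSE MSE : {np.mean(np.abs(H_lmmse - H) ** 2):.4f}")
```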

  12. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. Such spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
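
    The harmonic-grouping and peak-picking idea can be sketched on an ordinary FFT magnitude spectrum standing in for the RTFI; the synthetic two-note mixture, harmonic weights and separation rule below are assumptions for illustration only.

```python
# Hedged sketch of harmonic grouping: sum 1/h-weighted spectral magnitude at
# harmonic multiples of each candidate F0 and peak-pick the salience curve.
import numpy as np

fs = 16_000
t = np.arange(0, 0.5, 1 / fs)
x = sum(np.sin(2 * np.pi * f0 * k * t) / k          # two-note synthetic mixture
        for f0 in (220.0, 330.0) for k in range(1, 5))

spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))

candidates = np.arange(80.0, 600.0, 1.0)             # candidate fundamentals (Hz)
weights = 1.0 / np.arange(1, 6)                      # de-emphasise higher harmonics
salience = np.empty_like(candidates)
for i, f0 in enumerate(candidates):
    bins = np.round(f0 * np.arange(1, 6) * len(x) / fs).astype(int)
    salience[i] = (spec[bins] * weights).sum()

picked = []                                          # simple peak-picking, 20 Hz apart
for idx in np.argsort(salience)[::-1]:
    if all(abs(candidates[idx] - p) > 20 for p in picked):
        picked.append(candidates[idx])
    if len(picked) == 2:
        break
print(sorted(picked))                                # roughly [220.0, 330.0]
```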

  13. Experience with simplified inelastic analysis of piping designed for elevated temperature service

    International Nuclear Information System (INIS)

    Severud, L.K.

    1980-03-01

    Screening rules and preliminary design of FFTF piping were developed in 1974 based on expected behavior and engineering judgment, approximate calculations, and a few detailed inelastic analyses of pipelines. This paper provides findings from six additional detailed inelastic analyses with correlations to the simplified analysis screening rules. In addition, simplified analysis methods for treating weldment local stresses and strains as well as fabrication induced flaws are described. Based on the FFTF experience, recommendations for future Code and technology work to reduce design analysis costs are identified

  14. Conceptual design study on simplified and safer cooling systems for sodium cooled FBRs

    International Nuclear Information System (INIS)

    Hayafune, Hiroki; Shimakawa, Yoshio; Ishikawa, Hiroyasu; Kubota, Kenichi; Kobayashi, Jun; Kasai, Shigeo

    2000-06-01

The objective of this study is to create FBR plant concepts with increased economy and safety for the Phase-I 'Feasibility Studies on Commercialized Fast Reactor System'. In this study, various concepts of a simplified secondary (2ry) cooling system for sodium-cooled FBRs are considered and evaluated from the viewpoints of technological feasibility, economy, and safety. The concepts in the study are considered on the basis of the following points of view: 1. To simplify the 2ry cooling system by moderating and localizing the sodium-water reaction in the steam generator of the FBRs. 2. To simplify the 2ry cooling system by eliminating the sodium-water reaction using an integrated IHX-SG unit. 3. To simplify the 2ry cooling system by eliminating the sodium-water reaction using a power generating system other than the steam generator. As a result of the study, 12 concepts and 3 innovative concepts are proposed. The evaluation study for those concepts shows the following technical prospects. 1. Two concepts of the integrated IHX-SG unit can eliminate the sodium-water reaction: a separated IHX and SG tubes unit using lead-bismuth as the heat transfer medium, and an integrated IHX-SG unit using copper as the heat transfer medium. 2. The cost reduction effect of the simplified 2ry cooling system using the integrated IHX-SG unit is estimated at 0 to 5%. 3. All of the integrated IHX-SG unit concepts have more weight and larger size than a conventional steam generator unit. The weight of the unit during transporting and lifting would limit the capacity of the heat transfer system. These evaluation results will be compared with the results in JFY 2000 and used for the Phase-II study. (author)

  15. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  16. Evaluation of three paediatric weight estimation methods in Singapore.

    Science.gov (United States)

    Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong

    2013-04-01

    Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: Broselow-Luten (BL) tape, Advanced Paediatric Life Support (APLS) (estimated weight (kg) = 2 (age + 4)) and Luscombe (estimated weight (kg) = 3 (age) + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while patient's round-down age (in years) was used to derive estimated weights using APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
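
    The two age-based formulae and the Bland-Altman style summary statistics quoted above are easy to reproduce on hypothetical records; the (age, weight) pairs below are placeholders, not the study's data, and the sign convention for the percentage difference is an assumption.

```python
# APLS and Luscombe weight formulae evaluated against hypothetical true weights,
# summarised with mean percentage difference (MPD), 95% limits of agreement (LOA)
# and the share of estimates within 10% of true weight (p10).
import numpy as np

ages = np.arange(1, 11)                                            # years, rounded down
true_wt = np.array([10.2, 12.5, 14.8, 16.9, 19.1, 21.5, 24.0, 26.8, 29.5, 33.0])

formulas = {"APLS": 2 * (ages + 4), "Luscombe": 3 * ages + 7}      # estimated weight (kg)

for name, est in formulas.items():
    pct_diff = 100.0 * (true_wt - est) / true_wt      # positive => formula underestimates
    mpd = pct_diff.mean()
    half_loa = 1.96 * pct_diff.std(ddof=1)
    p10 = 100.0 * np.mean(np.abs(pct_diff) <= 10)
    print(f"{name:8s} MPD {mpd:+5.1f}%  "
          f"LOA ({mpd - half_loa:+.1f}%, {mpd + half_loa:+.1f}%)  p10 {p10:.0f}%")
```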

  17. Joint Pitch and DOA Estimation Using the ESPRIT method

    DEFF Research Database (Denmark)

    Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom

    2015-01-01

In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and then the ESPRIT method based on subspace techniques that exploits … the invariance property in the time domain is first used to estimate the multiple pitch frequencies of the harmonic signals. Based on the estimated pitch frequencies, the DOA estimates from the ESPRIT method are also presented by using the shift invariance structure in the spatial domain. Compared … to the existing state-of-the-art algorithms, the proposed method based on ESPRIT without 2-D searching is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed …
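
    As background for readers unfamiliar with ESPRIT, the sketch below applies the shift-invariance idea in its simplest single-channel temporal form to estimate two sinusoid frequencies; the joint spatio-temporal pitch-and-DOA formulation of the record is not reproduced, and all signal parameters are assumptions.

```python
# Minimal temporal ESPRIT: estimate frequencies of complex exponentials from
# the rotational invariance of the signal subspace of a sample covariance.
import numpy as np

def esprit_frequencies(x, n_components, m=40):
    x = np.asarray(x, dtype=complex)
    snapshots = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = snapshots.conj().T @ snapshots / snapshots.shape[0]   # sample covariance
    _, vecs = np.linalg.eigh(R)
    Es = vecs[:, -n_components:]                              # signal subspace
    Phi = np.linalg.pinv(Es[:-1, :]) @ Es[1:, :]              # shift-invariance solution
    return np.sort(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi))

fs = 8000.0
n = np.arange(1024)
rng = np.random.default_rng(3)
sig = (np.exp(2j * np.pi * 440 / fs * n) + 0.7 * np.exp(2j * np.pi * 660 / fs * n)
       + 0.05 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024)))
print(esprit_frequencies(sig, 2) * fs)                        # approximately [440, 660]
```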

  18. Simplified methods for in vivo measurement of acetylcholinesterase activity in rodent brain

    International Nuclear Information System (INIS)

    Kilbourn, Michael R.; Sherman, Phillip S.; Snyder, Scott E.

    1999-01-01

Simplified methods for in vivo studies of acetylcholinesterase (AChE) activity in rodent brain were evaluated using N-[11C]methylpiperidinyl propionate ([11C]PMP) as an enzyme substrate. Regional mouse brain distributions were determined at 1 min (representing initial brain uptake) and 30 min (representing trapped product) after intravenous [11C]PMP administration. Single time point tissue concentrations (percent injected dose/gram at 30 min), tissue concentration ratios (striatum/cerebellum and striatum/cortex ratios at 30 min), and regional tissue retention fractions (defined as percent injected dose 30 min/percent injected dose 1 min) were evaluated as measures of AChE enzymatic activity in mouse brain. Studies were carried out in control animals and after dosing with phenserine, a selective centrally active AChE inhibitor; neostigmine, a peripheral cholinesterase inhibitor; and a combination of the two drugs. In control and phenserine-treated animals, absolute tissue concentrations and regional retention fractions provide good measures of dose-dependent inhibition of brain AChE; tissue concentration ratios, however, provide erroneous conclusions. Peripheral inhibition of cholinesterases, which changes the blood pharmacokinetics of the radiotracer, diminishes the sensitivity of all measures to detect changes in central inhibition of the enzyme. We conclude that certain simple measures of AChE hydrolysis rates for [11C]PMP are suitable for studies where alterations of the peripheral blood metabolism of the tracer are kept to a minimum

  19. Simplified Models for LHC New Physics Searches

    International Nuclear Information System (INIS)

    Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R. Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto

    2012-01-01

This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  20. Simplified Models for LHC New Physics Searches

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Daniele; /SLAC; Arkani-Hamed, Nima; /Princeton, Inst. Advanced Study; Arora, Sanjay; /Rutgers U., Piscataway; Bai, Yang; /SLAC; Baumgart, Matthew; /Johns Hopkins U.; Berger, Joshua; /Cornell U., Phys. Dept.; Buckley, Matthew; /Fermilab; Butler, Bart; /SLAC; Chang, Spencer; /Oregon U. /UC, Davis; Cheng, Hsin-Chia; /UC, Davis; Cheung, Clifford; /UC, Berkeley; Chivukula, R.Sekhar; /Michigan State U.; Cho, Won Sang; /Tokyo U.; Cotta, Randy; /SLAC; D' Alfonso, Mariarosaria; /UC, Santa Barbara; El Hedri, Sonia; /SLAC; Essig, Rouven, (ed.); /SLAC; Evans, Jared A.; /UC, Davis; Fitzpatrick, Liam; /Boston U.; Fox, Patrick; /Fermilab; Franceschini, Roberto; /LPHE, Lausanne /Pittsburgh U. /Argonne /Northwestern U. /Rutgers U., Piscataway /Rutgers U., Piscataway /Carleton U. /CERN /UC, Davis /Wisconsin U., Madison /SLAC /SLAC /SLAC /Rutgers U., Piscataway /Syracuse U. /SLAC /SLAC /Boston U. /Rutgers U., Piscataway /Seoul Natl. U. /Tohoku U. /UC, Santa Barbara /Korea Inst. Advanced Study, Seoul /Harvard U., Phys. Dept. /Michigan U. /Wisconsin U., Madison /Princeton U. /UC, Santa Barbara /Wisconsin U., Madison /Michigan U. /UC, Davis /SUNY, Stony Brook /TRIUMF; /more authors..

    2012-06-01

This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ≈50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  1. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    Science.gov (United States)

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a dual-redundancy technique, which is not sufficient in some situations because two channels alone cannot determine which of them has failed. The simplified on-board model provides the analytical third channel against which the dual-channel measurements are compared, whereas hardware redundancy would increase the structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  2. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    Directory of Open Access Journals (Sweden)

    Yaodong Xing

    2012-08-01

Full Text Available Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a dual-redundancy technique, which is not sufficient in some situations because two channels alone cannot determine which of them has failed. The simplified on-board model provides the analytical third channel against which the dual-channel measurements are compared, whereas hardware redundancy would increase the structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.
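
    The triplex arbitration described in both versions of this record can be illustrated with a few lines of logic: two hardware channels plus the on-board model's analytical value are compared pairwise, and the channel that disagrees with the other two beyond a tolerance is isolated. The threshold, readings and recovery rule below are hypothetical, not the authors' implementation.

```python
# Hedged sketch of triplex (sensor A, sensor B, analytical model) fault logic
# with redundancy recovery; tolerance and readings are hypothetical.
def diagnose(sensor_a, sensor_b, model_value, tol):
    r_ab = abs(sensor_a - sensor_b)
    r_am = abs(sensor_a - model_value)
    r_bm = abs(sensor_b - model_value)
    if max(r_ab, r_am, r_bm) <= tol:
        return "all channels healthy", 0.5 * (sensor_a + sensor_b)
    if r_bm <= tol:                       # A disagrees with both B and the model
        return "sensor A faulty", sensor_b
    if r_am <= tol:                       # B disagrees with both A and the model
        return "sensor B faulty", sensor_a
    return "model/sensor mismatch, inspect", model_value

print(diagnose(101.0, 100.5, 100.8, tol=1.0))   # healthy case
print(diagnose(109.0, 100.5, 100.8, tol=1.0))   # step fault injected on sensor A
```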

  3. A new rapid method for rockfall energies and distances estimation

    Science.gov (United States)

    Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric

    2016-04-01

Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope and they are generally based on a probabilistic rockfall modelling approach that allows for taking into account the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To take into account the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed finding empirical laws that relate impact energies

  4. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    Full Text Available In this paper, we propose a novel direction of arrival (DOA estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR using direct wideband radio frequency (RF digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.

  5. Simplified Stability Criteria for Delayed Neutral Systems

    Directory of Open Access Journals (Sweden)

    Xinghua Zhang

    2014-01-01

    Full Text Available For a class of linear time-invariant neutral systems with neutral and discrete constant delays, several existing asymptotic stability criteria in the form of linear matrix inequalities (LMIs are simplified by using matrix analysis techniques. Compared with the original stability criteria, the simplified ones include fewer LMI variables, which can obviously reduce computational complexity. Simultaneously, it is theoretically shown that the simplified stability criteria and original ones are equivalent; that is, they have the same conservativeness. Finally, a numerical example is employed to verify the theoretic results investigated in this paper.

  6. Simplified design of filter circuits

    CERN Document Server

    Lenk, John

    1999-01-01

Simplified Design of Filter Circuits, the eighth book in this popular series, is a step-by-step guide to designing filters using off-the-shelf ICs. The book starts with the basic operating principles of filters and common applications, then moves on to describe how to design circuits by using and modifying chips available on the market today. Lenk's emphasis is on practical, simplified approaches to solving design problems. Contains practical designs using off-the-shelf ICs; straightforward, no-nonsense approach; highly illustrated with manufacturer's data sheets.

  7. Application of the Monte Carlo method to estimate doses in a radioactive waste drum environment

    International Nuclear Information System (INIS)

    Rodenas, J.; Garcia, T.; Burgos, M.C.; Felipe, A.; Sanchez-Mayoral, M.L.

    2002-01-01

During refuelling operation in a Nuclear Power Plant, filtration is used to remove non-soluble radionuclides contained in the water from the reactor pool. Filter cartridges accumulate a high radioactivity, so they are usually placed into a drum. When the operation ends, the drum is filled with concrete and stored along with other drums containing radioactive wastes. Operators working in the refuelling plant near these radwaste drums can receive high dose rates. Therefore, it is convenient to estimate those doses in order to prevent risks and apply the ALARA criterion for dose reduction to workers. The Monte Carlo method has been applied, using the MCNP 4B code, to simulate the drum containing contaminated filters and estimate doses produced in the drum environment. In the paper, an analysis of the results obtained with the MCNP code has been performed. Thus, the influence of distance from the drum and of interposed shielding barriers on the evaluated doses has been studied. The source term has also been analysed to check the importance of the isotope composition. Two different geometric models have been considered in order to simplify calculations. Results have been compared with dose measurements in plant in order to validate the calculation procedure. This work has been developed at the Nuclear Engineering Department of the Polytechnic University of Valencia in collaboration with IBERINCO in the framework of an R&D project sponsored by IBERINCO
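
    The record relies on full MCNP transport, but the qualitative effect of distance and an interposed shield can be illustrated with a toy analog Monte Carlo estimate of uncollided transmission; the attenuation coefficient, thickness and neglect of scattering and buildup are simplifying assumptions, not a substitute for the drum model.

```python
# Toy analog Monte Carlo estimate of uncollided photon transmission through a
# slab shield, checked against exp(-mu*x); scattering and buildup are ignored.
import numpy as np

rng = np.random.default_rng(7)
mu = 0.15          # assumed linear attenuation coefficient, roughly concrete at 1 MeV (1/cm)
thickness = 10.0   # assumed shield thickness (cm)

free_paths = rng.exponential(scale=1.0 / mu, size=200_000)   # sampled path lengths (cm)
transmission_mc = np.mean(free_paths > thickness)
print(f"MC transmission {transmission_mc:.4f} vs analytic {np.exp(-mu * thickness):.4f}")

# Inverse-square effect of standing farther from a compact source
print(f"relative dose at 3 m vs 1 m: {(1.0 / 3.0) ** 2:.3f}")
```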

  8. A Simplified Method to Estimate Sc-CO2 Extraction of Bioactive Compounds from Different Matrices: Chili Pepper vs. Tomato By-Products

    Directory of Open Access Journals (Sweden)

    Francesca Venturi

    2017-04-01

Full Text Available In the last few decades, the search for bioactive compounds or "target molecules" from natural sources or their by-products has become the most important application of the supercritical fluid extraction (SFE) process. In this context, the present research had two main objectives: (i) to verify the effectiveness of a two-step SFE process (namely, a preliminary Sc-CO2 extraction of carotenoids followed by the recovery of polyphenols by ethanol coupled with Sc-CO2) in order to obtain bioactive extracts from two widespread different matrices (chili pepper and tomato by-products), and (ii) to test the validity of the mathematical model proposed to describe the kinetics of SFE of carotenoids from different matrices, the knowledge of which is required also for the definition of the role played in the extraction process by the characteristics of the sample matrix. On the basis of the results obtained, it was possible to introduce a simplified kinetic model that was able to describe the time evolution of the extraction of bioactive compounds (mainly carotenoids and phenols) from different substrates. In particular, while both chili pepper and tomato were confirmed to be good sources of bioactive antioxidant compounds, the extraction process from chili pepper was faster than from tomato under identical operating conditions.
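
    The abstract does not give the functional form of the simplified kinetic model, so the sketch below fits a generic first-order extraction curve, Y(t) = Y_inf(1 - exp(-kt)), to hypothetical yield-versus-time data purely as a stand-in; the form, data and starting values are assumptions.

```python
# Generic first-order extraction-curve fit as a stand-in for a simplified
# SFE kinetic model; all data points are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def first_order_yield(t, y_inf, k):
    return y_inf * (1.0 - np.exp(-k * t))

t_min = np.array([0.0, 10, 20, 40, 60, 90, 120, 180])               # minutes
yield_mg_g = np.array([0.0, 1.9, 3.3, 5.1, 6.0, 6.7, 7.0, 7.2])     # mg per g of matrix

(y_inf, k), _ = curve_fit(first_order_yield, t_min, yield_mg_g, p0=[7.0, 0.02])
print(f"asymptotic yield = {y_inf:.2f} mg/g, rate constant k = {k:.3f} 1/min")
```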

  9. Uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model at multiple flux tower sites

    Science.gov (United States)

    Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.

    2016-01-01

    Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within
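
    The goodness-of-fit statistics quoted in the record (R2 and a monthly RMSE) are straightforward to compute once model and tower ET series are aligned; the arrays below are placeholders, not AmeriFlux data, and R2 is computed here against the 1:1 line.

```python
# Placeholder monthly ET series illustrating the RMSE and R^2 summary statistics.
import numpy as np

et_tower = np.array([32.0, 41.0, 65.0, 88.0, 110.0, 126.0, 118.0, 95.0, 60.0, 38.0])
et_model = np.array([30.0, 45.0, 60.0, 92.0, 104.0, 131.0, 112.0, 99.0, 55.0, 42.0])

rmse = np.sqrt(np.mean((et_model - et_tower) ** 2))
r2 = 1.0 - np.sum((et_tower - et_model) ** 2) / np.sum((et_tower - et_tower.mean()) ** 2)
print(f"RMSE = {rmse:.1f} mm/month, R^2 = {r2:.2f}")
```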

  10. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
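
    One common way to realize such a model-based estimate, sketched below under stated assumptions rather than as the authors' exact formulation, is to combine a QH characteristic curve fitted at nominal speed with the affinity laws and the head and speed estimates available from the drive.

```python
# Hedged sketch: infer flow rate from an assumed quadratic QH curve fitted at
# nominal speed, the affinity laws, and drive-side head/speed estimates.
import numpy as np

a0, a1, a2 = 32.0, 0.05, -0.004     # H(Q) = a0 + a1*Q + a2*Q^2 (m, l/s), assumed fit
n_nominal = 1450.0                  # nominal speed (rpm)

def estimate_flow(head_est, speed_est):
    h_nom = head_est * (n_nominal / speed_est) ** 2          # scale head to nominal speed
    roots = np.roots([a2, a1, a0 - h_nom])                   # invert the QH curve
    q_nom = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)
    return q_nom * (speed_est / n_nominal)                   # scale flow back

print(f"estimated flow: {estimate_flow(head_est=24.0, speed_est=1300.0):.1f} l/s")
```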

  11. Quantitative whole body scintigraphy - a simplified approach

    International Nuclear Information System (INIS)

    Marienhagen, J.; Maenner, P.; Bock, E.; Schoenberger, J.; Eilles, C.

    1996-01-01

In this paper we present investigations on a simplified method of quantitative whole body scintigraphy using a dual-head LFOV gamma camera and a calibration algorithm without the need for additional attenuation or scatter correction. Validation of this approach on an anthropomorphic phantom as well as in patient studies showed a high accuracy in the quantification of whole body activity (102.8% and 97.72%, respectively); by contrast, organ activities were recovered with errors of up to 12%. The described method can be easily performed using commercially available software packages and is recommendable especially for quantitative whole body scintigraphy in a clinical setting. (orig.)

  12. Modeling moisture ingress through simplified concrete crack geometries

    DEFF Research Database (Denmark)

    Pease, Bradley Justin; Michel, Alexander; Geiker, Mette Rica

    2011-01-01

This paper introduces a numerical model for ingress in cracked steel fibre reinforced concrete. Details of a simplified crack are preset in the model's geometry using the cracked hinge model (CHM). The total crack length estimated using the CHM was, based on earlier work on conventional concrete, considered to have two parts: 1) a coalesced crack length, which behaves as a free surface for moisture ingress, and 2) an isolated microcracking length, which resists ingress similarly to the bulk material. Transport model results are compared to experimental results from steel fibre reinforced concrete wedge … on moisture ingress. Results from the transport model indicate the length of the isolated microcracks was approximately 19 mm for the investigated concrete composition.

  13. Population Estimation with Mark and Recapture Method Program

    International Nuclear Information System (INIS)

    Limohpasmanee, W.; Kaewchoung, W.

    1998-01-01

Population estimation provides important information required for insect control planning, especially for control with the sterile insect technique (SIT). It can also be used to evaluate the efficiency of a control method. Because of the complexity of the calculations, population estimation with mark and recapture methods has not been widely used. This program was therefore developed in QBasic to make the estimation more accurate and easier to carry out. The program covers six methods: Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's and Yamamura's. The results were compared with the original methods and found to be accurate and easier to apply
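
    The six estimators implemented in the program are not reproduced here; as a minimal, hedged illustration of two-sample mark-and-recapture estimation, the sketch below uses Chapman's bias-corrected form of the Lincoln-Petersen estimator with its approximate standard error. The example counts are invented.

```python
import math

def chapman_estimate(marked_first, caught_second, recaptured):
    """Two-sample mark-recapture estimate of population size.

    Uses Chapman's bias-corrected form of the Lincoln-Petersen estimator:
        N_hat = (M + 1)(C + 1) / (R + 1) - 1
    together with its approximate variance (Seber).
    """
    M, C, R = marked_first, caught_second, recaptured
    n_hat = (M + 1) * (C + 1) / (R + 1) - 1
    var = ((M + 1) * (C + 1) * (M - R) * (C - R)) / ((R + 1) ** 2 * (R + 2))
    return n_hat, math.sqrt(var)

# Example: 200 insects marked and released, 150 caught later, 30 of them marked.
n_hat, se = chapman_estimate(200, 150, 30)
print(f"N ~ {n_hat:.0f} +/- {1.96 * se:.0f} (95% CI, normal approximation)")
```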

  14. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

This research paper discusses the method of testing a nonlinear hypothesis using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. In the present research paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test the nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing a nonlinear hypothesis using the iterative NLLS estimator based on nonlinear studentized residuals has also been proposed. In this research article an innovative method of testing a nonlinear hypothesis using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing a nonlinear hypothesis using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
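
    The modified Wald statistic of the paper is not reproduced here; the sketch below shows the textbook Wald test of a nonlinear restriction after an iterative NLLS fit, which is the construction such tests build on. The model, the restriction and the data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

# Nonlinear model y = b0 * exp(b1 * x) + e, fitted by iterative NLLS.
def model(x, b0, b1):
    return b0 * np.exp(b1 * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 80)
y = model(x, 2.0, 0.5) + rng.normal(scale=0.1, size=x.size)

theta_hat, cov = curve_fit(model, x, y, p0=[1.0, 0.1])

# Nonlinear hypothesis H0: h(theta) = b0 * b1 - 1 = 0.
def h(theta):
    return np.array([theta[0] * theta[1] - 1.0])

def jacobian_h(theta):
    return np.array([[theta[1], theta[0]]])        # dh/db0, dh/db1

H = jacobian_h(theta_hat)
hval = h(theta_hat)
W = float(hval @ np.linalg.solve(H @ cov @ H.T, hval))   # Wald statistic
p_value = chi2.sf(W, df=hval.size)
print(f"W = {W:.2f}, p = {p_value:.3f}")
```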

  15. S-Channel Dark Matter Simplified Models and Unitarity

    CERN Document Server

    Englert, Christoph; Spannowsky, Michael

    The ultraviolet structure of $s$-channel mediator dark matter simplified models at hadron colliders is considered. In terms of commonly studied $s$-channel mediator simplified models it is argued that at arbitrarily high energies the perturbative description of dark matter production in high energy scattering at hadron colliders will break down in a number of cases. This is analogous to the well documented breakdown of an EFT description of dark matter collider production. With this in mind, to diagnose whether or not the use of simplified models at the LHC is valid, perturbative unitarity of the scattering amplitude in the processes relevant to LHC dark matter searches is studied. The results are as one would expect: at the LHC and future proton colliders the simplified model descriptions of dark matter production are in general valid. As a result of the general discussion, a simple new class of previously unconsidered `Fermiophobic Scalar' simplified models is proposed, in which a scalar mediator couples to...

  16. A simplified approach for simulation of wake meandering

    Energy Technology Data Exchange (ETDEWEB)

    Thomsen, Kenneth; Aagaard Madsen, H.; Larsen, Gunner; Juul Larsen, T.

    2006-03-15

This fact-sheet describes a simplified approach for a part of the recently developed dynamic wake model for aeroelastic simulations of wind turbines operating in wake. The part described in this fact-sheet concerns the meandering process only, while the other part of the simplified approach, the wake deficit profile, is outside the scope of the present fact-sheet. Work on simplified models for the wake deficit profile is ongoing. (au)

  17. Simplified absolute phase retrieval of dual-frequency fringe patterns in fringe projection profilometry

    Science.gov (United States)

    Lu, Jin; Mo, Rong; Sun, Huibin; Chang, Zhiyong; Zhao, Xiaxia

    2016-04-01

    In fringe projection profilometry, a simplified method is proposed to recover absolute phase maps of two-frequency fringe patterns by using a unique mapping rule. The mapping rule is designed from the rounded phase values to the fringe order of each pixel. Absolute phase can be recovered by the fringe order maps. Unlike the existing techniques, where the lowest frequency of dual- or multiple-frequency fringe patterns must be single, the presented method breaks the limitation and simplifies the procedure of phase unwrapping. Additionally, due to many issues including ambient light, shadow, sharp edges, step height boundaries and surface reflectivity variations, a novel framework of automatically identifying and removing invalid phase values is also proposed. Simulations and experiments have been carried out to validate the performances of the proposed method.
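
    The paper's specific mapping rule from rounded phase values to fringe order is not reproduced here; as background, the sketch below shows the standard two-frequency temporal unwrapping relation that such methods generalise, checked on a synthetic phase ramp. The frequency ratio and the test data are assumptions.

```python
import numpy as np

def unwrap_dual_frequency(phi_high, phi_low, freq_ratio):
    """Standard two-frequency temporal phase unwrapping.

    phi_high   : wrapped phase of the high-frequency pattern, in (-pi, pi]
    phi_low    : phase of a single-fringe (unit-frequency) pattern, already absolute
    freq_ratio : number of high-frequency fringes per low-frequency fringe

    The fringe order k is recovered by comparing the scaled low-frequency phase
    with the wrapped high-frequency phase; the absolute phase is phi + 2*pi*k.
    """
    k = np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic check: a ramp of absolute phase covering 16 fringes.
true_phase = np.linspace(0, 16 * 2 * np.pi, 1000)
phi_high = np.angle(np.exp(1j * true_phase))      # wrapped high-frequency phase
phi_low = true_phase / 16                         # ideal single-fringe phase
recovered = unwrap_dual_frequency(phi_high, phi_low, freq_ratio=16)
print("max unwrapping error:", np.max(np.abs(recovered - true_phase)))
```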

  18. Kernel PLS Estimation of Single-trial Event-related Potentials

    Science.gov (United States)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.
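
    A minimal sketch of NIPALS-style kernel PLS used as a smoother for a single noisy trial is given below. It follows the generic KPLS algorithm (uncentred RBF kernel, univariate response, fixed number of components) and is not the authors' implementation — in particular it is not the local KPLS variant; the kernel width and component count are arbitrary assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    d = a[:, None] - b[None, :]
    return np.exp(-gamma * d ** 2)

def kpls_smooth(t, y, n_components=6, gamma=50.0):
    """Smooth a single noisy trial y(t) with NIPALS-style kernel PLS regression.

    For a univariate response the NIPALS inner loop converges in one pass,
    so each component is extracted directly. Kernel centring is omitted for brevity.
    """
    K0 = rbf_kernel(t, t, gamma)
    K, Y = K0.copy(), y.reshape(-1, 1).astype(float)
    n = len(t)
    T = np.zeros((n, n_components))
    U = np.zeros((n, n_components))
    Yd = Y.copy()
    for a in range(n_components):
        u = Yd[:, [0]]
        tt = K @ u
        tt /= np.linalg.norm(tt)
        T[:, [a]], U[:, [a]] = tt, u
        P = np.eye(n) - tt @ tt.T            # deflation projector
        K = P @ K @ P
        Yd = Yd - tt @ (tt.T @ Yd)
    # Fitted (smoothed) values: K0 U (T' K0 U)^-1 T' Y
    B = U @ np.linalg.solve(T.T @ K0 @ U, T.T @ Y)
    return (K0 @ B).ravel()

# Example: a noisy simulated "ERP-like" peak in ongoing background noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 300)
clean = np.exp(-((t - 0.4) / 0.05) ** 2)     # a single Gaussian component
trial = clean + rng.normal(scale=0.4, size=t.size)
smoothed = kpls_smooth(t, trial)
print("residual RMS after smoothing:", np.sqrt(np.mean((smoothed - clean) ** 2)))
```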

  19. A simplified in-situ electrochemical decontamination of lead from polluted soil (abstract)

    International Nuclear Information System (INIS)

    Ansari, T.M.; Ahmad, I.; Khan, Q.M.; Chaudhry, A.H.

    2011-01-01

This paper reports a simplified in-situ electrochemical method for remediation of field soil contaminated with lead. A series of electrochemical decontamination experiments with variable conditions, such as operating duration and application of an enhancement reagent, were performed to demonstrate the efficiency of lead removal from spiked and polluted soil samples collected from Lahore, Pakistan. The results showed that the efficiency of lead removal from the contaminated soil increased with increasing operating duration under a given set of experimental conditions. The reagent used as a complexing and solubilizing agent, i.e. EDTA, was found to be efficient in removing lead from the polluted soil. After 15 days, 85% lead removal efficiency was observed in spiked soil under enhanced conditions; however, 63% lead removal was achieved from the polluted soil samples by the simplified in-situ electrochemical decontamination method. The method is simple, rapid, cheap and suitable for soil remediation purposes. (author)

  20. MAT-FLX: a simplified code for computing material balances in fuel cycle

    International Nuclear Information System (INIS)

    Pierantoni, F.; Piacentini, F.

    1983-01-01

This work illustrates a calculation code designed to provide a materials balance for the electronuclear fuel cycle. The calculation method is simplified but relatively precise and employs a progressive tabulated data approach.

  1. Simplified calculation of investment costs involved in purifying industrial waste water. Calculo simplificado de los costes de inversion en la depuracion de aguas residuales industriales

    Energy Technology Data Exchange (ETDEWEB)

Queralt, R. (Junta de Saneamientos. Generalidad de Cataluña (Spain))

    1993-03-01

The calculation of the investment involved in purifying industrial waste water poses certain problems, since it is approached either with complicated methods which require a great deal of data or, as the sole alternative, through subjective estimates. The present article proposes an intermediate system based on simplified formulas for which it is only necessary to know three parameters, namely (in the majority of cases) the industrial activity, the flow and the Q.O.D. (Author)

  2. Endodontics Simplified

    OpenAIRE

    Kansal, Rohit; Talwar, Sangeeta; Yadav, Seema; Chaudhary, Sarika; Nawal, Ruchika

    2014-01-01

The preparation of the root canal system is essential for a successful outcome in root canal treatment. The development of rotary nickel titanium instruments is considered to be an important innovation in the field of endodontics. During the last few years, several new instrument systems have been introduced, but the quest for simplifying the endodontic instrumentation sequence has been ongoing for almost 20 years, resulting in more than 70 different engine-driven endodontic instrumentation system...

  3. Traveling Wave Resonance and Simplified Analysis Method for Long-Span Symmetrical Cable-Stayed Bridges under Seismic Traveling Wave Excitation

    Directory of Open Access Journals (Sweden)

    Zhong-ye Tian

    2014-01-01

The seismic responses of a long-span cable-stayed bridge under uniform excitation and traveling wave excitation in the longitudinal direction are, respectively, computed. The numerical results show that the bridge’s peak seismic responses vary significantly as the apparent wave velocity decreases. Therefore, the traveling wave effect must be considered in the seismic design of long-span bridges. The bridge’s peak seismic responses do not vary monotonically with the apparent wave velocity due to the traveling wave resonance. A new traveling wave excitation method that can simplify the multisupport excitation process into a two-support excitation process is developed.

  4. Assessing the accuracy of a simplified building energy simulation model using BESTEST : the case study of Brazilian regulation

    NARCIS (Netherlands)

    Melo, A.P.; Cóstola, D.; Lamberts, R.; Hensen, J.L.M.

    2012-01-01

    This paper reports the use of an internationally recognized validation and diagnostics procedure to test the fidelity of a simplified calculation method. The case study is the simplified model for calculation of energy performance of building envelopes, introduced by the Brazilian regulation for

  5. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
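
    As a hedged illustration of the first approach (discretising the state space and filtering with an HMM), the sketch below runs a grid-based forward filter for a theta-logistic state-space model on log-abundance. All parameter values are illustrative and are not taken from Wang (2007) or the benchmark study.

```python
import numpy as np
from scipy.stats import norm

# Theta-logistic state-space model on log-abundance x:
#   x[t+1] = x[t] + r0 * (1 - (exp(x[t]) / K) ** theta) + process noise
#   y[t]   = x[t] + observation noise
r0, K, theta, sig_proc, sig_obs = 0.3, 1000.0, 1.2, 0.1, 0.2

def step_mean(x):
    return x + r0 * (1.0 - (np.exp(x) / K) ** theta)

# Simulate data (illustrative only).
rng = np.random.default_rng(2)
T = 100
x = np.empty(T); x[0] = np.log(200.0)
for t in range(1, T):
    x[t] = step_mean(x[t - 1]) + rng.normal(0, sig_proc)
y = x + rng.normal(0, sig_obs, size=T)

# Discretise the state space into a finite grid -> hidden Markov model.
grid = np.linspace(np.log(50), np.log(2000), 200)
trans = norm.pdf(grid[None, :], loc=step_mean(grid)[:, None], scale=sig_proc)
trans /= trans.sum(axis=1, keepdims=True)          # row-stochastic transition matrix

# Forward (filtering) recursion.
p = np.full(grid.size, 1.0 / grid.size)            # flat prior on the grid
x_filtered = np.empty(T)
for t in range(T):
    lik = norm.pdf(y[t], loc=grid, scale=sig_obs)
    p = p * lik if t == 0 else (p @ trans) * lik
    p /= p.sum()
    x_filtered[t] = p @ grid                       # posterior mean of log-abundance

print("RMSE of filtered state:", np.sqrt(np.mean((x_filtered - x) ** 2)))
```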

  6. A simplified method to determine the growth potential in orthodontic patients

    Directory of Open Access Journals (Sweden)

    Gladia Toledo Mayarí

    2010-06-01

A cross-sectional technological innovation study was carried out to present a simplified method for determining the growth potential of patients requiring orthodontic treatment, in a sample of 150 patients aged 8 to 16 years admitted to the Orthodontics Clinic of the Faculty of Stomatology of Havana between 2004 and 2006. Each patient underwent a radiograph of the left hand and, for the first time in Cuba, three methods of assessing growth potential were studied in the same sample (the TW2 method, the Grave and Brown method, and determination of the maturation stages of the middle phalanx of the third finger). Once these were determined, the correlation and concordance among them were calculated. High correlation coefficients (females rho = 0.888 and males rho = 0.921) and high concordance (females Kappa = 1.000 and males Kappa = 0.964) were found. It was concluded that the growth potential of orthodontic patients can be assessed with a radiograph of the middle phalanx of the third finger of the left hand, which constitutes a useful simplified evaluation method.

  7. Simplified methods for in vivo measurement of acetylcholinesterase activity in rodent brain

    Energy Technology Data Exchange (ETDEWEB)

    Kilbourn, Michael R. E-mail: mkilbour@umich.edu; Sherman, Phillip S.; Snyder, Scott E

    1999-07-01

    Simplified methods for in vivo studies of acetylcholinesterase (AChE) activity in rodent brain were evaluated using N-[{sup 11}C]methylpiperidinyl propionate ([{sup 11}C]PMP) as an enzyme substrate. Regional mouse brain distributions were determined at 1 min (representing initial brain uptake) and 30 min (representing trapped product) after intravenous [{sup 11}C]PMP administration. Single time point tissue concentrations (percent injected dose/gram at 30 min), tissue concentration ratios (striatum/cerebellum and striatum/cortex ratios at 30 min), and regional tissue retention fractions (defined as percent injected dose 30 min/percent injected dose 1 min) were evaluated as measures of AChE enzymatic activity in mouse brain. Studies were carried out in control animals and after dosing with phenserine, a selective centrally active AChE inhibitor; neostigmine, a peripheral cholinesterase inhibitor; and a combination of the two drugs. In control and phenserine-treated animals, absolute tissue concentrations and regional retention fractions provide good measures of dose-dependent inhibition of brain AChE; tissue concentration ratios, however, provide erroneous conclusions. Peripheral inhibition of cholinesterases, which changes the blood pharmacokinetics of the radiotracer, diminishes the sensitivity of all measures to detect changes in central inhibition of the enzyme. We conclude that certain simple measures of AChE hydrolysis rates for [{sup 11}C]PMP are suitable for studies where alterations of the peripheral blood metabolism of the tracer are kept to a minimum.

  8. Thermodynamic method for obtaining the solubilities of complex medium-sized chemicals in pure and mixed solvents

    DEFF Research Database (Denmark)

    Abildskov, Jens; O'Connell, J.P.

    2005-01-01

This paper extends our previous simplified approach to using group contribution methods and limited data to determine differences in solubility of sparingly soluble complex chemicals as the solvent is changed. New applications include estimating temperature dependence and the effect of adding cosolvents. Though we present no new solution theory, the paper shows an especially efficient use of thermodynamic models for solvent and cosolvent selection for product formulations. Examples and discussion of applications are given. (c) 2004 Elsevier B.V. All rights reserved.

  9. Simplified design of flexible expansion anchored plates for nuclear structures

    International Nuclear Information System (INIS)

Mehta, N.K.; Hingorani, N.V.; Longlais, T.G. (Sargent and Lundy, Chicago, IL)

    1984-01-01

    In nuclear power plant construction, expansion anchored plates are used to support pipe, cable tray and HVAC duct hangers, and various structural elements. The expansion anchored plates provide flexibility in the installation of field-routed lines where cast-in-place embedments are not available. General design requirements for expansion anchored plate assemblies are given in ACI 349, Appendix B (1). The manufacturers recommend installation procedures for their products. Recent field testing in response to NRC Bulletin 79-02 (2) indicates that anchors, installed in accordance with manufacturer's recommended procedures, perform satisfactorily under static and dynamic loading conditions. Finite element analysis is a useful tool to correctly analyze the expansion anchored plates subject to axial tension and biaxial moments, but it becomes expensive and time-consuming to apply this tool for a large number of plates. It is, therefore, advantageous to use a simplified method, even though it may be more conservative as compared to the exact method of analysis. This paper presents a design method referred to as the modified rigid plate analysis approach to simplify both the initial design and the review of as-built conditions

  10. Diagnosis of vertebral fractures in children: is a simplified algorithm-based qualitative technique reliable?

    Energy Technology Data Exchange (ETDEWEB)

Adiotomre, E. [Sheffield Teaching Hospitals NHS Foundation Trust UK, Radiology Department, Sheffield (United Kingdom); Sheffield Children's NHS Foundation Trust, Radiology Department, Sheffield (United Kingdom); Summers, L.; Digby, M. [University of Sheffield UK, Sheffield Medical School, Sheffield (United Kingdom); Allison, A.; Walters, S.J. [University of Sheffield UK, School of Health and Related Research, Sheffield (United Kingdom); Broadley, P.; Lang, I. [Sheffield Children's NHS Foundation Trust, Radiology Department, Sheffield (United Kingdom); Offiah, A.C. [Sheffield Children's NHS Foundation Trust, Radiology Department, Sheffield (United Kingdom); University of Sheffield UK, Academic Unit of Child Health, Sheffield (United Kingdom)

    2016-05-15

Identification of osteoporotic vertebral fractures allows treatment opportunity reducing future risk. There is no agreed standardised method for diagnosing paediatric vertebral fractures. To evaluate the precision of a modified adult algorithm-based qualitative (ABQ) technique, applicable to children with primary or secondary osteoporosis. Three radiologists independently assessed lateral spine radiographs of 50 children with suspected reduction in bone mineral density using a modified ABQ scoring system and, following simplification to include only clinically relevant parameters, a simplified ABQ score. A final consensus of all observers using simplified ABQ was performed as a reference standard for fracture characterisation. Kappa was calculated for interobserver agreement of the components of both scoring systems and intraobserver agreement of simplified ABQ based on a second read of 29 randomly selected images. Interobserver Kappa for modified ABQ scoring for fracture detection, severity and shape ranged from 0.34 to 0.49. Kappa for abnormal endplate and position assessment was 0.27 to 0.38. Inter- and intraobserver Kappa for simplified ABQ scoring for fracture detection and grade ranged from 0.37 to 0.46 and 0.45 to 0.56, respectively. Inter- and intraobserver Kappa for affected endplate ranged from 0.31 to 0.41 and 0.45 to 0.51, respectively. Subjectively, observers felt simplified ABQ was easier and less time-consuming. Observer reliability of modified and simplified ABQ was similar, with slight to moderate agreement for fracture detection and grade/severity. Due to subjective preference for simplified ABQ, we suggest its use as a semi-objective measure of diagnosing paediatric vertebral fractures. (orig.)

  11. Diagnosis of vertebral fractures in children: is a simplified algorithm-based qualitative technique reliable?

    International Nuclear Information System (INIS)

    Adiotomre, E.; Summers, L.; Digby, M.; Allison, A.; Walters, S.J.; Broadley, P.; Lang, I.; Offiah, A.C.

    2016-01-01

Identification of osteoporotic vertebral fractures allows treatment opportunity reducing future risk. There is no agreed standardised method for diagnosing paediatric vertebral fractures. To evaluate the precision of a modified adult algorithm-based qualitative (ABQ) technique, applicable to children with primary or secondary osteoporosis. Three radiologists independently assessed lateral spine radiographs of 50 children with suspected reduction in bone mineral density using a modified ABQ scoring system and, following simplification to include only clinically relevant parameters, a simplified ABQ score. A final consensus of all observers using simplified ABQ was performed as a reference standard for fracture characterisation. Kappa was calculated for interobserver agreement of the components of both scoring systems and intraobserver agreement of simplified ABQ based on a second read of 29 randomly selected images. Interobserver Kappa for modified ABQ scoring for fracture detection, severity and shape ranged from 0.34 to 0.49. Kappa for abnormal endplate and position assessment was 0.27 to 0.38. Inter- and intraobserver Kappa for simplified ABQ scoring for fracture detection and grade ranged from 0.37 to 0.46 and 0.45 to 0.56, respectively. Inter- and intraobserver Kappa for affected endplate ranged from 0.31 to 0.41 and 0.45 to 0.51, respectively. Subjectively, observers felt simplified ABQ was easier and less time-consuming. Observer reliability of modified and simplified ABQ was similar, with slight to moderate agreement for fracture detection and grade/severity. Due to subjective preference for simplified ABQ, we suggest its use as a semi-objective measure of diagnosing paediatric vertebral fractures. (orig.)

  12. Utilization of handheld computing to simplify compliance

    International Nuclear Information System (INIS)

    Galvin, G.; Rasmussen, J.; Haines, A.

    2008-01-01

    Monitoring job site performance and building a continually improving organization is an ongoing challenge for operators of process and power generation facilities. Stakeholders need to accurately capture records of quality and safety compliance, job progress, and operational experiences (OPEX). This paper explores the use of technology-enabled processes as a means for simplifying compliance to quality, safety, administrative, maintenance and operations activities. The discussion will explore a number of emerging technologies and their application to simplifying task execution and process compliance. This paper will further discuss methodologies to further refine processes through trending improvements in compliance and continually optimizing and simplifying through the use of technology. (author)

  13. Development of a simplified and dynamic method for double glazing façade with night insulation and validated by full-scale façade element

    DEFF Research Database (Denmark)

    Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per

    2013-01-01

The study aims to develop a simplified calculation method to simulate the performance of a double glazing façade with night insulation. This paper describes the method to calculate the thermal properties (U-value) and comfort performance (internal surface temperature of the glazing) of the double glazing façade; the performance with night insulation is calculated and compared with that of the façade without the night insulation. Based on standards EN 410 and EN 673, the method takes the thermal mass of the glazing and the infiltration between the insulation layer and glazing into account. Furthermore it is capable of implementing whole...

  14. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
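
    The bin-mode and MAP details of the proposed algorithms are not reproduced here; the sketch below shows the standard ML-EM update for a linear Poisson emission model, which is the baseline that accelerated and modified EM algorithms of this kind extend. The system matrix and data are random synthetic placeholders.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Standard ML-EM iteration for a Poisson emission model y ~ Poisson(A @ lam).

    A : system matrix (n_bins x n_pixels) of detection probabilities
    y : measured counts per detector bin
    Returns the estimated emission image lam (n_pixels,).
    """
    sens = A.sum(axis=0)                      # sensitivity of each image pixel
    lam = np.ones(A.shape[1])
    for _ in range(n_iter):
        expected = A @ lam + eps
        lam *= (A.T @ (y / expected)) / (sens + eps)
    return lam

# Tiny synthetic example with a random system matrix.
rng = np.random.default_rng(3)
A = rng.uniform(0, 1, size=(500, 64))
true_img = rng.uniform(0, 10, size=64)
y = rng.poisson(A @ true_img)
est = mlem(A, y, n_iter=200)
print("correlation with truth:", np.corrcoef(est, true_img)[0, 1])
```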

  15. Online Internal Temperature Estimation for Lithium-Ion Batteries Based on Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jinlei Sun

    2015-05-01

The battery internal temperature estimation is important for thermal safety in applications, because the internal temperature is hard to measure directly. In this work, an online internal temperature estimation method based on a simplified thermal model using a Kalman filter is proposed. As an improvement, the influences of entropy change and overpotential on heat generation are analyzed quantitatively. The model parameters are identified through a current pulse test. Charge/discharge experiments under different current rates are carried out on the same battery to verify the estimation results. The internal and surface temperatures are measured with thermocouples for result validation and model construction. The accuracy of the estimated result is validated with a maximum estimation error of around 1 K.
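
    A minimal sketch of the idea, assuming a two-node (core/surface) lumped thermal model with the surface temperature measured and the core temperature estimated by a linear Kalman filter, is given below. The thermal parameters, noise covariances and sample values are placeholders, not the identified values from the paper, and the entropy-change/overpotential heat-generation refinement is not modelled.

```python
import numpy as np

# Two-node lumped thermal model (illustrative parameters, not identified values):
#   Cc*dTc/dt = Q + (Ts - Tc)/Rc                 core node
#   Cs*dTs/dt = (Tc - Ts)/Rc + (Ta - Ts)/Ru      surface node
Cc, Cs, Rc, Ru, dt = 60.0, 30.0, 2.0, 5.0, 1.0

A = np.eye(2) + dt * np.array([[-1 / (Rc * Cc),              1 / (Rc * Cc)],
                               [ 1 / (Rc * Cs), -(1 / Rc + 1 / Ru) / Cs]])
B = dt * np.array([[1 / Cc, 0.0],
                   [0.0,    1 / (Ru * Cs)]])     # inputs: [heat generation Q, ambient Ta]
H = np.array([[0.0, 1.0]])                       # only the surface temperature is measured
Q_cov = np.diag([1e-3, 1e-3])
R_cov = np.array([[0.05]])

def kf_step(x, P, u, z):
    """One predict/update step of a linear Kalman filter."""
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q_cov
    S = H @ P_pred @ H.T + R_cov
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Usage: estimate the core temperature from surface samples during a discharge.
x, P = np.array([25.0, 25.0]), np.eye(2)
for z_surf, q_gen in [(25.3, 5.0), (25.9, 5.0), (26.6, 5.0)]:   # hypothetical samples
    x, P = kf_step(x, P, u=np.array([q_gen, 25.0]), z=np.array([z_surf]))
    print(f"estimated core temperature: {x[0]:.2f} degC")
```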

  16. Methods for the estimation of uranium ore reserves

    International Nuclear Information System (INIS)

    1985-01-01

    The Manual is designed mainly to provide assistance in uranium ore reserve estimation methods to mining engineers and geologists with limited experience in estimating reserves, especially to those working in developing countries. This Manual deals with the general principles of evaluation of metalliferous deposits but also takes into account the radioactivity of uranium ores. The methods presented have been generally accepted in the international uranium industry

  17. Estimation of technetium 99m mercaptoacetyltriglycine plasma clearance by use of one single plasma sample

    International Nuclear Information System (INIS)

    Mueller-Suur, R.; Magnusson, G.; Karolinska Inst., Stockholm; Bois-Svensson, I.; Jansson, B.

    1991-01-01

    Recent studies have shown that technetium 99m mercaptoacetyltriglycine (MAG-3) is a suitable replacement for iodine 131 or 123 hippurate in gamma-camera renography. Also, the determination of its clearance is of value, since it correlates well with that of hippurate and thus may be an indirect measure of renal plasma flow. In order to simplify the clearance method we developed formulas for the estimation of plasma clearance of MAG-3 based on a single plasma sample and compared them with the multiple sample method based on 7 plasma samples. The correlation to effective renal plasma flow (ERPF) (according to Tauxe's method, using iodine 123 hippurate), which ranged from 75 to 654 ml/min per 1.73 m 2 , was determined in these patients. Using the developed regression equations the error of estimate for the simplified clearance method was acceptably low (18-14 ml/min), when the single plasma sample was taken 44-64 min post-injection. Formulas for different sampling times at 44, 48, 52, 56, 60 and 64 min are given, and we recommend 60 min as optimal, with an error of estimate of 15.5 ml/min. The correlation between the MAG-3 clearances and ERPF was high (r=0.90). Since normal values for MAG-3 clearance are not yet available, transformation to estimated ERPF values by the regression equation (ERPF=1.86xC MAG-3 +4.6) could be of clinical value in order to compare it with the normal values for ERPF given in the literature. (orig.)
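
    The single-sample clearance formulas themselves depend on the sampling time and are not quoted in the record, but the regression given above can be applied directly; the clearance value in the example below is a placeholder.

```python
def erpf_from_mag3_clearance(c_mag3_ml_min):
    """Convert a MAG-3 plasma clearance to an estimated ERPF using the
    regression quoted in the abstract: ERPF = 1.86 * C_MAG-3 + 4.6 (ml/min)."""
    return 1.86 * c_mag3_ml_min + 4.6

# Example with a hypothetical single-sample MAG-3 clearance of 180 ml/min:
print(f"estimated ERPF ~ {erpf_from_mag3_clearance(180.0):.0f} ml/min")
```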

  18. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan; Genton, Marc G.

    2010-01-01

    which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n

  19. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects based on very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects, which directly affect the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach for obtaining a software reliability value is proposed in this paper.
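
    The Bayesian, covariate-based schemes of the paper are not reproduced here; as background, the sketch below fits a basic NHPP software reliability growth model (Goel-Okumoto mean value function) to cumulative defect counts and evaluates the reliability over a mission time. The test data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto mean value function of an NHPP software reliability growth model.
def m(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test data: cumulative defects found after each week of testing.
weeks = np.arange(1, 13, dtype=float)
cum_defects = np.array([4, 7, 10, 12, 14, 15, 17, 18, 18, 19, 19, 20], dtype=float)

(a_hat, b_hat), _ = curve_fit(m, weeks, cum_defects, p0=[25.0, 0.1])

t_now, mission = weeks[-1], 4.0      # reliability over the next 4 weeks of operation
reliability = np.exp(-(m(t_now + mission, a_hat, b_hat) - m(t_now, a_hat, b_hat)))
remaining = a_hat - m(t_now, a_hat, b_hat)
print(f"a={a_hat:.1f}, b={b_hat:.2f}, expected remaining defects={remaining:.1f}, "
      f"R({mission:.0f} wk)={reliability:.3f}")
```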

  20. Uncertain quantities in estimating radiation exposure from former landfill sites: groundwater pathway

    International Nuclear Information System (INIS)

    Kistinger, S.

    2005-01-01

With regard to the title of the closed meeting, "Realistic determination of radiation exposure", we state that generic estimates can by definition never be realistic, but that it is their purpose to be conservative. However this still leaves us with the question of how conservative a generic dose estimate must be and how the existing variability or indeterminacy of reality should be taken into account. This paper presents various methods for dealing with this indeterminacy in generic dose estimates. The example used for this purpose is a simplified model for the determination of the potential radiation exposure caused by a former landfill site via the water pathway

  1. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2017-01-15

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  2. Investigation on the optimal simplified model of BIW structure using FEM

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Shojaeefard

At the conceptual phase of vehicle design, engineers need simplified models to examine structural and functional characteristics and to apply custom modifications in pursuit of the best design. Using a detailed finite-element (FE) model of the vehicle at this early stage can be informative, but it is excessively time-consuming and expensive. Engineers therefore use trade-off simplified models of the body-in-white (BIW), composed of only the most decisive structural elements, which do not require extensive prior knowledge of the vehicle dimensions and constitutive materials. The appropriate extent and type of simplification, however, remain ambiguous: during simplification one must decide which approach to take and which body elements to retain in order to optimize cost and time while providing acceptable accuracy. Although different approaches for reducing modelling time and achieving optimal BIW designs have been proposed in the literature, a comparison between simplification methods that identifies the best models, which is the main focus of this research, has not yet been made. In this paper, an industrial sedan is simplified through four different FE models, each of which examines the validity of a different extent of simplification. Bending and torsional stiffness are obtained for all models under boundary conditions similar to the experimental tests, and the values are compared with target values from experimental tests to validate the FE modelling. Finally, the results are examined and, taking efficiency and accuracy into account, the best trade-off simplified model is presented.

  3. Development of Simplified and Dynamic Model for Double Glazing Unit Validated with Full-Scale Facade Element

    DEFF Research Database (Denmark)

    Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per

    2012-01-01

The project aims at developing simplified calculation methods for the different features that influence energy demand and indoor environment behind “intelligent” glazed façades. This paper describes how to set up a simplified model to calculate the thermal and solar properties (U and g value) together with comfort performance (internal surface temperature of the glazing) of a double glazing unit. The double glazing unit is defined as a 1D model with nodes representing different layers of material. Several models with different numbers of nodes and positions of these nodes are compared and verified in order to find a simplified method which can calculate the performance as accurately as possible. The calculated performance in terms of internal surface temperature is verified with experimental data collected in a full-scale façade element test facility at Aalborg University (DK). The advantage...
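
    A steady-state series-resistance sketch of the kind of layered nodal calculation described is shown below; it does not implement EN 410/EN 673 or the dynamic thermal-mass treatment, and the surface and cavity resistances are typical illustrative values only.

```python
# Steady-state series-resistance sketch of a double glazing unit:
# external surface film -> outer pane -> gas cavity -> inner pane -> internal surface film.
R_SE, R_SI = 0.04, 0.13          # external / internal surface resistances (m2K/W), typical values
R_PANE = 0.004 / 1.0             # 4 mm glass pane, conductivity ~1 W/mK (assumed)
R_CAVITY = 0.18                  # combined radiative/convective resistance of the gas space (assumed)

def u_value_and_surface_temp(t_in=20.0, t_out=0.0):
    """Return the U-value and the internal glass surface temperature for the
    simple layered model above (steady state, no solar gain)."""
    r_total = R_SE + R_PANE + R_CAVITY + R_PANE + R_SI
    u = 1.0 / r_total
    q = u * (t_in - t_out)                    # heat flux, W/m2
    t_surface_internal = t_in - q * R_SI      # temperature drop across the internal film
    return u, t_surface_internal

u, ts = u_value_and_surface_temp()
print(f"U ~ {u:.2f} W/m2K, internal surface temperature ~ {ts:.1f} degC")
```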

  4. Evaluation and reliability of bone histological age estimation methods

    African Journals Online (AJOL)

    Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...

  5. Simplified human thermoregulatory model for designing wearable thermoelectric devices

    Science.gov (United States)

    Wijethunge, Dimuthu; Kim, Donggyu; Kim, Woochul

    2018-02-01

Research on wearable and implantable devices has become popular with the strong market demand. A precise understanding of the thermal properties of human skin, which are not constant values but vary depending on the ambient conditions, is required for the development of such devices. In this paper, we present a simplified human thermoregulatory model for accurately estimating the thermal properties of the skin without applying rigorous calculations. The proposed model considers a variable blood flow rate through the skin, evaporation functions, and a variable convection heat transfer from the skin surface. In addition, wearable thermoelectric generation (TEG) and refrigeration devices were simulated. We found that deviations of 10-60% can result when estimating TEG performance without considering a human thermoregulatory model, owing to the fact that the thermal resistance of human skin adapts to the ambient conditions. The simplicity of the modeling procedure presented in this work could be beneficial for optimizing and predicting the performance of any application that is directly coupled with the skin's thermal properties.

  6. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    Science.gov (United States)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

From direct observations and facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method named higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain’s motivational circuit. Thus, the proposed method can serve as a novel one for efficiently estimating human affective states.
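
    The specific features and polynomial orders of the study are not given in the record; the sketch below only illustrates the general technique: expand several input variables into higher-order polynomial terms and fit them by ordinary least squares. The inputs and target are synthetic.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical inputs: a few skin-conductance features per trial
# (e.g. amplitude, latency, recovery slope); target: rated valence.
rng = np.random.default_rng(4)
X = rng.normal(size=(120, 3))
valence = (0.8 * X[:, 0] - 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 0] * X[:, 2]
           + rng.normal(scale=0.1, size=120))

# Higher-order multivariable polynomial regression: degree-3 expansion + OLS.
model = make_pipeline(PolynomialFeatures(degree=3, include_bias=True),
                      LinearRegression())
model.fit(X, valence)
r = np.corrcoef(model.predict(X), valence)[0, 1]
print(f"in-sample correlation: {r:.3f}")
```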

  7. Qualitative methods in theoretical physics

    CERN Document Server

    Maslov, Dmitrii

    2018-01-01

    This book comprises a set of tools which allow researchers and students to arrive at a qualitatively correct answer without undertaking lengthy calculations. In general, Qualitative Methods in Theoretical Physics is about combining approximate mathematical methods with fundamental principles of physics: conservation laws and symmetries. Readers will learn how to simplify problems, how to estimate results, and how to apply symmetry arguments and conduct dimensional analysis. A comprehensive problem set is included. The book will appeal to a wide range of students and researchers.

  8. Development and sensitivity study of a simplified and dynamic method for double glazing facade and verified by a full-scale façade element

    DEFF Research Database (Denmark)

    Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per

    2014-01-01

The research aims to develop a simplified calculation method for a double glazing facade to calculate its thermal and solar properties (U and g value) together with comfort performance (internal surface temperature of the glazing). The double glazing is defined as a 1D model with nodes representing different layers of material, taking the thermal mass of the glazing into account. In addition, the angle and spectral dependency of the solar characteristics is also considered during the calculation. By using the method, it is possible to calculate whole year performance at different time steps, which makes it a time economical and accurate...

  9. 48 CFR 1552.232-74 - Payments-simplified acquisition procedures financing.

    Science.gov (United States)

    2010-10-01

    ... acquisition procedures financing. 1552.232-74 Section 1552.232-74 Federal Acquisition Regulations System... Provisions and Clauses 1552.232-74 Payments—simplified acquisition procedures financing. As prescribed in... acquisition procedures financing. Payments—Simplified Acquisition Procedures Financing (JUN 2006) Simplified...

  10. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    Science.gov (United States)

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved cosine similarity measures between SNSs were introduced based on cosine function. Then, we compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality for overcoming some shortcomings of existing cosine similarity measures of SNSs in some cases. In the medical diagnosis method, we can find a proper diagnosis by the cosine similarity measures between the symptoms and considered diseases which are represented by SNSs. Then, the medical diagnosis method based on the improved cosine similarity measures was applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Two numerical examples all demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. By two medical diagnoses problems, the medical diagnoses using various similarity measures of SNSs indicated the identical diagnosis results and demonstrated the effectiveness and rationality of the diagnosis method proposed in this paper. The improved cosine measures of SNSs based on cosine

  11. Fading Kalman filter-based real-time state of charge estimation in LiFePO_4 battery-powered electric vehicles

    International Nuclear Information System (INIS)

    Lim, KaiChin; Bastawrous, Hany Ayad; Duong, Van-Huan; See, Khay Wai; Zhang, Peng; Dou, Shi Xue

    2016-01-01

    Highlights: • Real-time battery model parameters and SoC estimation with novel method is proposed. • Cascading filtering stages are used for parameters identification and SoC estimation. • Optimized fading Kalman filter is implemented for SoC estimation. • Accurate SoC estimation is validated in UDDS load profile experiment. • This approach is suitable for BMS in EV applications due to its simplicity. - Abstract: A novel online estimation technique for estimating the state of charge (SoC) of a lithium iron phosphate (LiFePO_4) battery has been developed. Based on a simplified model, the open circuit voltage (OCV) of the battery is estimated through two cascaded linear filtering stages. A recursive least squares filter is employed in the first stage to dynamically estimate the battery model parameters in real-time, and then, a fading Kalman filter (FKF) is used to estimate the OCV from these parameters. FKF can avoid the possibility of large estimation errors, which may occur with a conventional Kalman filter, due to its capability to compensate any modeling error through a fading factor. By optimizing the value of the fading factor in the set of recursion equations of FKF with genetic algorithms, the errors in estimating the battery’s SoC in urban dynamometer driving schedules-based experiments and real vehicle driving cycle experiments were below 3% compared to more than 9% in the case of using an ordinary Kalman filter. The proposed method with its simplified model provides the simplicity and feasibility required for real-time application with highly accurate SoC estimation.

  12. Study on Top-Down Estimation Method of Software Project Planning

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun-guang; L(U) Ting-jie; ZHAO Yu-mei

    2006-01-01

This paper studies a new software project planning method using actual project data in order to make software project plans more effective. From the perspective of system theory, our new method regards a software project plan as an associative unit for study. During top-down estimation of a software project, the Program Evaluation and Review Technique (PERT) method and the analogy method are combined to estimate its size; effort estimates and specific schedules are then obtained according to the distributions of the phase effort. This allows a set of practical and feasible planning methods to be constructed. Actual data indicate that this set of methods can lead to effective software project planning.
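
    As a small illustration of the top-down step, the sketch below combines a PERT three-point size estimate with an assumed productivity figure and placeholder phase-effort shares; none of these numbers come from the paper.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point estimate and its standard deviation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6.0
    std = (pessimistic - optimistic) / 6.0
    return mean, std

# Top-down example: size estimated in KLOC, effort split by phase (placeholder shares).
size_kloc, size_std = pert_estimate(18, 25, 40)
effort_pm = size_kloc * 2.5                         # assumed productivity: 2.5 person-months/KLOC
phase_share = {"requirements": 0.15, "design": 0.25, "coding": 0.35, "testing": 0.25}
for phase, share in phase_share.items():
    print(f"{phase:>12}: {share * effort_pm:5.1f} person-months")
print(f"size ~ {size_kloc:.1f} KLOC (sigma {size_std:.1f})")
```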

  13. A simplified method for generation of pseudo natural colours from colour infrared aerial photos

    DEFF Research Database (Denmark)

    Knudsen, Thomas; Olsen, Brian Pilemann

... mapping methods. The method presented is a dramatic simplification of a recently published method, going from a 7-step to a 2-step procedure. The first step is a classification of the input image into 4 domains, based on simple thresholding of a vegetation index and a saturation measure for each pixel. In the second step the blue colour component is estimated using tailored models for each domain. The green and red colour components are taken directly from the CIR photo. The visual impression of the results from the 2-step method is only slightly inferior to the original 7-step method. The implementation, however...

  14. A simplified digital lock-in amplifier for the scanning grating spectrometer.

    Science.gov (United States)

    Wang, Jingru; Wang, Zhihong; Ji, Xufei; Liu, Jie; Liu, Guangda

    2017-02-01

For the common measurement and control system of a scanning grating spectrometer, the use of an analog lock-in amplifier requires complex circuitry and sophisticated debugging, whereas the use of a digital lock-in amplifier places a high demand on calculation capability and storage space. In this paper, a simplified digital lock-in amplifier based on averaging the absolute values within a complete period is presented and applied to a scanning grating spectrometer. The simplified digital lock-in amplifier was implemented on a low-cost microcontroller without multipliers, and dispensed with the reference signal and any specific configuration of the sampling frequency. Two positive zero-crossing detections were used to lock the phase of the measured signal. However, measurement method errors were introduced by the following factors: frequency fluctuation, the sampling interval, and the integer restriction on the sampling number. The theoretical calculation and experimental results of the signal-to-noise ratio of the proposed measurement method were 2055 and 2403, respectively.
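
    A minimal sketch of the core measurement idea is given below: average the absolute sample values over an integer number of periods bounded by positive zero crossings and scale by pi/2 (since mean|sin| = 2A/pi). The microcontroller implementation and error analysis of the paper are not reproduced, and the test signal is synthetic.

```python
import numpy as np

def amplitude_by_rectified_mean(samples):
    """Estimate the amplitude of a (roughly) sinusoidal signal by averaging
    absolute values over complete periods bounded by positive zero crossings.
    For a sine of amplitude A, mean(|x|) = 2A/pi, so A = (pi/2) * mean(|x|)."""
    s = np.asarray(samples, dtype=float)
    rising = np.where((s[:-1] < 0) & (s[1:] >= 0))[0] + 1   # positive zero crossings
    if len(rising) < 2:
        raise ValueError("need at least one complete period")
    segment = s[rising[0]:rising[-1]]                        # integer number of periods
    return (np.pi / 2.0) * np.mean(np.abs(segment))

# Synthetic check: chopped optical signal at ~317 Hz with noise, sampled at 10 kHz.
fs, f0, A = 10_000.0, 317.0, 0.8
t = np.arange(0, 0.2, 1 / fs)
x = A * np.sin(2 * np.pi * f0 * t) + np.random.default_rng(5).normal(0, 0.05, t.size)
print(f"estimated amplitude: {amplitude_by_rectified_mean(x):.3f} (true {A})")
```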

  15. Developing simplified Regional Potential Evapotranspiration (PET ...

    African Journals Online (AJOL)

A regional Potential Evapotranspiration (PET) estimation method was developed to estimate the potential evapotranspiration (reference evapotranspiration) over the Abbay Basin as a function of the basin's maximum and minimum temperature, modulated by site-specific elevation data. The method is intended to estimate PET in ...

  16. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback: The PCN Incubation-Panarctic Thermal (PInc-PanTher) Scaling Approach

    Science.gov (United States)

    Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.

    2015-12-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
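
    A hedged sketch of the core bookkeeping (no decomposition while frozen; after thaw, three pools decay first-order at temperature-scaled rates) is given below. The pool fractions, base rates, Q10 and temperature series are placeholders, not the incubation-fitted values of PInc-PanTher.

```python
import numpy as np

# Three-pool first-order decomposition with Q10 temperature scaling.
# Pool fractions and base rates (1/yr at the reference temperature) are illustrative.
POOL_FRACTIONS = np.array([0.02, 0.60, 0.38])       # fast, slow, passive
BASE_RATES = np.array([2.0, 0.05, 0.001])           # 1/yr at T_REF
T_REF, Q10 = 5.0, 2.5

def carbon_release(c_stock_kg, soil_temp_by_year, thaw_year):
    """Cumulative C released (kg) from a thawing stock over an annual temperature series.

    Pools do not decompose before `thaw_year` or while the soil is frozen; afterwards
    each pool decays at k_i * Q10**((T - T_REF)/10) per year.
    """
    pools = c_stock_kg * POOL_FRACTIONS
    released = 0.0
    for year, temp in enumerate(soil_temp_by_year):
        if year < thaw_year or temp <= 0.0:          # frozen: no decomposition
            continue
        k = BASE_RATES * Q10 ** ((temp - T_REF) / 10.0)
        loss = pools * (1.0 - np.exp(-k))            # one-year first-order loss
        pools = pools - loss
        released += loss.sum()
    return released

# Example: 50 kg C/m2 stock, thaw after year 30, gradual warming of the soil column.
temps = np.linspace(-2.0, 6.0, 90)                   # hypothetical mean annual soil temperatures
print(f"released by end of series: {carbon_release(50.0, temps, thaw_year=30):.1f} kg C/m2")
```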

  17. A simplified model of aerosol removal by containment sprays

    Energy Technology Data Exchange (ETDEWEB)

    Powers, D.A. (Sandia National Labs., Albuquerque, NM (United States)); Burson, S.B. (Nuclear Regulatory Commission, Washington, DC (United States). Div. of Safety Issue Resolution)

    1993-06-01

Spray systems in nuclear reactor containments are described. The scrubbing of aerosols from containment atmospheres by spray droplets is discussed. Uncertainties are identified in the prediction of spray performance when the sprays are used as a means for decontaminating containment atmospheres. A mechanistic model based on current knowledge of the physical phenomena involved in spray performance is developed. With this model, a quantitative uncertainty analysis of spray performance is conducted using a Monte Carlo method to sample 20 uncertain quantities related to phenomena of spray droplet behavior as well as the initial and boundary conditions expected to be associated with severe reactor accidents. Results of the uncertainty analysis are used to construct simplified expressions for spray decontamination coefficients. Two variables that affect aerosol capture by water droplets are not treated as uncertain; they are (1) "Q", spray water flux into the containment, and (2) "H", the total fall distance of spray droplets. The choice of values of these variables is left to the user since they are plant and accident specific. Also, they can usually be ascertained with some degree of certainty. The spray decontamination coefficients are found to be sufficiently dependent on the extent of decontamination that the fraction of the initial aerosol remaining in the atmosphere, m_f, is explicitly treated in the simplified expressions. The simplified expressions for the spray decontamination coefficient are given. Parametric values for these expressions are found for median, 10 percentile, and 90 percentile values in the uncertainty distribution for the spray decontamination coefficient. Examples are given to illustrate the utility of the simplified expressions to predict spray decontamination of an aerosol-laden atmosphere.

  18. Wafer Surface Charge Reversal as a Method of Simplifying Nanosphere Lithography for Reactive Ion Etch Texturing of Solar Cells

    Directory of Open Access Journals (Sweden)

    Daniel Inns

    2007-01-01

A simplified nanosphere lithography process has been developed which allows fast and low-waste masking of Si surfaces for subsequent reactive ion etching (RIE) texturing. Initially, a positive surface charge is applied to a wafer surface by dipping it in a solution of aluminum nitrate. Dipping the positively coated wafer into a solution of negatively charged silica beads (nanospheres) results in the spheres becoming electrostatically attracted to the wafer surface. These nanospheres form an etch mask for RIE. After RIE texturing, the reflection of the surface is reduced as effectively as with any other nanosphere lithography method, while the batch process used for masking is much faster, making it more industrially relevant.

  19. An improved method for estimating the frequency correlation function

    KAUST Repository

Chelli, Ali; Pätzold, Matthias

    2012-01-01

For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.

  20. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali

    2012-04-01

For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
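
    The kernel function proposed in the paper is not reproduced here; the sketch below implements the plain frequency-averaging estimator of the FCF, which is the baseline being improved, and reads a coherence bandwidth off it. The synthetic multipath channel is an assumption.

```python
import numpy as np

def fcf_frequency_averaging(H, df):
    """Estimate the frequency correlation function of a transfer function H(f)
    sampled on a uniform grid with spacing df, using frequency averaging:
        R(m*df) = mean_n H(f_n) * conj(H(f_n + m*df)).
    """
    N = len(H)
    R = np.array([np.mean(H[:N - m] * np.conj(H[m:])) for m in range(N // 2)])
    return R / np.abs(R[0]), np.arange(N // 2) * df

# Synthetic multipath transfer function: a sum of delayed paths with random gains.
rng = np.random.default_rng(6)
f = np.arange(0, 20e6, 50e3)                        # 20 MHz band, 50 kHz spacing
delays = np.array([0.0, 0.4e-6, 1.1e-6, 2.3e-6])    # path delays (s)
gains = rng.normal(size=4) + 1j * rng.normal(size=4)
H = (gains[None, :] * np.exp(-2j * np.pi * f[:, None] * delays[None, :])).sum(axis=1)

R, dfreq = fcf_frequency_averaging(H, df=50e3)
coh_bw = dfreq[np.argmax(np.abs(R) < 0.9)]          # 90% coherence bandwidth
print(f"estimated 90% coherence bandwidth ~ {coh_bw / 1e3:.0f} kHz")
```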

  1. A Simplified Technique for Evaluating Human "CCR5" Genetic Polymorphism

    Science.gov (United States)

    Falteisek, Lukáš; Cerný, Jan; Janštová, Vanda

    2013-01-01

    To involve students in thinking about the problem of AIDS (which is important in the view of nondecreasing infection rates), we established a practical lab using a simplified adaptation of Thomas's (2004) method to determine the polymorphism of HIV co-receptor CCR5 from students' own epithelial cells. CCR5 is a receptor involved in inflammatory…

  2. Methodology for estimating realistic responses of buildings and components under earthquake motion and its application

    International Nuclear Information System (INIS)

    Ebisawa, Katsumi; Abe, Kiyoharu; Kohno, Kunihiko; Nakamura, Hidetaka; Itoh, Mamoru.

    1996-11-01

    Failure probabilities of buildings and components under earthquake motion are estimated as conditional probabilities that their realistic responses exceed their capacities. Two methods for estimating their failure probabilities have already been developed. One is a detailed method developed in the Seismic Safety Margins Research Program of Lawrence Livermore National Laboratory in the U.S.A., which is called the 'SSMRP method'. The other is a simplified method proposed by Kennedy et al., which is called the 'Zion method'. The Zion method is sometimes called the 'response factor method'. The authors adopted the response factor method. In order to enhance the estimation accuracy of failure probabilities of buildings and components, however, a new methodology for improving the response factor method was proposed. Based on the improved method, response factors of buildings and components designed to the seismic design standard in Japan were estimated, and their realistic responses were also calculated. By using their realistic responses and capacities, the failure probabilities of a reactor building and relays were estimated. In order to identify the differences between the new method, the SSMRP method and the original response factor method, the failure probabilities estimated by these three methods were compared. A method similar to SSMRP was used instead of the original SSMRP to save time and labor. The viewpoints for selecting the methods to estimate failure probabilities of buildings and components were also proposed. (author). 55 refs
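
    The record gives no formulas, so the sketch below only illustrates the general response-factor idea: a realistic response obtained by scaling down the design response is compared against a capacity, with both treated as lognormal random variables. The medians, logarithmic standard deviations and the response factor are purely illustrative assumptions, not values from the report.

      import numpy as np
      from math import log, sqrt
      from statistics import NormalDist

      rng = np.random.default_rng(0)

      # Illustrative numbers only (not from the report).
      design_response = 1.8      # design-basis response of a component (g)
      response_factor = 1.5      # ratio of design response to median realistic response
      beta_response = 0.30       # logarithmic standard deviation of the realistic response
      median_capacity = 2.5      # median capacity (g)
      beta_capacity = 0.35       # logarithmic standard deviation of the capacity

      median_response = design_response / response_factor   # realistic (median) response

      # Conditional failure probability P(realistic response > capacity),
      # estimated by Monte Carlo sampling of two lognormal variables.
      n = 200_000
      response = rng.lognormal(np.log(median_response), beta_response, n)
      capacity = rng.lognormal(np.log(median_capacity), beta_capacity, n)
      print(f"Monte Carlo failure probability: {np.mean(response > capacity):.2e}")

      # Closed-form check for two lognormal variables.
      p_closed = NormalDist().cdf(log(median_response / median_capacity)
                                  / sqrt(beta_response**2 + beta_capacity**2))
      print(f"Closed-form failure probability: {p_closed:.2e}")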

  3. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  4. Cosmological helium production simplified

    International Nuclear Information System (INIS)

    Bernstein, J.; Brown, L.S.; Feinberg, G.

    1988-01-01

    We present a simplified model of helium synthesis in the early universe. The purpose of the model is to explain clearly the physical ideas relevant to the cosmological helium synthesis, in a manner that does not overlay these ideas with complex computer calculations. The model closely follows the standard calculation, except that it neglects the small effect of Fermi-Dirac statistics for the leptons. We also neglect the temperature difference between photons and neutrinos during the period in which neutrons and protons interconvert. These approximations allow us to express the neutron-proton conversion rates in a closed form, which agrees to 10% accuracy or better with the exact rates. Using these analytic expressions for the rates, we reduce the calculation of the neutron-proton ratio as a function of temperature to a simple numerical integral. We also estimate the effect of neutron decay on the helium abundance. Our result for this quantity agrees well with precise computer calculations. We use our semi-analytic formulas to determine how the predicted helium abundance varies with such parameters as the neutron life-time, the baryon to photon ratio, the number of neutrino species, and a possible electron-neutrino chemical potential. 19 refs., 1 fig., 1 tab

  5. Plant-available soil water capacity: estimation methods and implications

    Directory of Open Access Journals (Sweden)

    Bruno Montoani Silva

    2014-04-01

    Full Text Available The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in land-use planning. In a representative profile of the Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated by the water content at 6, 10, and 33 kPa and by the inflection point of the water retention curve, calculated by the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than by the other methods, and that even at the inflection point the estimates differed, according to the model used. With the WP4-T psychrometer, a significantly lower water content was found for the estimate of the permanent wilting point. We concluded that the estimation of the available water holding capacity is markedly influenced by the estimation method, which has to be taken into consideration because of the practical importance of this parameter.
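
    To make the inflection-point estimate concrete, the sketch below evaluates a van Genuchten retention curve at 6, 10 and 33 kPa and at the inflection point of the curve plotted against log suction (one common convention); the parameter values are illustrative and are not the Oxisol data of the study.

      # Illustrative van Genuchten parameters (not the Oxisol data of the study).
      theta_r, theta_s = 0.20, 0.60   # residual and saturated water content (m3/m3)
      alpha, n = 0.9, 1.6             # alpha in 1/kPa, shape parameter n (> 1)
      m = 1.0 - 1.0 / n

      def theta_vg(h_kpa):
          """Van Genuchten water content at matric suction h (kPa)."""
          return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h_kpa) ** n) ** m

      # Field-capacity candidates used in the record: fixed suctions ...
      for h in (6.0, 10.0, 33.0):
          print(f"theta at {h:4.0f} kPa : {theta_vg(h):.3f}")

      # ... and the inflection point of theta versus log(h), for which the van
      # Genuchten curve has a closed form: theta_i = theta_r + (theta_s - theta_r)
      # * (1 + 1/m)**(-m), at suction h_i = (1/alpha) * (1/m)**(1/n).
      theta_i = theta_r + (theta_s - theta_r) * (1.0 + 1.0 / m) ** (-m)
      h_i = (1.0 / alpha) * (1.0 / m) ** (1.0 / n)
      print(f"theta at inflection point ({h_i:.1f} kPa): {theta_i:.3f}")

      # Plant-available water capacity = field capacity minus wilting point (1500 kPa).
      print(f"available water (inflection - wilting): {theta_i - theta_vg(1500.0):.3f}")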

  6. Generic simplified simulation model for DFIG with active crowbar

    Energy Technology Data Exchange (ETDEWEB)

    Buendia, Francisco Jimenez [Gamesa Innovation and Technology, Sarriguren, Navarra (Spain). Technology Dept.; Barrasa Gordo, Borja [Assystem Iberia, Bilbao, Vizcaya (Spain)

    2012-07-01

    Simplified models for transient stability studies are a general requirement from transmission system operators to wind turbine (WTG) manufacturers. Those models must represent the performance of the WTGs in transient stability studies, mainly voltage dips originating from short circuits in the electrical network. Those models are implemented in simulation software such as PSS/E, DigSilent or PSLF. Those software platforms allow simulation of transients in large electrical networks with thousands of buses, generators and loads. The high complexity of the grid requires that the models inserted into the grid be simplified in order to allow the simulations to be executed as fast as possible. The development of a model which is simplified enough to be integrated in those complex grids and still represent the performance of a WTG is a challenge. The IEC TC88 working group has developed generic models for different types of generators, among others for WTGs using doubly fed induction generators (DFIG). This paper focuses on an extension of the models for DFIG WTGs developed in IEC in order to be able to represent the simplified model of a DFIG with an active crowbar, which is required to withstand voltage dips without disconnecting from the grid. This paper improves the current generic Type 3 model for DFIGs by adding a simplified version of the generator including crowbar functionality and a simplified version of the crowbar firing. In addition, this simplified model is validated by correlation with voltage dip field tests from a real wind turbine. (orig.)

  7. Simplified Dark Matter Models

    OpenAIRE

    Morgante, Enrico

    2018-01-01

    I review the construction of Simplified Models for Dark Matter searches. After discussing the philosophy and some simple examples, I turn to the theoretical consistency of these models and to the implications of their necessary extensions.

  8. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  9. Fusion rule estimation using vector space methods

    International Nuclear Information System (INIS)

    Rao, N.S.V.

    1997-01-01

    In a system of N sensors, the sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ ℝ according to an unknown probability distribution P(Y^(j)|X), corresponding to input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) such that Y_i^(j) is the output of S_j in response to input X_i. The problem is to estimate a fusion rule f : ℝ^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitute a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation f to f* is feasible. We estimate the sample size sufficient to ensure that f provides a close approximation to f* with a high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate f can be easily computed by well-known least squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks.
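
    A minimal sketch of the least-squares flavor of this approach: a fusion rule mapping the N sensor outputs to [0, 1] is fitted by ordinary least squares over a fixed basis (here an affine basis). The simulated sensors and their noise levels are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated training sample: N sensors observe X in [0, 1] with different
      # noise levels (illustrative; the true sensor distributions are unknown
      # in the original problem).
      N, n = 4, 500
      sigmas = np.array([0.05, 0.10, 0.20, 0.40])
      X = rng.uniform(0.0, 1.0, n)
      Y = X[:, None] + rng.normal(0.0, 1.0, (n, N)) * sigmas    # sensor outputs

      # Fusion rule restricted to the vector space spanned by {1, y1, ..., yN}:
      # f(y) = w0 + w1*y1 + ... + wN*yN, fitted by least squares.
      A = np.hstack([np.ones((n, 1)), Y])
      w, *_ = np.linalg.lstsq(A, X, rcond=None)

      def fuse(y):
          """Apply the fitted fusion rule and clip to the admissible range [0, 1]."""
          return np.clip(w[0] + y @ w[1:], 0.0, 1.0)

      # Empirical squared error on a fresh sample.
      X_test = rng.uniform(0.0, 1.0, 1000)
      Y_test = X_test[:, None] + rng.normal(0.0, 1.0, (1000, N)) * sigmas
      mse = np.mean((fuse(Y_test) - X_test) ** 2)
      print(f"Fitted weights: {np.round(w, 3)}, test MSE: {mse:.4f}")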

  10. A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2001-01-01

    There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.

  11. Hypersonic Vehicle Propulsion System Simplified Model Development

    Science.gov (United States)

    Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter

    2007-01-01

    This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process and the task plan identified in this document addresses the first steps (short term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short term modeling goal. Next, a general description of the desired simplified model is presented along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MatLab,...) or in paper form in published documents. Finally, roadmaps outlining possible avenues towards realizing the simplified model are presented.

  12. Lithium-Ion Battery Online Rapid State-of-Power Estimation under Multiple Constraints

    Directory of Open Access Journals (Sweden)

    Shun Xiang

    2018-01-01

    Full Text Available The paper aims to realize a rapid online estimation of the state-of-power (SOP) with multiple constraints of a lithium-ion battery. Firstly, based on the improved first-order resistance-capacitance (RC) model with one-state hysteresis, a linear state-space battery model is built; then, using the dual extended Kalman filtering (DEKF) method, the battery parameters and states, including open-circuit voltage (OCV), are estimated. Secondly, by employing the estimated OCV as the observed value to build the second dual Kalman filters, the battery SOC is estimated. Thirdly, a novel rapid-calculating peak power/SOP method with multiple constraints is proposed in which, according to the bisection judgment method, the battery’s peak state is determined; then, one or two instantaneous peak powers are used to determine the peak power during T seconds. In addition, the constraint that the battery actually operates under during operation is analyzed specifically. Finally, three simplified versions of the Federal Urban Driving Schedule (SFUDS) with inserted pulse experiments are conducted to verify the effectiveness and accuracy of the proposed online SOP estimation method.
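
    The record does not give the model equations, so the sketch below illustrates only the general idea of multi-constraint peak power, using a much simpler internal-resistance (Rint) battery model rather than the paper's RC model with hysteresis: the peak discharge current is limited both by a current bound and by a minimum terminal-voltage bound, and the tighter of the two defines the peak power. All parameter values are illustrative assumptions.

      # Simplified peak-power (SOP) illustration with an Rint model:
      # terminal voltage U = OCV - I * R during discharge.
      # Illustrative parameters (not those identified by the DEKF in the paper).
      ocv = 3.7          # open-circuit voltage at the current SOC (V)
      r_int = 0.012      # internal resistance (ohm)
      u_min = 3.0        # minimum allowed terminal voltage (V)
      i_max = 150.0      # maximum allowed discharge current (A)

      # Current allowed by the voltage constraint alone.
      i_voltage_limited = (ocv - u_min) / r_int

      # The binding constraint is whichever allows less current.
      i_peak = min(i_max, i_voltage_limited)
      binding = "current limit" if i_max < i_voltage_limited else "voltage limit"

      u_at_peak = ocv - i_peak * r_int
      p_peak = u_at_peak * i_peak
      print(f"Peak discharge current: {i_peak:.1f} A ({binding} is binding)")
      print(f"Peak discharge power:   {p_peak:.1f} W")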

  13. Thermodynamic properties of organic compounds estimation methods, principles and practice

    CERN Document Server

    Janz, George J

    1967-01-01

    Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the

  14. A Group Contribution Method for Estimating Cetane and Octane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group

    2016-07-28

    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups and the range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
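
    As a toy illustration of the group-contribution idea (a linear model rather than the report's artificial neural network), the sketch below regresses cetane number on counts of a few structural groups; both the group counts and the cetane numbers are made up for illustration and are not values from the report.

      import numpy as np

      # Made-up example data: rows are molecules, columns are counts of structural
      # groups (e.g. CH3, CH2, CH, aromatic C-H); the target is a fictitious
      # cetane number.  Purely illustrative, not real fuel data.
      group_counts = np.array([
          [2, 6, 0, 0],
          [2, 10, 0, 0],
          [3, 4, 1, 0],
          [1, 2, 0, 6],
          [2, 8, 0, 0],
          [3, 6, 1, 0],
      ], dtype=float)
      cetane = np.array([55.0, 72.0, 48.0, 12.0, 64.0, 53.0])

      # Group-contribution model: CN ~ c0 + sum_k n_k * c_k, fitted by least squares.
      A = np.hstack([np.ones((len(cetane), 1)), group_counts])
      coeffs, *_ = np.linalg.lstsq(A, cetane, rcond=None)

      def predict_cetane(counts):
          return coeffs[0] + np.dot(counts, coeffs[1:])

      print("Fitted contributions (intercept, CH3, CH2, CH, aromatic CH):",
            np.round(coeffs, 2))
      print("Prediction for a new molecule [2, 7, 0, 0]:",
            round(predict_cetane(np.array([2.0, 7.0, 0.0, 0.0])), 1))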

  15. 48 CFR 3032.003 - Simplified acquisition procedures financing.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 (2010-10-01): Simplified acquisition procedures financing. Section 3032.003, Federal Acquisition Regulations System, DEPARTMENT OF HOMELAND... FINANCING, Scope of Part 3032.003. Contract financing may be...

  16. Estimating Green Water Footprints in a Temperate Environment

    Directory of Open Access Journals (Sweden)

    Tim Hess

    2010-07-01

    Full Text Available The “green” water footprint (GWF) of a product is often considered less important than the “blue” water footprint (BWF) as “green” water generally has a low, or even negligible, opportunity cost. However, when considering food, fibre and tree products, the GWF is not only a useful indicator of the total appropriation of a natural resource, but from a methodological perspective, blue water footprints are frequently estimated as the residual after green water is subtracted from total crop water use. In most published studies, green water use (ETgreen) has been estimated from the FAO CROPWAT model using the USDA method for effective rainfall. In this study, four methods for the estimation of the ETgreen of pasture were compared. Two were based on effective rainfall estimated from monthly rainfall and potential evapotranspiration, and two were based on a simulated water balance using long-term daily, or average monthly, weather data from 11 stations in England. The results show that the effective rainfall methods significantly underestimate the annual ETgreen in all cases, as they do not adequately account for the depletion of stored soil water during the summer. A simplified model, based on annual rainfall and reference evapotranspiration (ETo), has been tested and used to map the average annual ETgreen of pasture in England.

  17. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly
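
    A compact sketch of the kind of estimator discussed in the report: ordinary kriging of scattered borehole-style observations under an assumed exponential covariance model. The data, sill and range are illustrative assumptions, and no nugget effect is included.

      import numpy as np

      rng = np.random.default_rng(2)

      # Illustrative scattered observations (x, y, value) standing in for
      # borehole or water-level data.
      pts = rng.uniform(0.0, 100.0, (25, 2))
      vals = np.sin(pts[:, 0] / 20.0) + 0.05 * pts[:, 1] + rng.normal(0.0, 0.05, 25)

      sill, corr_range = 1.0, 40.0                       # assumed covariance parameters

      def cov(h):
          """Exponential covariance model C(h) = sill * exp(-h / range)."""
          return sill * np.exp(-h / corr_range)

      def ordinary_kriging(x0):
          """Ordinary-kriging estimate at location x0 (weights constrained to sum to one)."""
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
          n = len(pts)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = cov(d)
          A[n, n] = 0.0                                  # Lagrange-multiplier row/column
          b = np.ones(n + 1)
          b[:n] = cov(np.linalg.norm(pts - x0, axis=1))
          w = np.linalg.solve(A, b)
          estimate = np.dot(w[:n], vals)
          variance = sill - np.dot(w[:n], b[:n]) - w[n]  # kriging variance
          return estimate, variance

      est, var = ordinary_kriging(np.array([50.0, 50.0]))
      print(f"Kriged estimate at (50, 50): {est:.3f}  (kriging variance {var:.3f})")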

  18. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as monte carlo (MC) and latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
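
    To illustrate the LHS idea referenced in the record (not the NESSUS implementation itself), the sketch below draws a Latin hypercube sample of standard-normal design variables, pushes it through a toy response function and reports a few density parameters of the response; the response function is an illustrative assumption.

      import numpy as np
      from statistics import NormalDist

      rng = np.random.default_rng(3)

      def latin_hypercube_normal(n_samples, n_vars):
          """Latin hypercube sample of independent standard-normal variables:
          one stratified uniform draw per equal-probability bin, mapped through
          the normal quantile function, with each column shuffled independently."""
          u = (np.arange(n_samples)[:, None]
               + rng.uniform(size=(n_samples, n_vars))) / n_samples
          for j in range(n_vars):
              u[:, j] = u[rng.permutation(n_samples), j]
          return np.vectorize(NormalDist().inv_cdf)(u)

      # Toy structural response (illustrative only): a nonlinear function of
      # three random design variables.
      def response(x):
          return 10.0 + 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 - 1.5 * x[:, 0] * x[:, 2]

      x = latin_hypercube_normal(200, 3)
      r = response(x)
      print(f"mean = {r.mean():.3f}, std = {r.std(ddof=1):.3f}, "
            f"95th percentile = {np.percentile(r, 95):.3f}")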

  19. Knee Kinematic Improvement After Total Knee Replacement Using a Simplified Quantitative Gait Analysis Method

    Directory of Open Access Journals (Sweden)

    Hassan Sarailoo

    2013-10-01

    Full Text Available Objectives: The aim of this study was to extract suitable spatiotemporal and kinematic parameters to determine how Total Knee Replacement (TKR) alters patients’ knee kinematics during gait, using a rapid and simplified quantitative two-dimensional gait analysis procedure. Methods: Two-dimensional kinematic gait patterns of 10 participants were collected before and after the TKR surgery, using a 60 Hz camcorder in the sagittal plane. Then, the kinematic parameters were extracted using the gait data. A student t-test was used to compare the group-average of spatiotemporal and peak kinematic characteristics in the sagittal plane. The knee condition was also evaluated using the Oxford Knee Score (OKS) Questionnaire to ensure that each subject was placed in the right group. Results: The results showed a significant improvement in knee flexion during stance and swing phases after TKR surgery. The walking speed was increased as a result of stride length and cadence improvement, but this increment was not statistically significant. Both post-TKR and control groups showed an increment in spatiotemporal and peak kinematic characteristics between comfortable and fast walking speeds. Discussion: The objective kinematic parameters extracted from 2D gait data were able to show significant improvements of the knee joint after TKR surgery. The patients with TKR surgery were also able to improve their knee kinematics during fast walking speed equal to the control group. These results provide a good insight into the capabilities of the presented method to evaluate knee functionality before and after TKR surgery and to define a more effective rehabilitation program.

  20. An energy estimation framework for event-based methods in Non-Intrusive Load Monitoring

    International Nuclear Information System (INIS)

    Giri, Suman; Bergés, Mario

    2015-01-01

    Highlights: • Energy estimation in NILM has not yet accounted for the complexity of appliance models. • We present a data-driven framework for appliance modeling in supervised NILM. • We test the framework on 3 houses and report average accuracies of 5.9–22.4%. • Appliance models facilitate the estimation of energy consumed by the appliance. - Abstract: Non-Intrusive Load Monitoring (NILM) is a set of techniques used to estimate the electricity consumed by individual appliances in a building from measurements of the total electrical consumption. Most commonly, NILM works by first attributing any significant change in the total power consumption (also known as an event) to a specific load and subsequently using these attributions (i.e. the labels for the events) to estimate energy for each load. For this last step, most published work in the field makes simplifying assumptions to make the problem more tractable. In this paper, we present a framework for creating appliance models based on classification labels and aggregate power measurements that can help to relax many of these assumptions. Our framework automatically builds models for appliances to perform energy estimation. The model relies on feature extraction, clustering via affinity propagation, perturbation of extracted states to ensure that they mimic appliance behavior, creation of finite state models, correction of any errors in classification that might violate the model, and estimation of energy based on corrected labels. We evaluate our framework on 3 houses from standard datasets in the field and show that the framework can learn data-driven models based on event labels and use that to estimate energy with lower error margins (e.g., 1.1–42.3%) than when using the heuristic models used by others.

  1. Digital arteriography of kidney arteries by intraveinous route. Simplified technique

    International Nuclear Information System (INIS)

    Guisgand, M.; Dardenne, A.N.

    1989-01-01

    Of the 1,000 patients referred to us for intravenous digital angiography (IVDA) of the renal arteries for arterial hypertension, for control of the artery of a transplanted kidney or for preoperative check-up prior to transplantation of a kidney, 738 were examined by a simplified technique. Compared to the standard practice, this method simply consists of a manual injection of a standard ionic contrast medium via an antecubital vein punctured with a large catheter needle (caliber 14 G), without preparatory injection of an intestinal antispasmodic. This method has produced satisfactory arterial opacification in 96% of the cases. The advantages and disadvantages of the technique are discussed. Of the 262 remaining patients, 250 were also examined by the peripheral venous mode, but the technique had to be modified in at least one of its aspects for one reason or another. Only 12 patients were not examined by the peripheral venous mode (7 puncture failures, 4 permanent venous accesses already installed). The simplified IVDA technique appears to be reliable for detecting renovascular arterial hypertension and, with certain limitations, for the control of kidney grafts. With regard to the preoperative check-up before kidney transplantation, IVDA still does not seem a suitable replacement for the traditional method of angiography [fr]

  2. Motion estimation using point cluster method and Kalman filter.

    Science.gov (United States)

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuation, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
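
    As an illustration of how a Kalman filter can smooth a marker-based angle estimate (this is not the authors' full PCT pipeline), the sketch below applies a constant-velocity Kalman filter to a simulated noisy pendulum angle; the signal, artifact and noise levels are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)

      # Simulated pendulum angle plus a higher-frequency "soft tissue" artifact and
      # measurement noise (illustrative signal, not the experimental data of the study).
      dt = 0.01
      t = np.arange(0.0, 10.0, dt)
      true_angle = 0.4 * np.cos(2.0 * np.pi * 0.5 * t)
      measured = (true_angle + 0.05 * np.sin(2.0 * np.pi * 7.0 * t)
                  + rng.normal(0.0, 0.02, t.size))

      # Constant-velocity Kalman filter on the scalar angle.
      F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition (angle, rate)
      H = np.array([[1.0, 0.0]])                       # only the angle is observed
      q = 0.5                                          # assumed process-noise spectral density
      Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
      R = np.array([[0.02**2]])                        # assumed measurement-noise variance

      x = np.array([measured[0], 0.0])
      P = np.eye(2)
      filtered = np.empty_like(measured)
      for k, z in enumerate(measured):
          # Predict.
          x = F @ x
          P = F @ P @ F.T + Q
          # Update with the new measurement.
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K[:, 0] * (z - (H @ x)[0])
          P = (np.eye(2) - K @ H) @ P
          filtered[k] = x[0]

      print(f"RMS error, raw measurement: {np.sqrt(np.mean((measured - true_angle) ** 2)):.4f}")
      print(f"RMS error, Kalman filtered: {np.sqrt(np.mean((filtered - true_angle) ** 2)):.4f}")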

  3. Automatic estimation of elasticity parameters in breast tissue

    Science.gov (United States)

    Skerl, Katrin; Cochran, Sandy; Evans, Andrew

    2014-03-01

    Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWI of a breast elastography phantom and breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet which also contains the patient's study ID. This spreadsheet is readily available to physicians and clinical staff for further evaluation, which increases efficiency. This algorithm simplifies handling, especially for the performance and evaluation of clinical trials. The SWE processing method allows physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort and simplifies evaluation of data in clinical trials. Furthermore, reproducibility will be improved.
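
    A small sketch of the ROI-placement idea (not the authors' MATLAB implementation): slide a fixed-size ROI over an elasticity map, keep the position with the highest mean stiffness, and report the mean, maximum and standard deviation inside it. The synthetic map and ROI size are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic elasticity map in kPa (illustrative): soft background with a
      # stiff circular inclusion.
      h, w = 120, 160
      y, x = np.mgrid[0:h, 0:w]
      emap = 25.0 + 5.0 * rng.standard_normal((h, w))
      emap += 90.0 * np.exp(-((x - 100) ** 2 + (y - 45) ** 2) / (2 * 12.0 ** 2))

      def stiffest_roi(emap, roi=(20, 20)):
          """Return (row, col) of the ROI whose mean elasticity is highest,
          together with the mean, max and std inside that ROI."""
          rh, rw = roi
          best, best_pos = -np.inf, (0, 0)
          for r in range(emap.shape[0] - rh + 1):
              for c in range(emap.shape[1] - rw + 1):
                  m = emap[r:r + rh, c:c + rw].mean()
                  if m > best:
                      best, best_pos = m, (r, c)
          r, c = best_pos
          patch = emap[r:r + rh, c:c + rw]
          return best_pos, patch.mean(), patch.max(), patch.std()

      pos, e_mean, e_max, e_std = stiffest_roi(emap)
      print(f"ROI at {pos}: mean={e_mean:.1f} kPa, max={e_max:.1f} kPa, std={e_std:.1f} kPa")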

  4. Microtensile bond strength of three simplified adhesive systems to caries-affected dentin

    NARCIS (Netherlands)

    Scholtanus, J.D.; Purwanta, K.; Dogan, N.; Kleverlaan, C.J.; Feilzer, A.J.

    2010-01-01

    Purpose: The purpose of the study was to determine the microtensile bond strength of three different simplified adhesive systems to caries-affected dentin. Materials and Methods: Fifteen extracted human molars with primary carious lesions were ground flat until dentin was exposed. Soft

  5. Microtensile Bond Strength of Three Simplified Adhesive Systems to Caries-affected Dentin

    NARCIS (Netherlands)

    Scholtanus, Johannes; Purwanta, Kenny; Dogan, Nilgun; Kleverlaan, Cees J.; Feilzer, Albert J.

    Purpose: The purpose of the study was to determine the microtensile bond strength of three different simplified adhesive systems to caries-affected dentin. Materials and Methods: Fifteen extracted human molars with primary carious lesions were ground flat until dentin was exposed. Soft

  6. A simplified model for tritium permeation transient predictions when trapping is active

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1994-01-01

    This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement. ((orig.))

  7. A simplified model for tritium permeation transient predictions when trapping is active

    Energy Technology Data Exchange (ETDEWEB)

    Longhurst, G.R. (Fusion Safety Program, Idaho National Engineering Laboratory, P.O. Box 1625, Idaho Falls, ID 83415 (United States))

    1994-09-01

    This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement. ((orig.))

  8. An Estimation Method for number of carrier frequency

    Directory of Open Access Journals (Sweden)

    Xiong Peng

    2015-01-01

    Full Text Available This paper proposes a method that utilizes AR model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, the pulse signal forms of radar are complex and changeable, among which the single pulse with multiple carrier frequencies is the most typical one, such as the frequency shift keying (FSK) signal, the frequency shift keying with linear frequency modulation (FSK-LFM) hybrid modulation signal and the frequency shift keying with bi-phase shift keying (FSK-BPSK) hybrid modulation signal. For this kind of single pulse with multiple carrier frequencies, this paper adopts a method which transforms the complex signal into an AR model and then takes the power spectrum based on the Burg algorithm to show the effect. Experimental results show that the estimation method can still determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
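
    Since SciPy does not ship a Burg estimator, the sketch below implements the standard Burg recursion directly and counts spectral peaks of a synthetic two-tone pulse; the tone frequencies, model order and peak-prominence threshold are illustrative assumptions, not the paper's settings.

      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(6)

      def burg_ar(x, order):
          """Burg's method: returns AR coefficients a (model x[t] + sum_k a[k] x[t-k]
          = noise) and the driving-noise variance."""
          x = np.asarray(x, dtype=complex)
          f, b = x[1:].copy(), x[:-1].copy()            # forward / backward errors
          a = np.zeros(order, dtype=complex)
          e = np.vdot(x, x).real / len(x)
          for m in range(order):
              k = -2.0 * np.vdot(b, f) / (np.vdot(f, f).real + np.vdot(b, b).real)
              a_prev = a[:m].copy()
              a[:m] = a_prev + k * np.conj(a_prev[::-1])
              a[m] = k
              e *= 1.0 - abs(k) ** 2
              f, b = f[1:] + k * b[1:], b[:-1] + np.conj(k) * f[:-1]
          return a, e

      # Synthetic single pulse containing two carrier tones plus noise (illustrative).
      fs, n = 1000.0, 512
      t = np.arange(n) / fs
      signal = (np.exp(2j * np.pi * 120.0 * t) + 0.8 * np.exp(2j * np.pi * 260.0 * t)
                + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

      order = 12
      a, e = burg_ar(signal, order)
      freqs = np.linspace(0.0, fs, 2048, endpoint=False)
      A_f = 1.0 + np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(1, order + 1))) @ a
      psd = e / np.abs(A_f) ** 2

      peaks, _ = find_peaks(10.0 * np.log10(psd), prominence=10.0)
      print(f"Estimated number of carrier frequencies: {len(peaks)}")
      print("Estimated carrier frequencies (Hz):", np.round(freqs[peaks], 1))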

  9. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  10. Simplified design of switching power supplies

    CERN Document Server

    Lenk, John

    1995-01-01

    * Describes the operation of each circuit in detail * Examines a wide selection of external components that modify the IC package characteristics * Provides hands-on, essential information for designing a switching power supply Simplified Design of Switching Power Supplies is an all-inclusive, one-stop guide to switching power-supply design. Step-by-step instructions and diagrams render this book essential for the student and the experimenter, as well as the design professional. Simplified Design of Switching Power Supplies concentrates on the use of IC regulators. All popular forms of swit

  11. Consumptive use of upland rice as estimated by different methods

    International Nuclear Information System (INIS)

    Chhabda, P.R.; Varade, S.B.

    1985-01-01

    The consumptive use of upland rice (Oryza sativa Linn.) grown during the wet season (kharif) as estimated by the modified Penman, radiation, pan-evaporation and Hargreaves methods showed variation from the consumptive use computed by the gravimetric method. The variability increased with an increase in the irrigation interval, and decreased with an increase in the level of N applied. The average variability was smaller for the pan-evaporation method, which could reliably be used for estimating the water requirement of upland rice if percolation losses are considered.

  12. Study of a simplified method of evaluating the economic maintenance importance of components in nuclear power plant system

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Takagi, Toshiyuki; Kodama, Noriko

    2014-01-01

    The safety risk importance of components in nuclear power plants has been evaluated based on probabilistic risk assessment and used for decisions in various aspects of plant management. However, the economic risk importance of the components has not been discussed very much. Therefore, this paper discusses the risk importance of components from the viewpoint of plant economic efficiency and proposes a simplified evaluation method for the economic risk importance (or economic maintenance importance). As a result of this consideration, the following were obtained. (1) The unit cost of power generation is selected as a performance indicator and can be related to the failure rate of components in a nuclear power plant, which is a result of maintenance. (2) The economic maintenance importance has two major factors, i.e. the repair cost at component failure and the production loss associated with plant outage due to component failure. (3) The developed method enables easy understanding of the economic impacts of plant shutdown or power reduction due to component failures on a plane with the repair cost on the vertical axis and the production loss on the horizontal axis. (author)

  13. A Simplified Model to Estimate the Concentration of Inorganic Ions and Heavy Metals in Rivers

    Directory of Open Access Journals (Sweden)

    Clemêncio Nhantumbo

    2016-10-01

    Full Text Available This paper presents a model that uses only pH, alkalinity, and temperature to estimate the concentrations of major ions in rivers (Na+, K+, Mg2+, Ca2+, HCO3−, SO42−, Cl−, and NO3−) together with the equilibrium concentrations of minor ions and heavy metals (Fe3+, Mn2+, Cd2+, Cu2+, Al3+, Pb2+, and Zn2+). Mining operations have been increasing, which has led to changes in the pollution loads to receiving water systems, while most developing countries cannot afford water quality monitoring. A possible solution is to implement less resource-demanding monitoring programs, supported by mathematical models that minimize the required sampling and analysis, while still being able to detect water quality changes, thereby allowing implementation of measures to protect the water resources. The present model was developed using existing theories for: (i) carbonate equilibrium; (ii) total alkalinity; (iii) statistics of major ions; (iv) solubility of minerals; and (v) conductivity of salts in water. The model includes two options to estimate the concentrations of major ions: (1) a generalized method, which employs standard values from a world-wide data base; and (2) a customized method, which requires specific baseline data for the river of interest. The model was tested using data from four monitoring stations in Swedish rivers with satisfactory results.

  14. Studies and research concerning BNFP. Identification and simplified modeling of economically important radwaste variables

    International Nuclear Information System (INIS)

    Ebel, P.E.; Godfrey, W.L.; Henry, J.L.; Postles, R.L.

    1983-09-01

    An extensive computer model describing the mass balance and economic characteristics of radioactive waste disposal systems was exercised in a series of runs designed using linear statistical methods. The most economically important variables were identified, their behavior characterized, and a simplified computer model prepared which runs on desk-top minicomputers. This simplified model allows the investigation of the effects of the seven most significant variables in each of four waste areas: Liquid Waste Storage, Liquid Waste Solidification, General Process Trash Handling, and Hulls Handling. 8 references, 1 figure, 12 tables

  15. Methods for estimating low-flow statistics for Massachusetts streams

    Science.gov (United States)

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The
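
    To make the two transfer techniques concrete, the sketch below shows a drainage-area ratio estimate and a MOVE.1-style (maintenance-of-variance) transfer of a low-flow statistic from an index streamgage; all drainage areas and flow values are illustrative, not Massachusetts data.

      import numpy as np

      # --- Drainage-area ratio method (illustrative numbers) ------------------
      area_ungaged = 38.0      # mi^2, ungaged site
      area_index = 52.0        # mi^2, index streamgage
      q7_10_index = 4.6        # 7-day, 10-year low flow at the index gage (ft^3/s)
      ratio = area_ungaged / area_index
      if 0.3 <= ratio <= 1.5:  # range recommended in the record
          print(f"Drainage-area ratio estimate of 7Q10: {ratio * q7_10_index:.2f} ft^3/s")

      # --- MOVE.1 transfer for a low-flow partial-record station --------------
      # Concurrent log-transformed flows: x at the index gage, y at the
      # partial-record station (illustrative measurements).
      x = np.log([3.1, 5.4, 9.8, 2.2, 7.5, 4.0, 12.0, 6.1])
      y = np.log([1.9, 3.6, 6.9, 1.3, 5.2, 2.6, 8.8, 4.1])

      def move1(x, y, x_new):
          """MOVE.1: preserve mean and variance instead of regressing,
          y_hat = mean(y) + (std(y) / std(x)) * (x_new - mean(x))."""
          return y.mean() + y.std(ddof=1) / x.std(ddof=1) * (x_new - x.mean())

      # Transfer the index gage's 7Q10 (in log space) to the partial-record site.
      q7_10_partial = np.exp(move1(x, y, np.log(q7_10_index)))
      print(f"MOVE.1 estimate of 7Q10 at the partial-record station: {q7_10_partial:.2f} ft^3/s")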

  16. Comparing Methods for Estimating Direct Costs of Adverse Drug Events.

    Science.gov (United States)

    Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas

    2017-12-01

    To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. Phase difference estimation method based on data extension and Hilbert transform

    International Nuclear Information System (INIS)

    Shen, Yan-lin; Tu, Ya-qing; Chen, Lin-jun; Shen, Ting-ao

    2015-01-01

    To improve the precision and anti-interference performance of phase difference estimation for non-integer periods of sampling signals, a phase difference estimation method based on data extension and Hilbert transform is proposed. Estimated phase difference is obtained by means of data extension, Hilbert transform, cross-correlation, auto-correlation, and weighted phase average. Theoretical analysis shows that the proposed method suppresses the end effects of Hilbert transform effectively. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of phase difference estimation and has better performance of phase difference estimation than the correlation, Hilbert transform, and data extension-based correlation methods, which contribute to improving the measurement precision of the Coriolis mass flowmeter. (paper)
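
    A minimal sketch of the Hilbert-transform part of such an approach (without the data-extension and weighted-average refinements described in the record): build analytic signals for two noisy sinusoids of the same frequency and estimate their phase difference from the averaged cross product; the frequency, noise level and true phase shift are illustrative assumptions.

      import numpy as np
      from scipy.signal import hilbert

      rng = np.random.default_rng(7)

      fs, f0, true_shift = 2000.0, 87.3, 0.6   # Hz, Hz, rad (non-integer periods)
      t = np.arange(0, 0.25, 1 / fs)
      s1 = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)
      s2 = np.sin(2 * np.pi * f0 * t - true_shift) + 0.05 * rng.standard_normal(t.size)

      # Analytic signals via the Hilbert transform; the phase difference is the
      # angle of the averaged cross product, which weights samples by amplitude.
      a1, a2 = hilbert(s1), hilbert(s2)
      phase_diff = np.angle(np.mean(a1 * np.conj(a2)))
      print(f"True phase difference:      {true_shift:.4f} rad")
      print(f"Estimated phase difference: {phase_diff:.4f} rad")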

  18. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating the single flight testing of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The result shows that GBM has superiority for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.

  19. Simple method for the estimation of glomerular filtration rate

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden); Tengstroem, B [District General Hospital, Skoevde (Sweden)

    1977-02-01

    A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (¹²⁵I)sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of the glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve fitting techniques, with regard to predictive force and clinical applicability.

  20. Comparison of methods for estimating premorbid intelligence

    OpenAIRE

    Bright, Peter; van der Linde, Ian

    2018-01-01

    To evaluate impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or ‘premorbid’) estimate of a patient’s general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded ‘hold/no hold’ tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...

  1. ABOUT THE DEVELOPMENT OF SPATIAL METHODS OF CALCULATING SPANS OF HIGHWAY BRIDGES

    Directory of Open Access Journals (Sweden)

    V. P. Kozhushko

    2007-10-01

    Full Text Available The problem of methods for the spatial computation of spans of different structures is considered. The possibility of applying simplified computation methods that take into account both elastic and non-linear deformations, as well as estimate other factors affecting the stress-strain state of a system, without loss of accuracy in the results obtained, is presented.

  2. International piping benchmarks: Use of simplified code PACE 2

    Energy Technology Data Exchange (ETDEWEB)

    Boyle, J; Spence, J [University of Strathclyde (United Kingdom); Blundell, C [Risley Nuclear Power Development Establishment, Central Technical Services, Risley, Warrington (United Kingdom)

    1979-06-01

    This report compares the results obtained using the code PACE 2 with the International Working Group on Fast Reactors (IWGFR) International Piping Benchmark solutions. PACE 2 is designed to analyse systems of pipework using a simplified method which is economical of computer time and hence inexpensive. This low cost is not achieved without some loss of accuracy in the solution, but for most parts of a system this inaccuracy is acceptable and those sections of particular importance may be reanalysed using more precise methods in order to produce a satisfactory analysis of the complete system at reasonable cost. (author)

  3. International piping benchmarks: Use of simplified code PACE 2

    International Nuclear Information System (INIS)

    Boyle, J.; Spence, J.; Blundell, C.

    1979-01-01

    This report compares the results obtained using the code PACE 2 with the International Working Group on Fast Reactors (IWGFR) International Piping Benchmark solutions. PACE 2 is designed to analyse systems of pipework using a simplified method which is economical of computer time and hence inexpensive. This low cost is not achieved without some loss of accuracy in the solution, but for most parts of a system this inaccuracy is acceptable and those sections of particular importance may be reanalysed using more precise methods in order to produce a satisfactory analysis of the complete system at reasonable cost. (author)

  4. A simplified suite of methods to evaluate chelator conjugation of antibodies: effects on hydrodynamic radius and biodistribution

    International Nuclear Information System (INIS)

    Al-Ejeh, Fares; Darby, Jocelyn M.; Thierry, Benjamin; Brown, Michael P.

    2009-01-01

    Introduction: Antibodies covalently conjugated with chelators such as 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) are required for radioimmunoscintigraphy and radioimmunotherapy, which are of growing importance in cancer medicine. Method: Here, we report a suite of simple methods that provide a preclinical assessment package for evaluating the effects of DOTA conjugation on the in vitro and in vivo performance of monoclonal antibodies. We exemplify the use of these methods by investigating the effects of DOTA conjugation on the biochemical properties of the DAB4 clone of the La/SSB-specific murine monoclonal autoantibody, APOMAB®, which is a novel malignant cell death ligand. Results: We have developed a 96-well microtiter-plate assay to measure directly the concentration of DOTA and other chelators in antibody-chelator conjugate solutions. Coupled with a commercial assay for measuring protein concentration, the dual microtiter-plate method can rapidly determine chelator/antibody ratios in the same plate. The biochemical properties of DAB4 immunoconjugates were altered as the DOTA/Ab ratio increased so that: (i) mass/charge ratio decreased; (ii) hydrodynamic radius increased; (iii) antibody immunoactivity decreased; (iv) rate of chelation of metal ions and specific radioactivity both increased; and, in vivo, (v) tumor uptake decreased as nonspecific uptake by liver and spleen increased. Conclusion: This simplified suite of methods readily identifies biochemical characteristics of the DOTA-immunoconjugates such as hydrodynamic diameter and decreased mass/charge ratio associated with compromised immunotargeting efficiency and, thus, may prove useful for optimizing conjugation procedures in order to maximize immunoconjugate-mediated radioimmunoscintigraphy and radioimmunotherapy.

  5. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  6. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi' an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  7. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
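
    The record does not specify the exact nonlinear operator, so the Python sketch below shows only one plausible reading of a proximity-based sliding-window correction: each sample is relabeled with the class whose summed proximity distance to the labels in the window is smallest. The proximity matrix and label sequence are hypothetical toy values.

        import numpy as np

        def proximity_filter(labels, proximity, half_window=2):
            """Sliding-window relabeling: pick the class whose summed proximity-distance
            to the labels in the window is smallest (a proximity-weighted 'median')."""
            labels = np.asarray(labels)
            out = labels.copy()
            n = len(labels)
            for i in range(n):
                window = labels[max(0, i - half_window): i + half_window + 1]
                # cost of declaring class c = sum of distances from c to every label in the window
                costs = proximity[:, window].sum(axis=1)
                out[i] = np.argmin(costs)
            return out

        # Toy 3-class example with a hypothetical symmetric proximity (distance) matrix.
        D = np.array([[0.0, 1.0, 2.0],
                      [1.0, 0.0, 1.0],
                      [2.0, 1.0, 0.0]])
        noisy = [0, 0, 2, 0, 0, 1, 1, 1, 0, 1, 1]
        print(proximity_filter(noisy, D, half_window=1))   # the isolated '2' is corrected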

  8. Stock price estimation using ensemble Kalman Filter square root method

    Science.gov (United States)

    Karya, D. F.; Katias, P.; Herlambang, T.

    2018-04-01

    Shares are securities that serve as evidence of the ownership or equity of an individual or corporation in an enterprise, especially in public companies whose shares are traded. Investment in stock trading is an attractive option for investors because it offers attractive profits. In choosing a safe investment in stocks, investors require a way of assessing the prices of the stocks they intend to buy so as to optimize their profits. An effective method of analysis that reduces the risk borne by investors is to predict or estimate the stock price. Estimation is used because such problems can often be solved with previous information or data relevant to the problem. The contribution of this paper is that the estimates of stock prices in the high, low, and close categories can be utilized in investors' decision making. In this paper, stock price estimation was made by using the Ensemble Kalman Filter Square Root method (EnKF-SR) and the Ensemble Kalman Filter method (EnKF). The simulation results showed that the estimates obtained by the EnKF method were more accurate than those of the EnKF-SR, with an estimation error of about 0.2 % for EnKF and 2.6 % for EnKF-SR.
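
    The record contains no equations, so the Python sketch below shows only a generic stochastic ensemble Kalman filter analysis step (perturbed observations) for a directly observed scalar state such as a closing price. It is not the authors' EnKF-SR variant, and the prior ensemble and observation values are made up.

        import numpy as np

        def enkf_update(ensemble, obs, obs_var, rng):
            """One stochastic EnKF analysis step for a scalar state observed directly (H = 1)."""
            x = np.asarray(ensemble, dtype=float)
            P = np.var(x, ddof=1)                      # ensemble (forecast) variance
            K = P / (P + obs_var)                      # Kalman gain
            y_pert = obs + rng.normal(0.0, np.sqrt(obs_var), size=x.shape)  # perturbed observations
            return x + K * (y_pert - x)

        rng = np.random.default_rng(42)
        ensemble = rng.normal(100.0, 2.0, size=50)     # prior ensemble of a "close" price (illustrative)
        analysis = enkf_update(ensemble, obs=103.0, obs_var=1.0, rng=rng)
        print(f"prior mean {ensemble.mean():.2f} -> analysis mean {analysis.mean():.2f}")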

  9. An Iterative Adaptive Approach for Blood Velocity Estimation Using Ultrasound

    DEFF Research Database (Denmark)

    Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2010-01-01

    This paper proposes a novel iterative data-adaptive spectral estimation technique for blood velocity estimation using medical ultrasound scanners. The technique makes no assumption on the sampling pattern of the slow-time or the fast-time samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the technique is shown, using both simplified and more realistic Field II simulations, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30% of the transmissions, thereby allowing for the examination of two separate vessel regions while retaining an adequate updating rate of the B-mode images. In addition, the proposed method also allows for more flexible transmission patterns, as well as exhibits fewer spectral artifacts as compared to earlier techniques.

  10. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    Science.gov (United States)

    Itai, Akitoshi; Yasukawa, Hiroshi

    This paper proposes a background noise estimation method based on tensor product expansion with a median and a Monte Carlo simulation. We have previously shown that tensor product expansion with the absolute error method is effective for estimating background noise; however, the conventional method does not always estimate the background noise properly. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.

  11. The cost of simplifying air travel when modeling disease spread.

    Directory of Open Access Journals (Sweden)

    Justin Lessler

    Full Text Available BACKGROUND: Air travel plays a key role in the spread of many pathogens. Modeling the long distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. METHODOLOGY/PRINCIPAL FINDINGS: Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day) but for a few routes this rate is greatly underestimated by the pipe model. CONCLUSIONS/SIGNIFICANCE: If our interest is in large scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
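
    The route-share logic of the two simplified models can be sketched in a few lines of Python. The example below is illustrative only: the airport traffic volumes, pairwise distances and the distance exponent are made up and are not taken from the 2007 ticket data used in the study.

        import numpy as np

        # Hypothetical airports: departures/arrivals per day and pairwise distances (km).
        departures = np.array([900.0, 450.0, 60.0])
        arrivals   = np.array([880.0, 470.0, 60.0])
        dist_km    = np.array([[0.0, 3900.0, 1200.0],
                               [3900.0, 0.0, 2800.0],
                               [1200.0, 2800.0, 0.0]])

        # "Pipe" model: trips i->j proportional to departures(i) * arrivals(j), distance ignored.
        pipe = np.outer(departures, arrivals)
        np.fill_diagonal(pipe, 0.0)
        pipe /= pipe.sum()

        # "Gravity" model: same product, scaled down by a power of physical distance.
        gravity = np.outer(departures, arrivals) / np.where(dist_km > 0, dist_km, np.inf) ** 1.5
        gravity /= gravity.sum()

        print("pipe route shares:\n", pipe.round(3))
        print("gravity route shares:\n", gravity.round(3))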

  12. Simplified sample preparation method for protein identification by matrix-assisted laser desorption/ionization mass spectrometry: in-gel digestion on the probe surface

    DEFF Research Database (Denmark)

    Stensballe, A; Jensen, Ole Nørregaard

    2001-01-01

    Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF-MS) is used as the first protein screening method in many laboratories because of its inherent simplicity, mass accuracy, sensitivity and relatively high sample throughput. We present a simplified sample preparation method for MALDI-MS that enables in-gel digestion on the probe surface, with a rate of protein identification similar to that obtained by the traditional protocols for in-gel digestion and MALDI peptide mass mapping of human proteins, i.e. approximately 60%. The overall performance of the novel on-probe digestion method is comparable with that of the standard in-gel sample preparation protocol while being less labour intensive and more cost-effective due to minimal consumption of reagents, enzymes and consumables. Preliminary data obtained on a MALDI quadrupole-TOF tandem mass spectrometer demonstrated the utility of the on-probe digestion protocol for peptide mass mapping and peptide…

  13. Methods for risk estimation in nuclear energy

    Energy Technology Data Exchange (ETDEWEB)

    Gauvenet, A [CEA, 75 - Paris (France)

    1979-01-01

    The author presents methods for estimating the different risks related to nuclear energy: immediate or delayed risks, individual or collective risks, risks of accidents and long-term risks. These methods have reached a high level of elaboration and validity, and their application to other industrial or human problems is currently under way, especially in English-speaking countries.

  14. Simplified Model and Response Analysis for Crankshaft of Air Compressor

    Science.gov (United States)

    Chao-bo, Li; Jing-jun, Lou; Zhen-hai, Zhang

    2017-11-01

    The original crankshaft model is simplified to an appropriate level to balance calculation precision against calculation speed, and the finite element method is then used to analyse the vibration response of the structure. In order to study the simplification and stress concentration for the crankshaft of an air compressor, this paper compares the calculated and experimental mode frequencies of the air compressor crankshaft before and after the simplification, calculates the vibration response at a reference point under the constraint conditions using the simplified model, and calculates the stress distribution of the original model. The results show that the error between the calculated and experimental mode frequencies is kept below 7%, that the constraints change the modal density of the system, and that stress concentration appears at the junction between the crank arm and the shaft, so this part of the crankshaft should be treated carefully in the manufacturing process.

  15. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    Science.gov (United States)

    Morimoto, Emi; Namerikawa, Susumu

    The most notable trend in bidding and pricing behavior in recent years is the increasing number of bids placed just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bids it therefore effectively becomes the difference between the price set by the criteria for low-price bidding investigations and the execution price. In practice, bidders' strategies and behavior have been controlled by public engineers' budgets. Estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains one of the general methods for public works, so two types of standard estimation methods coexist in Japan. In this study, we performed a statistical analysis of the bid information on civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. It identifies several issues showing that bidding and pricing behavior is related to the estimation method used for public works bids in Japan. The two types of standard estimation methods produce different results for the number of bidders (the bid/no-bid decision) and the distribution of bid prices (the markup decision). A comparison of the bid price distributions showed that the percentage of bids concentrated at the criteria for low-price bidding investigations tends to be higher for large-sized public works estimated by the unit-price-type method than for those estimated by the accumulated method. Moreover, the number of bidders for public works estimated by the unit-price method tends to increase significantly; the unit-price-type estimation appears to be one of the factors construction companies consider when deciding whether to participate in the bidding.

  16. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper is concerned with computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient-algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
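
    As a point of reference for the Gauss-Newton iteration discussed in the record, the Python sketch below fits a generic exponential-decay model to synthetic data using the textbook update beta <- beta - (J'J)^{-1} J'r. It is not the specific variant proposed in the paper; the model, starting values and noise level are illustrative choices.

        import numpy as np

        def gauss_newton(residual, jacobian, beta0, n_iter=20):
            """Generic Gauss-Newton for nonlinear least squares."""
            beta = np.asarray(beta0, dtype=float)
            for _ in range(n_iter):
                r = residual(beta)
                J = jacobian(beta)
                step = np.linalg.solve(J.T @ J, J.T @ r)
                beta = beta - step
            return beta

        # Example model: y = b0 * exp(-b1 * x) + noise.
        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 4.0, 40)
        y = 2.5 * np.exp(-1.3 * x) + rng.normal(0.0, 0.02, size=x.size)

        residual = lambda b: b[0] * np.exp(-b[1] * x) - y
        jacobian = lambda b: np.column_stack([np.exp(-b[1] * x),
                                              -b[0] * x * np.exp(-b[1] * x)])

        print(gauss_newton(residual, jacobian, beta0=[1.0, 1.0]))   # converges to ~[2.5, 1.3]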

  17. A comparison of the test tube and the dialysis tubing in vitro methods for estimating the bioavailability of phosphorus in feed ingredients for swine.

    Science.gov (United States)

    Bollinger, David W; Tsunoda, Atsushi; Ledoux, David R; Ellersieck, Mark R; Veum, Trygve L

    2005-05-04

    The validity of a simplified in vitro test tube (TT) method was compared with a more complicated dialysis tubing (DT) method to estimate the percentage of available phosphorus (P) in 41 plant origin and five animal origin feed ingredients for swine. The TT method using 1.0 or 0.25 g samples was compared with the DT method using 1.0 g samples at two pancreatic incubation times (2 vs 4 h) in a 3 x 2 factorial arrangement of treatments. Each DT and TT method treatment was replicated three and six times, respectively. Both methods utilize three enzymatic digestions: (i) predigestion with endoxylanase and beta-glucanase for 1 h, (ii) pepsin digestion for 2 h, and (iii) pancreatin digestion for 2 or 4 h. For the TT method, the entire procedure was conducted in a 50 mL conical centrifuge tube and replicated six times. For the DT method, the first two digestions were conducted in a 10 mL plastic syringe before the contents were quantitatively transferred into a segment of DT for the pancreatic digestion. The percentages of hydrolyzed P for plant origin ingredients measured by the DT method using 1.0 g samples and the TT method using 0.25 g samples were highly correlated with each other (r = 0.94-0.97) and were correlated (r ≥ 0.4) with published in vivo available P values. In conclusion, the accuracy and validity of the TT method using 0.25 g samples with a 2 h pancreatic digestion time was equal to or superior to the DT method using 1.0 g samples with a 4 h pancreatic digestion time for estimating P availability in plant origin feed ingredients.

  18. Ore reserve estimation: a summary of principles and methods

    International Nuclear Information System (INIS)

    Marques, J.P.M.

    1985-01-01

    The mining industry has experienced substantial improvements with the increasing utilization of computerized and electronic devices throughout the last few years. In the ore reserve estimation field, the main methods have undergone recent advances in order to improve their overall efficiency. This paper presents the three main groups of ore reserve estimation methods presently used worldwide, Conventional, Statistical and Geostatistical, and provides a detailed description and comparative analysis of each. The Conventional Methods are the oldest, least complex and most employed ones. The Geostatistical Methods are the most recent, most precise and most complex ones. The Statistical Methods are intermediate to the others in complexity, diffusion and chronological order. (D.J.M.) [pt]

  19. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

    Full Text Available Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm design. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design and designers often seek techniques that enable accurately modeling such events. Extremely leaky devices can inflict functionality fails. The plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations on tail probabilities may in turn bias the yield estimate. We review different closed form approximations and compare against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
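
    The specific closed-form approximations reviewed in the record are not reproduced here; the Python sketch below only illustrates the general issue with one classical moment-matching (Fenton-Wilkinson) approximation of a sum of lognormal leakage currents, compared against a Monte Carlo reference in the tail. The per-device parameters are hypothetical.

        import numpy as np

        def fenton_wilkinson(mu, sigma):
            """Match mean/variance of a sum of independent lognormals to a single lognormal."""
            mu, sigma = np.asarray(mu), np.asarray(sigma)
            m = np.sum(np.exp(mu + sigma**2 / 2))                           # mean of the sum
            v = np.sum((np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2))  # variance of the sum
            s2 = np.log(1 + v / m**2)
            return np.log(m) - s2 / 2, np.sqrt(s2)                          # (mu_Z, sigma_Z)

        # Hypothetical per-device leakage parameters in the log domain (16 devices on a bitline).
        mu = np.full(16, -2.0)
        sigma = np.full(16, 0.8)
        mu_z, sig_z = fenton_wilkinson(mu, sigma)

        rng = np.random.default_rng(7)
        total = rng.lognormal(mu, sigma, size=(200_000, 16)).sum(axis=1)    # Monte Carlo reference
        q99_mc = np.quantile(total, 0.99)
        q99_fw = np.exp(mu_z + sig_z * 2.326)                               # 99th percentile of approximation
        print(f"99th percentile: MC {q99_mc:.2f} vs Fenton-Wilkinson {q99_fw:.2f}")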

  20. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
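
    The thermodynamic transcription models of the paper are not reproduced here; the Python sketch below only shows the local (Nelder-Mead) fitting workflow on a toy sum-of-squares objective, using SciPy. A global CMA-ES comparison would need an external package such as `cma`, which is not shown. The logistic "expression model" and its parameters are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize

        # Toy "expression model": predicted expression = logistic(w * tf + b), fit to noisy data.
        rng = np.random.default_rng(1)
        tf = np.linspace(0.0, 1.0, 60)                       # transcription-factor level (toy)
        true_w, true_b = 6.0, -3.0
        expr = 1.0 / (1.0 + np.exp(-(true_w * tf + true_b))) + rng.normal(0, 0.03, tf.size)

        def sse(params):
            w, b = params
            pred = 1.0 / (1.0 + np.exp(-(w * tf + b)))
            return np.sum((pred - expr) ** 2)

        local = minimize(sse, x0=[1.0, 0.0], method="Nelder-Mead")
        print("Nelder-Mead fit:", local.x, "SSE:", local.fun)
        # With clean data both a local simplex search and a global strategy such as CMA-ES
        # should recover parameters near [6, -3]; differences show up on noisier objectives.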

  1. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
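
    The Bayes linear Bayes machinery of the paper is not reproduced here; the Python sketch below only illustrates the method-of-moments step mentioned in the abstract, fitting a gamma prior to crude per-unit event rates, followed by the standard Poisson-gamma conjugate update. The counts and exposure times are made up.

        import numpy as np

        def gamma_prior_mom(event_counts, exposures):
            """Method-of-moments gamma(shape, rate) prior from crude per-unit event rates."""
            rates = np.asarray(event_counts, dtype=float) / np.asarray(exposures, dtype=float)
            m, v = rates.mean(), rates.var(ddof=1)
            return m**2 / v, m / v                      # (shape, rate)

        # Hypothetical counts and exposure times (e.g. events per user-year in a supply chain).
        counts = np.array([3, 5, 1, 4, 2, 6])
        years  = np.array([2.0, 2.5, 1.0, 2.0, 1.5, 3.0])
        a, b = gamma_prior_mom(counts, years)
        print(f"gamma prior: shape={a:.2f}, rate={b:.2f}  (prior mean {a / b:.2f} events/year)")

        # With a gamma(a, b) prior and a homogeneous Poisson process, the posterior for one
        # unit with count k over exposure t is gamma(a + k, b + t) -- the conjugate update.
        k, t = 4, 2.0
        print(f"posterior mean rate: {(a + k) / (b + t):.2f} events/year")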

  2. Development of a method for estimating total CH₄ emission from rice paddies in Japan using the DNDC-Rice model

    Energy Technology Data Exchange (ETDEWEB)

    Katayanagi, Nobuko [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan); Fumoto, Tamon, E-mail: tamon@affrc.go.jp [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan); Hayano, Michiko [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan); Kyushu Okinawa Agricultural Research Center, National Agriculture and Food Research Organization, Anno 1742-1, Nishinoomote, Kagoshima 891-3102 (Japan); Takata, Yusuke; Kuwagata, Tsuneo; Shirato, Yasuhito [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan); Sawano, Shinji [Forestry and Forest Products Research Institute (FFPRI), 1 Matsunosato, Tsukuba, Ibaraki 305-8687 (Japan); Kajiura, Masako; Sudo, Shigeto; Ishigooka, Yasushi; Yagi, Kazuyuki [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan)

    2016-03-15

    Methane (CH₄) is a greenhouse gas, and paddy fields are one of its main anthropogenic emission sources. To mitigate this emission based on effective management measures, CH₄ emission from paddy fields must be quantified at a national scale. In Japan, country-specific emission factors have been applied since 2003 to estimate national CH₄ emission from paddy fields. However, this method cannot account for the effects of weather conditions and temporal variability of nitrogen fertilizer and organic matter application rates; thus, the estimated emission is highly uncertain. To improve the accuracy of national-scale estimates, we calculated country-specific emission factors using the DeNitrification–DeComposition-Rice (DNDC-Rice) model. First, we calculated CH₄ emission from 1981 to 2010 using 986 datasets that included soil properties, meteorological data, and field management data. Using the simulated site-specific emission, we calculated annual mean emission for each of Japan's seven administrative regions, two water management regimes (continuous flooding and conventional mid-season drainage), and three soil drainage rates (slow, moderate, and fast). The mean emission was positively correlated with organic carbon input to the field, and we developed linear regressions for the relationships among the regions, water management regimes, and drainage rates. The regression results were within the range of published observation values for site-specific relationships between CH₄ emission and organic carbon input rates. This suggests that the regressions provide a simplified method for estimating CH₄ emission from Japanese paddy fields, though some modifications can further improve the estimation accuracy. - Highlights: • DNDC-Rice is a process-based model to simulate rice CH₄ emission from rice paddies. • We simulated annual CH₄ emissions from 986 paddy fields in Japan by DNDC-Rice. • Regional means of CH₄

  3. Application of sensitivity analysis to a simplified coupled neutronic thermal-hydraulics transient in a fast reactor using Adjoint techniques

    International Nuclear Information System (INIS)

    Gilli, L.; Lathouwers, D.; Kloosterman, J.L.; Van der Hagen, T.H.J.J.

    2011-01-01

    In this paper a method to perform sensitivity analysis for a simplified multi-physics problem is presented. The method is based on the Adjoint Sensitivity Analysis Procedure which is used to apply first order perturbation theory to linear and nonlinear problems using adjoint techniques. The multi-physics problem considered includes a neutronic, a thermo-kinetics, and a thermal-hydraulics part and it is used to model the time dependent behavior of a sodium cooled fast reactor. The adjoint procedure is applied to calculate the sensitivity coefficients with respect to the kinetic parameters of the problem for two reference transients using two different model responses; the results obtained are then compared with the values given by a direct sampling of the forward nonlinear problem. Our first results show that, thanks to modern numerical techniques, the procedure is relatively easy to implement and provides good estimates for most perturbations, making the method appealing for more detailed problems. (author)

  4. Dual ant colony operational modal analysis parameter estimation method

    Science.gov (United States)

    Sitarz, Piotr; Powałka, Bartosz

    2018-01-01

    Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain while others operate in the frequency domain. The former use correlation functions, the latter - spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the interval of estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  5. A simplified method of power calibration

    International Nuclear Information System (INIS)

    Jones, M.; Elliott, A.

    1974-01-01

    The Nuclear Reactor Facility, University of Missouri Rolla, has developed a unique method of power calibration for pool type reactors. Since water is incompressible it can be assumed that a rise in the water level of the pool while operating at power can be attributed to the heat input from the reactor core. Water level changes of a small magnitude are easily detectable. This method has proven to be less costly, less time consuming, and more reproducible than the conventional gold foil calibration, and has proven to be more accurate than a heat balance because several problems with heat flow through the walls and to the atmosphere are automatically compensated for with this method. The accuracy of this means of calibration depends upon the accuracy of the measurement of the water level and can normally be expected to be two to four percent. (author)
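
    The record gives no numbers, so the back-of-the-envelope Python sketch below is only one way to read the stated principle: treat the level rise as thermal expansion of the pool water, assume the whole pool warms uniformly and that heat losses are negligible, and convert the observed rise into an average thermal power. All values are made up and are not the facility's actual data.

        # Back-of-the-envelope energy balance for a pool-level power calibration (illustrative only).
        rho   = 998.0      # water density, kg/m^3
        c_p   = 4186.0     # specific heat of water, J/(kg K)
        beta  = 2.6e-4     # volumetric thermal expansion coefficient of water near 25-30 C, 1/K
        area  = 12.0       # assumed pool free-surface area, m^2
        dh    = 0.004      # observed level rise, m (made up)
        dt    = 1800.0     # run time at power, s

        # dV = beta * V * dT and dV = area * dh, so the pool volume cancels:
        # Q = rho * c_p * V * dT = rho * c_p * area * dh / beta, and P = Q / dt.
        heat_J = rho * c_p * area * dh / beta
        power_W = heat_J / dt
        print(f"average thermal power ~ {power_W / 1e3:.0f} kW")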

  6. A simplified method of power calibration

    Energy Technology Data Exchange (ETDEWEB)

    Jones, M; Elliott, A [University of Missouri-Rolla (United States)

    1974-07-01

    The Nuclear Reactor Facility, University of Missouri Rolla, has developed a unique method of power calibration for pool type reactors. Since water is incompressible it can be assumed that a rise in the water level of the pool while operating at power can be attributed to the heat input from the reactor core. Water level changes of a small magnitude are easily detectable. This method has proven to be less costly, less time consuming, and more reproducible than the conventional gold foil calibration, and has proven to be more accurate than a heat balance because several problems with heat flow through the walls and to the atmosphere are automatically compensated for with this method. The accuracy of this means of calibration depends upon the accuracy of the measurement of the water level and can normally be expected to be two to four percent. (author)

  7. Estimation of water percolation by different methods using TDR

    Directory of Open Access Journals (Sweden)

    Alisson Jadavi Pereira da Silva

    2014-02-01

    Full Text Available Detailed knowledge on water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate the percolation estimated by time-domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods and by the change in water storage in the soil profile at 16 points of moisture measurement at different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was installed in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated based on continuous monitoring with TDR, and at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter and it has advantages over the other evaluated methods, of which the most relevant are the possibility of estimating percolation in short time intervals and exemption from the predetermination of soil hydraulic properties such as water retention and hydraulic conductivity. The estimates obtained by the Darcy-Buckingham equation for percolation levels using the K(θ) function predicted by the method of Hillel et al. (1972) provided water percolation estimates compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
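
    The storage-change bookkeeping behind the TDR approach can be sketched briefly: integrate the moisture profile at two times and divide the storage difference by the time interval. The Python example below uses illustrative profiles, not the study's data, and assumes (as in the covered trial) that evaporation and root uptake are negligible.

        import numpy as np

        def soil_water_storage(theta, depth_cm):
            """Profile water storage (cm) from volumetric moisture readings at given depths
            (trapezoidal integration)."""
            theta, depth_cm = np.asarray(theta), np.asarray(depth_cm)
            return np.sum(0.5 * (theta[1:] + theta[:-1]) * np.diff(depth_cm))

        # Illustrative TDR profiles at 16 depths, read 6 hours apart.
        depth = np.linspace(5, 80, 16)                 # cm
        theta_t1 = 0.38 - 0.0008 * depth               # volumetric moisture, cm3/cm3
        theta_t2 = theta_t1 - 0.001                    # slightly drier profile 6 h later

        dt_h = 6.0
        percolation = (soil_water_storage(theta_t1, depth) - soil_water_storage(theta_t2, depth)) / dt_h
        print(f"percolation ~ {percolation:.4f} cm/h ({24 * percolation:.3f} cm/day)")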

  8. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    Science.gov (United States)

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate.• Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate.• Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods.• CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.

  9. Setting limits on supersymmetry using simplified models

    CERN Document Server

    Gutschow, C.

    2012-01-01

    Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical implications. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be re-cast in this manner into almost any theoretical framework, includ...

  10. Development of long operating cycle simplified BWR

    International Nuclear Information System (INIS)

    Heki, H.; Nakamaru, M.; Maruya, T.; Hiraiwa, K.; Arai, K.; Narabayash, T.; Aritomi, M.

    2002-01-01

    This paper describes an innovative plant concept for a long operating cycle simplified BWR (LSBWR). The plant concept addresses: 1) a long operating cycle (3 to 15 years), 2) simplified systems and building, and 3) modular factory fabrication. The long-operating-cycle core design is based on medium-enriched U-235 fuel with burnable poison. Simplified systems and building are realized by using natural circulation with a bottom-located core, internal CRDs and a PCV with passive systems, and an integrated reactor and turbine building. The LSBWR concept achieves a high degree of safety through IVR (In-Vessel Retention) capability, a large water inventory above the core region and no PCV venting to the environment, owing to the PCCS (Passive Containment Cooling System) and an internal vent tank. The integrated building concept could realize a highly modular arrangement in a hull structure (ship frame structure), ease of seismic isolation and high applicability of standardization and factory fabrication. (authors)

  11. A New Method for Estimation of Velocity Vectors

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1998-01-01

    The paper describes a new method for determining the velocity vector of a remotely sensed object using either sound or electromagnetic radiation. The movement of the object is determined from a field with spatial oscillations in both the axial direction of the transducer and in one or two directions transverse to the axial direction. By using a number of pulse emissions, the inter-pulse movement can be estimated and the velocity found from the estimated movement and the time between pulses. The method is based on the principle of using transverse spatial modulation for making the received…

  12. Comparison of methods used for estimating pharmacist counseling behaviors.

    Science.gov (United States)

    Schommer, J C; Sullivan, D L; Wiederholt, J B

    1994-01-01

    To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.

  13. Conservatism inherent to simplified qualification techniques used for piping steady state vibration

    International Nuclear Information System (INIS)

    Olson, D.E.; Smetters, J.L.

    1983-01-01

    This paper examines some of the qualification techniques currently used by the power industry, including the techniques specified in a recently issued standard related to this subject (ANSI/ASME OM-3, Requirements for Preoperational and Initial Startup Vibration Testing of Nuclear Power Plant Piping Systems). Several methods are used to demonstrate the amount of conservatism inherent in these techniques. Allowable limits calculated by the use of simplified techniques are compared to limits calculated by more detailed computer analysis. A portion of a reactor feedwater piping system along with the results of a piping vibration monitoring program recently completed in a nuclear power plant are used as case studies. The limits determined by the use of simplified criteria are also compared to limits determined empirically through the use of strain gauges. The simple beam analogies that use vibrational displacement as acceptance criteria were found to be conservative for all the examples studied. However, when velocity was used as a criterion, it was not always conservative. Simplified techniques that result in displacement allowables appear to be the most viable method of qualifying piping vibrations. Quantities referred to in the paper are cited in British units throughout. These may be converted to the International System of Units (SI) as follows: 1 foot=0.3048 meter; 1 inch=0.0254 meter=1,000 mils; 1 psi=6,894 pascals; and 1 inch/second=0.0254 meter/second. (orig.)

  14. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
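
    For reference, the Python sketch below contrasts ordinary least squares with ordinary ridge regression on a nearly collinear design; the MUR estimator introduced in the record is not reproduced. The data are synthetic, made up in the spirit of Hoerl and Kennard's examples.

        import numpy as np

        def ols(X, y):
            return np.linalg.solve(X.T @ X, X.T @ y)

        def ridge(X, y, k):
            p = X.shape[1]
            return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

        # Nearly collinear design: the second regressor is almost a copy of the first.
        rng = np.random.default_rng(5)
        x1 = rng.normal(size=100)
        x2 = x1 + rng.normal(scale=0.01, size=100)
        X = np.column_stack([x1, x2])
        y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=100)

        print("OLS  :", ols(X, y))          # typically unstable: large coefficients of opposite sign
        print("ridge:", ridge(X, y, k=0.5)) # shrunk toward each other, close to [1, 1]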

  15. An accurate calibration method for accelerometer nonlinear scale factor on a low-cost three-axis turntable

    International Nuclear Information System (INIS)

    Pan, Jianye; Zhang, Chunxi; Cai, Qingzhong

    2014-01-01

    Strapdown inertial navigation system (SINS) requirements are very demanding on gyroscopes and accelerometers as well as on calibration. To improve the accuracy of SINS, high-accuracy calibration is needed. Adding the accelerometer nonlinear scale factor into the model and reducing estimation errors is essential for improving calibration methods. In this paper, the inertial navigation error model is simplified, including only velocity and tilt errors. Based on the simplified error model, the relationship between the navigation errors (the rates of change of velocity errors) and the inertial measurement unit (IMU) calibration parameters is presented. A tracking model is designed to estimate the rates of change of velocity errors. With a special calibration procedure consisting of six rotation sequences, the accelerometer nonlinear scale factor errors can be computed by the estimates of the rates of change of velocity errors. Simulation and laboratory test results show that the accelerometer nonlinear scale factor can be calibrated with satisfactory accuracy on a low-cost three-axis turntable in several minutes. The comparison with the traditional calibration method highlights the superior performance of the proposed calibration method without precise orientation control. In addition, the proposed calibration method saves a lot of time in comparison with the multi-position calibration method. (paper)

  16. A simplified model for tritium permeation transient predictions when trapping is active

    Science.gov (United States)

    Longhurst, G. R.

    1994-09-01

    This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.

  17. Order Tracking Based on Robust Peak Search Instantaneous Frequency Estimation

    International Nuclear Information System (INIS)

    Gao, Y; Guo, Y; Chi, Y L; Qin, S R

    2006-01-01

    Order tracking plays an important role in the non-stationary vibration analysis of rotating machinery, especially during run-up or coast-down. An instantaneous frequency estimation (IFE) based order tracking of rotating machinery is introduced, in which a peak-search algorithm applied to the time-frequency spectrogram is employed to obtain the IFE of the vibrations. An improvement to the peak search is proposed, which prevents strong non-order components or noise from disturbing the peak search. Compared with traditional order tracking methods, IFE-based order tracking is simpler to apply and depends only on software. Tests verify the validity of the method. This method is an effective supplement to traditional methods, and its application to condition monitoring and diagnosis of rotating machinery is feasible.
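
    The improvement described can be illustrated with a minimal spectrogram peak search in Python: restricting the search to a band around the previous estimate keeps the track on the order component. A synthetic chirp stands in for a run-up signal and a strong fixed tone for a non-order disturbance; this is a sketch of the idea, not the authors' exact algorithm, and the band width and initial frequency are assumed values.

        import numpy as np
        from scipy.signal import chirp, spectrogram

        fs = 2000.0
        t = np.arange(0, 10, 1 / fs)
        x = chirp(t, f0=10, t1=10, f1=60, method="linear")            # run-up-like sweep, 10 -> 60 Hz
        x += 1.5 * np.sin(2 * np.pi * 120 * t)                        # strong non-order tone

        f, tt, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=768)

        # Naive IFE: global peak per time slice -- can lock onto the 120 Hz disturbance.
        ife_naive = f[np.argmax(Sxx, axis=0)]

        # Restricted search: only look within +/- band_hz of the previous estimate.
        band_hz = 5.0
        ife = np.empty(Sxx.shape[1])
        ife[0] = 10.0                                                 # known initial shaft frequency
        for k in range(1, Sxx.shape[1]):
            idx = np.where(np.abs(f - ife[k - 1]) <= band_hz)[0]
            ife[k] = f[idx[np.argmax(Sxx[idx, k])]]

        print("naive IFE ends at %.1f Hz, restricted IFE ends at %.1f Hz" % (ife_naive[-1], ife[-1]))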

  18. Four-jet impingement: Noise characteristics and simplified acoustic model

    International Nuclear Information System (INIS)

    Brehm, C.; Housman, J.A.; Kiris, C.C.; Barad, M.F.; Hutcheson, F.V.

    2017-01-01

    Highlights: • Large eddy simulation of unique four jet impingement configuration. • Characterization of flow features using POD, FFT, and wavelet decomposition. • Noise source identification utilizing causality method. • Development of simplified acoustic model utilizing equivalent source method. • Comparison with experimental data from the BENS experiment. - Abstract: The noise generation mechanisms for four directly impinging supersonic jets are investigated employing implicit large eddy simulations with a higher-order weighted essentially non-oscillatory scheme. Although these types of impinging jet configurations have been used in many experiments, a detailed investigation of the noise generation mechanisms has not been conducted before. The flow field is highly complex and contains a wide range of temporal and spatial scales relevant for noise generation. Proper orthogonal decomposition is utilized to characterize the unsteady nature of the flow field involving unsteady shock oscillations, large coherent turbulent flow structures, and the sporadic appearance of vortical flow structures in the center of the four-jet impingement region. The causality method based on Lighthill's acoustic analogy is applied to link fluctuations of flow quantities inside the source region to the acoustic pressure in the far field. It will be demonstrated that the entropy fluctuation term plays a vital role in the noise generation process. Consequently, the understanding of the noise generation mechanisms is employed to develop a simplified acoustic model of the four-jet impingement device by utilizing the equivalent source method. Finally, three linear acoustic models of the four-jet impingement device are used as broadband noise sources inside an engine nacelle and the acoustic scattering results are validated against far-field acoustic experimental data.

  19. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
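
    The estimation charts themselves are not reproduced here; the Python sketch below only gives the spherical and exponential semivariogram models mentioned, using the common "practical range" convention for the exponential form (the truncated fractal model is omitted). The sill, range and lag values are illustrative.

        import numpy as np

        def spherical(h, sill, a):
            """Spherical semivariogram: reaches the sill exactly at range a."""
            h = np.asarray(h, dtype=float)
            g = sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
            return np.where(h < a, g, sill)

        def exponential(h, sill, a):
            """Exponential semivariogram, a = practical range (~95% of the sill)."""
            h = np.asarray(h, dtype=float)
            return sill * (1.0 - np.exp(-3.0 * h / a))

        lags = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # interwell lags, e.g. in metres
        print("spherical  :", spherical(lags, sill=1.0, a=300.0).round(3))
        print("exponential:", exponential(lags, sill=1.0, a=300.0).round(3))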

  20. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    International Nuclear Information System (INIS)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K.; Heames, T.J.

    1998-04-01

    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence

  1. BED estimates of HIV incidence: resolving the differences, making things simpler.

    Directory of Open Access Journals (Sweden)

    John Hargrove

    Full Text Available Develop a simple method for optimal estimation of HIV incidence using the BED capture enzyme immunoassay. Use existing BED data to estimate mean recency duration, false recency rates and HIV incidence with reference to a fixed time period, T. Compare BED and cohort estimates of incidence referring to identical time frames. Generalize this approach to suggest a method for estimating HIV incidence from any cross-sectional survey. Follow-up and BED analyses of the same, initially HIV negative, cases followed over the same set time period T, produce estimates of the same HIV incidence, permitting the estimation of the BED mean recency period for cases who have been HIV positive for less than T. Follow-up of HIV positive cases over T, similarly, provides estimates of the false-recent rate appropriate for T. Knowledge of these two parameters for a given population allows the estimation of HIV incidence during T by applying the BED method to samples from cross-sectional surveys. An algorithm is derived for providing these estimates, adjusted for the false-recent rate. The resulting estimator is identical to one derived independently using a more formal mathematical analysis. Adjustments improve the accuracy of HIV incidence estimates. Negative incidence estimates result from the use of inappropriate estimates of the false-recent rate and/or from sampling error, not from any error in the adjustment procedure. Referring all estimates of mean recency periods, false-recent rates and incidence estimates to a fixed period T simplifies estimation procedures and allows the development of a consistent method for producing adjusted estimates of HIV incidence of improved accuracy. Unadjusted BED estimates of incidence, based on life-time recency periods, would be both extremely difficult to produce and of doubtful value.

  2. THE METHODS FOR ESTIMATING REGIONAL PROFESSIONAL MOBILE RADIO MARKET POTENTIAL

    Directory of Open Access Journals (Sweden)

    Y.À. Korobeynikov

    2008-12-01

    Full Text Available The paper presents the author's methods of estimating the regional professional mobile radio market potential, which belongs to high-tech B2B markets. These methods take into consideration such market peculiarities as the great range and complexity of products, technological constraints and the infrastructure development required for the operation of the technological systems. The paper gives an estimate of the professional mobile radio potential in the Perm region. This estimate is already used by one of the systems integrators for its strategy development.

  3. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method versus the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result from this comparative study is in favor of the use of geostatistics in most cases because the method has lived up to its theoretical claims. A good exposition on the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and the third study objectives, which are to assess the potential benefits that can be derived by introducing the geostatistical method into the current state of the art in uranium reserve estimation, and to be instrumental in generating the acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and an accompanying computer program user's guide.
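
    As a reference point for conventional method (b), the Python sketch below implements a generic inverse-distance-squared grade estimate on made-up drillhole data; the geostatistical (kriging) counterpart needs a variogram model and is not shown. The coordinates and grades are hypothetical.

        import numpy as np

        def inverse_distance_squared(xy_samples, grades, xy_target, power=2.0):
            """Conventional point grade estimate: weights proportional to 1/d^power."""
            d = np.linalg.norm(np.asarray(xy_samples) - np.asarray(xy_target), axis=1)
            if np.any(d == 0):                       # target coincides with a sample
                return grades[np.argmin(d)]
            w = 1.0 / d ** power
            return np.sum(w * grades) / np.sum(w)

        # Made-up drillhole collars (x, y in metres) and grades (e.g. % U3O8).
        samples = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [120.0, 130.0]])
        grades  = np.array([0.08, 0.12, 0.05, 0.20])
        print(f"estimated grade at (50, 50): "
              f"{inverse_distance_squared(samples, grades, (50.0, 50.0)):.3f}")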

  4. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Full Text Available Abstract Background To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
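
    The BCV scheme examined in the record is not reproduced here; the Python sketch below only shows the recommended baseline, a standard k-fold cross-validation error estimate with scikit-learn on synthetic low-dimensional data standing in for expression profiles. The classifier and dataset parameters are illustrative choices.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic low-dimensional classification data.
        X, y = make_classification(n_samples=200, n_features=10, n_informative=4, random_state=0)

        clf = LogisticRegression(max_iter=1000)
        acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")   # 10-fold CV
        print(f"10-fold CV misclassification error: {1.0 - acc.mean():.3f} (+/- {acc.std():.3f})")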

  5. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

    Full Text Available The estimation problem for target velocity is addressed in this paper for the scenario of a distributed multi-input multi-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, in the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and that the velocity estimation performance can be further improved by increasing either the number of radar antennas or the accuracy of the target position information. Furthermore, compared with the existing methods, a better estimation performance can be achieved.

  6. A simplified method for random vibration analysis of structures with random parameters

    International Nuclear Information System (INIS)

    Ghienne, Martin; Blanzé, Claude

    2016-01-01

    Piezoelectric patches with adapted electrical circuits or viscoelastic dissipative materials are two solutions particularly adapted to reduce vibration of light structures. To accurately design these solutions, it is necessary to describe precisely the dynamical behaviour of the structure. It may quickly become computationally intensive to describe robustly this behaviour for a structure with nonlinear phenomena, such as contact or friction for bolted structures, and uncertain variations of its parameters. The aim of this work is to propose a non-intrusive reduced stochastic method to characterize robustly the vibrational response of a structure with random parameters. Our goal is to characterize the eigenspace of linear systems with dynamic properties considered as random variables. This method is based on a separation of random aspects from deterministic aspects and allows us to estimate the first central moments of each random eigenfrequency with a single deterministic finite element computation. The method is applied to a frame with several Young's moduli modeled as random variables. This example could be expanded to a bolted structure including piezoelectric devices. The method needs to be enhanced when random eigenvalues are closely spaced. An indicator with no additional computational cost is proposed to characterize the 'proximity' of two random eigenvalues. (paper)

  7. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    International Nuclear Information System (INIS)

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-01-01

    relative error in the residence time estimates taken over the phantom population. The mean errors in the residence time estimates over all the organs were <9.9% (pure QSPECT), <13.2% (pure QPlanar), <7.2% (hybrid QPlanar/QSPECT), <19.2% (hybrid CPlanar/QSPECT), and 7%-159% (pure CPlanar). The standard deviations of the errors for all the organs over all the phantoms were <9.9%, <11.9%, <10.8%, <22.0%, and <107.9% for the same methods, respectively. The processing methods differed both in terms of their average accuracy and the variation of the accuracy over the population of phantoms, thus demonstrating the importance of using a phantom population in evaluating quantitative imaging methods. Hybrid CPlanar/QSPECT provided improved accuracy compared to pure CPlanar and required the addition of only a single SPECT acquisition. The QPlanar or hybrid QPlanar/QSPECT methods had mean errors and standard deviations of errors that approached those of pure QSPECT while providing simplified image acquisition protocols, and thus may be more clinically practical.

  8. Simplified Dynamic Analysis of Grinders Spindle Node

    Science.gov (United States)

    Demec, Peter

    2014-12-01

    The contribution deals with the simplified dynamic analysis of a surface grinding machine spindle node. The dynamic analysis is based on the transfer matrix method, which is essentially a matrix form of the method of initial parameters. The advantage of the described method, despite the seemingly complex mathematical apparatus, is primarily that it does not require costly commercial finite element software to solve the problem. All calculations can be made, for example, in MS Excel, which is advantageous especially in the initial stages of designing the spindle node, allowing rapid assessment of the suitability of its design. After the entire structure of the spindle node has been detailed, it is also necessary to perform a refined dynamic analysis in an FEM environment, which requires the necessary skills and experience and is therefore economically demanding. This work was developed within grant project KEGA No. 023TUKE-4/2012 Creation of a comprehensive educational - teaching material for the article Production technique using a combination of traditional and modern information technology and e-learning.

  9. A simplified transient three-dimensional model for estimating the thermal performance of the vapor chambers

    International Nuclear Information System (INIS)

    Chen, Y.-S.; Chien, K.-H.; Wang, C.-C.; Hung, T.-C.; Pei, B.-S.

    2006-01-01

    Vapor chambers (flat-plate heat pipes) have recently been applied to electronics cooling. To satisfy the quick-response requirements of industry, a simplified transient three-dimensional linear model has been developed and tested in this study. In the proposed model, the vapor is treated as a single interface between the evaporator and condenser wicks, and this assumption enables the vapor chamber to be analyzed by splitting it into small control volumes. The calculated transient responses show good agreement with previously available results. For further validation of the proposed model, a water-cooling experiment was conducted. In addition to the vapor chamber, the heating block is also taken into account in the simulation. It is found that including the capacitance of the heating block yields better agreement with the measurements.

  10. Simplified models for dark matter face their consistent completions

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    2017-03-01

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.

  11. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets, which hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
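
    To make the integration step concrete, the sketch below performs stride-wise double integration of gravity-free foot acceleration with a linear drift (zero-velocity) correction. It assumes that orientation estimation and gravity removal have already been done, and it is not one of the specific pipelines benchmarked in the paper.

```python
# Minimal sketch of stride-wise double integration of foot acceleration with a
# linear drift correction (zero velocity assumed at both stride ends). The
# orientation estimation / gravity removal step is assumed to be done already.
import numpy as np

def integrate_stride(acc, fs):
    """acc: (N, 3) gravity-free acceleration in m/s^2, fs: sampling rate in Hz."""
    dt = 1.0 / fs
    vel = np.cumsum(acc, axis=0) * dt            # naive velocity integration
    # Dedrift: remove a linearly growing fraction of the residual end velocity
    # so that the velocity is zero at the start and end of the stride.
    ramp = np.linspace(0.0, 1.0, len(vel))[:, None]
    vel -= ramp * vel[-1]
    pos = np.cumsum(vel, axis=0) * dt            # position by second integration
    return vel, pos

# Toy stride: 1 s of synthetic forward acceleration followed by braking.
fs = 200
t = np.arange(fs) / fs
acc = np.zeros((fs, 3))
acc[:, 0] = 4.0 * np.sin(2 * np.pi * t)          # accelerate then decelerate
vel, pos = integrate_stride(acc, fs)
print(f"stride length estimate: {pos[-1, 0]:.2f} m")
```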

  12. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    Science.gov (United States)

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

    This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse Gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them with a robust but simple support tool for assessing the environmental performance of energy systems.
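
    The kind of simplified model described above reduces to a one-line function of the identified key parameters. The sketch below illustrates the idea; the functional form, embodied-emission figure, and rated power are hypothetical placeholders rather than the coefficients fitted in the study.

```python
# Illustrative sketch of a simplified LCA model: once the key parameters (load
# factor, lifetime) are identified, GHG intensity can be expressed as a simple
# function of them. All numbers below are hypothetical placeholders.

def wind_ghg_intensity(load_factor, lifetime_years,
                       embodied_kgco2eq=2.0e6, rated_power_kw=2000.0):
    """Return g CO2-eq per kWh for a single onshore turbine."""
    hours_per_year = 8760.0
    lifetime_production_kwh = rated_power_kw * hours_per_year * load_factor * lifetime_years
    return embodied_kgco2eq * 1000.0 / lifetime_production_kwh  # g/kWh

# A low-wind site with a short lifetime vs. a good site with a long lifetime.
print(f"{wind_ghg_intensity(0.20, 15):.1f} g CO2-eq/kWh")
print(f"{wind_ghg_intensity(0.35, 25):.1f} g CO2-eq/kWh")
```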

  13. Depletion velocities for atmospheric pollutants oriented To improve the simplified regional dispersion modelling

    International Nuclear Information System (INIS)

    Sanchez Gacita, Madeleine; Turtos Carbonell, Leonor; Rivero Oliva, Jose de Jesus

    2005-01-01

    The present work aims to improve externality assessment using simplified methodologies by obtaining depletion velocities for the primary pollutants SO2, NOx, and TSP (total suspended particles) and for sulfate and nitrate aerosols, the secondary pollutants formed from them. The main goal was to estimate these values for different cases in order to build an ensemble of values for the geographic area, from which the most representative could be selected for use in future studies that rely on a simplified methodology for regional dispersion assessment, given the data, qualified manpower, and time required by a detailed approach. The results were obtained from detailed regional dispersion studies conducted for six power facilities, three in Cuba (at Mariel, Santa Cruz, and Tallapiedra) and three in Mexico (at Tuxpan, Tula, and Manzanillo). The depletion velocity for SO2 was similar in all cases. Results for Tallapiedra, Santa Cruz, Mariel, and Manzanillo were similar, while a high uncertainty was found for Tula and Tuxpan.

  14. Simplified High-Power Inverter

    Science.gov (United States)

    Edwards, D. B.; Rippel, W. E.

    1984-01-01

    Solid-state inverter simplified by use of single gate-turnoff device (GTO) to commutate multiple silicon controlled rectifiers (SCR's). By eliminating conventional commutation circuitry, GTO reduces cost, size and weight. GTO commutation applicable to inverters of greater than 1-kilowatt capacity. Applications include emergency power, load leveling, drives for traction and stationary polyphase motors, and photovoltaic-power conditioning.

  15. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for parameter estimation) to a deterministically chaotic low-dimensional dynamical system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than and with smaller bias than, the previously proposed “multiple shooting” method. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for estimating the parameter of the logistic map is also discussed. This method appears to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
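
    The sketch below illustrates the core of the segment-based ML idea: under Gaussian observational noise, fitting a short segment reduces to least squares over the map parameter r and the initial value x1. The brute-force grid search and the single short segment are simplifications for illustration, not the paper's piece-wise implementation.

```python
# Minimal sketch: ML estimation of the logistic-map parameter r and initial
# value x1 from a short noisy segment reduces, under Gaussian observational
# noise, to minimising the sum of squared residuals over (r, x1).
import numpy as np

def logistic_trajectory(r, x1, n):
    x = np.empty(n)
    x[0] = x1
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def fit_segment(y, r_grid, x1_grid):
    best_sse, best_r, best_x1 = np.inf, None, None
    for r in r_grid:
        for x1 in x1_grid:
            resid = y - logistic_trajectory(r, x1, len(y))
            sse = float(resid @ resid)
            if sse < best_sse:
                best_sse, best_r, best_x1 = sse, r, x1
    return best_r, best_x1

rng = np.random.default_rng(0)
true_r, true_x1, n_obs, sigma = 3.8, 0.3, 10, 0.01
y = logistic_trajectory(true_r, true_x1, n_obs) + rng.normal(0.0, sigma, n_obs)
# Coarse grids keep the example fast; the segment must stay short because of
# the exponential sensitivity of the trajectory to r and x1.
r_hat, x1_hat = fit_segment(y, np.linspace(3.5, 4.0, 251), np.linspace(0.05, 0.95, 181))
print(f"r_hat = {r_hat:.3f}, x1_hat = {x1_hat:.3f}")
```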

  16. A comparison study of size-specific dose estimate calculation methods

    Energy Technology Data Exchange (ETDEWEB)

    Parikh, Roshni A. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Michigan Health System, Department of Radiology, Ann Arbor, MI (United States); Wien, Michael A.; Jordan, David W.; Ciancibello, Leslie; Berlin, Sheila C. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Novak, Ronald D. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Rebecca D. Considine Research Institute, Children's Hospital Medical Center of Akron, Center for Mitochondrial Medicine Research, Akron, OH (United States); Klahr, Paul [CT Clinical Science, Philips Healthcare, Highland Heights, OH (United States); Soriano, Stephanie [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Washington, Department of Radiology, Seattle, WA (United States)

    2018-01-15

    The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. The objective was to compare the accuracy of thickness vs. weight measurement of body size to allow for the calculation of the SSDE in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically thinnest, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically thickest; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc<0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide

  17. On the estimation method of compressed air consumption during pneumatic caisson sinking

    OpenAIRE

    平川, 修治; ヒラカワ, シュウジ; Shuji, HIRAKAWA

    1990-01-01

    There are several methods for estimating compressed air consumption during pneumatic caisson sinking, and estimates obtained by these methods need to be compared under the same conditions. In this paper, methods are proposed that can accurately estimate the compressed air consumption during pneumatic caisson sinking.

  18. Stable crack growth behaviors in welded CT specimens -- finite element analyses and simplified assessments

    International Nuclear Information System (INIS)

    Yagawa, Genki; Yoshimura, Shinobu; Aoki, Shigeru; Kikuchi, Masanori; Arai, Yoshio; Kashima, Koichi; Watanabe, Takayuki; Shimakawa, Takashi

    1993-01-01

    The paper describes stable crack growth behaviors in welded CT specimens made of nuclear pressure vessel A533B class 1 steel, in which initial cracks are placed normal to the fusion line. First, using the relations between the load-line displacement (δ) and the crack extension amount (Δa) measured in experiments, generation-phase finite element crack growth analyses are performed, calculating the applied load (P) and various kinds of J-integrals. Next, simplified crack growth analyses based on the GE/EPRI method and the reference stress method are performed using the same experimental results. Some modification procedures for the two simplified assessment schemes are discussed to make them applicable to inhomogeneous materials. Finally, a neural network approach is proposed to optimize the above modification procedures. 20 refs., 13 figs., 1 tab

  19. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    Science.gov (United States)

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by linear regression, where the inputs are the reflectance and transmittance of the leaf. The performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was carried out for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. The accuracy reached 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to estimate the chlorophyll content of the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased the accuracy of chlorophyll content estimation by using an optical arrangement that yields both reflectance and transmittance information, while the required hardware remains inexpensive.
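
    The regression step described above is straightforward to reproduce. The sketch below fits chlorophyll content as a linear function of reflectance and transmittance by least squares; the calibration data are synthetic placeholders, not the measurements or coefficients reported in the paper.

```python
# Minimal sketch of the regression step: chlorophyll content modelled as a
# linear function of leaf reflectance (R) and transmittance (T). The optical
# measurement of R and T is assumed to be done elsewhere.
import numpy as np

def fit_chlorophyll_model(R, T, chl):
    """Least-squares fit of chl ~ b0 + b1*R + b2*T."""
    A = np.column_stack([np.ones_like(R), R, T])
    coef, *_ = np.linalg.lstsq(A, chl, rcond=None)
    return coef

def predict_chlorophyll(coef, R, T):
    return coef[0] + coef[1] * R + coef[2] * T

# Synthetic calibration data (reflectance, transmittance, SPAD-like readings).
rng = np.random.default_rng(3)
R = rng.uniform(0.05, 0.25, 40)
T = rng.uniform(0.02, 0.15, 40)
chl = 60.0 - 120.0 * R - 80.0 * T + rng.normal(0, 1.5, 40)
coef = fit_chlorophyll_model(R, T, chl)
print("fitted coefficients:", np.round(coef, 1))
print(f"prediction for R=0.10, T=0.05: {predict_chlorophyll(coef, 0.10, 0.05):.1f}")
```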

  20. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves

    Directory of Open Access Journals (Sweden)

    Madaín Pérez-Patricio

    2018-02-01

    Full Text Available This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by linear regression, where the inputs are the reflectance and transmittance of the leaf. The performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was carried out for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. The accuracy reached 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to estimate the chlorophyll content of the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased the accuracy of chlorophyll content estimation by using an optical arrangement that yields both reflectance and transmittance information, while the required hardware remains inexpensive.

  1. Joint Spatio-Temporal Filtering Methods for DOA and Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Benesty, Jacob

    2015-01-01

    some attention in the community and is quite promising for several applications. The proposed methods are based on optimal, adaptive filters that leave the desired signal, having a certain DOA and fundamental frequency, undistorted and suppress everything else. The filtering methods simultaneously... operate in space and time, whereby it is possible to resolve cases that are otherwise problematic for pitch estimators or DOA estimators based on beamforming. Several special cases and improvements are considered, including a method for estimating the covariance matrix based on the recently proposed...

  2. Temperature distribution of a simplified rotor due to a uniform heat source

    Science.gov (United States)

    Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver

    2018-03-01

    In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged by cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy-to-implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies in the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows the conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.

  3. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied, which shows that the continuous-time estimates are more robust to high sampling rates, measurement noise and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
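
    For orientation, the sketch below estimates first-order RC (Thevenin) battery parameters by ordinary least squares on a discretised model, i.e. the discrete-time baseline the paper argues against, not its continuous-time instrumental-variable estimator. The pulse test and parameter values are simulated placeholders.

```python
# Minimal sketch of estimating first-order RC battery parameters by ordinary
# least squares on an ARX-form discretised model; data below are simulated.
import numpy as np

def simulate_rc(i, dt, R0, R1, C1):
    """Overpotential v = R0*i + v1, with C1*dv1/dt = i - v1/R1 (zero-order hold)."""
    a = np.exp(-dt / (R1 * C1))
    v1 = np.zeros_like(i)
    for k in range(1, len(i)):
        v1[k] = a * v1[k - 1] + R1 * (1 - a) * i[k - 1]
    return R0 * i + v1

dt, n = 1.0, 600
rng = np.random.default_rng(0)
i = np.where((np.arange(n) // 100) % 2 == 0, 2.0, 0.0)   # pulsed current (A)
v = simulate_rc(i, dt, R0=0.05, R1=0.03, C1=2000.0) + rng.normal(0, 1e-4, n)

# ARX form: v[k] = a*v[k-1] + b0*i[k] + b1*i[k-1], with b0 = R0 and
# b1 = R1*(1-a) - a*R0, from which R0, R1, C1 are recovered.
A = np.column_stack([v[:-1], i[1:], i[:-1]])
theta, *_ = np.linalg.lstsq(A, v[1:], rcond=None)
a, b0, b1 = theta
R0_hat = b0
R1_hat = (b1 + a * b0) / (1 - a)
C1_hat = -dt / (R1_hat * np.log(a))
print(f"R0={R0_hat:.4f} ohm, R1={R1_hat:.4f} ohm, C1={C1_hat:.0f} F")
```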

  4. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in the reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  5. Simplified pipe gun

    International Nuclear Information System (INIS)

    Sorensen, H.; Nordskov, A.; Sass, B.; Visler, T.

    1987-01-01

    A simplified version of a deuterium pellet gun based on the pipe gun principle is described. The pipe gun is made from a continuous tube of stainless steel and gas is fed in from the muzzle end only. It is indicated that the pellet length is determined by the temperature gradient along the barrel right outside the freezing cell. Velocities of around 1000 m/s with a scatter of ±2% are obtained with a propellant gas pressure of 40 bar

  6. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    Science.gov (United States)

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two further noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for the noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 3 and 4 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for the noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be

  7. Theoretical Derivation of Simplified Evaluation Models for the First Peak of a Criticality Accident in Nuclear Fuel Solution

    International Nuclear Information System (INIS)

    Nomura, Yasushi

    2000-01-01

    In a reprocessing facility where nuclear fuel solutions are processed, one could observe a series of power peaks, with the highest peak right after a criticality accident. The criticality alarm system (CAS) is designed to detect the first power peak and warn workers near the reacting material by sounding alarms immediately. Consequently, exposure of the workers would be minimized by an immediate and effective evacuation. Therefore, in the design and installation of a CAS, it is necessary to estimate the magnitude of the first power peak and to set up the threshold point at which the CAS initiates the alarm. Furthermore, it is necessary to estimate the level of potential exposure of workers in the case of accidents so as to decide the appropriateness of installing a CAS for a given compartment. A simplified evaluation model to estimate the minimum scale of the first power peak during a criticality accident is derived by theoretical considerations only, for use in the design of a CAS to set up the threshold point triggering the alarm signal. Another simplified evaluation model is derived in the same way to estimate the maximum scale of the first power peak, for use in judging the appropriateness of installing a CAS. Both models are shown to have adequate margin in predicting the minimum and maximum scale of criticality accidents by comparing their results with French CRiticality occurring ACcidentally (CRAC) experimental data.

  8. Improved method for estimating particle scattering probabilities to finite detectors for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Mickael, M.; Gardner, R.P.; Verghese, K.

    1988-01-01

    An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluations of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using the new and previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude
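
    The quantity at stake, the probability that a scattered particle reaches the solid angle subtended by a finite detector, can be checked by brute force. The sketch below does so by Monte Carlo for isotropic lab-frame scattering and a circular planar detector; it is a numerical cross-check for illustration, not the paper's analytical integration of the transformed pdf.

```python
# Brute-force Monte Carlo estimate of the probability that a particle scattered
# isotropically (lab frame) at a point travels into the solid angle subtended
# by a circular planar detector.
import numpy as np

def mc_scatter_probability(scatter_point, det_center, det_radius, det_normal,
                           n=2_000_000, seed=0):
    rng = np.random.default_rng(seed)
    # Sample isotropic directions.
    mu = rng.uniform(-1.0, 1.0, n)                 # cos(polar angle)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(1.0 - mu**2)
    d = np.column_stack([s * np.cos(phi), s * np.sin(phi), mu])
    # Intersect each ray with the detector plane and test the hit radius.
    det_normal = det_normal / np.linalg.norm(det_normal)
    denom = d @ det_normal
    t = ((det_center - scatter_point) @ det_normal) / denom
    hit = scatter_point + t[:, None] * d
    inside = (t > 0) & (np.linalg.norm(hit - det_center, axis=1) <= det_radius)
    return inside.mean()

# Disk of radius 5 cm, 20 cm above the scattering point, facing it.
p = mc_scatter_probability(np.zeros(3), np.array([0.0, 0.0, 20.0]), 5.0,
                           np.array([0.0, 0.0, 1.0]))
print(f"scattering probability into detector: {p:.4f}")
```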

  9. Geometric estimation method for x-ray digital intraoral tomosynthesis

    Science.gov (United States)

    Li, Liang; Yang, Yao; Chen, Zhiqiang

    2016-06-01

    It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.

  10. Simplified Aeroelastic Model for Fluid Structure Interaction between Microcantilever Sensors and Fluid Surroundings.

    Directory of Open Access Journals (Sweden)

    Fei Wang

    Full Text Available Fluid-structural coupling occurs when microcantilever sensors vibrate in a fluid. Due to the complexity of the mechanical characteristics of microcantilevers and the lack of high-precision microscopic mechanical testing instruments, effective methods for studying the fluid-structural coupling of microcantilevers are lacking, especially for non-rectangular microcantilevers. Here, we report fluid-structure interactions (FSI) of the cable-membrane structure via a macroscopic study. The simplified aeroelastic model was introduced into the microscopic field to establish a fluid-structure coupling vibration model for microcantilever sensors. We used the finite element method to solve the coupled FSI system. Based on the simplified aeroelastic model, a simulation analysis of the effects of the air environment on the vibration of the commonly used rectangular microcantilever was also performed. The obtained results are consistent with the literature. The proposed model can also be applied to the auxiliary design of rectangular and non-rectangular sensors used in fluid environments.

  11. Vegetation index methods for estimating evapotranspiration by remote sensing

    Science.gov (United States)

    Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.

    2010-01-01

    Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.
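
    The sketch below shows the general form of the VI-based approach reviewed here: actual ET approximated as reference ET multiplied by a vegetation-index-based coefficient calibrated against ground measurements. The saturating functional form and its coefficients are hypothetical placeholders, not values calibrated in the review.

```python
# Minimal sketch of a VI-based ET estimate: ETa = ETo * f(VI), where f is a
# saturating function fitted to flux-tower ET measurements. The coefficients
# below are hypothetical placeholders.
import numpy as np

def eta_from_vi(evi, eto, a=1.2, b=2.5, c=0.05):
    """ETa = ETo * (a * (1 - exp(-b * EVI)) - c), clipped at zero."""
    kc = a * (1.0 - np.exp(-b * np.asarray(evi))) - c
    return np.maximum(kc, 0.0) * np.asarray(eto)

# Daily reference ET of 6 mm over a range of canopy densities.
evi = np.array([0.1, 0.3, 0.5, 0.7])
print(np.round(eta_from_vi(evi, 6.0), 2))   # mm/day
```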

  12. Simple method for quick estimation of aquifer hydrogeological parameters

    Science.gov (United States)

    Ma, C.; Li, Y. Y.

    2017-08-01

    Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resource assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a simple linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from drawdowns observed at large distances and from early-time drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
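
    As an illustration of reducing an unsteady pumping test to a linear regression, the sketch below uses the classical Cooper-Jacob straight-line approximation of the Theis well function; it is not the paper's own fitting function, and the drawdown data are synthetic.

```python
# Cooper-Jacob straight-line sketch: drawdown is linear in log10(t) when the
# Theis argument u is small, so transmissivity T and storativity S follow from
# a simple linear regression. Data below are synthetic.
import numpy as np
from scipy.special import exp1   # Theis well function W(u) = E1(u)

Q, r, T_true, S_true = 0.01, 30.0, 5e-3, 2e-4     # m3/s, m, m2/s, -
t = np.array([600., 1200., 2400., 4800., 9600., 19200.])   # s
u = r**2 * S_true / (4 * T_true * t)
s = Q / (4 * np.pi * T_true) * exp1(u)            # synthetic drawdowns (m)

# Cooper-Jacob: s ~ (2.30*Q / (4*pi*T)) * log10(2.25*T*t / (r^2*S)),
# i.e. a straight line in log10(t): s = m*log10(t) + c.
m, c = np.polyfit(np.log10(t), s, 1)
T_hat = 2.30 * Q / (4 * np.pi * m)
S_hat = 2.25 * T_hat * 10 ** (-c / m) / r**2
print(f"T = {T_hat:.2e} m2/s, S = {S_hat:.2e}")
```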

  13. QUANTITATIVE ESTIMATION OF VOLUMETRIC ICE CONTENT IN FROZEN GROUND BY DIPOLE ELECTROMAGNETIC PROFILING METHOD

    Directory of Open Access Journals (Sweden)

    L. G. Neradovskiy

    2018-01-01

    Full Text Available Volumetric estimation of the ice content in frozen soils is one of the main problems in engineering geocryology and permafrost geophysics. A new way to use the known method of dipole electromagnetic profiling for the quantitative estimation of the volumetric ice content in frozen soils is discussed. Investigations of the foundation of the railroad in Yakutia (i.e., in the permafrost zone) were used as an example of this new approach. Unlike the conventional approach, in which permafrost is investigated through its resistivity and the construction of geoelectrical cross-sections, the new approach studies the dynamics of attenuation in the layer of the annual heat cycle in the field of a high-frequency vertical magnetic dipole. This task is simplified if, instead of all the characteristics of the polarization ellipse, only one is measured: the vertical component of the dipole field, which is the easiest to measure. The collected measurement data were used to analyze the computational errors of the average volumetric ice content derived from the amplitude attenuation of the vertical component of the dipole field. Note that the volumetric ice content is very important for construction. It is shown that the relative error in computing this characteristic of frozen soil usually does not exceed 20% if the work is performed by the above procedure using the key-site methodology. This level of accuracy meets the requirements of design and survey work for quick, inexpensive, and environmentally friendly zoning of remote and sparsely populated territories of the Russian permafrost zone according to the degree of ice content in the frozen foundations of engineering structures.

  14. An assessment of simplified methods to determine damage from ship-to-ship collisions

    International Nuclear Information System (INIS)

    Parks, M.B.; Ammerman, D.J.

    1996-01-01

    Sandia National Laboratories (SNL) is studying the safety of shipping radioactive materials (RAM) by sea in the SeaRAM project (McConnell et al. 1995), which is sponsored by the US Department of Energy (DOE). The project is concerned with the potential effects of ship collisions and fires on onboard RAM packages. Existing methodologies are being assessed to determine their adequacy for predicting the effects of ship collisions and fires on RAM packages and for estimating whether or not a given accident might lead to a release of radioactivity. The eventual goal is to develop a set of validated methods, checked by comparison with test data and/or detailed finite element analyses, for predicting the consequences of ship collisions and fires. These methods could then be used to provide input for overall risk assessments of RAM sea transport. The emphasis of this paper is on methods for predicting the effects of ship collisions

  15. Simplified Models for Dark Matter Searches at the LHC

    CERN Document Server

    Abdallah, Jalal; Arbey, Alexandre; Ashkenazi, Adi; Belyaev, Alexander; Berger, Joshua; Boehm, Celine; Boveia, Antonio; Brennan, Amelia; Brooke, Jim; Buchmueller, Oliver; Buckley, Matthew; Busoni, Giorgio; Calibbi, Lorenzo; Chauhan, Sushil; Daci, Nadir; Davies, Gavin; De Bruyn, Isabelle; de Jong, Paul; De Roeck, Albert; de Vries, Kees; del Re, Daniele; De Simone, Andrea; Di Simone, Andrea; Doglioni, Caterina; Dolan, Matthew; Dreiner, Herbi K.; Ellis, John; Eno, Sarah; Etzion, Erez; Fairbairn, Malcolm; Feldstein, Brian; Flaecher, Henning; Feng, Eric; Fox, Patrick; Genest, Marie-Hélène; Gouskos, Loukas; Gramling, Johanna; Haisch, Ulrich; Harnik, Roni; Hibbs, Anthony; Hoh, Siewyan; Hopkins, Walter; Ippolito, Valerio; Jacques, Thomas; Kahlhoefer, Felix; Khoze, Valentin V.; Kirk, Russell; Korn, Andreas; Kotov, Khristian; Kunori, Shuichi; Landsberg, Greg; Liem, Sebastian; Lin, Tongyan; Lowette, Steven; Lucas, Robyn; Malgeri, Luca; Malik, Sarah; McCabe, Christopher; Mete, Alaettin Serhan; Morgante, Enrico; Mrenna, Stephen; Nakahama, Yu; Newbold, Dave; Nordstrom, Karl; Pani, Priscilla; Papucci, Michele; Pataraia, Sophio; Penning, Bjoern; Pinna, Deborah; Polesello, Giacomo; Racco, Davide; Re, Emanuele; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Sarkar, Subir; Schramm, Steven; Skubic, Patrick; Slone, Oren; Smirnov, Juri; Soreq, Yotam; Sumner, Timothy; Tait, Tim M.P.; Thomas, Marc; Tomalin, Ian; Tunnell, Christopher; Vichi, Alessandro; Volansky, Tomer; Weiner, Neal; West, Stephen M.; Wielers, Monika; Worm, Steven; Yavin, Itay; Zaldivar, Bryan; Zhou, Ning; Zurek, Kathryn

    2015-01-01

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediation is discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  16. Simplified models for dark matter searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre; Ashkenazi, Adi; Belyaev, Alexander; Berger, Joshua; Boehm, Celine; Boveia, Antonio; Brennan, Amelia; Brooke, Jim; Buchmueller, Oliver; Buckley, Matthew; Busoni, Giorgio; Calibbi, Lorenzo; Chauhan, Sushil; Daci, Nadir; Davies, Gavin; De Bruyn, Isabelle; De Jong, Paul; De Roeck, Albert; de Vries, Kees; Del Re, Daniele; De Simone, Andrea; Di Simone, Andrea; Doglioni, Caterina; Dolan, Matthew; Dreiner, Herbi K.; Ellis, John; Eno, Sarah; Etzion, Erez; Fairbairn, Malcolm; Feldstein, Brian; Flaecher, Henning; Feng, Eric; Fox, Patrick; Genest, Marie-Hélène; Gouskos, Loukas; Gramling, Johanna; Haisch, Ulrich; Harnik, Roni; Hibbs, Anthony; Hoh, Siewyan; Hopkins, Walter; Ippolito, Valerio; Jacques, Thomas; Kahlhoefer, Felix; Khoze, Valentin V.; Kirk, Russell; Korn, Andreas; Kotov, Khristian; Kunori, Shuichi; Landsberg, Greg; Liem, Sebastian; Lin, Tongyan; Lowette, Steven; Lucas, Robyn; Malgeri, Luca; Malik, Sarah; McCabe, Christopher; Mete, Alaettin Serhan; Morgante, Enrico; Mrenna, Stephen; Nakahama, Yu; Newbold, Dave; Nordstrom, Karl; Pani, Priscilla; Papucci, Michele; Pataraia, Sophio; Penning, Bjoern; Pinna, Deborah; Polesello, Giacomo; Racco, Davide; Re, Emanuele; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Sarkar, Subir; Schramm, Steven; Skubic, Patrick; Slone, Oren; Smirnov, Juri; Soreq, Yotam; Sumner, Timothy; Tait, Tim M. P.; Thomas, Marc; Tomalin, Ian; Tunnell, Christopher; Vichi, Alessandro; Volansky, Tomer; Weiner, Neal; West, Stephen M.; Wielers, Monika; Worm, Steven; Yavin, Itay; Zaldivar, Bryan; Zhou, Ning; Zurek, Kathryn

    2015-09-01

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediation is discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  17. Advances in Time Estimation Methods for Molecular Data.

    Science.gov (United States)

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

    Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data

  18. Estimating incidence from prevalence in generalised HIV epidemics: methods and validation.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2008-04-01

    Full Text Available HIV surveillance of generalised epidemics in Africa primarily relies on prevalence at antenatal clinics, but estimates of incidence in the general population would be more useful. Repeated cross-sectional measures of HIV prevalence are now becoming available for general populations in many countries, and we aim to develop and validate methods that use these data to estimate HIV incidence. Two methods were developed that decompose observed changes in prevalence between two serosurveys into the contributions of new infections and mortality. Method 1 uses cohort mortality rates, and method 2 uses information on survival after infection. The performance of these two methods was assessed using simulated data from a mathematical model and actual data from three community-based cohort studies in Africa. Comparison with simulated data indicated that these methods can accurately estimate incidence rates and changes in incidence in a variety of epidemic conditions. Method 1 is simple to implement but relies on locally appropriate mortality data, whilst method 2 can make use of the same survival distribution in a wide range of scenarios. The estimates from both methods are within the 95% confidence intervals of almost all actual measurements of HIV incidence in adults and young people, and the patterns of incidence over age are correctly captured. It is possible to estimate incidence from cross-sectional prevalence data with sufficient accuracy to monitor the HIV epidemic. Although these methods will theoretically work in any context, we have been able to test them only in southern and eastern Africa, where HIV epidemics are mature and generalised. The choice of method will depend on the local availability of HIV mortality data.
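
    The decomposition idea can be written down in a few lines. The sketch below splits the prevalence change between two surveys into new infections and deaths among the infected, under a closed-population assumption and a flat survival fraction; these are simplifying placeholders, not the paper's method 1 (cohort mortality rates) or method 2 (post-infection survival distribution).

```python
# Minimal sketch: decompose a change in HIV prevalence between two surveys
# into new infections and deaths among the infected, then convert the new
# infections into an approximate incidence rate. Closed population assumed.

def incidence_from_prevalence(p1, p2, years, frac_infected_surviving, n=10_000):
    """Approximate annual incidence per susceptible person."""
    pos1, pos2 = p1 * n, p2 * n
    survivors = pos1 * frac_infected_surviving        # infected at t1 still alive at t2
    new_infections = max(pos2 - survivors, 0.0)
    # Approximate susceptible person-years by the average susceptible pool.
    susceptible_py = years * 0.5 * ((n - pos1) + (n - pos2 + new_infections))
    return new_infections / susceptible_py

# Prevalence 15% -> 16% over 3 years, with 85% of the infected surviving.
print(f"{100 * incidence_from_prevalence(0.15, 0.16, 3, 0.85):.2f} per 100 person-years")
```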

  19. Creep-fatigue evaluation method for type 304 and 316FR SS

    International Nuclear Information System (INIS)

    Wada, Y.; Aoto, K.; Ueno, F.

    1997-01-01

    For long-term creep-fatigue of Type 304SS, intergranular failure is dominant in cases of significant life reduction. It is considered that this phenomenon originates in grain boundary sliding, as observed in cavity-type creep rupture. Accordingly, a simplified procedure to estimate intergranular damage caused by grain boundary sliding is presented in connection with secondary creep. In the conventional ductility exhaustion method, failure ductility includes plastic strain, and damage estimation is based on the primary creep strain, which is recoverable during strain cycling. Therefore, the accumulated creep strain becomes very large and quite different from the grain boundary sliding strain. As a new concept of ductility exhaustion, the product of secondary creep rate and time to rupture (Monkman-Grant product) is applied to fracture ductility, and the grain boundary sliding strain is approximately estimated using the accumulated secondary creep strain. From the new concept it was shown that the time fraction rule and the conventional ductility exhaustion method can be derived analytically. Furthermore, an advanced method for cyclic stress relaxation was examined. If cyclic plastic strain hardening is softened thermally during strain hold, cyclic creep strain behaviour is also softened. Unrecoverable accumulated primary creep strain causes hardening of the primary creep, and the reduction of deformation resistance to secondary creep caused by thermal softening accelerates the grain boundary sliding rate. As a result, creep damage depends not on the applied stress but on the effective stress. The new ductility exhaustion method based on the above considerations leads to a simplified time fraction estimation method requiring only continuous cycling fatigue and monotonic creep data, which was already developed at PNC for the Monju design guide. This method gave good life prediction for the intergranular failure mode and is convenient for design use on the elastic

  20. Estimating building energy consumption using extreme learning machine method

    International Nuclear Information System (INIS)

    Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey

    2016-01-01

    The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand for energy, as well as the construction materials used in buildings, is becoming increasingly problematic for the earth's sustainable future and has led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so, their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. This method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose, up to 180 simulations are carried out for different material thicknesses and insulation properties, using the EnergyPlus software application. The estimation and prediction obtained by the ELM model are compared with GP (genetic programming) and ANN (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
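
    For readers unfamiliar with the ELM algorithm, the sketch below shows its core: random hidden-layer weights, sigmoid activations, and output weights obtained by a least-squares (pseudo-inverse) solve. The inputs (insulation thickness, K-value) and targets are synthetic placeholders, not the EnergyPlus simulation data of the study.

```python
# Minimal sketch of an Extreme Learning Machine regressor: the hidden layer is
# random and fixed; only the output weights are fitted, by least squares.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Synthetic data: energy use falling with insulation thickness, rising with K-value.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(0.05, 0.30, 180),    # insulation thickness (m)
                     rng.uniform(0.02, 0.10, 180)])   # K-value (W/m.K)
y = 150.0 - 200.0 * X[:, 0] + 900.0 * X[:, 1] + rng.normal(0, 2.0, 180)
model = ELMRegressor().fit(X[:150], y[:150])
rmse = np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2))
print(f"test RMSE: {rmse:.2f} kWh/m2")
```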

  1. Conventional estimating method of earthquake response of mechanical appendage system

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Suzuki, Kohei

    1981-01-01

    Generally, for estimating the earthquake response of an appendage structure system installed in a main structure system, floor response analysis using the response spectra at the point where the appendage system is installed has been used. On the other hand, research on estimating the earthquake response of appendage systems by statistical procedures based on stochastic process theory has been reported. The development of a practical method for simply estimating the response is an important subject in aseismatic engineering. In this study, a method for estimating the earthquake response of an appendage system in the general case where the natural frequencies of the two structure systems differ was investigated. First, it was shown that the floor response amplification factor (FRAF) can be estimated simply from the ratio of the natural frequencies of the two structure systems, and its statistical properties were clarified. Next, it was shown that the procedure of simultaneously expressing acceleration, velocity, and displacement responses with tri-axial response spectra can be applied to the expression of the FRAF. The applicability of this procedure to nonlinear systems was also examined. (Kako, I.)

  2. Efficient Methods of Estimating Switchgrass Biomass Supplies

    Science.gov (United States)

    Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...

  3. Methods of albumin estimation in clinical biochemistry: Past, present, and future.

    Science.gov (United States)

    Kumar, Deepak; Banerjee, Dibyajyoti

    2017-06-01

    Estimation of serum and urinary albumin is routinely performed in clinical biochemistry laboratories. In the past, precipitation-based methods were popular for estimation of human serum albumin (HSA). Currently, dye-binding or immunochemical methods are widely practiced. Each of these methods has its limitations. Research endeavors to overcome such limitations are on-going. The current trends in methodological aspects of albumin estimation guiding the field have not been reviewed. Therefore, it is the need of the hour to review several aspects of albumin estimation. The present review focuses on the modern trends of research from a conceptual point of view and gives an overview of recent developments to offer the readers a comprehensive understanding of the subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Simplified Qualitative Discrete Numerical Model to Determine Cracking Pattern in Brittle Materials by Means of Finite Element Method

    Directory of Open Access Journals (Sweden)

    J. Ochoa-Avendaño

    2017-01-01

    Full Text Available This paper presents the formulation, implementation, and validation of a simplified qualitative model to determine the crack path in solids under static loads, assuming infinitesimal strain and plane stress. The model is based on the finite element method with a special meshing technique in which nonlinear link elements are inserted between the faces of the linear triangular elements; the loss of stiffness of some link elements represents the crack opening. Three bending-beam experiments are simulated, and the cracking pattern calculated with the proposed numerical model is similar to the experimental results. Compared with discrete crack approaches based on interface elements, the advantages of the proposed model are its simple implementation, its numerical stability, and its very low computational cost. Using larger initial stiffness values for the link elements affects neither the discontinuity path nor the stability of the numerical solution. The exploded-mesh procedure used in this model avoids a complex nonlinear analysis and regenerated or adaptive meshes.
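    The crack-opening mechanism described above boils down to a link element whose stiffness collapses once the force it carries exceeds the material strength. The routine below is a minimal sketch of such a brittle stiffness-update rule for a single link; the initial stiffness, strength and residual factor are illustrative assumptions rather than values from the paper.

        def update_link_stiffness(opening, k0=1.0e9, strength=3.0e6, residual=1.0e-6):
            """Secant stiffness of a brittle link element (N/m).

            opening  -- relative displacement across the link (m)
            k0       -- initial (intact) stiffness
            strength -- force at which the link is considered broken (N)
            residual -- small factor kept for numerical stability after breaking
            """
            force = k0 * opening
            if abs(force) <= strength:
                return k0                 # link still intact: full stiffness
            return residual * k0          # link broken: stiffness collapses, crack opens

        # Example: the link breaks once its elastic force would exceed the strength
        for u in (1.0e-3, 2.0e-3, 5.0e-3):
            print(u, update_link_stiffness(u))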

  5. Nose-to-tail analysis of an airbreathing hypersonic vehicle using an in-house simplified tool

    Science.gov (United States)

    Piscitelli, Filomena; Cutrone, Luigi; Pezzella, Giuseppe; Roncioni, Pietro; Marini, Marco

    2017-07-01

    SPREAD (Scramjet PREliminary Aerothermodynamic Design) is a simplified, in-house method developed by CIRA (Italian Aerospace Research Centre) that provides a preliminary estimate of engine/aeroshape performance for airbreathing configurations. It is especially useful for scramjet engines, for which the strong coupling between the aerothermodynamic (external) and propulsive (internal) flow fields requires real-time screening of several engine/aeroshape configurations and the identification of the most promising one(s) with respect to user-defined constraints and requirements. The output of this tool defines the baseline configuration for further design analyses with more accurate tools, e.g., CFD simulations and wind tunnel testing. The SPREAD tool has been used to perform the nose-to-tail analysis of the LAPCAT-II Mach 8 MR2.4 vehicle configuration. The numerical results demonstrate SPREAD's capability to quickly predict reliable values of the aero-propulsive balance (i.e., net thrust) and the aerodynamic efficiency in a pre-design phase.

  6. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Full Text Available Source number estimation methods for single-channel signals are investigated and improvements to each method are suggested in this work. First, the single-channel data are converted into multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and the minimum description length (MDL) criterion, are applied to estimate the number of sources in the received signal. Previous results have shown that MDL, which is based on information theoretic criteria (ITC), outperforms GDE at low SNR but cannot handle signals containing colored noise; conversely, GDE can eliminate the influence of colored noise but performs unsatisfactorily at low SNR. To resolve these problems, this work improves both methods: a diagonal loading technique is employed to improve the MDL method, and a jackknife technique is used to optimize the data covariance matrix and thereby improve the performance of the GDE method. Simulation results show that the performance of the original methods is improved substantially.
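    The first two steps of this scheme, delay embedding of the single-channel record followed by an eigenvalue-based MDL criterion with diagonal loading, can be sketched compactly. The embedding dimension, loading level and test signal below are illustrative assumptions, not the settings of the paper.

        import numpy as np

        def delay_embed(x, m):
            """Turn a 1-D signal into an m-channel matrix of delayed copies (snapshots in columns)."""
            n = len(x) - m + 1
            return np.stack([x[i:i + n] for i in range(m)])

        def mdl_source_number(X, loading=1e-3):
            """Estimate the number of sources from the eigenvalues of the diagonally loaded covariance."""
            m, n = X.shape
            R = X @ X.conj().T / n
            R += loading * np.trace(R).real / m * np.eye(m)      # diagonal loading
            lam = np.sort(np.linalg.eigvalsh(R))[::-1]
            mdl = []
            for k in range(m):
                noise = lam[k:]                                   # candidate noise eigenvalues
                ratio = noise.prod() ** (1.0 / len(noise)) / noise.mean()
                mdl.append(-n * len(noise) * np.log(ratio) + 0.5 * k * (2 * m - k) * np.log(n))
            return int(np.argmin(mdl))

        # Two real sinusoids in white noise span a rank-4 signal subspace, so this should report 4
        t = np.arange(2000)
        x = np.sin(0.1 * t) + 0.7 * np.sin(0.23 * t) + 0.1 * np.random.default_rng(1).standard_normal(2000)
        print(mdl_source_number(delay_embed(x, 10)))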

  7. Improved Battery Parameter Estimation Method Considering Operating Scenarios for HEV/EV Applications

    Directory of Open Access Journals (Sweden)

    Jufeng Yang

    2016-12-01

    Full Text Available This paper presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated from different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, with the length of the fitted dataset determined by spectral analysis of the load current. In addition, the unsaturated behaviour caused by the resistor-capacitor (RC) network with a long time constant is analyzed, and the initial-voltage expressions of the RC networks in the fitting functions are improved to ensure higher model fidelity. Simulation and experimental results validate the feasibility of the developed estimation method.
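    A common way to extract equivalent-circuit parameters from the rest period of a pulse-rest test is to fit the voltage relaxation with one or more RC exponentials and convert the fitted amplitude and time constant into a resistance and capacitance. The sketch below fits a single RC branch to a synthetic relaxation curve; the circuit values, pulse current and time grid are made-up illustrations, not data from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def relaxation(t, v_inf, dv, tau):
            """Terminal voltage during rest after a current pulse: single RC branch."""
            return v_inf + dv * np.exp(-t / tau)

        # Synthetic rest-period data: OCV 3.60 V, 50 mV of polarization relaxing with tau = 30 s
        t = np.linspace(0, 300, 301)
        rng = np.random.default_rng(0)
        v_meas = relaxation(t, 3.60, 0.05, 30.0) + 1e-3 * rng.standard_normal(t.size)

        (v_inf, dv, tau), _ = curve_fit(relaxation, t, v_meas, p0=(3.5, 0.1, 10.0))
        i_pulse = 2.0                      # pulse current in A (assumed known)
        r1 = dv / i_pulse                  # polarization resistance of the RC branch
        c1 = tau / r1                      # corresponding capacitance
        print(f"R1 = {r1*1000:.1f} mOhm, C1 = {c1:.0f} F, OCV = {v_inf:.3f} V")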

  8. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    Science.gov (United States)

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
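    The residual form of the basin water balance mentioned above is simple arithmetic: ET equals precipitation minus discharge minus the change in storage, with the storage term commonly neglected over annual time scales. A minimal sketch with made-up annual totals in millimetres:

        def basin_et(precip_mm, discharge_mm, storage_change_mm=0.0):
            """Annual basin-scale evapotranspiration as the water-balance residual (mm/yr)."""
            return precip_mm - discharge_mm - storage_change_mm

        # Illustrative annual totals for a hypothetical basin
        print(basin_et(precip_mm=850.0, discharge_mm=310.0))   # -> 540.0 mm/yr of ET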

  9. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  10. The estimation of the measurement results with using statistical methods

    International Nuclear Information System (INIS)

    Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)

    2015-01-01

    A number of international standards and guides describe various statistical methods that are applied for the management, control and improvement of processes, including the analysis of technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, and of their recommendations for application in laboratories, is presented. For this analysis, cause-and-effect (Ishikawa) diagrams concerning the application of statistical methods to the estimation of measurement results are constructed.

  11. The estimation of the measurement results with using statistical methods

    Science.gov (United States)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that are applied for the management, control and improvement of processes, including the analysis of technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, and of their recommendations for application in laboratories, is presented. For this analysis, cause-and-effect (Ishikawa) diagrams concerning the application of statistical methods to the estimation of measurement results are constructed.

  12. Simplified model-based optimal control of VAV air-conditioning system

    Energy Technology Data Exchange (ETDEWEB)

    Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Construction Engineering

    2005-07-01

    The improvement of Variable Air Volume (VAV) system performance is one of several attempts being made to minimize the high energy use associated with the operation of heating, ventilation and air conditioning (HVAC) systems. A Simplified Optimization Process (SOP), comprising controller set point strategies and a simplified VAV model, was presented in this paper. The aim of the SOP was to determine supply set points. The advantage of the SOP over previous methods was that it did not require a detailed VAV model or an optimization program. In addition, the monitored data for representative local-loop control can be checked on-line, after which the controller set points can be updated so that operation reflects actual conditions with minimum energy use. The SOP was validated using existing monitoring data and a model of an existing VAV system, and its simulated energy use was compared with that of the existing VAV system. At each simulation step, three controller set point values were proposed and evaluated using the VAV model, and the value corresponding to the best performance of the VAV system was selected for each set point. Simplified VAV component models were presented, and strategies for the controller set points were described, including the zone air temperature, duct static pressure, chilled water supply temperature and supply air temperature set points. Simplified optimization process calculations were presented. Results indicated that the SOP provided significant energy savings when applied to specific AHU systems. In a comparison with a Detailed Optimization Process (DOP), the SOP was capable of determining set points close to those obtained by the DOP. However, it was noted that the controller set points determined by the SOP need a certain amount of time to reach optimal values when outdoor conditions or thermal loads change significantly. It was suggested that this disadvantage could be overcome by the use of a dynamic incremental value, which
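    The step-wise selection described in this record, proposing a few candidate values for each controller set point at every simulation step and keeping the one that the simplified VAV model predicts to use the least energy, amounts to a small greedy search. The sketch below shows that pattern with a placeholder energy model; the chosen set point (supply air temperature), the candidate offsets and the toy model coefficients are assumptions for illustration only.

        def choose_set_point(current, candidate_fn, energy_model):
            """Greedy step: evaluate each candidate set point with the model, keep the cheapest."""
            return min(candidate_fn(current), key=energy_model)

        def candidates(current, step=0.5):
            """Three candidate values around the current set point (supply air temperature, deg C)."""
            return (current - step, current, current + step)

        def toy_energy_model(supply_temp, zone_temp=24.0, load_kw=40.0):
            """Placeholder for the simplified VAV model: fan power rises with warmer supply air
            (more airflow is needed), chiller power rises with colder supply air."""
            delta_t = max(zone_temp - supply_temp, 0.5)
            fan_kw = 0.002 * (load_kw / (1.2 * 1.005 * delta_t)) ** 3     # fan law on required airflow
            chiller_kw = load_kw / 4.5 + 0.3 * (zone_temp - supply_temp)  # rough penalty for colder air
            return fan_kw + chiller_kw

        sp = 16.0
        for _ in range(5):                      # a few simulation steps
            sp = choose_set_point(sp, candidates, toy_energy_model)
        print(f"selected supply air temperature set point: {sp:.1f} deg C")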

  13. Brooks–Corey Modeling by One-Dimensional Vertical Infiltration Method

    Directory of Open Access Journals (Sweden)

    Xuguang Xing

    2018-05-01

    Full Text Available The laboratory methods used to construct the soil water retention curve (SWRC) and estimate its parameters are time-consuming. A vertical infiltration method was therefore proposed to estimate the parameters α and n and thereby construct the SWRC. In the present study, relationships describing the cumulative infiltration and the infiltration rate as functions of the depth of the wetting front were established, and simplified expressions for estimating the α and n parameters were proposed. One-dimensional vertical infiltration experiments on four soils were conducted to verify whether the proposed method accurately estimates α and n. The fitted values of α and n obtained from the RETC software were consistent with the values calculated from the infiltration method. The comparison between the SWRCs measured with the centrifuge method and the SWRCs calculated from the infiltration method showed small values of root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error. SWMS_2D simulations of cumulative infiltration based on the calculated α and n also remained consistent with the measured values, with small RMSE and MAPE values. The experiments verified the proposed one-dimensional vertical infiltration method, which has applications in field hydraulic parameter estimation.
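    Fitting retention-curve parameters to (suction, saturation) pairs is a short nonlinear least-squares job. The sketch below fits the Brooks-Corey effective saturation written with alpha as the inverse air-entry suction and n as the pore-size index; the synthetic data points and initial guesses are illustrative assumptions, not measurements from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def brooks_corey_se(h, alpha, n):
            """Effective saturation Se(h) for the Brooks-Corey model.
            alpha = 1 / air-entry suction (1/cm), n = pore-size distribution index."""
            h = np.asarray(h, dtype=float)
            return np.where(alpha * h > 1.0, (alpha * h) ** (-n), 1.0)

        # Synthetic retention data (suction in cm, effective saturation), for illustration only
        h_obs = np.array([5, 10, 20, 50, 100, 300, 1000], dtype=float)
        se_obs = np.array([1.00, 0.95, 0.72, 0.48, 0.36, 0.22, 0.13])

        (alpha, n), _ = curve_fit(brooks_corey_se, h_obs, se_obs, p0=(0.05, 0.4), bounds=(0, np.inf))
        print(f"alpha = {alpha:.4f} 1/cm, n = {n:.3f}")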

  14. Power system frequency estimation based on an orthogonal decomposition method

    Science.gov (United States)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several techniques have been proposed for estimating frequency variations in power systems. To properly identify power quality issues in asynchronously sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator is needed that can precisely estimate both the frequency and the rate of frequency change. However, accurately estimating the fundamental frequency becomes very difficult without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. The method employs a reconstruction technique in combination with orthogonal filters, which maintains the required frequency characteristics of the orthogonal filters and improves the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
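    The building block of a two-stage sliding DFT is the single-bin sliding-DFT recurrence, which updates the DFT of a length-N window from the previous window with one complex multiply per new sample. The snippet below tracks one bin of a synthetic signal near 50 Hz and recovers the frequency from the bin's phase rotation; it illustrates only the recurrence, not the orthogonal-filter reconstruction of the paper, and the sampling rate and window length are assumptions.

        import numpy as np

        def sliding_dft_bin(x, k, N):
            """Track DFT bin k of a length-N sliding window using the sliding-DFT recurrence."""
            twiddle = np.exp(2j * np.pi * k / N)
            X = np.fft.fft(x[:N])[k]            # initialize from the first full window
            history = [X]
            for i in range(N, len(x)):
                X = (X + x[i] - x[i - N]) * twiddle
                history.append(X)
            return np.array(history)

        # Synthetic 50.2 Hz signal sampled at 1600 Hz (32 samples per nominal 50 Hz cycle)
        fs, N = 1600.0, 32
        t = np.arange(10 * N) / fs
        x = np.sin(2 * np.pi * 50.2 * t)
        bins = sliding_dft_bin(x, k=1, N=N)

        # The per-sample phase rotation of the bin reveals the signal frequency
        dphi = np.angle(bins[1:] * np.conj(bins[:-1]))
        print(f"estimated frequency: {np.mean(dphi) * fs / (2 * np.pi):.2f} Hz")  # ~50.2 Hz expected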

  15. Economic impact of simplified de Gramont regimen in first-line therapy in metastatic colorectal cancer.

    Science.gov (United States)

    Limat, Samuel; Bracco-Nolin, Claire-Hélène; Legat-Fagnoni, Christine; Chaigneau, Loic; Stein, Ulrich; Huchet, Bernard; Pivot, Xavier; Woronoff-Lemsi, Marie-Christine

    2006-06-01

    The cost of chemotherapy has increased dramatically in advanced colorectal cancer patients, and the schedule of fluorouracil administration appears to be a determining factor. This retrospective study compared the direct medical costs related to two different de Gramont schedules (standard vs. simplified) given in first-line chemotherapy with oxaliplatin or irinotecan. The cost-minimization analysis was performed from the French Health System perspective. Consecutive unselected patients treated in first-line therapy with the LV5FU2 de Gramont schedule combined with oxaliplatin (Folfox regimen) or irinotecan (Folfiri regimen) were enrolled, and hospital and outpatient resources related to chemotherapy and adverse events were collected from 1999 to 2004 in 87 patients. Overall cost was reduced with the simplified regimen, mainly because fewer admissions for chemotherapy were needed. The amount of cost saving depended on the method used to value hospital stays: in patients treated with the Folfox regimen, the per diem and DRG methods found cost savings of Euro 1,997 and Euro 5,982, respectively, and in patients treated with the Folfiri regimen the corresponding cost savings were Euro 4,773 and Euro 7,274. In addition, travel costs were also reduced by the simplified regimen. The robustness of the results was shown by one-way sensitivity analyses. These findings demonstrate that the simplified de Gramont schedule reduces the costs of current first-line chemotherapy in advanced colorectal cancer. Interestingly, the study showed several differences in costs between the two costing approaches for hospital stays (average per diem and DRG costs); these results suggest that the standard regimen may be considered a profitable strategy from the hospital perspective. The opposition between the health system perspective and the hospital perspective is worth examining and may affect daily practice. In conclusion, our study shows that the simplified de Gramont schedule in combination with

  16. Comparison of methods for estimating herbage intake in grazing dairy cows

    DEFF Research Database (Denmark)

    Hellwing, Anne Louise Frydendahl; Lund, Peter; Weisbjerg, Martin Riis

    2015-01-01

    Estimation of herbage intake is a challenge both under practical and experimental conditions. The aim of this study was to estimate herbage intake with different methods for cows grazing 7 h daily on either spring or autumn pastures. In order to generate variation between cows, the 20 cows per...... season, and the herbage intake was estimated twice during each season. Cows were on pasture from 8:00 until 15:00, and were subsequently housed inside and fed a mixed ration (MR) based on maize silage ad libitum. Herbage intake was estimated with nine different methods: (1) animal performance (2) intake...

  17. Communication: A simplified coupled-cluster Lagrangian for polarizable embedding.

    Science.gov (United States)

    Krause, Katharina; Klopper, Wim

    2016-01-28

    A simplified coupled-cluster Lagrangian, which is linear in the Lagrangian multipliers, is proposed for the coupled-cluster treatment of a quantum mechanical system in a polarizable environment. In the simplified approach, the amplitude equations are decoupled from the Lagrangian multipliers and the energy obtained from the projected coupled-cluster equation corresponds to a stationary point of the Lagrangian.

  18. Communication: A simplified coupled-cluster Lagrangian for polarizable embedding

    International Nuclear Information System (INIS)

    Krause, Katharina; Klopper, Wim

    2016-01-01

    A simplified coupled-cluster Lagrangian, which is linear in the Lagrangian multipliers, is proposed for the coupled-cluster treatment of a quantum mechanical system in a polarizable environment. In the simplified approach, the amplitude equations are decoupled from the Lagrangian multipliers and the energy obtained from the projected coupled-cluster equation corresponds to a stationary point of the Lagrangian

  19. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    Science.gov (United States)

    Cary, Robert E.

    2015-12-08

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.

  20. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    Energy Technology Data Exchange (ETDEWEB)

    Cary, Robert B.

    2018-04-17

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.