A simplified method of estimating noise power spectra
International Nuclear Information System (INIS)
Hanson, K.M.
1998-01-01
A technique is proposed for estimating the radial dependence of the noise power spectrum of images, in which the calculations are conducted solely in the spatial domain of the noise image. The noise power spectrum averaged over a radial spatial-frequency interval is obtained from the variance of a noise image that has been convolved with a small kernel that approximates a Laplacian operator. Recursive consolidation of the image by factors of two in each dimension yields estimates of the noise power spectrum over the full range of spatial frequencies.
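As a rough illustration of the idea (not code from the paper), the sketch below filters a noise image with a Laplacian-like kernel normalized to unit sum of squares, takes the variance of the filtered image as a band-averaged power estimate, and repeats after consolidating the image by a factor of two in each dimension. The kernel normalization and the 2x2 block-sum consolidation are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# Small kernel approximating a Laplacian, normalized so that the sum of
# squared weights is 1; for unit-variance white noise the filtered
# variance is then ~1, i.e. the band-averaged power.
LAPLACIAN = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]]) / np.sqrt(20.0)

def nps_band_estimates(noise, levels=4):
    """One band-averaged noise-power estimate per octave of spatial frequency."""
    estimates = []
    img = noise.astype(float)
    for _ in range(levels):
        filtered = convolve(img, LAPLACIAN, mode='reflect')
        estimates.append(filtered.var())   # variance ~ band-averaged power
        # consolidate by a factor of two in each dimension (2x2 block sums)
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return estimates
```

Each consolidation halves the sampling rate, so successive estimates probe successively lower spatial-frequency octaves.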
A simplified dynamic method for field capacity estimation and its parameter analysis
Institute of Scientific and Technical Information of China (English)
Zhen-tao CONG; Hua-fang LÜ; Guang-heng NI
2014-01-01
This paper presents a simplified dynamic method based on the definition of field capacity. Two soil hydraulic characteristics models, the Brooks-Corey (BC) model and the van Genuchten (vG) model, and four soil data groups were used in this study. The relative drainage rate, which is a unique parameter and is independent of soil type in the simplified dynamic method, was analyzed using the pressure-based method with a matric potential of −1/3 bar and the flux-based method with a drainage flux of 0.005 cm/d. As a result, the relative drainage rate of the simplified dynamic method was determined to be 3% per day. This was verified by the similar field capacity results estimated with the three methods for most soils suitable for cultivating plants. In addition, the drainage time calculated with the simplified dynamic method was two to three days, which agrees with the classical definition of field capacity. We recommend the simplified dynamic method with a relative drainage rate of 3% per day due to its simple application and clear, physically based concept.
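A minimal sketch of the relative-drainage-rate criterion, under an assumed power-law drainage curve rather than the Brooks-Corey or van Genuchten models used in the paper; the parameter values are illustrative only. Field capacity is read off the drainage curve at the point where the relative drainage rate falls to 3% per day.

```python
import numpy as np

def field_capacity(theta, t, rate=0.03):
    """theta: water content samples on a drainage curve; t: time in days.
    Returns (time, water content) where -dtheta/dt equals `rate` * theta."""
    dtheta_dt = np.gradient(theta, t)
    rel_rate = -dtheta_dt / theta          # relative drainage rate [1/day]
    idx = np.argmin(np.abs(rel_rate - rate))
    return t[idx], theta[idx]

# Illustrative power-law drainage curve theta(t) = theta_s * (1 + t/tau)**-b
# (assumed shape; not one of the paper's soil models)
t = np.linspace(0.0, 10.0, 2001)           # days
theta_s, tau, b = 0.45, 1.0, 0.12
theta = theta_s * (1.0 + t / tau) ** (-b)

t_fc, theta_fc = field_capacity(theta, t)
```

For this assumed curve the relative rate is b/(tau + t), so the 3%-per-day criterion is met at about three days, consistent with the drainage time reported in the abstract.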
Oguchi, Masahiro; Fuse, Masaaki
2015-02-03
Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Publications have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
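The simplified estimation can be sketched as follows, assuming (hypothetically) that lifespans follow a Weibull distribution with a fixed shape parameter, as the abstract's constant-shape finding suggests, and that a single survival fraction at a known age is observed in place of a full age profile. The shape value and numbers below are illustrative, not from the study.

```python
from math import gamma, log

def average_lifespan(surviving_fraction, age, shape=2.0):
    """Invert the Weibull survival function S(a) = exp(-(a/scale)**shape)
    for the scale parameter, then return the Weibull mean lifespan."""
    scale = age / (-log(surviving_fraction)) ** (1.0 / shape)
    return scale * gamma(1.0 + 1.0 / shape)   # Weibull mean = scale*Gamma(1+1/k)

# e.g. 60% of cars first registered 12 years ago are still in use (made up)
mean_life = average_lifespan(0.60, 12.0, shape=2.0)
```

With the shape fixed, one observed survival fraction pins down the scale, which is what removes the need for detailed age-profile data.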
Przewłócki, Jarosław; Górski, Jarosław; Świdziński, Waldemar
2016-12-01
The paper deals with the probabilistic analysis of the settlement of a non-cohesive soil layer subjected to cyclic loading. Originally, the settlement assessment is based on a deterministic compaction model, which requires integration of a set of differential equations. However, with the use of the Bessel functions, the settlement of a soil stratum can be calculated by a simplified algorithm. The compaction model parameters were determined for soil samples taken from subsoil near the Izmit Bay, Turkey. The computations were performed for various sets of random variables. The point estimate method was applied, and the results were verified by the Monte Carlo method. The outcome leads to a conclusion that can be useful in the prediction of soil settlement under seismic loading.
International Nuclear Information System (INIS)
Singh, Harleen; Singh, Sarabjeet
2014-01-01
The discrimination of mixed radiation fields is of prime importance due to its application in neutron detection, which supports radiation safety, nuclear material detection, etc. Liquid scintillators are among the most important radiation detectors because in them the relative decay rate of a neutron pulse is slower than that of a gamma pulse. Techniques such as zero crossing and charge comparison are very popular and are implemented using analogue electronics. In recent years, owing to the availability of fast ADCs and FPGAs, digital methods for the discrimination of mixed-field radiations have been investigated. Some of the digital time-domain techniques developed are pulse gradient analysis (PGA), the simplified digital charge collection (SDCC) method, and the digital zero crossing method. The performance of these methods depends on the appropriate selection of the gate time for which the pulse is processed. In this paper, the SDCC method is investigated for a neutron-gamma mixed field. The main focus of the study is to determine the optimum gate time, which is very important in neutron-gamma discrimination analysis in a mixed radiation field. A comparison with the charge collection (CC) method is also investigated.
Estimation of 131J-Jodohippurate clearance by a simplified method using a single plasma sample
International Nuclear Information System (INIS)
Botsch, H.; Golde, G.; Kampf, D.
1980-01-01
Theoretical volumes calculated from the reciprocal of the plasma concentration of 131J-Jodohippurate were compared with clearance values calculated by the two-compartment method in 95 patients and with conventional PAH clearance in 18 patients. For estimating Hippurate clearance from a single blood sample, the most favorable time is 45 min after injection (r = 0.96; for clearances below 400 ml/min: r = 0.98). Clearance values may be derived from the formula C = 0.4 + 7.26V − 0.021V² (V = injected activity/activity per litre of plasma taken 45 min after injection). The simplicity, precision and reproducibility of the above-mentioned clearance method are emphasized. (orig.) [de
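The single-sample formula translates directly to code; the interpretation of V and the quadratic coefficients are taken from the abstract, and the result is assumed to be in ml/min.

```python
def hippurate_clearance(injected_activity, activity_per_litre):
    """Single-sample Hippurate clearance estimate from the abstract's
    quadratic fit: C = 0.4 + 7.26*V - 0.021*V**2, where V is the
    theoretical distribution volume (injected activity divided by the
    plasma activity per litre at 45 min after injection)."""
    v = injected_activity / activity_per_litre
    return 0.4 + 7.26 * v - 0.021 * v ** 2

# e.g. V = 100/5 = 20 litres gives a clearance of 137.2 ml/min
clearance = hippurate_clearance(100.0, 5.0)
```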
Cao, Mengqiu; Suo, Shiteng; Han, Xu; Jin, Ke; Sun, Yawen; Wang, Yao; Ding, Weina; Qu, Jianxun; Zhang, Xiaohua; Zhou, Yan
2017-01-01
Purpose: To evaluate the feasibility of a simplified method based on diffusion-weighted imaging (DWI) acquired with three b-values to measure tissue perfusion linked to microcirculation, to validate it against perfusion-related parameters derived from intravoxel incoherent motion (IVIM) and dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging, and to investigate its utility to differentiate low- from high-grade gliomas. Materials and Methods: The prospective study was approved by the local institutional review board and written informed consent was obtained from all patients. Between May 2016 and May 2017, 50 patients with confirmed glioma were assessed with multi-b-value DWI and DCE MR imaging at 3.0 T. Besides the conventional apparent diffusion coefficient (ADC0,1000) map, perfusion-related parametric maps for the IVIM-derived perfusion fraction (f) and pseudodiffusion coefficient (D*), DCE MR imaging-derived pharmacokinetic metrics, including Ktrans, ve and vp, as well as a metric named the simplified perfusion fraction (SPF), were generated. Correlations between perfusion-related parameters were analyzed using the Spearman rank correlation. All imaging parameters were compared between the low-grade (n = 19) and high-grade (n = 31) groups using the Mann-Whitney U test. The diagnostic performance for tumor grading was evaluated with receiver operating characteristic (ROC) analysis. Results: SPF showed strong correlation with IVIM-derived f and D* (ρ = 0.732 and 0.716, respectively; both P < 0.001). Conclusion: The simplified method to measure tissue perfusion based on DWI using three b-values may be helpful to differentiate low- from high-grade gliomas. SPF may serve as a valuable alternative for measuring tumor perfusion in gliomas in a noninvasive, convenient and efficient way.
Directory of Open Access Journals (Sweden)
Mengqiu Cao
2018-01-01
Purpose: To evaluate the feasibility of a simplified method based on diffusion-weighted imaging (DWI) acquired with three b-values to measure tissue perfusion linked to microcirculation, to validate it against perfusion-related parameters derived from intravoxel incoherent motion (IVIM) and dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging, and to investigate its utility to differentiate low- from high-grade gliomas. Materials and Methods: The prospective study was approved by the local institutional review board and written informed consent was obtained from all patients. Between May 2016 and May 2017, 50 patients with confirmed glioma were assessed with multi-b-value DWI and DCE MR imaging at 3.0 T. Besides the conventional apparent diffusion coefficient (ADC0,1000) map, perfusion-related parametric maps for the IVIM-derived perfusion fraction (f) and pseudodiffusion coefficient (D*), DCE MR imaging-derived pharmacokinetic metrics, including Ktrans, ve and vp, as well as a metric named the simplified perfusion fraction (SPF), were generated. Correlations between perfusion-related parameters were analyzed using the Spearman rank correlation. All imaging parameters were compared between the low-grade (n = 19) and high-grade (n = 31) groups using the Mann-Whitney U test. The diagnostic performance for tumor grading was evaluated with receiver operating characteristic (ROC) analysis. Results: SPF showed strong correlation with IVIM-derived f and D* (ρ = 0.732 and 0.716, respectively; both P < 0.001). Compared with f, SPF was more correlated with DCE MR imaging-derived Ktrans (ρ = 0.607; P < 0.001) and vp (ρ = 0.397; P = 0.004). Among all parameters, SPF achieved the highest accuracy for differentiating low- from high-grade gliomas, with an area under the ROC curve of 0.942, significantly higher than that of ADC0,1000 (P = 0.004). Using SPF as a discriminative index, the diagnostic sensitivity and specificity were 87.1% and 94
Simplified dose calculation method for mantle technique
International Nuclear Information System (INIS)
Scaff, L.A.M.
1984-01-01
A simplified dose calculation method for the mantle technique is described. In the routine treatment of lymphomas using this technique, the daily doses at the midpoints of five anatomical regions differ because the thicknesses are not equal. (Author) [pt
3.6 Simplified methods for design
International Nuclear Information System (INIS)
Nickell, R.E.; Yahr, G.T.
1981-01-01
Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material (constitutive) model that is reduced or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed.
Rules of thumb and simplified methods
International Nuclear Information System (INIS)
Lahti, G.P.
1985-01-01
The author points out the value of a thorough grounding in fundamental physics, combined with experience of applied practice, when using simplified methods and rules of thumb in shield engineering. Present-day quality assurance procedures and good engineering practices require careful documentation of all calculations. The aforementioned knowledge of rules of thumb and back-of-the-envelope calculations can assure both the preparer and the reviewer that the results in the quality assurance documentation are the physically correct ones.
A simplified multisupport response spectrum method
Ye, Jihong; Zhang, Zhiqiang; Liu, Xianming
2012-03-01
A simplified multisupport response spectrum method is presented. For a structure with a first natural period of less than 2 s, the structural response is the sum of two components: the pseudostatic response caused by the inconsistent motions of the structural supports, and the structural dynamic response to ground motion accelerations. This method is formally consistent with the classical response spectrum method, and the effects of multisupport excitation are considered for any modal response spectrum or modal superposition. If the seismic inputs at each support are the same, the support displacements caused by the pseudostatic response become rigid-body displacements. The response spectrum in the case of multisupport excitations then reduces to that for uniform excitations. In other words, this multisupport response spectrum method is a modification and extension of the existing response spectrum method under uniform excitation. Moreover, most of the coherency coefficients in this formulation are simplified by approximating the ground motion excitation as white noise. The results indicate that this simplification can reduce the calculation time while maintaining accuracy. Furthermore, the internal forces obtained by the multisupport response spectrum method are compared with those produced by the traditional response spectrum method in two case studies of existing long-span structures. Because the effects of inconsistent support displacements are not considered in the traditional response spectrum method, the values of internal forces near the supports are underestimated. These regions are important potential failure points and deserve special attention in the seismic design of reticulated structures.
Simplified discrete ordinates method in spherical geometry
International Nuclear Information System (INIS)
Elsawi, M.A.; Abdurrahman, N.M.; Yavuz, M.
1999-01-01
The authors extend the method of simplified discrete ordinates (SSN) to spherical geometry. The motivation for such an extension is that the appearance of the angular derivative (redistribution) term in the spherical geometry transport equation makes it difficult to decide which differencing scheme best approximates this term. In the present method, the angular derivative term is treated implicitly, which avoids the need to approximate it. The method can be considered analytic in nature, with the advantage of being free from the spatial truncation errors from which most existing transport codes suffer. It also handles scattering in a very general manner, with the advantage of spending almost the same computational effort for all scattering modes. Moreover, the method can easily be applied to higher-order SN calculations.
Development of simplified decommissioning cost estimation code for nuclear facilities
International Nuclear Information System (INIS)
Tachibana, Mitsuo; Shiraishi, Kunio; Ishigami, Tsutomu
2010-01-01
The simplified decommissioning cost estimation code for nuclear facilities (DECOST code) was developed in consideration of the features and structures of nuclear facilities and the similarity of dismantling methods. The DECOST code calculates eight evaluation items of decommissioning cost. Actual dismantling work in the Japan Atomic Energy Agency (JAEA) was analyzed, and the unit conversion factors used to calculate the manpower of dismantling activities were evaluated. Consequently, the unit conversion factors of general components could be classified into three kinds. The weights of the components and structures of a facility are necessary for the calculation of manpower, so methods for evaluating these weights were studied. It was found that the weight of components in a facility is proportional to the weight of structures of the facility, and that the weight of structures is proportional to the total floor area of the facility. The decommissioning costs of 7 nuclear facilities in the JAEA were calculated using the DECOST code. To verify the calculated results, the calculated manpower was compared with the manpower recorded during actual dismantling; the two were almost equal. The outline of the DECOST code, the evaluation results for the unit conversion factors, and the method for evaluating the weights of components and structures of a facility are described in this report. (author)
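The chain of proportionalities described above (component weight proportional to structure weight, structure weight proportional to floor area, manpower equal to a unit conversion factor times weight) can be sketched as follows; all coefficients are illustrative placeholders, not values from the DECOST report.

```python
def dismantling_manpower(floor_area_m2,
                         structure_weight_per_m2=1.5,   # t/m2 (assumed)
                         component_to_structure=0.20,   # t per t of structure (assumed)
                         unit_factor=2.0):              # person-days per t (assumed)
    """DECOST-style manpower estimate from total floor area alone:
    floor area -> structure weight -> component weight -> manpower."""
    structure_weight = structure_weight_per_m2 * floor_area_m2
    component_weight = component_to_structure * structure_weight
    return unit_factor * (structure_weight + component_weight)

# e.g. a hypothetical 1000 m2 facility
manpower = dismantling_manpower(1000.0)
```

The point of the proportionality chain is that a single easily obtained input, total floor area, suffices for a first-order cost estimate.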
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices. Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by the leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Simplified pressure method for respirator fit testing.
Han, D; Xu, M; Foo, S; Pilacinski, W; Willeke, K
1991-08-01
A simplified pressure method has been developed for fit testing air-purifying respirators. In this method, the air-purifying cartridges are replaced by a pressure-sensing attachment and a valve. While wearers hold their breath, a small pump extracts air from the respirator cavity until a steady-state pressure is reached in 1 to 2 sec. The flow rate through the face-seal leak is a unique function of this pressure, which is determined once for all respirators, regardless of the respirator's cavity volume or deformation due to pliability. The contaminant concentration inside the respirator depends on the degree of dilution by the flow through the cartridges. The cartridge flow varies among different brands and is measured once for each brand. The ratio of cartridge flow to leak flow is a measure of fit. This flow ratio has been measured on human subjects and has been compared to fit factors determined on the same subjects by means of photometric and particle count tests. The aerosol tests gave higher values of fit.
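The flow-ratio measure of fit can be sketched as below. The power-law leak-flow calibration is an assumed stand-in for the empirically determined pressure-to-leak-flow relationship described in the abstract, and the constants are illustrative.

```python
def leak_flow(steady_pressure_pa, k=0.05, n=0.5):
    """Assumed face-seal leak calibration: Q_leak = k * dP**n (L/min).
    In the actual method this curve is determined once, experimentally,
    for all respirators."""
    return k * steady_pressure_pa ** n

def fit_ratio(cartridge_flow_lpm, steady_pressure_pa):
    """Flow-ratio measure of fit: cartridge flow divided by leak flow.
    A larger ratio means more dilution of the contaminant, i.e. better fit."""
    return cartridge_flow_lpm / leak_flow(steady_pressure_pa)

# e.g. a 50 L/min cartridge flow with a 100 Pa steady-state pressure
ratio = fit_ratio(50.0, 100.0)
```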
Simplified methods for evaluating road prism stability
William J. Elliot; Mark Ballerini; David Hall
2003-01-01
Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...
Simplified approach for estimating large early release frequency
International Nuclear Information System (INIS)
Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.
1998-04-01
The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB
Simplified theory of plastic zones based on Zarka's method
Hübel, Hartwig
2017-01-01
The present book provides a new method to estimate elastic-plastic strains via a series of linear elastic analyses. For the life prediction of structures subjected to variable loads, frequently encountered in mechanical and civil engineering, the cyclically accumulated deformation and the elastic-plastic strain ranges are required. The Simplified Theory of Plastic Zones (STPZ) is a direct method that provides estimates of these and all other mechanical quantities in the state of elastic and plastic shakedown. The STPZ is described in detail, with emphasis on ensuring that not only scientists but also engineers working in applied fields and advanced students can get an idea of the possibilities and limitations of the STPZ. Numerous illustrations and examples are provided to support the reader's understanding.
Kadji, Caroline; Cannie, Mieke M; De Angelis, Ricardo; Camus, Margaux; Klass, Magdalena; Fellas, Stéphanie; Cecotti, Vera; Dütemeyer, Vivien; Jani, Jacques C
2017-05-15
To evaluate the performance of a simple method of estimating fetal weight (EFW) using MR imaging as compared with 2D US in the prediction of large-for-date neonates. Written informed consent was obtained for this EC-approved study. Between March 2011 and May 2016, two groups of women with singleton pregnancies were evaluated: women who underwent US-EFW and MR-EFW within 48 h before delivery, and those undergoing these evaluations between 35 + 0 and 37 + 6 weeks of gestation. US-EFW was based on Hadlock et al. and MR-EFW on the formula described by Backer et al. Planimetric measurement of the fetal body volume (FBV) needed for MR-EFW was performed using a semi-automated method, and the time required for measurement was noted. Our outcome measure was performance in the prediction of large-for-date neonates by MR imaging versus US-EFW using receiver-operating characteristic (ROC) curves. 270 women were included in the first part of the study, with 48 newborns (17.8%) of birthweight ≥90th centile and 30 (11.1%) ≥95th centile. Eighty-three women were included in the second part, with 9 newborns (10.8%) of birthweight ≥95th centile. The median time needed for FBV planimetric measurements in all 353 fetuses was 3.5 (range: 1.5-5.5) min. The area under the ROC curve for the prediction of postnatal large-for-date neonates by prenatal MR imaging performed within 48 h before delivery was significantly better than by US (difference between the AUROCs = 0.085, P < 0.001, standard error = 0.020 for birthweight ≥90th centile; and 0.036, P = 0.01, standard error = 0.014 for birthweight ≥95th centile). Similarly, MR-EFW was better than US-EFW, with both performed remote from delivery, in predicting birthweight ≥95th centile (difference between the AUROCs = 0.077, P = 0.045, standard error = 0.039). MR planimetry using our purpose-designed semi-automated method is not time-consuming. MR-EFW performed immediately prior to
Directory of Open Access Journals (Sweden)
Francesca Venturi
2017-04-01
In the last few decades, the search for bioactive compounds or “target molecules” from natural sources or their by-products has become the most important application of the supercritical fluid extraction (SFE) process. In this context, the present research had two main objectives: (i) to verify the effectiveness of a two-step SFE process (namely, a preliminary Sc-CO2 extraction of carotenoids followed by the recovery of polyphenols by ethanol coupled with Sc-CO2) in order to obtain bioactive extracts from two widespread different matrices (chili pepper and tomato by-products), and (ii) to test the validity of the mathematical model proposed to describe the kinetics of SFE of carotenoids from different matrices, knowledge of which is also required to define the role played in the extraction process by the characteristics of the sample matrix. On the basis of the results obtained, it was possible to introduce a simplified kinetic model able to describe the time evolution of the extraction of bioactive compounds (mainly carotenoids and phenols) from different substrates. In particular, while both chili pepper and tomato were confirmed to be good sources of bioactive antioxidant compounds, the extraction process from chili pepper was faster than from tomato under identical operating conditions.
Fundamental characteristics and simplified evaluation method of dynamic earth pressure
International Nuclear Information System (INIS)
Nukui, Y.; Inagaki, Y.; Ohmiya, Y.
1989-01-01
In Japan, a method is commonly used to evaluate the dynamic earth pressure acting on the underground walls of a deeply embedded nuclear reactor building. However, since this method was developed on the basis of the limit state of soil supported by retaining walls, the behavior of the dynamic earth pressure acting on the embedded part of a nuclear reactor building may differ from that estimated by this method. This paper examines the fundamental characteristics of dynamic earth pressure through dynamic soil-structure interaction analysis. A simplified method to evaluate dynamic earth pressure for the design of the underground walls of a nuclear reactor building is described. The dynamic earth pressure considered is the fluctuating earth pressure during an earthquake.
International Nuclear Information System (INIS)
Allafi, Walid; Uddin, Kotub; Zhang, Cheng; Mazuir Raja Ahsan Sha, Raja; Marco, James
2017-01-01
Highlights: •Off-line estimation approach in the continuous-time domain for a non-invertible function. •Model reformulated to multi-input-single-output; nonlinearity described by a sigmoid. •Method directly estimates parameters of the nonlinear ECM from measured data. •Iterative on-line technique leads to smoother convergence. •The model is validated off-line and on-line using an NCA battery. -- Abstract: The accuracy of identifying the parameters of models describing lithium ion batteries (LIBs) in typical battery management system (BMS) applications is critical to the estimation of key states such as the state of charge (SoC) and state of health (SoH). In applications such as electric vehicles (EVs), where LIBs are subjected to highly demanding cycles of operation and varying environmental conditions leading to non-trivial interactions of ageing stress factors, this identification is more challenging. This paper proposes an algorithm that directly estimates the parameters of a nonlinear battery model from measured input and output data in the continuous time domain. The simplified refined instrumental variable method is extended to estimate the parameters of a Wiener model where there is no requirement for the nonlinear function to be invertible. To account for nonlinear battery dynamics, the typical linear equivalent circuit model (ECM) is enhanced by a block-oriented Wiener configuration in which the nonlinear memoryless block following the typical ECM is defined to be a sigmoid static nonlinearity. The nonlinear Wiener model is reformulated in the form of a multi-input, single-output linear model. This linear form allows the parameters of the nonlinear model to be estimated using any linear estimator such as the well-established least squares (LS) algorithm. In this paper, the recursive least squares (RLS) method is adopted for online parameter estimation. The approach was validated on experimental data measured from an 18650-type Graphite
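Since the abstract adopts recursive least squares once the Wiener model is rewritten in linear-in-parameters form, a generic RLS sketch (not the authors' implementation, and with made-up regression data) looks like this:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step for y = phi^T * theta,
    with forgetting factor lam (lam = 1 means no forgetting)."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)      # gain vector
    err = y - (phi.T @ theta).item()           # prediction error
    theta = theta + k * err                    # parameter update
    P = (P - k @ phi.T @ P) / lam              # covariance update
    return theta, P

# Recover theta_true = [2.0, -0.5] from noisy synthetic regressions
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -0.5])
theta, P = np.zeros((2, 1)), np.eye(2) * 1000.0
for _ in range(500):
    phi = rng.standard_normal(2)
    y = float(phi @ theta_true) + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
```

In the paper's setting, phi would collect the reformulated multi-input regressors of the Wiener battery model and theta its unknown parameters.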
Simplified method evaluation for piping elastic follow-up
International Nuclear Information System (INIS)
Severud, L.K.
1983-05-01
A proposed simplified method for evaluating elastic follow-up effects in high temperature pipelines is presented. The method was evaluated by comparing the simplified analysis results with those obtained from detailed inelastic solutions. Nine different pipelines typical of a nuclear breeder reactor power plant were analyzed; the simplified method is attractive because it appears to give fairly accurate and conservative results. It is easy to apply and inexpensive since it employs iterative elastic solutions for the pipeline coupled with the readily available isochronous stress-strain data provided in the ASME Code
Use of simplified methods for predicting natural resource damages
International Nuclear Information System (INIS)
Loreti, C.P.; Boehm, P.D.; Gundlach, E.R.; Healy, E.A.; Rosenstein, A.B.; Tsomides, H.J.; Turton, D.J.; Webber, H.M.
1995-01-01
To reduce transaction costs and save time, the US Department of the Interior (DOI) and the National Oceanic and Atmospheric Administration (NOAA) have developed simplified methods for assessing natural resource damages from oil and chemical spills. DOI has proposed the use of two computer models, the Natural Resource Damage Assessment Model for Great Lakes Environments (NRDAM/GLE) and a revised Natural Resource Damage Assessment Model for Coastal and Marine Environments (NRDAM/CME) for predicting monetary damages for spills of oils and chemicals into the Great Lakes and coastal and marine environments. NOAA has used versions of these models to create Compensation Formulas, which it has proposed for calculating natural resource damages for oil spills of up to 50,000 gallons anywhere in the US. Based on a review of the documentation supporting the methods, the results of hundreds of sample runs of DOI's models, and the outputs of the thousands of model runs used to create NOAA's Compensation Formulas, this presentation discusses the ability of these simplified assessment procedures to make realistic damage estimates. The limitations of these procedures are described, and the need for validating the assumptions used in predicting natural resource injuries is discussed
A simplified method for assessing cytotechnologist workload.
Vaickus, Louis J; Tambouret, Rosemary
2014-01-01
Examining cytotechnologist workflow and how it relates to job performance and patient safety is important in determining guidelines governing allowable workloads. This report discusses the development of a software tool that significantly simplifies the process of analyzing cytotechnologist workload while simultaneously increasing the quantity and resolution of the data collected. The program runs in Microsoft Excel and minimizes manual data entry and data transcription by automating as many tasks as is feasible. The data show that the cytotechnologists tested were remarkably consistent in the amount of time it took them to screen a cervical cytology (Gyn) or a nongynecologic cytology (Non-Gyn) case and that this amount of time was directly proportional to the number of slides per case. Namely, the time spent per slide did not differ significantly between Gyn and Non-Gyn cases (216 ± 3.4 seconds and 235 ± 24.6 seconds, respectively; P=.16). There was no significant difference in the amount of time needed to complete a Gyn case between the morning and the evening (314 ± 4.7 seconds and 312 ± 7.1 seconds; P=.39), but there was a significantly increased time spent screening Non-Gyn cases (slide-adjusted) in the afternoon hours (323 ± 20.1 seconds and 454 ± 67.6 seconds; P=.027), which was largely the result of significantly increased time spent on prescreening activities such as checking the electronic medical record (62 ± 6.9 seconds and 145 ± 36 seconds; P=.006). This Excel-based data collection tool generates highly detailed data in an unobtrusive manner and is highly customizable to the individual working environment and clinical climate. © 2013 American Cancer Society.
A simplified method for processing dynamic images of gastric antrum
DEFF Research Database (Denmark)
Madsen, J L; Graff, J; Fugisang, S
2000-01-01
versus geometric centre curve. In all subjects, our technique gave unequivocal frequencies of antral contractions at each time point. Statistical analysis did not reveal any intraindividual variation in this frequency during gastric emptying. We believe that the simplified scintigraphic method...
A simplified method of power calibration
International Nuclear Information System (INIS)
Jones, M.; Elliott, A.
1974-01-01
The Nuclear Reactor Facility, University of Missouri Rolla, has developed a unique method of power calibration for pool type reactors. Since water is incompressible it can be assumed that a rise in the water level of the pool while operating at power can be attributed to the heat input from the reactor core. Water level changes of a small magnitude are easily detectable. This method has proven to be less costly, less time consuming, and more reproducible than the conventional gold foil calibration, and has proven to be more accurate than a heat balance because several problems with heat flow through the walls and to the atmosphere are automatically compensated for with this method. The accuracy of this means of calibration depends upon the accuracy of the measurement of the water level and can normally be expected to be two to four percent. (author)
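The energy balance behind this calibration can be sketched in a few lines. The formula below is an assumption based on the abstract's description (uniformly mixed pool, level rise due to thermal expansion), and all constants are illustrative rather than the facility's values.

```python
# Sketch of the pool-level power calibration idea (assumed reduction of
# the method to a simple energy balance; constants are illustrative).

RHO = 998.0      # water density, kg/m^3 (~25 degC)
CP = 4182.0      # specific heat of water, J/(kg K)
BETA = 2.6e-4    # volumetric thermal expansion coefficient, 1/K (~25 degC)

def power_from_level_rise(pool_area_m2, level_rise_m, interval_s):
    """Reactor thermal power inferred from the pool water level rise.

    Energy balance: heat Q raises the mean pool temperature by
    dT = Q / (rho * V * cp); thermal expansion raises the level by
    dh = V * BETA * dT / A.  Eliminating V and dT gives
    P = rho * cp * A * dh / (BETA * dt), independent of pool volume.
    """
    return RHO * CP * pool_area_m2 * level_rise_m / (BETA * interval_s)

# Example: a 2 mm rise over one hour in a 30 m^2 pool (~0.27 MW)
p = power_from_level_rise(30.0, 0.002, 3600.0)
```

Note that the pool volume cancels: only the surface area and the level-rise rate enter, which is consistent with the abstract's point that the accuracy hinges on the level measurement.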
Equity of cadastral valuation and simplified methods
Directory of Open Access Journals (Sweden)
Gianni Guerrieri
2014-12-01
Among these studies is the recent paper by Rocco Curto, Elena Fregonara and Patrizia Semeraro (2014), “Come rendere più eque le rendite catastali in attesa della revisione degli estimi?” (How can land registry values be made fairer pending a review of valuations?), in which a rapid and simple methodology is proposed for varying current real estate rents through corrective coefficients of location. In this way, the taxable basis of real estate fees is redefined in order to reduce the current fiscal inequity caused by the obsolescence of the incomes recorded in the current cadastre; it is, however, a temporary correction to be applied while awaiting the reform of the entire cadastral system. In particular, Curto et al. (2014) propose to multiply the current income by a coefficient obtained as a ratio between the average prices of a given census microzone and a reference index, that is, “the ratio between an index price which most accurately sums up property values in individual municipalities, or aggregations of municipalities in the case of the smallest municipalities (determined on the basis of market observations constituting the entire statistical sample), and the corresponding price indices of the values of each Microzone, defined on the basis of market observations (sub-samples)” (p. 62). In the remainder of the paper, the methodology underlying the hypothesis contained in the MEF’s “Revision of real estate taxation proposal” (August 2013) is explained. Secondly, the methodological differences between the corrections of real estate incomes proposed in the cited MEF document and in Curto et al. (2014) are compared. Subsequently, some empirical evidence is supplied relating to the two taxable-basis equity recovery methods. Lastly, further consideration is given to the effective and generalized implementation of the proposed methods.
Simple design of slanted grating with simplified modal method.
Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun
2014-02-15
A simplified modal method (SMM) is presented that offers a clear physical picture of subwavelength slanted gratings. The SMM reveals the diffraction characteristics of a slanted grating under the Littrow configuration through an equivalent rectangular grating, in good agreement with rigorous coupled-wave analysis. Based on this equivalence, we obtained an effective analytic solution that simplifies the design and optimization of slanted gratings; for example, a 1×2 beam splitter can be easily designed. This method should be helpful in designing a variety of new slanted grating devices.
77 FR 54482 - Allocation of Costs Under the Simplified Methods
2012-09-05
... cost of goods sold cash or trade discounts that taxpayers do not capitalize for book purposes (and... to adjust additional section 263A costs for cash or trade discounts described in Sec. 1.471-3(b... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION...
Simplified large African carnivore density estimators from track indices
Directory of Open Access Journals (Sweden)
Christiaan W. Winterbach
2016-12-01
Background: The range, population size and trend of large carnivores are important parameters for assessing their status globally and planning conservation strategies. Linear models can be used to assess population size and trends of large carnivores from track-based surveys on suitable substrates. A conventional linear model with intercept may not pass through zero, yet may fit the data better than a linear model through the origin. We assessed whether a linear regression through the origin is more appropriate than a linear regression with intercept for modelling large African carnivore densities against track indices. Methods: We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Square Residual and the Akaike Information Criterion to evaluate the models. Results: The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that the linear models through the origin were better and that none of the linear models with intercept had substantial support. Discussion: Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26
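The model comparison described above can be reproduced in miniature. The data below are synthetic (the paper's survey data are not reproduced here, and the numbers are merely chosen to echo a slope near the paper's 3.26); the fitting arithmetic is standard least squares.

```python
# Miniature comparison of regression through the origin (y = a*x) with
# ordinary regression (y = a*x + b) on synthetic track-index data.

def fit_origin(x, y):
    """Least-squares slope of y = a*x (regression through the origin)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def fit_with_intercept(x, y):
    """Ordinary least-squares slope and intercept of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# carnivore density y against track index x; y -> 0 as x -> 0
x = [0.5, 1.0, 2.0, 3.5, 5.0]
y = [1.7, 3.1, 6.8, 11.2, 16.4]

a0 = fit_origin(x, y)               # slope of the origin model, ~3.26
a1, b1 = fit_with_intercept(x, y)   # intercept b1 is ~0 for such data
```

With data like these, the fitted intercept is statistically indistinguishable from zero, which is exactly the situation in which the origin model is preferred.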
Comments on Simplified Calculation Method for Fire Exposed Concrete Columns
DEFF Research Database (Denmark)
Hertz, Kristian Dahl
1998-01-01
The author has developed new simplified calculation methods for fire exposed columns. The methods are found in ENV 1992-1-2 chapter 4.3 and in the proposal for the Danish Code of Practice DS 411 chapter 9. In the present supporting document the methods are derived, and 50 eccentrically loaded fire exposed columns are calculated and compared to results of full-scale tests. Furthermore, 500 columns are calculated in order to relate each test result to a variation of the calculated time of fire resistance.
Simplified methods to assess thermal fatigue due to turbulent mixing
International Nuclear Information System (INIS)
Hannink, M.H.C.; Timperi, A.
2011-01-01
Thermal fatigue is a safety relevant damage mechanism in pipework of nuclear power plants. A well-known simplified method for the assessment of thermal fatigue due to turbulent mixing is the so-called sinusoidal method. Temperature fluctuations in the fluid are described by a sinusoidally varying signal at the inner wall of the pipe. Because of limited information on the thermal loading conditions, this approach generally leads to overconservative results. In this paper, a new assessment method is presented, which has the potential of reducing the overconservatism of existing procedures. Artificial fluid temperature signals are generated by superposition of harmonic components with different amplitudes and frequencies. The amplitude-frequency spectrum of the components is modelled by a formula obtained from turbulence theory, whereas the phase differences are assumed to be randomly distributed. Lifetime predictions generated with the new simplified method are compared with lifetime predictions based on real fluid temperature signals, measured in an experimental setup of a mixing tee. Also, preliminary steady-state Computational Fluid Dynamics (CFD) calculations of the total power of the fluctuations are presented. The total power is needed as an input parameter for the spectrum formula in a real-life application. Solution of the transport equation for the total power was included in a CFD code and comparisons with experiments were made. The newly developed simplified method for generating the temperature signal is shown to be adequate for the investigated geometry and flow conditions, and demonstrates possibilities of reducing the conservatism of the sinusoidal method. CFD calculations of the total power show promising results, but further work is needed to develop the approach. (author)
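The signal-generation step described above can be sketched as follows. The f^(-5/3) spectral shape, the frequency band and the total power are placeholder assumptions, not the paper's turbulence-theory formula; only the construction (harmonics with random phases, amplitudes set by a spectrum, scaled to a prescribed total power) follows the abstract.

```python
# Illustrative generator of an artificial fluid-temperature signal as a
# superposition of harmonics with random phases.  Spectral shape and
# parameters are assumptions, not the paper's formula.
import math, random

def artificial_signal(total_power, freqs, times, seed=0):
    rng = random.Random(seed)
    weights = [f ** (-5.0 / 3.0) for f in freqs]       # assumed spectrum
    norm = sum(weights)
    # choose amplitudes so that sum(a_i**2 / 2) equals total_power
    amps = [math.sqrt(2.0 * total_power * w / norm) for w in weights]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    return [sum(a * math.sin(2.0 * math.pi * f * t + p)
                for a, f, p in zip(amps, freqs, phases))
            for t in times]

freqs = [0.1 * k for k in range(1, 21)]     # 0.1 .. 2.0 Hz
times = [0.01 * n for n in range(10000)]    # 100 s sampled at 100 Hz
signal = artificial_signal(25.0, freqs, times)
variance = sum(v * v for v in signal) / len(signal)  # ~ total power, K^2
```

The variance of the generated record recovers the prescribed total power, which is the input parameter the CFD calculations in the paper are meant to supply.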
Simplifying cardiovascular risk estimation using resting heart rate.
LENUS (Irish Health Repository)
Cooney, Marie Therese
2010-09-01
Elevated resting heart rate (RHR) is a known, independent cardiovascular (CV) risk factor, but is not included in risk estimation systems, including Systematic COronary Risk Evaluation (SCORE). We aimed to derive risk estimation systems including RHR as an extra variable and assess the value of this addition.
Simplified hourly method to calculate summer temperatures in dwellings
DEFF Research Database (Denmark)
Mortensen, Lone Hedegaard; Aggerholm, Søren
2012-01-01
The simplified method used Danish weather data and only needs information on transmission losses, thermal mass, surface contact, internal load, ventilation scheme and solar load; it was compared against a program for thermal simulations of buildings. The results are based on one-year simulations of two cases. The cases were based on a low energy dwelling of 196 m². The transmission loss for the building envelope was 3.3 W/m², not including windows and doors. The dwelling was tested in two cases: a case with an ordinary distribution of windows and a “worst” case where the window area facing south and west was increased by more than 60%.
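A minimal example of the kind of lumped hourly heat balance such simplified methods are built on (assumptions: a single thermal zone with one effective capacity; all inputs below are illustrative, not the method's actual algorithm or data).

```python
# Toy hourly heat balance for one thermal zone (illustrative only).

def hourly_temps(t_start, t_out, solar_w, internal_w, h_loss, capacity):
    """March one thermal node through the hours.

    h_loss   : total heat loss coefficient incl. ventilation, W/K
    capacity : effective thermal capacity of the zone, J/K
    """
    dt = 3600.0  # one hour
    temps = [t_start]
    for te, qs, qi in zip(t_out, solar_w, internal_w):
        ti = temps[-1]
        # explicit Euler step of C * dT/dt = H * (Te - Ti) + gains
        ti += dt / capacity * (h_loss * (te - ti) + qs + qi)
        temps.append(ti)
    return temps

# 24 h with constant 20 degC outside and a 500 W internal load:
# the zone creeps towards its 25 degC steady state (20 + 500/100)
temps = hourly_temps(20.0, [20.0] * 24, [0.0] * 24, [500.0] * 24,
                     h_loss=100.0, capacity=2.0e7)
```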
Simplified method of "push-pull" test data analysis for estimating in situ reaction rate coefficients
International Nuclear Information System (INIS)
Haggerty, R.; Schroth, M.H.; Istok, J.D.
1998-01-01
The single-well, "push-pull" test method is useful for obtaining information on a wide variety of aquifer physical, chemical, and microbiological characteristics. A push-pull test consists of the pulse-type injection of a prepared test solution into a single monitoring well, followed by extraction of the test solution/ground water mixture from the same well. The test solution contains a conservative tracer and one or more reactants selected to investigate a particular process. During the extraction phase, the concentrations of tracer, reactants, and possible reaction products are measured to obtain breakthrough curves for all solutes. This paper presents a simplified method of data analysis that can be used to estimate a first-order reaction rate coefficient from these breakthrough curves. Rate coefficients are obtained by fitting a regression line to a plot of normalized concentrations versus elapsed time, requiring no knowledge of aquifer porosity, dispersivity, or hydraulic conductivity. A semi-analytical solution to the advection-dispersion equation is derived and used in a sensitivity analysis to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a homogeneous, confined aquifer with a fully penetrating injection/extraction well and varying porosity, dispersivity, test duration, and reaction rate. A numerical flow and transport code (SUTRA) is used to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a heterogeneous, unconfined aquifer with a partially penetrating well. In all cases the simplified method provides accurate estimates of reaction rate coefficients; estimation errors ranged from 0.1 to 8.9%, with most errors less than 5%.
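The regression step described above can be sketched on synthetic breakthrough data: normalising the reactant by the conservative tracer removes dilution, and the slope of the log-linear fit gives the first-order rate coefficient.

```python
# Sketch of the simplified push-pull analysis: regress the log of the
# tracer-normalised reactant concentration against elapsed time; the
# negated slope is the first-order rate coefficient.  Data are synthetic.
import math

def first_order_rate(times, reactant, tracer):
    """Least-squares slope of ln(reactant/tracer) vs. time, negated."""
    y = [math.log(r / c) for r, c in zip(reactant, tracer)]
    n = len(times)
    mt, my = sum(times) / n, sum(y) / n
    slope = sum((t - mt) * (yi - my) for t, yi in zip(times, y)) / \
            sum((t - mt) ** 2 for t in times)
    return -slope

K_TRUE = 0.05                                  # 1/h
times = [2.0, 4.0, 8.0, 16.0, 24.0]            # hours since injection
tracer = [0.8, 0.55, 0.30, 0.12, 0.05]         # dilution only
reactant = [c * math.exp(-K_TRUE * t) for c, t in zip(tracer, times)]
k_est = first_order_rate(times, reactant, tracer)
```

No porosity, dispersivity or hydraulic conductivity enters the estimate, which is the point of the simplified method.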
Simplified Estimation of Tritium Inventory in Stainless Steel
International Nuclear Information System (INIS)
Willms, R. Scott
2005-01-01
An important part of tritium facility waste management is estimating the residual tritium inventory in stainless steel. This was needed as part of the decontamination and decommissioning associated with the Tritium Systems Test Assembly at Los Alamos National Laboratory. In particular, the disposal path for three large tanks would vary substantially depending on the tritium inventory in the stainless steel walls. For this purpose the time-dependent diffusion equation was solved using previously measured parameters. These results were compared to previous work that measured the tritium inventory in the stainless steel wall of a 50-L tritium container, and good agreement was observed. The results are reduced to a simple algebraic equation that can readily be used to estimate tritium inventories in room temperature stainless steel based on tritium partial pressure and exposure time. Results are available both for constant partial pressure exposures and for varying partial pressures. Movies of the time-dependent results were prepared, which are particularly helpful for interpreting results and drawing conclusions.
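The abstract does not reproduce the algebraic equation itself, so the sketch below shows only the textbook semi-infinite diffusion estimate that such a result reduces to; the diffusivity and surface concentration are placeholders, not the measured parameters of the paper.

```python
# Textbook estimate of diffused inventory into a semi-infinite wall held
# at a constant surface concentration (placeholder parameters).
import math

def inventory_per_area(c_surface, diffusivity, exposure_s):
    """I = 2 * C_s * sqrt(D * t / pi): inventory per unit area after
    exposure time t with constant surface concentration C_s."""
    return 2.0 * c_surface * math.sqrt(diffusivity * exposure_s / math.pi)

# the square-root time dependence: doubling exposure gives sqrt(2) more
i_1yr = inventory_per_area(1.0, 1.0e-16, 3.15e7)
i_2yr = inventory_per_area(1.0, 1.0e-16, 6.30e7)
```

The square-root growth with exposure time is the qualitative behaviour any such diffusion-based inventory formula shares.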
A simplified model for the estimation of energy production of PV systems
International Nuclear Information System (INIS)
Aste, Niccolò; Del Pero, Claudio; Leonforte, Fabrizio; Manfren, Massimiliano
2013-01-01
The potential of solar energy is far higher than that of any other renewable source, although several limits exist. In detail, the fundamental factors that must be analyzed by investors and policy makers are the cost-effectiveness and the production of PV power plants, respectively for the decision of investment schemes and energy policy strategies. Tools suitable for use even by non-specialists are therefore becoming increasingly important. Much research and development effort has been devoted to this goal in recent years. In this study, a simplified model for PV annual production estimation is presented that can provide results with a level of accuracy comparable with the more sophisticated simulation tools from which it derives its fundamental data. The main advantage of the presented model is that it can be used by virtually anyone, without requiring specific field expertise. The inherent limits of the model are related to its empirical basis, but the methodology presented can be effectively reproduced in the future with a different spectrum of data in order to assess, for example, the effect of technological evolution on the overall performance of PV power generation, or to establish performance benchmarks for a much larger variety of PV plants and technologies. - Highlights: • We have analyzed the main methods for estimating the electricity production of photovoltaic systems. • We simulated the same system with two different software packages in different European locations and estimated the electricity production. • We have studied the main losses of a PV plant. • We provide a simplified model to estimate the electrical production of any well-designed PV system. • We validated the data obtained by the proposed model against experimental data from three PV systems.
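For orientation, this is the common back-of-envelope yield formula that simplified PV models typically refine; the formula and the numbers below are illustrative assumptions, not the paper's fitted model.

```python
# Back-of-envelope annual PV yield (illustrative, not the paper's model).

def annual_yield_kwh(p_nom_kw, irradiation_kwh_m2, performance_ratio):
    """E = P_nom * (H / G_STC) * PR with G_STC = 1 kW/m^2, so the annual
    plane-of-array irradiation in kWh/m^2 acts as equivalent sun hours."""
    return p_nom_kw * irradiation_kwh_m2 * performance_ratio

# 3 kWp system, 1400 kWh/m^2/yr irradiation, performance ratio 0.78
e_kwh = annual_yield_kwh(3.0, 1400.0, 0.78)
```

The performance ratio lumps together the system losses; empirical models of the kind described in the abstract essentially replace this single constant with fitted, site- and technology-dependent terms.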
Cask crush pad analysis using detailed and simplified analysis methods
International Nuclear Information System (INIS)
Uldrich, E.D.; Hawkes, B.D.
1997-01-01
A crush pad has been designed and analyzed to absorb the kinetic energy of a hypothetically dropped spent nuclear fuel shipping cask into a 44-ft-deep cask unloading pool at the Fluorinel and Storage Facility (FAST). This facility, located at the Idaho Chemical Processing Plant (ICPP) at the Idaho National Engineering and Environmental Laboratory (INEEL), is a US Department of Energy site. The basis for this study is an analysis by Uldrich and Hawkes. The purpose of this analysis was to evaluate various hypothetical cask drop orientations to ensure that the crush pad design was adequate and that the cask deceleration at impact was less than 100 g. It is demonstrated herein that a large spent fuel shipping cask, when dropped onto a foam crush pad, can be analyzed either by hand methods or by sophisticated dynamic finite element analysis using computer codes such as ABAQUS. Results from the two methods are compared to evaluate the accuracy of the simplified hand analysis approach.
A Novel Interference Detection Method of STAP Based on Simplified TT Transform
Directory of Open Access Journals (Sweden)
Qiang Wang
2017-01-01
Training samples contaminated by target-like signals are one of the major causes of an inhomogeneous clutter environment. In such an environment, the clutter covariance matrix in STAP (space-time adaptive processing) is estimated inaccurately, which ultimately degrades detection performance. To address this problem, a STAP interference detection method based on a simplified TT (time-time) transform is proposed in this letter. Considering the sparse physical property of clutter in the space-time plane, the data in each range cell is first converted into a discrete slow-time series. Then, the expression of the simplified TT transform of the sample data is derived step by step. Thirdly, the energy of each training sample is focalized and extracted by the simplified TT transform from the energy-variant difference between the unpolluted and polluted stages, and the physical significance of discarding the contaminated samples is analyzed. Lastly, the contaminated samples are picked out in light of the simplified TT transform-spectrum difference. Monte Carlo simulation results indicate that when training samples are contaminated by high-power target-like signals, the proposed method is more effective in removing the contaminated samples, reduces the computational complexity significantly, and improves target detection performance compared with the GIP (generalized inner product) method.
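The screening idea, stripped to its core, can be shown with a toy stand-in: focalise per-sample energy and discard range cells whose energy stands out. The code below uses a plain energy statistic with a median threshold, not the letter's simplified TT transform, whose derivation the abstract does not reproduce.

```python
# Toy stand-in for energy-based screening of contaminated training cells.
import random

def flag_contaminated(cells, factor=3.0):
    """Return indices of range cells whose mean-square energy exceeds
    `factor` times the median cell energy."""
    energies = [sum(v * v for v in cell) / len(cell) for cell in cells]
    median = sorted(energies)[len(energies) // 2]
    return [i for i, e in enumerate(energies) if e > factor * median]

rng = random.Random(1)
cells = [[rng.gauss(0.0, 1.0) for _ in range(64)] for _ in range(20)]
cells[7] = [5.0 * v for v in cells[7]]   # inject a target-like sample
bad = flag_contaminated(cells)           # cell 7 stands out in energy
```

The median makes the threshold robust against the contaminated cell itself, which is the same motivation behind the nonhomogeneity detectors the letter compares against.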
Simplified estimation technique for organic contaminant transport in ground water
Energy Technology Data Exchange (ETDEWEB)
Piver, W T; Lindstrom, F T
1984-05-01
The analytical solution for one-dimensional dispersive-advective transport of a single solute in a saturated soil, accompanied by adsorption onto soil surfaces and first-order reaction rate kinetics for degradation, can be used to evaluate the suitability of potential sites for burial of organic chemicals. The technique can be used to greatest advantage with organic chemicals that are present in ground waters in small amounts. The steady-state solution provides a rapid method for chemical landfill site evaluation because it contains the important variables that describe interactions between hydrodynamics and chemical transformation. With this solution, solute concentration at a specified distance from the landfill site is a function of the initial concentration and two dimensionless groups. In the first group, the relative weights of advective and dispersive variables are compared, and in the second group the relative weights of hydrodynamic and degradation variables are compared. The ratio of hydrodynamic to degradation variables can be rearranged and written as (a_L·λ)/(q/ε), where a_L is the dispersivity of the soil, λ is the reaction rate constant, q is the ground water flow velocity, and ε is the soil porosity. When this term has a value less than 0.01, the degradation process is occurring at such a slow rate relative to the hydrodynamics that it can be neglected. Under these conditions the site is unsuitable because the chemicals are unreactive, and concentrations in ground waters will change very slowly with distance away from the landfill site.
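The screening logic can be sketched directly from the dimensionless group. The decaying solution below is the standard steady-state result for 1-D advection-dispersion with first-order decay (taking D = a_L·v and v = q/ε); the parameter values are illustrative, not taken from the paper.

```python
# Steady-state screening estimate for a decaying solute plume.
import math

def steady_concentration(c0, x, a_l, lam, q, eps):
    """Concentration at distance x downgradient, plus the group
    (a_L * lambda) / (q / eps) used for screening."""
    v = q / eps                                  # pore-water velocity
    group = a_l * lam / v
    # decaying root of D*C'' - v*C' - lam*C = 0 with D = a_l * v
    m = (1.0 - math.sqrt(1.0 + 4.0 * group)) / (2.0 * a_l)
    return c0 * math.exp(m * x), group

# slow decay relative to flow: group < 0.01, so degradation is negligible
c, group = steady_concentration(1.0, 100.0, a_l=1.0, lam=1.0e-3,
                                q=0.1, eps=0.35)
```

When the group falls below the 0.01 threshold quoted in the abstract, the exponential attenuation over practical distances is weak, which is the quantitative content of calling the site unsuitable.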
Murayama, I; Miyano, A; Sasaki, Y; Hirata, T; Ichijo, T; Satoh, H; Sato, S; Furuhama, K
2013-11-01
This study was performed to clarify whether a formula (the Holstein equation) based on a single blood sample and the isotonic, nonionic iodine contrast medium iodixanol, developed in Holstein dairy cows, can be applied to the estimation of glomerular filtration rate (GFR) in beef cattle. To verify the applicability of iodixanol in beef cattle in place of the standard tracer inulin, both agents were coadministered as a bolus intravenous injection to the same animals at doses of 10 mg of I/kg of BW and 30 mg/kg, respectively. Blood was collected 30, 60, 90, and 120 min after the injection, and the GFR was determined by the conventional multisample strategies. The GFR values from iodixanol agreed well with those from inulin, and no effects of BW, age, or parity on GFR estimates were noted. However, the GFR in cattle weighing less than 300 kg and aged <1 yr fluctuated widely, presumably due to rapid ruminal growth and dynamic changes in renal function at young ages. Using clinically healthy cattle and cattle with renal failure, the GFR values estimated from the Holstein equation were in good agreement with those obtained by the multisample method using iodixanol (r=0.89, P=0.01). The results indicate that the simplified Holstein equation using iodixanol can be used for estimating the GFR of beef cattle with the same dose regimen as in Holstein dairy cows, and provides a practical and ethical alternative.
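The Holstein equation itself is not given in the abstract, so the sketch below shows only the conventional multisample benchmark it was validated against: plasma clearance computed as dose over the area under a mono-exponential fit to the timed samples. The data are synthetic.

```python
# Conventional multisample clearance: dose / AUC of a mono-exponential
# fit (the benchmark method; the single-sample Holstein equation is not
# reproduced here).
import math

def clearance(dose, times, conc):
    """Least-squares fit of ln C = ln C0 - k*t, then dose / (C0 / k)."""
    y = [math.log(c) for c in conc]
    n = len(times)
    mt, my = sum(times) / n, sum(y) / n
    k = -sum((t - mt) * (yi - my) for t, yi in zip(times, y)) / \
        sum((t - mt) ** 2 for t in times)
    c0 = math.exp(my + k * mt)
    auc = c0 / k                   # integral of C0*exp(-k*t) over 0..inf
    return dose / auc

# synthetic 30/60/90/120-min samples from C(t) = 0.05 * exp(-0.01 * t)
times = [30.0, 60.0, 90.0, 120.0]
conc = [0.05 * math.exp(-0.01 * t) for t in times]
gfr = clearance(100.0, times, conc)
```

A single-sample formula replaces this four-sample fit with one concentration and a population-derived relationship, which is the practical and ethical advantage the abstract emphasises.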
Implementation of a Simplified State Estimator for Wind Turbine Monitoring on an Embedded System
DEFF Research Database (Denmark)
Rasmussen, Theis Bo; Yang, Guangya; Nielsen, Arne Hejde
2017-01-01
The transition towards a cyber-physical energy system (CPES) entails an increased dependency on valid data. Simultaneously, an increasing implementation of renewable generation leads to possible control actions at individual distributed energy resources (DERs). A state estimation covering the whole system, including individual DERs, is time consuming and numerically challenging. This paper presents the approach and results of implementing a simplified state estimator onto an embedded system for improving DER monitoring. The implemented state estimator is based on numerically robust orthogonal...
Systematization of simplified J-integral evaluation method for flaw evaluation at high temperature
International Nuclear Information System (INIS)
Miura, Naoki; Takahashi, Yukio; Nakayama, Yasunari; Shimakawa, Takashi
2000-01-01
J-integral is an effective inelastic fracture parameter for the flaw evaluation of cracked components at high temperature. The evaluation of J-integral for an arbitrary crack configuration and an arbitrary loading condition can be generally accomplished by detailed numerical analysis such as finite element analysis, however, it is time-consuming and requires a high degree of expertise for its implementation. Therefore, it is important to develop simplified J-integral estimation techniques from the viewpoint of industrial requirements. In this study, a simplified J-integral evaluation method is proposed to estimate two types of J-integral parameters. One is the fatigue J-integral range to describe fatigue crack propagation behavior, and the other is the creep J-integral to describe creep crack propagation behavior. This paper presents the systematization of the simplified J-integral evaluation method incorporated with the reference stress method and the concept of elastic follow-up, and proposes a comprehensive evaluation procedure. The verification of the proposed method is presented in Part II of this paper. (author)
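As a hedged illustration of the class of method being systematised, here is a common reference-stress J estimate (R6-style). The paper's own formulation, including the elastic follow-up treatment, is not reproduced, and the material constants below are illustrative.

```python
# Reference-stress estimate of J (illustrative of the class of method,
# not the authors' formulation).

def j_reference_stress(j_elastic, sigma_ref, modulus, eps_ref):
    """J ~= J_e * E * eps_ref / sigma_ref for sigma_ref around yield."""
    return j_elastic * modulus * eps_ref / sigma_ref

# Ramberg-Osgood material: eps = s/E + alpha * (s/sy)**n * sy/E  (MPa)
E, SY, ALPHA, N = 2.0e5, 300.0, 1.0, 5.0
s_ref = 330.0                       # reference stress ~10% above yield
eps_ref = s_ref / E + ALPHA * (s_ref / SY) ** N * SY / E
j = j_reference_stress(10.0, s_ref, E, eps_ref)   # from J_e = 10 N/mm
```

The estimate amplifies the elastic J by the ratio of the reference strain to the elastic strain at the reference stress, which is how plasticity enters without a full finite element solution.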
Simplified Model for the Hybrid Method to Design Stabilising Piles Placed at the Toe of Slopes
Directory of Open Access Journals (Sweden)
Dib M.
2018-01-01
Stabilizing precarious slopes by installing piles has become a widespread technique for landslide prevention. The design of slope-stabilizing piles by the finite element method is more accurate than the conventional methods. This accuracy stems from the ability of the method to simulate complex configurations and to analyze the soil-pile interaction effect. However, engineers prefer to use simplified analytical techniques to design slope-stabilizing piles because of the high computational resources required by the finite element method. Aiming to combine the accuracy of the finite element method with the simplicity of the analytical approaches, a hybrid methodology to design slope-stabilizing piles was proposed in 2012. It consists of two steps: (1) an analytical estimation of the resisting force needed to stabilize the precarious slope, and (2) a numerical analysis to define the adequate pile configuration that offers the required resisting force. The hybrid method is applicable only to the analysis and design of stabilizing piles placed in the middle of the slope; however, in certain cases, such as road construction, piles need to be placed at the toe of the slope. Therefore, in this paper a simplified model for the hybrid method is developed to analyze and design stabilizing piles placed at the toe of a precarious slope. The simplified model is validated by a comparative analysis with the fully coupled finite element model.
Ungar, Eugene K.; Richards, W. Lance
2015-01-01
The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact
Simplified thermal fatigue evaluations using the GLOSS method
International Nuclear Information System (INIS)
Adinarayana, N.; Seshadri, R.
1996-01-01
The Generalized Local Stress Strain (GLOSS) method has been extended to include thermal effects in addition to mechanical loadings. The method, designated Thermal-GLOSS, has been applied to several pressure component configurations of practical interest. The inelastic strains calculated by the Thermal-GLOSS method have been compared with the Molski-Glinka method, the Neuber formula, and inelastic finite element analysis results, and found to give consistently good estimates. This is pertinent to power plant equipment.
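The Neuber formula mentioned above is easy to sketch: equate the local stress-strain product to (K_t·S)²/E and solve against a Ramberg-Osgood curve. The material constants below are illustrative; this is the textbook rule, not the paper's Thermal-GLOSS procedure.

```python
# Neuber's rule solved by bisection against a Ramberg-Osgood curve.

def neuber_strain(kt, s_nom, modulus, sy, alpha, n):
    """Find the local strain such that sigma * eps = (kt * s_nom)**2 / E,
    with eps(s) = s/E + alpha * (s/sy)**n * sy/E (Ramberg-Osgood)."""
    target = (kt * s_nom) ** 2 / modulus

    def eps(s):
        return s / modulus + alpha * (s / sy) ** n * sy / modulus

    lo, hi = 1e-6, kt * s_nom        # the local stress is below Kt * S
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * eps(mid) < target:  # s * eps(s) increases with s
            lo = mid
        else:
            hi = mid
    return eps(0.5 * (lo + hi))

# Kt = 3 notch at 150 MPa nominal stress in a 300 MPa yield material
e_local = neuber_strain(3.0, 150.0, modulus=2.0e5, sy=300.0,
                        alpha=1.0, n=5.0)
```

The local strain exceeds the purely elastic estimate K_t·S/E, which is the plastic strain concentration effect all three notch rules in the abstract approximate.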
Study on a pattern classification method of soil quality based on simplified learning sample dataset
Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.
2011-01-01
Based on the massive soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and a disordered multiclassification logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and a c-means algorithm was used to automatically extract the simplified learning sample dataset from the cultivated soil quality grade evaluation database for the study area, Longchuan County in Guangdong Province. A disordered logistic classifier model was then built and the calculation and analysis steps of soil quality grade intelligent classification were given. The result indicated that the soil quality grade can be effectively learned and predicted from the extracted simplified dataset through this method, which changes the traditional method of soil quality grade evaluation. © 2011 IEEE.
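The sample-capacity step can be illustrated with the classical sampling formula for a proportion. Treating it this way is an assumption about the "classical sampling techniques" the paper invokes, not its exact derivation.

```python
# Classical sample size for estimating a class proportion.
import math

def sample_size(z, p, d):
    """n = z**2 * p * (1 - p) / d**2: samples needed to estimate a class
    proportion p to within +/- d at the confidence level implied by z."""
    return math.ceil(z * z * p * (1.0 - p) / (d * d))

# 95% confidence (z = 1.96), worst-case proportion, 5% tolerated error
n = sample_size(1.96, 0.5, 0.05)
```

The worst-case proportion p = 0.5 maximises the required capacity, so a dataset of this size suffices for any class mix in the evaluation database.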
Legault, A.; Scott, L.; Rosemann, A.L.P.; Hopkins, M.
2014-01-01
CSA C873 Building Energy Estimation Methodology (BEEM) is a new series of 10 standards intended to simplify building energy calculations. The series is based upon the German standard DIN 18599, which has an eight-year proven track record, and has been modified for the Canadian market. The BEEM
Application of the simplified J-estimation scheme Aramis to mismatching welds in CCP
International Nuclear Information System (INIS)
Eripret, C.; Franco, C.; Gilles, P.
1995-01-01
The J-based criteria give reasonable predictions of the failure behaviour of ductile cracked metallic structures, even if the material characterization may be sensitive to the size of the specimens. In cracked welds, however, this phenomenon, due to stress triaxiality effects, can be enhanced. Furthermore, the application of conventional methods of toughness measurement (ESIS or ASTM standards) has evidenced a strong influence of the portion of weld metal in the specimen. Several authors have shown the inadequacy of the simplified J-estimation methods developed for homogeneous materials. These heterogeneity effects are mainly related to the mismatch ratio (ratio of weld metal yield strength to base metal yield strength) as well as to the geometrical parameter h/(W-a) (weld width to ligament size). In order to make decisive progress in this field, the Atomic Energy Commission (CEA), the PWR manufacturer FRAMATOME, and the French utility (EDF) have launched a large research program on the behaviour of cracked piping welds. As part of this program, a new J-estimation scheme, called ARAMIS, has been developed to account for the influence of both materials, i.e. base metal and weld metal, on the structural resistance of cracked welds. It has been shown that, when the mismatch is high and the ligament size is small compared to the weld width, a classical J-based method using the softer material properties is very conservative. By contrast, the ARAMIS method provides a good estimate of J, because it predicts rather well the shift of the cracked weld limit load due to the presence of the weld. The influence of geometrical parameters such as crack size, weld width, or specimen length is properly accounted for. (authors). 23 refs., 8 figs., 1 tab., 1 appendix
Photographic and drafting techniques simplify method of producing engineering drawings
Provisor, H.
1968-01-01
A combination of photographic and drafting techniques has been developed to simplify the preparation of three-dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high-contrast film.
Simplified method for beatlength measurement in optical fibre
International Nuclear Information System (INIS)
Chu, R.; Town, G.
2000-01-01
Full text: A simplified technique for measuring beatlength in birefringent optical fibres using magnetic modulation was analysed and tested experimentally. By avoiding unnecessary optical components and splicing to the fibre under test, the beatlength was measured accurately with a good signal-to-noise ratio
Simplified elastoplastic methods of analysing fatigue in notches
International Nuclear Information System (INIS)
Autrusson, B.
1993-01-01
The aim of this study is to present the state of the art concerning mechanical analysis methods available in the literature for evaluating notch-root elastoplastic strain. The components of fast breeder reactors are subjected to numerous thermal transients, which can cause fatigue failure. To prevent this, it is necessary to know the local strain range and to use it to estimate the number of cycles to crack initiation. Practical methods have been developed for calculating the local strain range and have led to the drafting of design rules. Direct methods of determining the local strain range of the 'inelastic analysis' type are also described. In conclusion, a series of recommendations is made on the applicability and conservatism of these methods
Simplified Methods Applied to Nonlinear Motion of Spar Platforms
Energy Technology Data Exchange (ETDEWEB)
Haslum, Herbjoern Alf
2000-07-01
Simplified methods for prediction of the motion response of spar platforms are presented. The methods are based on first- and second-order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large-amplitude pitch motions coupled to extreme-amplitude heave motions may arise when spar platforms are exposed to long-period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher-order pitch/heave coupling excites resonant heave response. This mutual interaction strongly amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well-known Mathieu instability in pitch, which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions under which this instability occurs and how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low-probability phenomenon: extreme wave periods, about 20 seconds for a typical 200 m draft spar, are needed for it to be triggered. However, it may be important to consider the phenomenon in design, since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft
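The classical instability condition described above lends itself to a quick numerical screen. The sketch below, with hypothetical periods and tolerance, flags the classical Mathieu condition in which the wave period is about half the natural pitch period:

```python
def mathieu_pitch_risk(wave_period, natural_pitch_period, tol=0.1):
    """Screen for the classical Mathieu pitch instability, which occurs
    when the wave period (or natural heave period) is about half the
    natural pitch period.  Periods in seconds; tol is an assumed margin."""
    ratio = wave_period / natural_pitch_period
    return abs(ratio - 0.5) < tol

# Illustrative numbers (hypothetical, not from the thesis):
# a spar with a 60 s natural pitch period excited by 30 s waves
print(mathieu_pitch_risk(30.0, 60.0))  # True: wave period is half the pitch period
print(mathieu_pitch_risk(10.0, 60.0))  # False
```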
20 CFR 404.241 - 1977 simplified old-start method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false 1977 simplified old-start method. 404.241... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Old-Start Method of Computing Primary Insurance Amounts § 404.241 1977 simplified old-start method. (a) Who is qualified. To qualify for the old...
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil
International Nuclear Information System (INIS)
Zhang Xinbao; Long Yi; He Xiubin; Fu Jiexiong; Zhang Yunqi
2008-01-01
137Cs is an artificial radionuclide with a half-life of 30.12 years, which was released into the environment as a result of atmospheric testing of thermonuclear weapons, primarily during the 1950s-1970s, with the maximum fallout rate in 1963. 137Cs fallout is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil when it is deposited on the ground, mostly with precipitation. Its subsequent redistribution is associated with movements of the soil or sediment particles. The 137Cs nuclide tracing technique has been used to assess soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the 137Cs depth distribution in the profile, where the maximum 137Cs occurs in the surface horizon and decreases exponentially with depth. The model implied that the total 137Cs fallout amount was deposited on the earth's surface in 1963 and that the 137Cs profile shape has not changed with time. The model has been widely used to assess soil losses on undisturbed land. However, temporal variations of the 137Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the previous simple profile-shape model; the soil losses are therefore overestimated by the model. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the 137Cs transport process in the eroded soil profile, simplify the model, and develop a more expedient method for estimating the soil erosion rate. To compare the soil erosion rates calculated with the simple profile-shape model and the simple transport model, the soil losses related to different 137Cs loss proportions of the reference inventory at the Kaixian site of the
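The simple profile-shape model described above can be sketched numerically. Assuming, as the model does, that the 137Cs concentration decays exponentially with mass depth with a relaxation depth h0, a measured inventory loss fixes the eroded depth; all numeric values below are illustrative assumptions, not data from the paper:

```python
import math

def erosion_rate_profile_shape(X_percent, h0, sample_year, ref_year=1963):
    """Simple profile-shape model for undisturbed soil (hedged sketch).

    Assumes the 137Cs concentration decreases exponentially with mass
    depth x:  C(x) = C(0) * exp(-x / h0),  h0 = relaxation mass depth
    (kg/m^2).  A measured inventory reduction of X percent relative to
    the reference inventory then corresponds to an eroded mass depth of
    h0 * ln(100 / (100 - X)), spread over the years since peak fallout.
    Returns the mean annual soil loss in kg/m^2/yr."""
    eroded_mass_depth = h0 * math.log(100.0 / (100.0 - X_percent))
    return eroded_mass_depth / (sample_year - ref_year)

# Example with assumed values: 20% inventory loss, h0 = 10 kg/m^2, sampled in 2008
rate = erosion_rate_profile_shape(20.0, 10.0, 2008)
print(round(rate, 4))  # 0.0496 kg/m^2/yr
```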
A simplified procedure for mass and stiffness estimation of existing structures
Nigro, Antonella; Ditommaso, Rocco; Carlo Ponzo, Felice; Salvatore Nigro, Domenico
2016-04-01
This work focuses on a parametric method for mass and stiffness identification of framed structures based on frequency evaluation. The assessment of real structures is greatly affected by the consistency of the information retrieved on materials and by the influence of both non-structural components and soil. One of the most important matters is the correct definition of the distribution, both in plan and in elevation, of mass and stiffness, which depends on concentrated and distributed loads, the presence of infill panels, and the distribution of structural elements. In this study, modal identification is performed under several mass-modified conditions, and structural parameters consistent with the identified modal parameters are determined. Modal parameter identification of a structure is conducted before and after the introduction of additional masses. From the relationship between the additional masses and the modal properties before and after the mass modification, the structural parameters of a damped system, i.e. mass, stiffness, and damping coefficient, are inversely estimated from the variations in these modal parameters. The accuracy of the method can be improved by using various mass-modified conditions. The proposed simplified procedure has been tested on both numerical and experimental models by means of linear numerical analyses and shaking table tests performed on scaled structures at the Seismic Laboratory of the University of Basilicata (SISLAB). Results confirm the effectiveness of the proposed procedure in estimating the masses and stiffnesses of existing real structures, with a maximum error of 10% under the worst conditions. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2015 - RS4 ''Seismic observatory of structures and health monitoring''.
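The mass-change idea behind the procedure can be illustrated on an equivalent single-degree-of-freedom system: measuring the natural frequency before and after adding a known mass determines both the modal mass and the stiffness. This is a minimal sketch of the principle, not the authors' full multi-mode procedure:

```python
import math

def identify_mass_stiffness(f_before, f_after, added_mass):
    """Back out the mass and stiffness of an equivalent SDOF system from
    the natural frequency measured before (f_before) and after (f_after)
    adding a known mass (sketch of the mass-change identification idea).

    k = m * w1^2 = (m + dm) * w2^2  =>  m = dm * w2^2 / (w1^2 - w2^2)."""
    w1 = 2.0 * math.pi * f_before
    w2 = 2.0 * math.pi * f_after
    m = added_mass * w2**2 / (w1**2 - w2**2)
    k = m * w1**2
    return m, k

# Synthetic check with assumed values: m = 1000 kg, k = 4e6 N/m, dm = 100 kg
m_true, k_true, dm = 1000.0, 4.0e6, 100.0
f1 = math.sqrt(k_true / m_true) / (2 * math.pi)
f2 = math.sqrt(k_true / (m_true + dm)) / (2 * math.pi)
m_id, k_id = identify_mass_stiffness(f1, f2, dm)
print(round(m_id), round(k_id))  # 1000 4000000
```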
Study on simplified estimation of J-integral under thermal loading
International Nuclear Information System (INIS)
Takahashi, Y.
1993-01-01
For assessing the structural integrity or safety of nuclear power plants, the strength of structures in the presence of flaws sometimes needs to be evaluated. Because relatively large inelastic deformation is anticipated in liquid metal reactor components even without flaws, due to high operating temperatures and large temperature gradients, inelastic effects should be properly taken into account in the flaw assessment procedures. It is widely recognized that the J-integral and its variations - e.g. the fatigue J-integral range and the creep J-integral - play substantial roles in flaw assessment in the presence of large inelastic deformation. Their utilization has therefore been promoted in recent flaw assessment procedures for both low and high temperature plants. However, it is not very practical to conduct a detailed numerical computation for cracked structures to estimate the values of these parameters for the purpose of tracing crack growth history. Thus, the development of simplified estimation methods which do not require full numerical calculation for cracked structures is desirable. A method using normalized J-integral solutions tabulated in a handbook is a direct extension of its linear fracture mechanics counterpart, and it can be used for standard specimens and simple structural configurations subjected to specified loading types. The reference stress method has also been developed, but in this case limit load solutions, which are often difficult to obtain for general stress distributions, are necessary instead of nonlinear J-integral solutions. However, both methods have been developed mainly for mechanical loading, and applying these techniques to thermal stress problems is rather difficult except in cases where the thermal stress can properly be substituted by equivalent mechanical loading, as with simple thermal expansion loading. Therefore, an alternative approach should be pursued for estimating the J-integral and its variations in thermal stress problems
Simplified MPN method for enumeration of soil naphthalene degraders using gaseous substrate.
Wallenius, Kaisa; Lappi, Kaisa; Mikkonen, Anu; Wickström, Annika; Vaalama, Anu; Lehtinen, Taru; Suominen, Leena
2012-02-01
We describe a simplified microplate most-probable-number (MPN) procedure to quantify the bacterial naphthalene-degrader population in soil samples. In this method, the sole substrate, naphthalene, is dosed passively via the gaseous phase to liquid medium, and the detection of growth is based on automated measurement of turbidity using an absorbance reader. The performance of the new method was evaluated by comparison with a recently introduced method in which the substrate is dissolved in inert silicone oil and added individually to each well, and the results are scored visually using a respiration indicator dye. Oil-contaminated industrial soil showed a slightly but significantly higher MPN estimate with our method than with the reference method, suggesting that gaseous naphthalene was dissolved at a concentration adequate to support the growth of naphthalene degraders without being too toxic. Dosing the substrate via the gaseous phase notably reduced the workload and the risk of contamination. Result scoring by absorbance measurement was objective and more reliable than scoring with the indicator dye, and it also enabled further analysis of the cultures. Several bacterial genera were identified by cloning and sequencing of 16S rRNA genes from the MPN wells incubated in the presence of gaseous naphthalene. In addition, the applicability of the simplified MPN method was demonstrated by a significant positive correlation between the level of oil contamination and the number of naphthalene degraders detected in soil.
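The MPN arithmetic behind such a microplate assay can be sketched generically. The following maximum-likelihood estimator for a dilution series is standard MPN mathematics, not the authors' software; the volumes and well counts below are illustrative:

```python
import math

def mpn_estimate(volumes_ml, n_wells, n_positive, lo=1e-6, hi=1e6):
    """Most-probable-number estimate (organisms per ml) by maximum
    likelihood for a dilution series, solved by bisection on the score
    equation (generic MPN math, not the paper's software).

    For density d, a well receiving volume v is positive with
    probability 1 - exp(-d*v); the MLE solves
    sum_i p_i*v_i*exp(-d*v_i)/(1-exp(-d*v_i)) = sum_i (n_i-p_i)*v_i."""
    def score(d):
        s = 0.0
        for v, n, p in zip(volumes_ml, n_wells, n_positive):
            s += p * v * math.exp(-d * v) / (1.0 - math.exp(-d * v))
            s -= (n - p) * v
        return s
    for _ in range(200):  # geometric bisection: score is decreasing in d
        mid = math.sqrt(lo * hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Hypothetical 5-well series at 1, 0.1, 0.01 ml with 5/5, 3/5, 1/5 positives
print(round(mpn_estimate([1.0, 0.1, 0.01], [5, 5, 5], [5, 3, 1]), 1))
```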
Simplified Method for Rapid Purification of Soluble Histones
Directory of Open Access Journals (Sweden)
Nives Ivić
2016-06-01
Full text: Functional and structural studies of histone-chaperone complexes, nucleosome modifications, and their interactions with remodelers and regulatory proteins rely on obtaining recombinant histones from bacteria. In the present study, we show that co-expression of Xenopus laevis histone pairs leads to production of soluble H2A-H2B heterodimer and (H3-H4)2 heterotetramer. The soluble histone complexes are purified by simple chromatographic techniques. The obtained H2A-H2B dimer and H3-H4 tetramer are proficient in histone chaperone binding and in histone octamer and nucleosome formation. Our optimized protocol enables rapid purification of multiple soluble histone variants with a remarkably high yield and simplifies histone octamer preparation. We expect that this simple approach will contribute to histone chaperone and chromatin research. This work is licensed under a Creative Commons Attribution 4.0 International License.
Measurement of gastric emptying rate in humans. Simplified scanning method
Energy Technology Data Exchange (ETDEWEB)
Holt, S.; Colliver, J.; Guram, M.; Neal, C.; Verhulst, S.J.; Taylor, T.V. (Univ. of South Carolina School of Medicine, Columbia (USA))
1990-11-01
Simultaneous measurements of the gastric emptying rates of the solid and liquid phases of a dual-isotope-labeled test meal, made with a simple scanning apparatus similar to that used in a hand-held scintillation probe, were compared with simultaneous gamma camera measurements in 16 healthy males. A dual-labeled test meal was utilized to measure liquid and solid emptying simultaneously. Anterior and posterior scans were taken at intervals up to 120 min using both the gamma camera and the scintillation probe. Good relative agreement between the methods was obtained both for solid-phase (correlation range 0.92-0.99, mean 0.97) and for liquid-phase data (correlation range 0.93-0.99, mean 0.97). For solid-phase emptying data, regression line slopes varied from 0.75 to 1.03 (mean 0.84); for liquid-phase data, slopes ranged from 0.71 to 1.06 (mean 0.87). These results suggest that an estimate of the gamma-camera measurement can be obtained by multiplying the scintillation-probe measurement by a factor of 0.84 for the solid phase and 0.87 for the liquid phase. Correlation between repeat studies was 0.97 and 0.96 for solids and liquids, respectively. The hand-held probe technique provides a noninvasive and inexpensive method for accurately assessing solid- and liquid-phase gastric emptying from the human stomach that correlates well with the gamma camera, within the range of gastric emptying rates of the normal individuals in this study.
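The reported conversion can be applied directly. A minimal sketch using the study's regression-derived factors of 0.84 (solid phase) and 0.87 (liquid phase):

```python
def camera_equivalent(probe_reading, phase):
    """Convert a scintillation-probe gastric emptying measurement to an
    approximate gamma-camera equivalent using the study's regression-
    based factors: 0.84 for the solid phase, 0.87 for the liquid phase."""
    factor = {"solid": 0.84, "liquid": 0.87}[phase]
    return factor * probe_reading

# e.g. a probe reading of 50 (arbitrary retention units)
print(round(camera_equivalent(50.0, "solid"), 2))   # 42.0
print(round(camera_equivalent(50.0, "liquid"), 2))  # 43.5
```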
The large break LOCA evaluation method with the simplified statistic approach
International Nuclear Information System (INIS)
Kamata, Shinya; Kubo, Kazuo
2004-01-01
USNRC published the Code Scaling, Applicability and Uncertainty (CSAU) evaluation methodology for large break LOCA, which supported the revised rule for Emergency Core Cooling System performance, in 1989. USNRC Regulatory Guide 1.157 requires that the peak cladding temperature (PCT) not exceed 2200 deg F at the 95th percentile with high probability. In recent years, overseas countries have developed statistical methodologies and best estimate codes, with models that can provide more realistic simulation of the phenomena, based on the CSAU evaluation methodology. To calculate the PCT probability distribution by Monte Carlo trials, there are approaches such as the response surface technique using polynomials, the order statistics method, etc. For the purpose of performing a rational statistical analysis, Mitsubishi Heavy Industries, Ltd. (MHI) developed a statistical LOCA method using the best estimate LOCA code MCOBRA/TRAC and the simplified code HOTSPOT. HOTSPOT is a Monte Carlo heat conduction solver used to evaluate the uncertainties of the significant fuel parameters at the PCT positions of the hot rod. Direct uncertainty sensitivity studies can be performed without a response surface, because the Monte Carlo simulation for key parameters can be performed in a short time using HOTSPOT. With regard to the parameter uncertainties, MHI established the treatment whereby bounding conditions are given for the LOCA boundary and plant initial conditions, and the Monte Carlo simulation using HOTSPOT is applied to the significant fuel parameters. The paper describes the large break LOCA evaluation method with the simplified statistical approach and the results of applying the method to a representative four-loop nuclear power plant. (author)
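The order-statistics alternative to response surfaces mentioned above rests on Wilks' formula: with n independent code runs, the largest observed PCT bounds the 95th percentile with 95% confidence once 1 − 0.95^n ≥ 0.95. A minimal sketch of that standard run-count formula (not claimed to be MHI's exact procedure):

```python
import math

def wilks_sample_size(beta=0.95, gamma=0.95):
    """Smallest n such that the largest of n independent runs bounds the
    beta-quantile with confidence gamma (first-order Wilks formula):
    1 - beta**n >= gamma  =>  n >= ln(1 - gamma) / ln(beta)."""
    return math.ceil(math.log(1.0 - gamma) / math.log(beta))

print(wilks_sample_size())  # 59: the classic 95/95 run count
```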
RESEARCH ON THE BREEDING VALUE ESTIMATION FOR BEEF TRAITS BY A SIMPLIFIED MIXED MODEL
Directory of Open Access Journals (Sweden)
Agatha POPESCU
2014-10-01
Full text: The purpose of this paper was to apply a simplified mixed-model BLUP for estimating bulls' breeding value for meat production, in terms of daily weight gain, and to establish their hierarchy. It also aimed to compare the bulls' ranking obtained by the simplified BLUP mixed model with the hierarchy set up by contemporary comparison. A sample of 1,705 half-sib steers, offspring of 106 Friesian bulls, was used as biological material. Bulls' breeding values varied between +244.5 g for the best bull and -204.7 g for the bull with the weakest records. A total of 57 bulls (53.77%) registered positive breeding values. The accuracy of the breeding value estimation varied between 80 (the highest precision, for bull number 21) and 53 (the lowest precision, for bull number 38). Seven of the 57 bulls with a positive breeding value occupied approximately the same positions, within 0 to 1 points, on both lists established by BLUP and by contemporary comparison. In conclusion, BLUP could be widely and easily applied in bull evaluation for meat production traits in terms of daily weight gain, considered the key parameter during the fattening period, and its precision is very high, a guarantee that the bulls' hierarchy is a correct one. A farmer who chose a high-breeding-value bull from a catalogue could thus be sure of improving beef production by genetic gain.
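A simplified mixed-model BLUP of this kind can be sketched with a toy sire model. The code below solves Henderson's mixed-model equations for hypothetical daily-gain records; the data, variance ratio, and model structure are illustrative assumptions, not the paper's:

```python
import numpy as np

def sire_blup(y, sire_idx, n_sires, lam):
    """Minimal sire-model BLUP sketch: y = mu + u_sire + e, with
    variance ratio lam = sigma_e^2 / sigma_u^2.  Solves Henderson's
    mixed-model equations; returns (mu_hat, EBVs).  Illustrative only,
    not the paper's exact model."""
    n = len(y)
    X = np.ones((n, 1))                       # fixed effect: overall mean
    Z = np.zeros((n, n_sires))                # incidence matrix of sires
    Z[np.arange(n), sire_idx] = 1.0
    lhs = np.block([[X.T @ X, X.T @ Z],
                    [Z.T @ X, Z.T @ Z + lam * np.eye(n_sires)]])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[0], sol[1:]

# Toy data: daily gains (g) of 7 steers from 3 sires (hypothetical numbers)
y = np.array([950., 1000., 1050., 900., 920., 1100., 1150.])
sires = np.array([0, 0, 0, 1, 1, 2, 2])
mu, ebv = sire_blup(y, sires, 3, lam=15.0)
print(np.round(ebv, 1))  # shrunken estimated breeding values per sire
```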
International Nuclear Information System (INIS)
Picciotto, G.; Cacace, G.; Mosso, R.; De Filippi, P.G.; Cesana, P.; Ropolo, R.
1992-01-01
Chromium-51 ethylene diamine tetra-acetic acid (51Cr-EDTA) total plasma clearance was evaluated using a multi-sample method (12 blood samples) as the reference and compared with several simplified methods requiring only one or a few blood samples. The following five methods were evaluated: the terminal slope-intercept method with 3 blood samples, the simplified method of Broechner-Mortensen, and 3 single-sample methods (Constable, Christensen and Groth, Tauxe). Linear regression analysis was performed, and the standard error of estimate, bias and imprecision of the different methods were evaluated. For 51Cr-EDTA total plasma clearance greater than 30 ml/min, the results closest to the reference were obtained with the Christensen and Groth method at a sampling time of 300 min (inaccuracy of 4.9%). For clearances between 10 and 30 ml/min, single-sample methods failed to give reliable results; the terminal slope-intercept and Broechner-Mortensen methods were better, with inaccuracies of 17.7% and 16.9%, respectively. Although sampling times at 180, 240 and 300 min are time-consuming for patients, 51Cr-EDTA total plasma clearance can be accurately calculated for values greater than 10 ml/min using the Broechner-Mortensen method. In patients with clearance greater than 30 ml/min, single-sample techniques provide a good alternative to the multi-sample method; the choice of method depends on the degree of accuracy required. (orig.)
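The slope-intercept calculation and the Broechner-Mortensen correction referred to above can be sketched as follows. The correction coefficients used here are the widely quoted adult values, an assumption on our part rather than values stated in this abstract:

```python
import math

def slope_intercept_clearance(dose, t_min, counts):
    """Terminal slope-intercept clearance: fit ln(C) = ln(C0) - k*t to
    late plasma samples by least squares, then Cl = dose * k / C0
    (a single-exponential approximation that overestimates clearance)."""
    n = len(t_min)
    lx = [math.log(c) for c in counts]
    tbar = sum(t_min) / n
    ybar = sum(lx) / n
    k = -sum((t - tbar) * (y - ybar) for t, y in zip(t_min, lx)) \
        / sum((t - tbar) ** 2 for t in t_min)
    c0 = math.exp(ybar + k * tbar)
    return dose * k / c0

def brochner_mortensen(cl):
    """Correct the slope-intercept clearance for the missed fast
    exponential; coefficients are the widely used adult values
    (assumed here, not stated in the abstract).  cl in ml/min."""
    return 0.990778 * cl - 0.001218 * cl ** 2

# Synthetic mono-exponential samples at 180, 240, 300 min (C0 = 100, k = 0.01/min)
t = [180.0, 240.0, 300.0]
c = [100.0 * math.exp(-0.01 * ti) for ti in t]
cl = slope_intercept_clearance(1.0e6, t, c)
print(round(cl, 1), round(brochner_mortensen(cl), 1))  # 100.0 86.9
```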
Simplified methods and application to preliminary design of piping for elevated temperature service
International Nuclear Information System (INIS)
Severud, L.K.
1975-01-01
A number of simplified stress analysis methods and procedures that have been used on the FFTF project for the preliminary design of piping operating at elevated temperatures are described. The rationale and considerations involved in developing the procedures and preliminary design guidelines are given. Applications of the simplified methods to a few FFTF pipelines are described, and the success of these guidelines is measured by comparison with pipeline designs that have had detailed Code-type stress analyses. (U.S.)
Jibson, Randall W.; Jibson, Matthew W.
2003-01-01
Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method, which models a landslide as a rigid-plastic block sliding on an inclined plane, provides a useful way to predict approximate landslide displacements: it estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (an earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling of landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included, along with a search interface for selecting records based on a wide variety of record properties. Utilities allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. The program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OS X, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
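The core of a rigorous rigid-block Newmark analysis is a short double integration: sliding velocity accumulates whenever ground acceleration exceeds the block's critical (yield) acceleration, and displacement accumulates while the block slides. A minimal one-directional sketch (not the report's Java implementation):

```python
def newmark_displacement(accel_g, dt, critical_accel_g):
    """Rigid-block Newmark sliding analysis in its simplest form:
    integrate relative block velocity whenever ground acceleration
    exceeds the critical (yield) acceleration, then integrate that
    velocity to cumulative downslope displacement.

    accel_g: ground acceleration time history in g's
    dt: time step (s); critical_accel_g: yield acceleration in g's.
    Returns displacement in metres (one-directional sliding)."""
    g = 9.81
    v = 0.0   # relative sliding velocity (m/s)
    d = 0.0   # cumulative displacement (m)
    for a in accel_g:
        if v > 0.0:
            # block already sliding: decelerated by the yield acceleration
            v += (a - critical_accel_g) * g * dt
            v = max(v, 0.0)
        elif a > critical_accel_g:
            # sliding initiates when ground accel exceeds yield accel
            v = (a - critical_accel_g) * g * dt
        d += v * dt
    return d

# Synthetic pulse: 1 s at 0.3 g against an assumed 0.1 g yield acceleration
history = [0.3] * 100 + [0.0] * 200
disp = newmark_displacement(history, 0.01, 0.1)
print(round(disp, 3))  # a few metres of permanent displacement
```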
International Nuclear Information System (INIS)
Nakajima, Ken
2003-01-01
The applicability of four simplified methods for evaluating the consequences of criticality accidents was investigated. Fissions in the initial burst and total fissions were evaluated using the simplified methods, and the results were compared with past accident data. The simplified methods give the number of fissions in the initial burst as a function of solution volume; however, the accident data did not show such a tendency. This may be caused by the lack of accurate accident data for the initial burst. For total fissions, the simplified methods almost reproduced the upper envelope of the accidents. However, several accidents that were beyond the applicable conditions resulted in larger total fissions than the evaluations. In particular, the Tokai-mura accident in 1999 gave the largest total specific fissions, because the activation of the cooling system sustained a relatively high power for a long time. (author)
Energy Technology Data Exchange (ETDEWEB)
Patri, Sudheer, E-mail: patri@igcar.gov.in; Mohana, M.; Kameswari, K.; Kumar, S. Suresh; Narmadha, S.; Vijayshree, R.; Meikandamurthy, C.; Venkatesan, A.; Palanisami, K.; Murthy, D. Thirugnana; Babu, B.; Prakash, V.; Rajan, K.K.
2015-04-15
Highlights: • An alternative method for estimating the electromagnet clutch release time. • A systematic approach to developing a computer-based measuring system. • Prototype tests on the measurement system. • Accuracy of the method is ±6% and repeatability error is within 2%. - Abstract: The delay time in electromagnet clutch release during a reactor trip (scram action) is an important safety parameter, with a bearing on plant safety during various design basis events. It is generally measured using the current decay characteristics of the electromagnet coil and its energising circuit. A simplified method of measuring it in sodium-cooled fast reactors (SFRs) is proposed in this paper. The method utilises the position data of the control rod to estimate the delay time in electromagnet clutch release. A computer-based real-time system for measuring the electromagnet clutch delay time has been developed and qualified for retrofitting in the prototype fast breeder reactor. The stages involved in the development of the system are principle demonstration, experimental verification of hardware capabilities, and prototype system testing. Tests on the prototype system have demonstrated satisfactory performance with the intended accuracy and repeatability.
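The position-data principle can be sketched simply: the clutch release delay is read off as the interval between the scram command and the first detectable rod motion. The threshold, sampling rate, and synthetic trace below are illustrative assumptions:

```python
def clutch_release_delay(times_ms, positions_mm, trip_time_ms, threshold_mm=0.5):
    """Estimate the electromagnet clutch release delay from control-rod
    position data: the delay is the interval between the trip (scram)
    command and the first sample where the rod has moved more than a
    small threshold from its pre-trip position.  Threshold and units
    are illustrative assumptions, not the paper's calibration."""
    baseline = positions_mm[0]
    for t, x in zip(times_ms, positions_mm):
        if t >= trip_time_ms and abs(x - baseline) > threshold_mm:
            return t - trip_time_ms
    return None  # no motion detected

# Synthetic 1 kHz position trace: rod starts moving 40 ms after the trip at 100 ms
times = list(range(0, 200))                                       # ms
pos = [0.0 if t < 140 else 0.02 * (t - 140) ** 2 for t in times]  # mm
print(clutch_release_delay(times, pos, trip_time_ms=100))  # 46 (threshold adds ~6 ms lag)
```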
Simplified approaches to some nonoverlapping domain decomposition methods
Energy Technology Data Exchange (ETDEWEB)
Xu, Jinchao
1996-12-31
An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method; other simple technical tools include "local-global" and "global-local" techniques, the former for constructing a subspace preconditioner based on a preconditioner on the whole space, and the latter for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method"; the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods are presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
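The "parallel subspace correction" framework can be illustrated on a toy problem: overlapping subdomain solves are applied to the residual and summed. The sketch below is a generic damped additive Schwarz iteration for a 1D Poisson problem, an assumption-laden illustration rather than an implementation from the talk:

```python
import numpy as np

def additive_schwarz_1d(n=40, overlap=4, sweeps=100):
    """Toy parallel subspace correction (additive Schwarz) solver for
    -u'' = f on a uniform grid with homogeneous Dirichlet boundaries:
    each sweep solves the residual equation exactly on two overlapping
    subdomains and adds both corrections, damped to stay contractive."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = np.ones(n)
    doms = [np.arange(0, n // 2 + overlap), np.arange(n // 2 - overlap, n)]
    u = np.zeros(n)
    for _ in range(sweeps):
        r = f - A @ u
        du = np.zeros(n)
        for idx in doms:  # two local Dirichlet solves on the residual
            du[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        u += 0.5 * du     # damping keeps the additive iteration contractive
    return u, np.linalg.norm(f - A @ u) / np.linalg.norm(f)

u, relres = additive_schwarz_1d()
print(relres < 1e-3)
```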
A simplified method for scanning electron microscopy (SEM) autoradiography
International Nuclear Information System (INIS)
Shahar, A.; Lasher, R.
1980-01-01
The combination of autoradiography with SEM provides a valuable tool for the study of labeled biological materials, but previously described methods are complicated because they call first for the removal of gelatin from the film emulsion, followed by deposition of gold vapor on the specimen. The authors describe a much simpler method that can easily be adapted to routine examination of cell cultures. In this method, gelatin is not removed; the film is coated with vaporized carbon only. This procedure permits visualization of both the cellular image and the distribution of silver grains. (Auth.)
Applicability of simplified human reliability analysis methods for severe accidents
Energy Technology Data Exchange (ETDEWEB)
Boring, R.; St Germain, S. [Idaho National Lab., Idaho Falls, Idaho (United States); Banaseanu, G.; Chatri, H.; Akl, Y. [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)
2016-03-15
Most contemporary human reliability analysis (HRA) methods were created to analyse design-basis accidents at nuclear power plants. As part of a comprehensive expansion of risk assessments at many plants internationally, HRAs will begin considering severe accident scenarios. Severe accidents, while extremely rare, constitute high consequence events that significantly challenge successful operations and recovery. Challenges during severe accidents include degraded and hazardous operating conditions at the plant, the shift in control from the main control room to the technical support center, the unavailability of plant instrumentation, and the need to use different types of operating procedures. Such shifts in operations may also test key assumptions in existing HRA methods. This paper discusses key differences between design basis and severe accidents, reviews efforts to date to create customized HRA methods suitable for severe accidents, and recommends practices for adapting existing HRA methods that are already being used for HRAs at the plants. (author)
Simplified Method for Groundwater Treatment Using Dilution and Ceramic Filter
Musa, S.; Ariff, N. A.; Kadir, M. N. Abdul; Denan, F.
2016-07-01
Groundwater is a natural resource that is relatively well protected from pollutants. However, increasing municipal, industrial, agricultural, and extreme land-use activities have resulted in groundwater contamination, as has occurred at the Research Centre for Soft Soil Malaysia (RECESS), Universiti Tun Hussein Onn Malaysia (UTHM). The aim of this study is therefore to treat groundwater using rainwater and a simple ceramic filter as treatment agents. The treatment uses rainwater dilution, ceramic filtering, and a combined method of diluting and filtering as alternative treatments that are simpler and more practical than modern or chemical methods. Water that went through the dilution treatment achieved a 57% reduction relative to its initial condition, while water that passed through the filtering process removed as much as 86% of the groundwater parameters, with only chloride failing the standard. The combined dilution and filtration method gave favourable results, successfully eliminating 100% of the parameters that did not meet the standards of the Ministry of Health and the Interim National Drinking Water Quality Standard, especially sulfate and chloride, as found in the groundwater at RECESS, UTHM. As a result, the raw water can be used as clean and safe drinking water. This also proves that the method used in this study is very effective in improving the quality of groundwater.
Office-based sperm concentration: A simplified method for ...
African Journals Online (AJOL)
Methods: Semen samples from 51 sperm donors were used. Following swim-up separation, the sperm concentration of the retrieved motile fraction was counted, as well as progressive motile sperm using a standardised wet preparation. The number of sperm in a 10 μL droplet covered with a 22 × 22 mm coverslip was ...
Quick analysis of inelastic structures using a simplified method
International Nuclear Information System (INIS)
Ingelbert, G.; Frelat, J.
1989-01-01
The main difficulty in the analysis of plastic structures lies in the plasticity criterion. Usually an incremental method is needed to solve such a problem. The originality of the present method is the introduction of a new transformed parameter Y, linked to both the local material behaviour and structural coupling, which leads to an easy solution. Solutions for Y that give limit loads or the stabilized cyclic response to cyclic loads can easily be found. Some applications will be shown. Extensions to problems such as shot-peening, viscoelastic behaviour and transient dynamic loads have already been derived, applied and satisfactorily compared with experimental data. This approach has been shown to be a very useful design tool in the explored fields and could be applied to a very large class of other problems. (orig.)
A simplified ultrafiltration method for determination of serum free cortisol
International Nuclear Information System (INIS)
MacMahon, W.; Sgoutas, D.
1983-01-01
The authors describe the suitability of the Amicon MPS-1 centrifugal ultrafiltration device and the YMB membrane for measuring free cortisol in serum. The method combines two independent assays: total cortisol and the ultrafiltrate fraction of added [³H]cortisol. The unbound fraction is determined in 0.25-0.30 ml of ultrafiltrate collected from 0.6 to 1 ml of serum that has been equilibrated with [³H]cortisol at 37 °C for 20 min. The assay is rapid (less than 1 h), practical (no more than 0.6 ml of serum is necessary) and repeatable (CV: 3.8% within-assay and 12.2% in different assays). Error introduced in free cortisol measurement due to dilution effects in dialysis is systematically defined, and the effect of tracer purity on the ultrafiltration method is examined. Dialyzed sera from normal men and women, from patients with Cushing's disease and adrenal insufficiency, and from pregnant women gave ultrafiltration results that accurately duplicated those obtained by previous dialysis. (Auth.)
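The arithmetic behind such a tracer ultrafiltration assay is simple enough to sketch. The function below is an illustrative calculation, not code from the paper: it assumes the free fraction is the ratio of tracer counts per millilitre in the ultrafiltrate to counts per millilitre in the equilibrated serum, and the function name and all numbers are hypothetical.

```python
def free_cortisol(total_nmol_l, cpm_per_ml_filtrate, cpm_per_ml_serum):
    """Free serum cortisol from a tracer ultrafiltration experiment.

    The unbound (free) fraction is taken as tracer counts per mL of
    ultrafiltrate divided by counts per mL of the equilibrated serum;
    multiplying by total cortisol gives the free concentration.
    """
    free_fraction = cpm_per_ml_filtrate / cpm_per_ml_serum
    return total_nmol_l * free_fraction

# Hypothetical numbers: total cortisol 400 nmol/L, 4% of tracer filterable
print(free_cortisol(400.0, 120.0, 3000.0))  # 16.0 nmol/L free cortisol
```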
Simplified Hybrid-Secondary Uncluttered Machine And Method
Hsu, John S [Oak Ridge, TN
2005-05-10
An electric machine (40, 40') has a stator (43) and a rotor (46) separated by a primary air gap (48), and has secondary coils (47c, 47d) separated from the rotor (46) by a secondary air gap (49) so as to induce a slip current in the secondary coils (47c, 47d). The rotor (46, 76) has magnetic brushes (A, B, C, D) or wires (80) which couple flux through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments. A method of providing a slip energy controller is also disclosed.
Seismic analysis of long tunnels: A review of simplified and unified methods
Directory of Open Access Journals (Sweden)
Haitao Yu
2017-06-01
Seismic analysis of long tunnels is important for safety evaluation of the tunnel structure during earthquakes. Simplified models of long tunnels are commonly adopted in seismic design by practitioners, in which the tunnel is usually assumed as a beam supported by the ground. These models can be conveniently used to obtain the overall response of the tunnel structure subjected to seismic loading. However, simplified methods are limited due to the assumptions that need to be made to reach the solution, e.g. shield tunnels are assembled with segments and bolts to form a lining ring and such structural details may not be included in the simplified model. In most cases, the design will require a numerical method that does not have the shortcomings of the analytical solutions, as it can consider the structural details, non-linear behavior, etc. Furthermore, long tunnels have significant length and pass through different strata. All of these would require large-scale seismic analysis of long tunnels with three-dimensional models, which is difficult due to the lack of available computing power. This paper introduces two types of methods for seismic analysis of long tunnels, namely simplified and unified methods. Several models, including the mass-spring-beam model, and the beam-spring model and its analytical solution are presented as examples of the simplified method. The unified method is based on a multiscale framework for long tunnels, with coarse and refined finite element meshes, or with the discrete element method and the finite difference method to compute the overall seismic response of the tunnel while including detailed dynamic response at positions of potential damage or of interest. A bridging scale term is introduced in the framework so that compatibility of dynamic behavior between the macro- and meso-scale subdomains is enforced. Examples are presented to demonstrate the applicability of the simplified and the unified methods.
A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.
Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M
2015-11-13
We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C⁻¹ on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach
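The pool-based decomposition idea described above can be sketched in a few lines. This is a hedged illustration, not the PInc-PanTher code: the pool sizes, base rates, and Q10 value below are invented parameters, whereas the actual approach uses incubation-fitted three-pool parameters per soil horizon type and modelled soil temperatures.

```python
import math

def carbon_loss(pools_pg_c, rates_per_yr_at_5c, temp_c, years, q10=2.5):
    """Total C released from thawed pools: first-order decay of each pool,
    with a Q10 temperature scaling of its base decomposition rate."""
    loss = 0.0
    for c0, k5 in zip(pools_pg_c, rates_per_yr_at_5c):
        k = k5 * q10 ** ((temp_c - 5.0) / 10.0)    # rate at soil temperature
        loss += c0 * (1.0 - math.exp(-k * years))  # C lost by time `years`
    return loss

# Hypothetical fast/slow/passive pools (Pg C) and base rates (1/yr)
print(round(carbon_loss([50, 300, 650], [0.2, 0.01, 0.0005], 8.0, 90.0), 1))
```

Warmer soils decompose faster, so losses rise monotonically with temperature, which is the behaviour behind the roughly linear gamma sensitivity quoted above.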
A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities.
Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.
This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…
Simplified solutions of the Cox-Thompson inverse scattering method at fixed energy
International Nuclear Information System (INIS)
Palmai, Tamas; Apagyi, Barnabas; Horvath, Miklos
2008-01-01
Simplified solutions of the Cox-Thompson inverse quantum scattering method at fixed energy are derived when a finite number of partial waves with only even or odd angular momenta contribute to the scattering process. Based on the new formulae, various approximate methods are introduced that also prove applicable to generic scattering events
Evaluation of single-sided natural ventilation using a simplified and fair calculation method
DEFF Research Database (Denmark)
Plesner, Christoffer; Larsen, Tine Steen; Leprince, Valérie
2016-01-01
the scope of standards and regulations in the best way. This has been done by comparing design expressions using parameter variations, comparison to wind-tunnel experiments and full-scale outdoor measurements. A modified De Gids & Phaff method showed to be a simplified and fair calculation method that would...
International Nuclear Information System (INIS)
Wang, Xiaoliang; Lei, Bo; Bi, Haiquan; Yu, Tao
2017-01-01
Highlights: • A simplified method for evaluating thermal performance of UTC is developed. • Experiments, numerical simulations, dimensional analysis and data fitting are used. • The correlation of absorber plate temperature for UTC is established. • The empirical correlation of heat exchange effectiveness for UTC is proposed. - Abstract: Due to the advantages of low investment and high energy efficiency, unglazed transpired solar collectors (UTC) have been widely used for heating in buildings. However, it is difficult for designers to quickly evaluate the thermal performance of UTC based on the conventional methods such as experiments and numerical simulations. Therefore, a simple and fast method to determine the thermal performance of UTC is indispensable. The objective of this work is to provide a simplified calculation method to easily evaluate the thermal performance of UTC under steady state. Different parameters are considered in the simplified method, including pitch, perforation diameter, solar radiation, solar absorptivity, approach velocity, ambient air temperature, absorber plate temperature, and so on. Based on existing design parameters and operating conditions, correlations for the absorber plate temperature and the heat exchange effectiveness are developed using dimensional analysis and data fitting, respectively. Results show that the proposed simplified method has a high accuracy and can be employed to evaluate the collector efficiency, the heat exchange effectiveness and the air temperature rise. The proposed method in this paper is beneficial to directly determine design parameters and operating status for UTC.
Simplified inelastic analysis methods applied to fast breeder reactor core design
International Nuclear Information System (INIS)
Abo-El-Ata, M.M.
1978-01-01
The paper starts with a review of some currently available simplified inelastic analysis methods used in elevated-temperature design for evaluating plastic and thermal creep strains. The primary purpose of the paper is to investigate how these simplified methods may be applied to fast breeder reactor core design, where neutron irradiation effects are significant. One of the problems discussed is irradiation-induced creep and its effect on shakedown, ratcheting, and plastic cycling. Another problem is the development of swelling-induced stress, which is an additional loading mechanism and must be taken into account. In this respect, an expression for swelling-induced stress in the presence of irradiation creep is derived, and a model for simplifying the stress analysis under these conditions is proposed. As an example, the effects of irradiation creep and swelling-induced stress on the analysis of a thin-walled tube under constant internal pressure and intermittent heat fluxes, simulating a fuel pin, are presented
de Carvalho, Fábio Romeu; Abe, Jair Minoro
2010-11-01
Two recent non-classical logics have been used for decision making: fuzzy logic and paraconsistent annotated evidential logic Et. In this paper we present a simplified version of the fuzzy decision method and compare it with the paraconsistent one. Paraconsistent annotated evidential logic Et, introduced by Da Costa, Vago and Subrahmanian (1991), is capable of handling uncertain and contradictory data without becoming trivial. It has been used in many applications, such as information technology, robotics, artificial intelligence, production engineering, and decision making. Intuitively, an Et logic formula is of type p(a, b), in which a and b belong to [0, 1] (the real interval) and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) found in p. The set of all pairs (a, b), called annotations, when plotted, forms the Cartesian Unitary Square (CUS). This set, equipped with an order relation similar to that of the real numbers, forms a lattice, called the lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It tries to systematize the study of knowledge, seeking mainly to study fuzzy knowledge (you don't know what it means) and to distinguish it from imprecise knowledge (you know what it means, but you don't know its exact value). This logic is similar to the paraconsistent annotated one, except that it attributes a single numeric value (one, not two) to each proposition (so it can be called a one-valued logic). This number translates the intensity (the degree) to which the proposition is true. Let X be a set and A a subset of X, characterized by a membership function f(x). For each element x∈X, we have y = f(x)∈[0, 1]. The number y is called the degree of membership of x in A. Decision-making theories based on these logics have been shown to be powerful in many respects compared with more traditional methods, such as those based on statistics. In this paper we present a first study for a simplified
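To make the (a, b) annotation concrete, here is a minimal, hypothetical para-analyzer in the spirit of decision methods built on logic Et. The certainty and contradiction degrees are standard constructions over the annotation lattice, but the threshold value and the decision labels are illustrative assumptions, not taken from this paper.

```python
def et_decision(a, b, level=0.6):
    """Decide on a proposition p annotated with favorable evidence a and
    contrary evidence b, both in [0, 1] (an Et logic annotation)."""
    certainty = a - b          # H: leans toward true (+) or false (-)
    contradiction = a + b - 1  # G: inconsistent (+) or undetermined (-)
    if certainty >= level:
        return "viable"
    if -certainty >= level:
        return "not viable"
    # Neither belief nor disbelief dominates: the contradiction degree G
    # tells whether evidence is conflicting (G > 0) or merely scarce (G < 0).
    return "inconclusive"

print(et_decision(0.9, 0.1))  # strong belief, weak disbelief -> "viable"
```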
Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing.
Kim, YoungHwan; Reigluth, Charles M.
The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…
A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro
Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman
1996-01-01
Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.
A successive over-relaxation for slab geometry Simplified SN method with interface flux iteration
International Nuclear Information System (INIS)
Yavuz, M.
1995-01-01
A successive over-relaxation scheme is proposed for speeding up the solution of one-group slab-geometry transport problems using a Simplified SN method. The solution of the Simplified SN method, which is completely free from all spatial truncation errors, is based on the expansion of the angular flux in spherical-harmonics solutions. One way to obtain the (numerical) solution of the Simplified SN method is to use Interface Flux Iteration, which can be considered a Gauss-Seidel relaxation scheme: the new information is immediately used in the calculations. To accelerate the convergence, an over-relaxation parameter is employed in the solution algorithm. The over-relaxation parameters for a number of cases, depending on scattering ratios and mesh sizes, are determined by Fourier analyzing the infinite-medium Simplified S2 equations. Using such over-relaxation parameters in the iterative scheme, a significant increase in the convergence rate of transport problems can be achieved for coarse spatial cells whose widths are greater than one mean free path
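The relaxation idea itself (Gauss-Seidel sweeps accelerated by an over-relaxation factor omega) is generic and easy to illustrate outside the transport setting. The sketch below applies SOR to a small linear system; it is not the Simplified SN interface-flux code, and the choice omega = 1.2 is an arbitrary illustration rather than a Fourier-derived optimum.

```python
def sor(a, b, omega=1.2, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for Ax = b (omega = 1 is Gauss-Seidel)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # Gauss-Seidel value using the newest available iterates...
            sigma = sum(a[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - sigma) / a[i][i]
            # ...then push past it by the over-relaxation factor
            change = omega * (gs - x[i])
            x[i] += change
            max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return x

print(sor([[4.0, -1.0], [-1.0, 4.0]], [3.0, 3.0]))  # converges to ~[1.0, 1.0]
```

With 1 < omega < 2 the iteration overshoots each Gauss-Seidel update slightly, which for well-conditioned systems reduces the number of sweeps needed to converge.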
Update and Improve Subsection NH - Alternative Simplified Creep-Fatigue Design Methods
International Nuclear Information System (INIS)
Asayama, Tai
2009-01-01
This report describes the results of an investigation of Task 10 of the DOE/ASME Materials NGNP/Generation IV Project, based on a contract between ASME Standards Technology, LLC (ASME ST-LLC) and the Japan Atomic Energy Agency (JAEA). Task 10 is to Update and Improve Subsection NH -- Alternative Simplified Creep-Fatigue Design Methods. Five newly proposed, promising creep-fatigue evaluation methods were investigated: (1) the modified ductility exhaustion method, (2) the strain range separation method, (3) the approach for pressure vessel application, (4) the hybrid method of time fraction and ductility exhaustion, and (5) the simplified model test approach. The outlines of these methods are presented first, and their ability to predict experimental results is demonstrated using the creep-fatigue data collected in previous Tasks 3 and 5. All the methods (except the simplified model test approach, which is not ready for application) predicted experimental results fairly accurately. On the other hand, the predicted creep-fatigue lives in long-term regions showed considerable differences among the methodologies. These differences come from the concepts each method is based on. All the new methods investigated in this report have advantages over the currently employed time fraction rule and offer technical insights that should be given serious consideration in the improvement of creep-fatigue evaluation procedures. The main points of the modified ductility exhaustion method, the strain range separation method, the approach for pressure vessel application and the hybrid method can be reflected in the improvement of the current time fraction rule. The simplified model test approach would offer entirely new advantages, including robustness and simplicity, which are definitely attractive, but this approach is yet to be validated for implementation at this point. Therefore, this report recommends the following two steps as a course of improvement of NH based on the newly proposed creep-fatigue evaluation
Directory of Open Access Journals (Sweden)
Y. Zhao
2017-06-01
Local line rolling forming is a common forming approach for the complex-curvature plates of ships. However, a processing mode based on workers' experience is still applied at present, because it is difficult to determine, in an integrated way, the relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and efficient numerical computation method becomes crucial in the development of an automated local line rolling forming system for producing the complex-curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, with a substantial reduction in calculation time. The application of the simplified deformation simulation method was then further explored in the case of multiple rolling loading paths, and it was also utilized to calculate local line rolling forming for a typical complex-curvature plate of a ship. The research findings indicate that the simplified deformation simulation method is an effective tool for rapidly obtaining the relationships between the forming shape, processing path, and process parameters.
3D Bearing Capacity of Structured Cells Supported on Cohesive Soil: Simplified Analysis Method
Directory of Open Access Journals (Sweden)
Martínez-Galván Sergio Antonio
2013-06-01
In this paper, a simplified analysis method is proposed to compute the bearing capacity of structured cell foundations subjected to vertical loading and supported by soft cohesive soil. A structured cell comprises a top concrete slab structurally connected to external concrete walls that enclose the natural soil. Unlike a box foundation, it does not include a bottom slab, and hence the soil within the walls becomes an important component of the structured cell. The simplified method considers the three-dimensional geometry of the cell, the undrained shear strength of cohesive soils, and the structural continuity between the top concrete slab and the surrounding walls, along the walls themselves and at the walls' structural joints. The method was developed from the results of numerical-parametric analyses, from which it was found that structured cells fail according to a punching-type mechanism.
Directory of Open Access Journals (Sweden)
Agatha POPESCU
2014-10-01
The goal of this paper was to set up a simplified BLUP model for estimating bulls' breeding value for milk production traits and establishing their ranking, and to compare the ranking obtained with the simplified BLUP model against the ranking established using the traditional contemporary comparison method. For this purpose, 51 Romanian Friesian bulls were evaluated for their breeding value for milk production traits (milk yield, fat percentage and fat yield during the 305 days of the first lactation) using records from 1,989 daughters in various dairy herds. The simplified BLUP model set up in this research demonstrated high precision of the breeding values, which varied between 55 and 92, and moreover showed that in some cases the position occupied by a bull can be similar to the one registered using the contemporary comparison. The higher precision assured by the simplified BLUP model is the guarantee that the bulls' ranking in catalogues is a correct one. In this way, farmers can choose the best bulls for improving milk yield in their dairy herds.
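For readers unfamiliar with BLUP, the core computation is solving Henderson's mixed-model equations. The toy sire model below is a generic sketch with hypothetical data, herd structure, and variance ratio; it is not the simplified model developed in the paper.

```python
import numpy as np

def blup(x, z, y, lam):
    """Solve Henderson's mixed-model equations for y = Xb + Zu + e.

    b: fixed effects (e.g. a herd mean), u: random sire effects (EBVs);
    lam = var(e) / var(u) shrinks the estimated breeding values.
    """
    lhs = np.block([
        [x.T @ x,               x.T @ z],
        [z.T @ x, z.T @ z + lam * np.eye(z.shape[1])],
    ])
    rhs = np.concatenate([x.T @ y, z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:x.shape[1]], sol[x.shape[1]:]

# Four daughter records, one overall mean, two sires (hypothetical data)
x = np.ones((4, 1))
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
y = np.array([10.0, 12.0, 8.0, 9.0])
b_hat, u_hat = blup(x, z, y, lam=2.0)
print(b_hat, u_hat)  # b ~ [9.75], u ~ [0.625, -0.625]: sire 1 ranks higher
```

The shrinkage term lam * I is what makes these predictions "best linear unbiased": sires with few daughters are pulled toward the mean rather than taken at face value.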
Appraisal of elastic follow-up for a generic mechanical structure through two simplified methods
International Nuclear Information System (INIS)
Gamboni, S.; Ravera, C.; Stretti, G.; Rebora, A.
1989-01-01
Elastic follow-up (EFU) is a complex phenomenon which affects the behaviour of some structural components, especially in high temperature operations. One of the major problems encountered by the designer is the quantitative evaluation of the amount of elastic follow-up that must be taken into account for the structures under examination. In the present paper a review of the guidance furnished by the ASME Code regarding EFU is presented through an application concerning a structural problem in which EFU occurs. This has been carried out with the additional purpose of comparing the percentage EFU obtained by two simplified methods: an inelastic simplified method involving relaxation analysis; the reduced elastic modulus procedure generally used for EFU problems in piping systems. The results obtained demonstrate a substantial agreement between the two methodologies when applied to a general type structure. (author)
Calculation methods for single-sided natural ventilation - simplified or detailed?
DEFF Research Database (Denmark)
Larsen, Tine Steen; Plesner, Christoffer; Leprince, Valérie
2016-01-01
A great energy saving potential lies within increased use of natural ventilation, not only during summer and midseason periods, where it is mainly used today, but also during winter periods, where the outdoor air holds a great cooling potential for ventilative cooling if draft problems can...... be handled. This paper presents a newly developed simplified calculation method for single-sided natural ventilation, which is proposed for the revised standard FprEN 16798-7 (earlier EN 15242:2007) for design of ventilative cooling. The aim for predicting ventilative cooling is to find the most suitable......, while maintaining an acceptable correlation with measurements on average and the authors consider the simplified calculation method well suited for the use in standards such as FprEN 16798-7 for the ventilative cooling effects from single-sided natural ventilation The comparison of different design...
International Nuclear Information System (INIS)
Severud, L.K.
1987-01-01
Simplified methods for predicting equivalent viscous damping are used to assess damping contributions due to piping inelastic plastic-hinge action and support snubbers. These increments are compared to experimental findings from shake and snap-back tests of several pipe systems. Good correlations were found, confirming the usefulness of the simplified methods
CSIR Research Space (South Africa)
Snedden, Glen C
2003-09-01
Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods (ISABE-2005-1130). Glen Snedden, Thomas Roos and Kavendra Naidoo, CSIR, Defencetek. [Nomenclature: Taw, adiabatic wall temperature; y+, near-wall Reynolds number.] In order to calculate the life degradation of gas turbine disc assemblies, it is necessary to model the transient thermal and mechanical...
Senay, G.B.; Budde, Michael; Verdin, J.P.; Melesse, Assefa M.
2007-01-01
Accurate crop performance monitoring and production estimation are critical for timely assessment of the food balance of several countries in the world. Since 2001, the Famine Early Warning Systems Network (FEWS NET) has been monitoring crop performance and relative production using satellite-derived data and simulation models in Africa, Central America, and Afghanistan where ground-based monitoring is limited because of a scarcity of weather stations. The commonly used crop monitoring models are based on a crop water-balance algorithm with inputs from satellite-derived rainfall estimates. These models are useful to monitor rainfed agriculture, but they are ineffective for irrigated areas. This study focused on Afghanistan, where over 80 percent of agricultural production comes from irrigated lands. We developed and implemented a Simplified Surface Energy Balance (SSEB) model to monitor and assess the performance of irrigated agriculture in Afghanistan using a combination of 1-km thermal data and 250m Normalized Difference Vegetation Index (NDVI) data, both from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. We estimated seasonal actual evapotranspiration (ETa) over a period of six years (2000-2005) for two major irrigated river basins in Afghanistan, the Kabul and the Helmand, by analyzing up to 19 cloud-free thermal and NDVI images from each year. These seasonal ETa estimates were used as relative indicators of year-to-year production magnitude differences. The temporal water-use pattern of the two irrigated basins was indicative of the cropping patterns specific to each region. Our results were comparable to field reports and to estimates based on watershed-wide crop water-balance model results. For example, both methods found that the 2003 seasonal ETa was the highest of all six years. The method also captured water management scenarios where a unique year-to-year variability was identified in addition to water-use differences between
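The core of SSEB-style scaling is a linear interpolation of land-surface temperature between "hot" (bare, dry) and "cold" (well-watered) reference pixels. The snippet below is a schematic of that fraction calculation only, with invented temperatures; in the published method the reference pixels are selected from the MODIS scenes themselves and the fraction scales a reference ET.

```python
def sseb_eta(ts, t_hot, t_cold, eto):
    """Actual ET via the Simplified Surface Energy Balance scaling idea.

    ts: land-surface temperature of the pixel (K); t_hot / t_cold:
    hot and cold reference pixel temperatures (K); eto: reference ET
    (e.g. mm/day). Cooler pixels evaporate more, so the ET fraction
    rises as ts approaches the cold reference.
    """
    etf = (t_hot - ts) / (t_hot - t_cold)  # ET fraction, nominally in [0, 1]
    etf = min(max(etf, 0.0), 1.0)          # clamp non-physical values
    return etf * eto

print(sseb_eta(ts=305.0, t_hot=320.0, t_cold=295.0, eto=6.0))  # 3.6 mm/day
```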
Energy Technology Data Exchange (ETDEWEB)
Ando, Masanori, E-mail: ando.masanori@jaea.go.jp; Takaya, Shigeru, E-mail: takaya.shigeru@jaea.go.jp
2016-12-15
Highlights: • Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel is proposed. • A simplified evaluation method is also proposed for the codification. • Both proposed evaluation method was validated by the plate bending test. • For codification, the local stress and strain behavior was analyzed. - Abstract: In the present study, to develop an evaluation procedure and design rules for Mod.9Cr-1Mo steel weld joints, a method for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints was proposed based on finite element analysis (FEA) and a series of cyclic plate bending tests of longitudinal and horizontal seamed plates. The strain concentration and redistribution behaviors were evaluated and the failure cycles were estimated using FEA by considering the test conditions and metallurgical discontinuities in the weld joints. Inelastic FEA models consisting of the base metal, heat-affected zone and weld metal were employed to estimate the elastic follow-up behavior caused by the metallurgical discontinuities. The elastic follow-up factors determined by comparing the elastic and inelastic FEA results were determined to be less than 1.5. Based on the estimated elastic follow-up factors obtained via inelastic FEA, a simplified technique using elastic FEA was proposed for evaluating the creep-fatigue life in Mod.9Cr-1Mo steel weld joints. The creep-fatigue life obtained using the plate bending test was compared to those estimated from the results of inelastic FEA and by a simplified evaluation method.
Simplified method for the transverse bending analysis of twin celled concrete box girder bridges
Chithra, J.; Nagarajan, Praveen; S, Sajith A.
2018-03-01
Box girder bridges are one of the best options for bridges with spans greater than 25 m. For the study of these bridges, three-dimensional finite element analysis is the best suited method. However, performing three-dimensional analysis for routine design is difficult as well as time consuming, and the software used for three-dimensional analysis is very expensive. Hence designers resort to simplified analyses for predicting longitudinal and transverse bending moments. Among the many analytical methods used to find the transverse bending moments, simplified frame analysis (SFA) is the simplest and the most widely used in design offices. Results from SFA can be used for the preliminary analysis of concrete box girder bridges. From the review of the literature, it is found that the majority of the work done using SFA is restricted to the analysis of single cell box girder bridges; not much work has been done on the analysis of multi-cell concrete box girder bridges. In the present study, a double cell concrete box girder bridge is chosen. The bridge is modelled using three-dimensional finite element software and the results are then compared with the simplified frame analysis. The study mainly focuses on establishing correction factors for transverse bending moment values obtained from SFA.
A simplified Excel® algorithm for estimating the least limiting water range of soils
Directory of Open Access Journals (Sweden)
Leão Tairone Paiva
2004-01-01
The least limiting water range (LLWR) of soils has been employed as a methodological approach for the evaluation of soil physical quality in different agricultural systems, including forestry, grasslands, and major crops. However, the absence of a simplified methodology for the quantification of the LLWR has hampered its popularization among researchers and soil managers. Taking this into account, this work proposes and describes a simplified algorithm developed in Excel® software for quantification of the LLWR, including the calculation of the critical bulk density, at which the LLWR becomes zero. Despite the simplicity of the procedures and numerical optimization techniques used, the nonlinear regression produced reliable results when compared to those found in the literature.
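The arithmetic at the heart of such an algorithm is compact enough to sketch. The function below is a hypothetical Python reduction of the LLWR definition (not the authors' Excel® implementation): the wet limit is the lesser of the field capacity and the 10% air-filled-porosity water contents, the dry limit the greater of the wilting-point and limiting-penetration-resistance water contents; all inputs are illustrative.

```python
def llwr(theta_fc, theta_wp, theta_pr, bulk_density,
         particle_density=2.65, min_air_porosity=0.10):
    """Least limiting water range (cm3/cm3) from its four limiting water contents."""
    porosity = 1.0 - bulk_density / particle_density
    theta_afp = porosity - min_air_porosity   # water content at 10% air-filled porosity
    upper = min(theta_fc, theta_afp)          # wet limit: field capacity vs. aeration
    lower = max(theta_wp, theta_pr)           # dry limit: wilting point vs. resistance
    return max(upper - lower, 0.0)            # zero at or above the critical bulk density
```

Sweeping `bulk_density` upward until the returned range hits zero locates the critical bulk density mentioned in the abstract.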
River Discharge Estimation by Using Altimetry Data and Simplified Flood Routing Modeling
Directory of Open Access Journals (Sweden)
Tommaso Moramarco
2013-08-01
A methodology to estimate the discharge along rivers, even poorly gauged ones, taking advantage of water level measurements derived from satellite altimetry is proposed. The procedure is based on the application of the Rating Curve Model (RCM), a simple method allowing for the estimation of the flow conditions in a river section using only water levels recorded at that site and the discharges observed at another upstream section. The European Remote-Sensing Satellite 2 (ERS-2) and the Environmental Satellite (ENVISAT) altimetry data are used to provide the time series of water levels needed for the application of RCM. In order to evaluate the usefulness of the approach, the results are compared with those obtained by applying an empirical formula that allows discharge estimation from remotely sensed hydraulic information. To test the proposed procedure, a 236-km reach of the Po River is investigated, for which five in situ stations and four satellite tracks are available. Results show that RCM is able to represent the discharge appropriately, and its performance is better than that of the empirical formula, although the latter does not require upstream hydrometric data. Given its simple formal structure, the proposed approach can be conveniently utilized at ungauged sites where only a survey of the cross-section is needed.
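One common form of the RCM expresses the downstream discharge as a linear function of the upstream discharge rescaled by the ratio of flow areas, with two parameters fitted by least squares. The sketch below is an illustrative reconstruction of that idea, not the paper's exact formulation; all series names are placeholders.

```python
import numpy as np

def fit_rcm(area_down, area_up_lag, q_up_lag, q_down_obs):
    # Q_d(t) ~ alpha * (A_d(t)/A_u(t-T)) * Q_u(t-T) + beta, linear in (alpha, beta)
    x = np.asarray(area_down) / np.asarray(area_up_lag) * np.asarray(q_up_lag)
    A = np.column_stack([x, np.ones_like(x)])
    (alpha, beta), *_ = np.linalg.lstsq(A, np.asarray(q_down_obs), rcond=None)
    return alpha, beta

def predict_rcm(alpha, beta, area_down, area_up_lag, q_up_lag):
    x = np.asarray(area_down) / np.asarray(area_up_lag) * np.asarray(q_up_lag)
    return alpha * x + beta
```

In an altimetry setting, the downstream flow areas would come from satellite water levels combined with a surveyed cross-section, which is why only the cross-section survey is needed at the ungauged site.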
Simplified analysis method for vibration of fusion reactor components with magnetic damping
International Nuclear Information System (INIS)
Tanaka, Yoshikazu; Horie, Tomoyoshi; Niho, Tomoya
2000-01-01
This paper describes two simplified analysis methods for magnetically damped vibration. One modifies the result of a finite element uncoupled analysis using the coupling intensity parameter; the other uses the solution and coupled eigenvalues of a single-degree-of-freedom coupled model. To verify these methods, numerical analyses of a plate and a thin cylinder are performed. The comparison between the results of the former method and the finite element tightly coupled analysis shows almost satisfactory agreement. The results of the latter method agree very well with the finite element tightly coupled results because of the coupled eigenvalues. Since vibration with magnetic damping can be evaluated using these methods without finite element coupled analysis, these approximate methods will be practical and useful for a wide range of design analyses taking account of the magnetic damping effect.
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method, which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires less virtual memory, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.
A Simplified Model to Estimate the Concentration of Inorganic Ions and Heavy Metals in Rivers
Directory of Open Access Journals (Sweden)
Clemêncio Nhantumbo
2016-10-01
This paper presents a model that uses only pH, alkalinity, and temperature to estimate the concentrations of major ions in rivers (Na+, K+, Mg2+, Ca2+, HCO3−, SO42−, Cl−, and NO3−) together with the equilibrium concentrations of minor ions and heavy metals (Fe3+, Mn2+, Cd2+, Cu2+, Al3+, Pb2+, and Zn2+). Mining operations have been increasing, which has led to changes in the pollution loads to receiving water systems, while most developing countries cannot afford water quality monitoring. A possible solution is to implement less resource-demanding monitoring programs, supported by mathematical models that minimize the required sampling and analysis while still being able to detect water quality changes, thereby allowing implementation of measures to protect the water resources. The present model was developed using existing theories for: (i) carbonate equilibrium; (ii) total alkalinity; (iii) statistics of major ions; (iv) solubility of minerals; and (v) conductivity of salts in water. The model includes two options to estimate the concentrations of major ions: (1) a generalized method, which employs standard values from a worldwide database; and (2) a customized method, which requires specific baseline data for the river of interest. The model was tested using data from four monitoring stations on Swedish rivers with satisfactory results.
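The carbonate-equilibrium step of such a model is simple enough to sketch. Assuming the alkalinity is dominated by the carbonate system, bicarbonate and carbonate concentrations follow from pH and total alkalinity alone; the equilibrium constants below are nominal 25 °C values, not the paper's calibrated ones, and temperature dependence is omitted.

```python
def carbonate_speciation(pH, total_alkalinity, pK2=10.33, pKw=14.0):
    """[HCO3-] and [CO3--] (mol/L) from pH and total alkalinity (eq/L).

    Assumes alkalinity = [HCO3-] + 2[CO3--] + [OH-] - [H+] (carbonate-dominated water).
    """
    h = 10.0 ** (-pH)
    oh = 10.0 ** (-pKw) / h
    k2 = 10.0 ** (-pK2)                            # HCO3- <-> CO3-- + H+
    hco3 = (total_alkalinity + h - oh) / (1.0 + 2.0 * k2 / h)
    co3 = k2 * hco3 / h
    return hco3, co3
```

At river-water pH, bicarbonate dominates, so alkalinity is a good first estimate of [HCO3−], which is what makes the "pH plus alkalinity" input set so informative.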
Evaluation of simplified DNA extraction methods for emm typing of group A streptococci
Directory of Open Access Journals (Sweden)
Jose JJM
2006-01-01
Simplified methods of DNA extraction for amplification and sequencing for emm typing of group A streptococci (GAS) can save valuable time and cost in resource-constrained settings. To evaluate this, we compared two methods of DNA extraction directly from colonies with the standard CDC cell lysate method for emm typing of 50 GAS strains isolated from children with pharyngitis and impetigo. For this, GAS colonies were transferred into two sets of PCR tubes. One set was preheated at 94 °C for two minutes in the thermal cycler and cooled, while the other set was frozen overnight at −20 °C and then thawed before adding the PCR mix. For the cell lysate method, cells were treated with mutanolysin and hyaluronidase before heating at 100 °C for 10 minutes and cooling immediately, as recommended in the CDC method. All 50 strains could be typed by sequencing the hypervariable region of the emm gene after amplification. The quality of the sequences and the emm types identified were also identical. Our study shows that the two simplified DNA extraction methods directly from colonies can conveniently be used for typing a large number of GAS strains in a relatively short time.
Directory of Open Access Journals (Sweden)
Marko Mladineo
2016-12-01
In the last 20 years, priority setting in mine action, i.e. in humanitarian demining, has become an increasingly important topic. Given that mine action projects require management and decision-making based on a multi-criteria approach, multi-criteria decision-making methods like PROMETHEE and AHP have been used worldwide for priority setting. However, from the aspect of mine action, where the stakeholders in the decision-making process for priority setting are project managers, local politicians, leaders of different humanitarian organizations, or similar, applying these methods can be difficult. Therefore, a specialized web-based decision support system (Web DSS) for priority setting, developed as part of the FP7 project TIRAMISU, has been extended using a module for developing custom priority setting scenarios in line with an exceptionally easy, user-friendly approach. The idea behind this research is to simplify multi-criteria analysis based on the PROMETHEE method. Therefore, a simplified PROMETHEE method based on statistical analysis for automated suggestion of parameters such as preference function thresholds, interactive selection of criteria weights, and easy input of criteria evaluations is presented in this paper. The result is a web-based DSS that can be applied worldwide for priority setting in mine action. Additionally, the management of mine action projects is supported using modules that provide spatial data based on a geographic information system (GIS). In this paper, the benefits and limitations of the simplified PROMETHEE method are presented using a case study involving mine action projects, and certain proposals are subsequently given for further research.
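The net-flow core of PROMETHEE II with a linear preference function fits in a few lines. The sketch below is a generic illustration of that computation; the weights and thresholds are placeholders, not the statistically suggested parameters of the Web DSS described above.

```python
import numpy as np

def promethee_ii(scores, weights, thresholds):
    """Net outranking flows for a (n_alternatives, n_criteria) score matrix.

    Higher scores are better; linear preference with threshold p per criterion.
    """
    scores = np.asarray(scores, float)
    n, m = scores.shape
    phi = np.zeros(n)
    for k in range(m):
        d = scores[:, k][:, None] - scores[:, k][None, :]  # pairwise advantage
        p = np.clip(d / thresholds[k], 0.0, 1.0)           # linear preference in [0, 1]
        phi += weights[k] * (p.sum(axis=1) - p.sum(axis=0)) / (n - 1)
    return phi  # rank alternatives (e.g., demining tasks) by descending phi
```

A "simplified PROMETHEE" in the paper's spirit would hide `weights` and `thresholds` behind automated statistical suggestions, leaving the user to supply only the score matrix.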
A Simplified Method for Stationary Heat Transfer of a Hollow Core Concrete Slab Used for TABS
DEFF Research Database (Denmark)
Yu, Tao; Heiselberg, Per Kvols; Lei, Bo
2014-01-01
Thermally activated building systems (TABS) have been an energy efficient way to improve indoor thermal comfort. Due to the complicated structure, heat transfer prediction for a hollow core concrete slab used for TABS is difficult. This paper proposes a simplified method using equivalent thermal resistance for the stationary heat transfer of this kind of system. Numerical simulations are carried out to validate this method, and this method shows very small deviations from the numerical simulations. Meanwhile, this method is used to investigate the influence of the thickness of insulation on the heat transfer. The insulation with a thickness of more than 0.06 m can keep over 95 % of the heat transferred from the lower surface, which is beneficial to the radiant ceiling cooling. Finally, this method is extended to involve the effect of the pipe, and the numerical comparison results show that this method...
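The equivalent-resistance idea behind the insulation result can be illustrated with a one-dimensional steady-state sketch. The layer thicknesses and conductivities below are assumed round numbers, not the paper's values: the fraction of heat leaving through the lower (ceiling) surface is set by the ratio of the parallel upward and downward path resistances.

```python
def series_resistance(layers):
    # 1-D steady-state thermal resistance per unit area: sum(thickness/conductivity), m2*K/W
    return sum(t / k for t, k in layers)

def downward_fraction(r_up, r_down):
    # parallel heat paths from the water-pipe plane: share taking the downward path
    return (1.0 / r_down) / (1.0 / r_up + 1.0 / r_down)

# assumed round numbers: 0.05 m concrete (k = 1.8 W/m/K) on each side of the pipe plane,
# plus 0.06 m insulation (k = 0.04 W/m/K) on top of the slab
r_up = series_resistance([(0.05, 1.8), (0.06, 0.04)])
r_down = series_resistance([(0.05, 1.8)])
```

With these nominal values `downward_fraction(r_up, r_down)` comes out above 0.95, consistent in spirit with the 95 % figure quoted above, since the insulation dominates the upward resistance.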
International Nuclear Information System (INIS)
Baup, Olivier
2001-01-01
The aim of this work was to study the TIG multipass welding process on stainless steel by means of numerical methods, and then to work out simplified and bead-lumping methods in order to reduce the set-up and run times of these calculations. A simulation was used as the reference for the validation of these methods; after presentation of the test series that led to the option choices of this calculation (2D generalised plane strain, an elastoplastic model with isotropic hardening, and hardening restoration at high temperatures), various simplifications were tried on a plate geometry. These simplifications concerned various modelling points while preserving a correct representation of plastic flow in the plate. The use of a reduced number of thermal fields characterising the bead deposit and a low number of tensile curves yields interesting results while significantly decreasing computing times. In addition, various bead-lumping methods were studied, concerning both the shape and the thermal representation of the macro-deposits. The macro-deposit shapes studied are L-shaped, layer-shaped, or represent two beads one on top of the other. Among these three methods, only those using a small number of lumped beads gave poor results, since the thermo-mechanical history was deeply modified near and inside the weld. Thereafter, the simplified methods were applied to a tubular geometry. On this new geometry, experimental measurements were made during welding, which allowed validation of the reference calculation. Simplified and reference calculations gave approximately the same stress fields as found on the plate geometry. Finally, the last part of this document presents a procedure for automatic data setting that significantly reduces the calculation preparation phase. It has been applied to the calculation of thick pipe welding with 90 beads; the results are compared with a simplified simulation realised by Framatome and with experimental measurements. A bead by
Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie
2017-09-01
The volatile fatty acids (VFA) concentration has been considered one of the most sensitive process performance indicators in the anaerobic digestion (AD) process. However, accurate determination of the VFA concentration in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of ion and solid interfering subsystems in titrated samples on the accuracy of results was discussed. The total solids content in titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a strong linear correlation was established between the total solids content and the difference in VFA measurements between the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment on chicken manure anaerobic digestion with various organic loading rates. The good fit of the results obtained by this method in comparison with GC results strongly supports the potential application of this method to VFA monitoring.
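The reported correction idea can be sketched minimally: regress the titration-vs-GC difference on total solids content, then subtract the fitted offset from new titration readings. The coefficients and data below are hypothetical, not the paper's calibration.

```python
import numpy as np

def fit_ts_correction(total_solids, vfa_titration, vfa_gc):
    # linear model of the titration error as a function of total solids content
    diff = np.asarray(vfa_titration) - np.asarray(vfa_gc)
    slope, intercept = np.polyfit(np.asarray(total_solids, float), diff, 1)
    return slope, intercept

def corrected_vfa(vfa_titration, total_solids, slope, intercept):
    # subtract the solids-dependent bias from a raw titration reading
    return np.asarray(vfa_titration) - (slope * np.asarray(total_solids) + intercept)
```

Once `slope` and `intercept` are fixed from a paired titration/GC data set, only the cheap titration plus a total solids measurement is needed on site.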
J evaluation by simplified method for cracked pipes under mechanical loading
International Nuclear Information System (INIS)
Lacire, M.H.; Michel, B.; Gilles, P.
2001-01-01
The structural integrity of components is an important subject for nuclear reactor safety. Most assessment methods for cracked components are based on the evaluation of the parameter J. However, to avoid complex elastic-plastic finite element calculations of J, a simplified method has been jointly developed by CEA, EDF and Framatome. This method, called Js, is based on the reference stress approach and a new KI handbook. To validate this method, a complete set of 2D and 3D elastic-plastic finite element calculations of J has been performed on pipes (more than 300 calculations are available) for different types of part-through-wall cracks (circumferential or longitudinal); mechanical loadings (pressure, bending moment, axial load, torsion moment, and combinations of these loadings); and different kinds of materials (austenitic or ferritic steel). This paper presents a comparison between the simplified assessment of J and finite element results for these configurations under mechanical loading. The validity of the method is then discussed and an applicability domain is proposed.
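The flavour of a reference-stress J estimate can be sketched as follows. This uses a generic R6-type amplification of the elastic J, shown only as an illustration of the approach, not necessarily the exact Js formulation developed by CEA, EDF and Framatome.

```python
def j_reference_stress(j_elastic, sigma_ref, eps_ref, sigma_y, youngs_modulus):
    """R6-type estimate: J = Je * [E*eps_ref/sigma_ref + Lr^2*sigma_ref/(2*E*eps_ref)].

    eps_ref is the true strain read from the tensile curve at sigma_ref.
    """
    E = youngs_modulus
    lr = sigma_ref / sigma_y                      # proximity to plastic collapse
    amplification = E * eps_ref / sigma_ref + lr ** 2 * sigma_ref / (2.0 * E * eps_ref)
    return j_elastic * amplification
```

In the purely elastic limit (eps_ref = sigma_ref/E) the amplification reduces to 1 + Lr²/2, the familiar small-scale-yielding correction; with plasticity, eps_ref grows and the first term dominates.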
Boundary methods for mode estimation
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation, briefly reviews other common mode estimation techniques, and describes the empirical investigation used to explore the relationship of the BM technique to other mode estimation techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture of Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
Heuristic introduction to estimation methods
International Nuclear Information System (INIS)
Feeley, J.J.; Griffith, J.M.
1982-08-01
The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, these methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems; a future report will discuss optimal control problems.
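The "simple example" style the report advocates can be mirrored with a scalar Kalman filter, the workhorse of the optimal estimation methods discussed. The numbers below are illustrative, not taken from the report: each step blends the prediction and the new measurement in proportion to their uncertainties.

```python
def kalman_1d(measurements, x0, p0, q, r):
    """Scalar Kalman filter for a slowly varying (random-walk) state.

    q: process-noise variance, r: measurement-noise variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows by process noise
        k = p / (p + r)              # Kalman gain: weight given to the new measurement
        x = x + k * (z - x)          # update: blend prediction and measurement
        p = (1.0 - k) * p            # posterior variance shrinks after the update
        estimates.append(x)
    return estimates
```

With a large initial variance `p0`, the first update trusts the measurement almost entirely; as `p` shrinks, later measurements are averaged in more gently, which is exactly the "optimal use of information sources" the report describes.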
Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang
2018-05-14
In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially for a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first derives the INS solution error considering gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. This combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it into the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this gravity compensation method for the INS are verified through vehicle tests in two different regions: one in flat terrain with mild gravity variation and the other in complex terrain with fierce gravity variation. During the 2-h vehicle tests, positioning accuracy improved by 20% and 38%, respectively, after gravity was compensated by the proposed method.
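Step 1's "simplified gravity model" is typically a latitude-only normal gravity formula; the WGS84 Somigliana closed form is a standard choice and is shown here as an illustration (the paper's exact model may differ). It returns normal gravity on the ellipsoid; the residual after subtracting it is the gravity disturbance that Step 2 models from the database.

```python
import math

def normal_gravity_wgs84(lat_deg):
    """WGS84 Somigliana closed form: normal gravity on the ellipsoid, m/s^2."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return (9.7803253359 * (1.0 + 0.00193185265241 * s2)
            / math.sqrt(1.0 - 0.00669437999014 * s2))
```

The value varies from about 9.780 m/s² at the equator to about 9.832 m/s² at the poles, a spread far larger than the mGal-level disturbances a high-precision INS must track, which is why the two-step split is effective.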
International Nuclear Information System (INIS)
Iizuka, Daisuke; Kawai, Hidehiko; Kamiya, Kenji; Suzuki, Fumio; Izumi, Shunsuke
2014-01-01
To date, counting chromosome aberrations is the most accurate method for evaluating radiation doses. However, this method is time consuming and requires skill in evaluating chromosome aberrations, and it would be difficult to apply to the majority of people expected to be exposed to ionizing radiation. From this viewpoint, the establishment of rapid, simplified biodosimetric methods for triage is anticipated. Owing to developments in mass spectrometry and the identification of new molecules such as microRNAs (miRNAs), it is conceivable that new molecular biomarkers of radiation exposure can be found using newly developed mass spectrometry methods. In this review article, part of our results, including changes in proteins (including changes in glycosylation), peptides, metabolites, and miRNAs after radiation exposure, will be shown.
Directory of Open Access Journals (Sweden)
Xiaoqing Wei
2017-02-01
As one of the most widely used units in water cooling systems, closed wet cooling towers (CWCTs) have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction either vertically (parallel) or horizontally (cross), respectively. This study aims to present a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT) and the cross counter-flow CWCT (CCFCWCT). A simplified cooling capacity model that includes just two characteristic parameters is developed. The Levenberg–Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of thermal performance are similar to each other under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and the superiority would be amplified when the scale of water
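The parameter-fitting step can be sketched with a small Levenberg–Marquardt loop in plain numpy. The two-parameter power-law model below is a placeholder standing in for the paper's cooling-capacity correlation, which is not reproduced here.

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, x, y, iters=100, lam=1e-3):
    """Fit y ~ f(x, p) by damped Gauss-Newton (Levenberg-Marquardt)."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = y - f(x, p)
        J = jac(x, p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        if np.sum((y - f(x, p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accepted: relax toward pure Gauss-Newton
        else:
            lam *= 2.0                     # rejected: increase damping
    return p

# placeholder two-parameter model: capacity = c1 * flow**c2
model = lambda x, p: p[0] * x ** p[1]
model_jac = lambda x, p: np.column_stack([x ** p[1], p[0] * x ** p[1] * np.log(x)])
```

The damping parameter `lam` interpolates between gradient descent (robust far from the optimum) and Gauss-Newton (fast near it), which is why the method is a common default for nonlinear curve fitting of this kind.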
Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J
2017-04-01
Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs), whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce the least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions, and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage, and mean squared error, related to the estimation of the true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
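The estimation target above, restricted mean survival, is simply the area under the survival curve up to a horizon τ. A minimal Kaplan–Meier-based sketch follows (an illustration of the estimand, not any of the switching-adjustment methods themselves; tie handling and variance estimation are omitted).

```python
import numpy as np

def rmst_kaplan_meier(times, events, tau):
    """Restricted mean survival time: area under the KM curve on [0, tau]."""
    order = np.argsort(times)
    t = np.asarray(times, float)[order]
    e = np.asarray(events, int)[order]       # 1 = event, 0 = censored
    steps, s, at_risk = [(0.0, 1.0)], 1.0, len(t)
    for ti, ei in zip(t, e):
        if ei == 1:                          # event: survival curve drops
            s *= 1.0 - 1.0 / at_risk
            steps.append((ti, s))
        at_risk -= 1                         # event or censoring leaves the risk set
    rmst, (t_prev, s_prev) = 0.0, steps[0]
    for ti, si in steps[1:]:
        if ti >= tau:
            break
        rmst += s_prev * (ti - t_prev)       # integrate the step function
        t_prev, s_prev = ti, si
    return rmst + s_prev * (tau - t_prev)
```

Comparing this quantity between adjusted control and experimental arms gives the restricted mean survival benefit the simulation study evaluates.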
Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data
Energy Technology Data Exchange (ETDEWEB)
Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.
1984-08-01
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
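The degree-hour family of methods reduces hourly weather records like these to simple sums and counts. A minimal sketch (the 18.3 °C base temperature and 2.8 °C bin width are common but assumed values, not prescribed by this report):

```python
from collections import Counter

def heating_degree_hours(hourly_temps_c, base_c=18.3):
    # sum of hourly temperature shortfalls below the base temperature
    return sum(max(base_c - t, 0.0) for t in hourly_temps_c)

def bin_hours(hourly_temps_c, bin_width_c=2.8):
    # bin-method input: number of hours falling in each temperature bin
    return Counter(int(t // bin_width_c) for t in hourly_temps_c)
```

The tabulated summaries described above play the role of these precomputed sums and bin counts, so the simplified energy methods never need to revisit the raw hourly record.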
Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data
Energy Technology Data Exchange (ETDEWEB)
Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.
1984-08-01
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in cities (21) covered by both the TRY and WYEC data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
Simplified DFT methods for consistent structures and energies of large systems
Caldeweyher, Eike; Gerit Brandenburg, Jan
2018-05-01
Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems, with particular focus on molecular crystals. The covered methods are a minimal-basis-set Hartree–Fock method (HF-3c), a small-basis-set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximation functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview of the methods' design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications to large organic crystals with several hundreds of atoms in the primitive unit cell.
International Nuclear Information System (INIS)
Chen, Y.-S.; Chien, K.-H.; Wang, C.-C.; Hung, T.-C.; Pei, B.-S.
2006-01-01
Vapor chambers (flat plate heat pipes) have recently been applied to electronic cooling. To satisfy the quick-response requirement of industry, a simplified transient three-dimensional linear model has been developed and tested in this study. In the proposed model, the vapor is assumed to be a single interface between the evaporator and condenser wicks, and this assumption enables the vapor chamber to be analyzed by splitting it into small control volumes. Compared with previously available results, the calculated transient responses show good agreement with the existing results. For further validation of the proposed model, a water-cooling experiment was conducted. In addition to the vapor chamber, the heating block is also taken into account in the simulation. It is found that including the capacitance of the heating block gives better agreement with the measurements.
Variational estimation of process parameters in a simplified atmospheric general circulation model
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over one year and accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
International Nuclear Information System (INIS)
Scott, M.A.; Holmes, P.A.
1991-01-01
A simplified static analysis methodology is presented for qualifying equipment in moderate- and high-hazard facility-use category structures, where the facility use is defined in Design and Evaluation Guidelines for Department of Energy Facilities Subjected to Natural Phenomena Hazards, UCRL-15910. Currently there are no equivalent simplified static methods for determining seismic loads on equipment in these facility-use categories without completing a dynamic analysis of the facility to obtain local floor accelerations or spectra. The requirements of UCRL-15910 specify that "dynamic" analysis methods, consistent with Seismic Design Guidelines for Essential Buildings, Chapter 6, "Nonstructural Elements," TM5-809-10-1, be used for determining seismic loads on mechanical equipment and components. Chapter 6 assumes that the dynamic analysis of the facility has generated either floor response spectra or modal floor accelerations. These in turn are utilized with the dynamic modification factor and the actual demand and capacity ratios to determine equipment loading. This complex methodology may be necessary to determine more exacting loads for hard-to-qualify equipment but does not provide a simple conservative loading methodology for equipment with ample structural capacity
A non-overlapping parallel domain decomposition method applied to the simplified transport equations
International Nuclear Information System (INIS)
Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.
2009-01-01
A reactivity computation requires computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine modelizations are difficult for our sequential solver, based on the simplified transport equations, to tackle in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires little development effort, as the inner multigroup solver can be reused without modification, and allows us to adapt the numerical resolution (mesh, finite-element order) locally. Numerical results are obtained with a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)
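For context, the inverse power iteration referred to above can be sketched with dense matrices: each iteration solves a linear system with the loss operator, which is exactly the step the proposed domain decomposition parallelizes. The operator names and sizes are illustrative; a production transport solver works with much larger, structured systems.

```python
import numpy as np

def inverse_power(loss, fission, tol=1e-10, max_iter=1000):
    """Power iteration on loss^{-1} @ fission for the generalized
    eigenproblem loss @ x = (1/k) * fission @ x: the norm ratio of
    successive iterates converges to the dominant eigenvalue k."""
    x = np.ones(loss.shape[0])
    k = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(loss, fission @ x)   # inner linear solve
        k_new = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)                # normalize the iterate
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k, x
```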
Simplified Analytical Methods to Analyze Lock Gates Submitted to Ship Collisions and Earthquakes
Directory of Open Access Journals (Sweden)
Buldgen Loic
2015-09-01
This paper presents two simplified analytical methods to analyze lock gates submitted to two different accidental loads. The case of an impact involving a vessel is first investigated. In this situation, the resistance of the struck gate is evaluated by assuming a local and a global deforming mode. The super-element method is used in the first case, while an equivalent beam model is simultaneously introduced to capture the overall bending motion of the structure. The second accidental load considered in this paper is the seismic action, for which an analytical method is presented to evaluate the total hydrodynamic pressure applied on a lock gate during an earthquake, due account being taken of the fluid-structure interaction. For each of these two actions, numerical validations are presented and the analytical results are compared to finite-element solutions.
Simplified method for elastic plastic analysis of material presenting bilinear kinematic hardening
International Nuclear Information System (INIS)
Roche, R.
1983-12-01
A simplified method for elastic plastic analysis is presented. Material behavior is assumed to be elastic plastic with bilinear kinematic hardening. The proposed method gives a stress-strain field fulfilling the material constitutive equations, the equations of equilibrium, and the continuity conditions. This stress-strain field is obtained through two linear computations. The first one is the conventional elastic analysis of the body submitted to the applied load. The second one uses the tangent matrix (tangent Young's modulus and Poisson's ratio) to determine an additional stress due to an imposed initial strain. Such a method suits finite element computer codes, the most useful result being the plastic strains resulting from the applied loading (load control or deformation control). Obviously, there is no unique solution, for the stress-strain field depends not only on the applied load but on the load history. Therefore, less pessimistic solutions can be obtained by one or two additional linear computations [fr
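As a concrete illustration of the bilinear kinematic hardening law assumed above, here is a one-dimensional return-mapping sketch. It is uniaxial only (the paper's method works with full tangent-matrix computations in a finite element code), and the material constants are invented:

```python
def bilinear_kinematic_1d(strains, E=200e3, H=20e3, sigma_y=250.0):
    """Uniaxial stress history for bilinear kinematic hardening:
    elastic modulus E, hardening modulus H, yield stress sigma_y
    (e.g. MPa).  Standard 1-D radial-return update with back stress."""
    alpha = 0.0   # back stress (kinematic hardening)
    eps_p = 0.0   # accumulated plastic strain
    out = []
    for eps in strains:
        sigma = E * (eps - eps_p)          # elastic trial stress
        xi = sigma - alpha
        f = abs(xi) - sigma_y              # yield function
        if f > 0.0:
            sign = 1.0 if xi > 0.0 else -1.0
            dgamma = f / (E + H)           # plastic multiplier
            eps_p += dgamma * sign
            alpha += H * dgamma * sign
            sigma = E * (eps - eps_p)      # return to the yield surface
        out.append(sigma)
    return out
```

Loading past the yield strain of 0.00125 follows the reduced tangent modulus E*H/(E+H); subsequent unloading is elastic, as the bilinear model prescribes.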
Order statistics & inference estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering.
Methods for estimating the semivariogram
DEFF Research Database (Denmark)
Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle
2002-01-01
In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation ... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions ...
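For reference, the classical method-of-moments (Matheron) estimator of the empirical semivariogram, the quantity to which the parametric models are then fitted by least squares or maximum likelihood, can be sketched as follows (1-D coordinates and illustrative lag bins, not the paper's dataset):

```python
import numpy as np

def empirical_semivariogram(x, z, bins):
    """Matheron's estimator: gamma(h) = 0.5 * mean((z_i - z_j)^2)
    over all point pairs whose separation falls in each lag bin."""
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    d = np.abs(x[:, None] - x[None, :])        # pairwise separations
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2  # half squared differences
    iu = np.triu_indices(len(x), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (d >= lo) & (d < hi)
        gamma.append(sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)
```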
A simplified method of calculating heat flow through a two-phase heat exchanger
Energy Technology Data Exchange (ETDEWEB)
Yohanis, Y.G. [Thermal Systems Engineering Group, Faculty of Engineering, University of Ulster, Newtownabbey, Co Antrim, BT37 0QB Northern Ireland (United Kingdom)]. E-mail: yg.yohanis@ulster.ac.uk; Popel, O.S. [Non-traditional Renewable Energy Sources, Institute for High Temperatures, Russian Academy of Sciences, 13/19 Izhorskaya str., IVTAN, Moscow 125412 (Russian Federation); Frid, S.E. [Non-traditional Renewable Energy Sources, Institute for High Temperatures, Russian Academy of Sciences, 13/19 Izhorskaya str., IVTAN, Moscow 125412 (Russian Federation)
2005-10-01
A simplified method of calculating the heat flow through a heat exchanger in which one or both heat carrying media are undergoing a phase change is proposed. It is based on enthalpies of the heat carrying media rather than their temperatures. The method enables the determination of the maximum rate of heat flow provided the thermodynamic properties of both heat-carrying media are known. There will be no requirement to separately simulate each part of the system or introduce boundaries within the heat exchanger if one or both heat-carrying media undergo a phase change. The model can be used at the pre-design stage, when the parameters of the heat exchangers may not be known, i.e., to carry out an assessment of a complex energy scheme such as a steam power plant. One such application of this model is in thermal simulation exercises within the TRNSYS modeling environment.
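The enthalpy-based limit on heat flow can be written in a few lines: each stream can at best leave the exchanger at the other stream's inlet temperature, and because the bound is expressed in enthalpies looked up from property tables, a phase change needs no special treatment. The function signature and the numbers below are illustrative, not from the paper:

```python
def q_max(m_hot, h_hot_in, h_hot_at_tc_in, m_cold, h_cold_at_th_in, h_cold_in):
    """Maximum heat flow (W) between two streams from mass flows (kg/s)
    and enthalpies (J/kg): each stream is limited by being cooled or
    heated all the way to the other stream's inlet temperature."""
    q_hot_limit = m_hot * (h_hot_in - h_hot_at_tc_in)
    q_cold_limit = m_cold * (h_cold_at_th_in - h_cold_in)
    return min(q_hot_limit, q_cold_limit)
```

For example, a condensing stream entering at h = 2.7 MJ/kg that would leave at 0.1 MJ/kg if cooled to the cold-side inlet temperature is limited to 2.6 MW per kg/s, regardless of where inside the exchanger the phase change occurs.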
Tanaka, Hiroaki; Inaka, Koji; Sugiyama, Shigeru; Takahashi, Sachiko; Sano, Satoshi; Sato, Masaru; Yoshitomi, Susumu
2004-01-01
A new protein crystallization method has been developed using a simplified counter-diffusion technique for optimizing crystallization conditions. It is composed of only a single capillary, gel in a silicone tube, and a screw-top test tube, all readily available in the laboratory. One capillary can continuously scan a wide range of crystallization conditions (combinations of precipitant and protein concentrations) unless crystallization occurs, meaning that it corresponds to many drops in the vapor-diffusion method. The amounts of precipitant and protein solution can be much smaller than in conventional methods. In this study, lysozyme and alpha-amylase were used as model proteins to demonstrate the efficiency of the method. In addition, one-dimensional (1-D) simulations of crystal growth were performed based on a 1-D diffusion model. The optimized conditions can be applied as initial crystallization conditions both for other counter-diffusion methods with the Granada Crystallization Box (GCB) and, after some modification, for the vapor-diffusion method.
A gravimetric simplified method for nucleated marrow cell counting using an injection needle.
Saitoh, Toshiki; Fang, Liu; Matsumoto, Kiyoshi
2005-08-01
A simplified gravimetric marrow cell counting method for rats is proposed as a regular screening method. After fresh bone marrow was aspirated with an injection needle, the marrow cells were suspended in carbonate-buffered saline. The nucleated marrow cell count (NMC) was measured with an automated multi-blood-cell analyzer. When this gravimetric method was applied to rats, the NMC of the left and right femurs had essentially identical values given careful handling. The NMC at 4 to 10 weeks of age in male and female Crj:CD(SD)IGS rats was 2.72 to 1.96 and 2.75 to 1.98 (×10^6 counts/mg), respectively. More useful information for evaluation could be obtained by using this gravimetric method in addition to myelogram examination. However, some difficulties with this method include low NMC due to blood contamination and variation of NMC due to handling. Therefore, the utility of this gravimetric method for screening will be clarified by the accumulation of data from myelotoxicity studies using this method.
Unrecorded Alcohol Consumption: Quantitative Methods of Estimation
Razvodovsky, Y. E.
2010-01-01
Keywords: unrecorded alcohol; methods of estimation. In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimate of the unrecorded alcohol consumption level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.
RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation
International Nuclear Information System (INIS)
Humphreys, S.L.; Miller, L.A.; Monroe, D.K.; Heames, T.J.
1998-04-01
This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence
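The kind of source-term reduction RADTRAD tabulates can be illustrated with parallel first-order removal processes. The sketch below is not the NRC code itself, and the removal constants in the test values are invented:

```python
import math

def airborne_activity(a0, t_hours, lam_decay, lam_spray, lam_dep):
    """Airborne activity remaining after time t when radioactive decay,
    containment sprays, and natural deposition each act as first-order
    removal processes (constants in 1/h) acting in parallel: the rate
    constants simply add in the exponent."""
    return a0 * math.exp(-(lam_decay + lam_spray + lam_dep) * t_hours)
```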
A simplified presentation of the multigroup analytic nodal method in 2-D Cartesian geometry
International Nuclear Information System (INIS)
Hebert, Alain
2008-01-01
The nodal diffusion algorithms used in many production reactor simulation codes originate from a common ancestry developed in the 1970s, the analytic nodal method (ANM) of the QUANDRY code. However, this original presentation of the ANM is complex and makes the calculation of the nodal coupling matrices difficult. Moreover, QUANDRY is limited to two energy groups, and its generalization to more groups appears laborious. We present a simplified implementation of the ANM requiring only limited programming work. This formulation is consistent with the initial QUANDRY implementation and is easily generalizable to arbitrary G-group problems. A Matlab script is provided to highlight the simplicity of our presentation. For the sake of clarity, our implementation is limited to G-group, 2-D Cartesian geometry
A simplified method of evaluating the stress wave environment of internal equipment
Colton, J. D.; Desmond, T. P.
1979-01-01
A simplified method called the transfer function technique (TFT) was devised for evaluating the stress wave environment in a structure containing internal equipment. The TFT consists of following the initial in-plane stress wave that propagates through a structure subjected to a dynamic load and characterizing how the wave is altered as it is transmitted through intersections of structural members. As a basis for evaluating the TFT, impact experiments and detailed stress wave analyses were performed for structures with two, three, or more members. Transfer functions that relate the wave transmitted through an intersection to the incident wave were deduced from the predicted wave response. By sequentially applying these transfer functions to a structure with several intersections, it was found that the environment produced by the initial stress wave propagating through the structure can be approximated well. The TFT can be used as a design tool or as an analytical tool to determine whether a more detailed wave analysis is warranted.
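In its simplest scalar form, sequentially applying transfer functions reduces to multiplying the incident wave amplitude by a transmission factor at each intersection the wave crosses. The factors below are illustrative numbers, not the deduced transfer functions of the paper:

```python
def transmitted_amplitude(incident, transfer_factors):
    """Scalar sketch of the transfer function technique (TFT): the
    initial stress wave is attenuated by one transmission factor per
    structural intersection, applied in sequence along its path."""
    a = incident
    for t in transfer_factors:
        a *= t
    return a
```

A wave crossing two intersections with transmission factors 0.5 and 0.8 retains 40% of its incident amplitude.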
A numerical simulation of wheel spray for simplified vehicle model based on discrete phase method
Directory of Open Access Journals (Sweden)
Xingjun Hu
2015-07-01
Road spray greatly affects vehicle body soiling and driving safety. The study of road spray has attracted increasing attention. In this article, computational fluid dynamics software with a widely used finite-volume method code was employed to investigate the numerical simulation of spray induced by a simplified wheel model and a modified square-back model proposed by the Motor Industry Research Association. The shear stress transport k-omega turbulence model, discrete phase model, and Eulerian wall-film model were selected. In the simulation process, the phenomena of drop breakup and coalescence were considered, and the continuous and discrete phases were treated as two-way coupled in momentum and turbulent motion. The relationship between the vehicle external flow structure and body soiling was also discussed.
Directory of Open Access Journals (Sweden)
Myung-Rag Jung
2015-01-01
A simplified analytical method providing accurate unstrained lengths of all structural elements is proposed to find the optimized initial state of self-anchored suspension bridges under dead loads. For this, equilibrium equations of the main girder and the main cable system are derived and solved by evaluating the self-weights of cable members using unstrained cable lengths and iteratively updating both the horizontal tension component and the vertical profile of the main cable. Furthermore, to demonstrate the validity of the simplified analytical method, the unstrained element length method (ULM) is applied to suspension bridge models based on the unstressed lengths of both cable and frame members calculated from the analytical method. Through numerical examples, it is demonstrated that the proposed analytical method can indeed provide an optimized initial solution by showing that both the simplified method and the nonlinear FE procedure lead to practically identical initial configurations with only localized small bending moment distributions.
International Nuclear Information System (INIS)
Plansky, L.E.; Seitz, R.R.
1994-02-01
This report documents user instructions for several simplified subroutines and driver programs that can be used to estimate various aspects of the long-term performance of cement-based barriers used in low-level radioactive waste disposal facilities. The subroutines are prepared in a modular fashion to allow flexibility for a variety of applications. Three levels of codes are provided: the individual subroutines, interactive drivers for each of the subroutines, and an interactive main driver, CEMENT, that calls each of the individual drivers. The individual subroutines for the different models may be taken independently and used in larger programs, or the driver modules can be used to execute the subroutines separately or as part of the main driver routine. A brief program description is included and user-interface instructions for the individual subroutines are documented in the main report. These are intended to be used when the subroutines are used as subroutines in a larger computer code
Simplified LCA and matrix methods in identifying the environmental aspects of a product system.
Hur, Tak; Lee, Jiyong; Ryu, Jiyeon; Kwon, Eunsun
2005-05-01
In order to effectively integrate environmental attributes into the product design and development processes, it is crucial to identify the significant environmental aspects related to a product system within a relatively short period of time. In this study, the usefulness of life cycle assessment (LCA) and a matrix method as tools for identifying the key environmental issues of a product system was examined. For this, a simplified LCA (SLCA) method that can be applied to Electrical and Electronic Equipment (EEE) was developed to efficiently identify their significant environmental aspects for eco-design, since a full scale LCA study is usually very detailed, expensive and time-consuming. The environmentally responsible product assessment (ERPA) method, which is one of the matrix methods, was also analyzed. Then, the usefulness of each method in eco-design processes was evaluated and compared using the case studies of the cellular phone and vacuum cleaner systems. It was found that the SLCA and the ERPA methods provided different information but they complemented each other to some extent. The SLCA method generated more information on the inherent environmental characteristics of a product system so that it might be useful for new design/eco-innovation when developing a completely new product or method where environmental considerations play a major role from the beginning. On the other hand, the ERPA method gave more information on the potential for improving a product so that it could be effectively used in eco-redesign which intends to alleviate environmental impacts of an existing product or process.
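The ERPA matrix method referred to above is commonly implemented as a 5x5 scoring matrix (life-cycle stages by environmental concerns) in Graedel's style, each element rated 0 (highest impact) to 4 (lowest) and summed to a product rating out of 100. A minimal sketch, assuming that scoring convention:

```python
import numpy as np

def erpa_rating(matrix):
    """Environmentally responsible product assessment (ERPA) score:
    a 5x5 matrix of element ratings in 0..4 (life-cycle stage rows,
    environmental concern columns); the overall product rating is the
    element sum, out of a maximum of 100."""
    m = np.asarray(matrix)
    assert m.shape == (5, 5) and (m >= 0).all() and (m <= 4).all()
    return int(m.sum())
```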
On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint
Energy Technology Data Exchange (ETDEWEB)
Wood, Eric; Gonder, Jeff; Jehlik, Forrest
2017-01-01
On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.
International Nuclear Information System (INIS)
Aihara, S.; Atsumi, K.; Ujiie, K.; Satoh, S.
1981-01-01
Self-restraining stresses generate not only moments but also axial forces. Therefore, the moment and force equilibria of the cross section are considered simultaneously, in combination with other external forces. Under this theory, two computer programs were prepared. Using these programs, design procedures that consider the reduction of self-restraining stress become easy if the elastic design stresses, separated into normal stresses and self-restraining stresses, are given. Numerical examples are given to illustrate the application of the simplified elastic-plastic analysis and to study its effectiveness. The method is first applied to analyze an upper shielding wall in a MARK-2 type reactor building. The results are compared with those obtained by elastic-plastic analysis with the finite element method. From this comparison it was confirmed that the method described has adequate accuracy for re-bar design. As a second example, the mat slab of a reactor building is analyzed. The quantity of re-bars calculated by this method comes to about two-thirds of that required when self-restraining stress is treated as normal stress. Also, the self-restraining stress reduction factor is about 0.5. (orig./HP)
Simplified method to solve sound transmission through structures lined with elastic porous material.
Lee, J H; Kim, J
2001-11-01
An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both the solid phase and fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea in developing the approximate method is very simple: modeling the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure has to be conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extents, which has the same cross sectional construction as the actual structure, is solved based on the full theory and the strongest wave component is identified. In the second step sound transmission through the actual structure is solved modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double walled cylindrical shells with a porous core is calculated utilizing the simplified method.
Coach simplified structure modeling and optimization study based on the PBM method
Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian
2016-09-01
For the coach industry, rapid modeling and efficient optimization methods are desirable for structure modeling and optimization based on simplified structures, especially early in the concept phase, with the capability of accurately expressing the mechanical properties of the structure and with flexible section forms. However, the present dimension-based methods cannot easily meet these requirements. To achieve these goals, the property-based modeling (PBM) beam modeling method is studied based on PBM theory, in conjunction with the characteristic of coach structures that beams are the main components. For a beam component of given length, the mechanical characteristics are primarily affected by the section properties. Four section parameters are adopted to describe the mechanical properties of a beam: the section area, the principal moments of inertia about the two principal axes, and the torsion constant of the section. Based on the equivalent stiffness strategy, expressions for the above section parameters are derived, and the PBM beam element is implemented in the HyperMesh software. A case is realized using this method, in which the structure of a passenger coach is simplified. The model precision is validated by comparing the basic performance of the total structure with that of the original structure, including the bending and torsion stiffness and the first-order bending and torsional modal frequencies. Sensitivity analysis is conducted to choose design variables. An optimal Latin hypercube experiment design is adopted to sample the test points, and polynomial response surfaces are used to fit these points. To improve the bending and torsion stiffness and the first-order torsional frequency, and taking the allowable maximum stresses of the braking and left-turning conditions as constraints, multi-objective optimization of the structure is conducted using the NSGA-II genetic algorithm on the ISIGHT platform. The result of the
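The four section parameters named above (area, two principal moments of inertia, torsion constant) can be computed directly for a simple section. The sketch below uses standard thin-wall formulas for a rectangular box, not the paper's equivalent-stiffness derivation:

```python
def box_section_properties(b, h, t):
    """Section properties of a thin-walled rectangular box beam:
    outer width b, outer height h, uniform wall thickness t.
    Returns area, the two principal second moments of area, and
    Bredt's thin-wall torsion constant (standard approximations)."""
    bi, hi = b - 2 * t, h - 2 * t             # inner dimensions
    area = b * h - bi * hi                    # cross-sectional area
    i_y = (b * h**3 - bi * hi**3) / 12.0      # bending about horizontal axis
    i_z = (h * b**3 - hi * bi**3) / 12.0      # bending about vertical axis
    a_mid = (b - t) * (h - t)                 # area enclosed by wall midline
    j = 4.0 * a_mid**2 * t / (2.0 * ((b - t) + (h - t)))  # Bredt's formula
    return area, i_y, i_z, j
```

For a square section the two principal moments coincide, as expected by symmetry.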
Senay, Gabriel B.; Budde, Michael E.; Verdin, James P.
2011-01-01
Evapotranspiration (ET) can be derived from satellite data using surface energy balance principles. METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is one of the most widely used models available in the literature to estimate ET from satellite imagery. The Simplified Surface Energy Balance (SSEB) model is much easier and less expensive to implement. The main purpose of this research was to present an enhanced version of the Simplified Surface Energy Balance (SSEB) model and to evaluate its performance using the established METRIC model. In this study, SSEB and METRIC ET fractions were compared using 7 Landsat images acquired for south central Idaho during the 2003 growing season. The enhanced SSEB model compared well with the METRIC model output, exhibiting an r2 improvement from 0.83 to 0.90 in less complex topography (elevation less than 2000 m) and an improvement of r2 from 0.27 to 0.38 in more complex (mountain) areas with elevation greater than 2000 m. Independent evaluation showed that both models exhibited higher variation in complex topographic regions, although more so with SSEB than with METRIC. The higher ET fraction variation in the complex mountainous regions highlighted the difficulty of capturing the radiation and heat transfer physics on steep slopes having variable aspect with the simple index model, and the need to conduct more research. However, the temporal consistency of the results suggests that the SSEB model can be used over a wide range of elevations (more successfully up to 2000 m) to detect anomalies in space and time for water resources management and monitoring, such as for drought early warning systems in data-scarce regions. SSEB has a potential for operational agro-hydrologic applications to estimate ET with inputs of surface temperature, NDVI, DEM, and reference ET.
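The core of the SSEB approach is a linear scaling of land-surface temperature between hot and cold reference pixels. A minimal sketch of that ET fraction follows; the clamp to [0, 1] is an assumption added here for robustness, not necessarily the authors' exact treatment:

```python
def sseb_et_fraction(ts, t_hot, t_cold):
    """SSEB evapotranspiration fraction from land-surface temperature:
    ETf = (T_hot - T_s) / (T_hot - T_cold), where T_hot is a dry, bare
    reference pixel and T_cold a well-watered one.  Actual ET is then
    ETf times a reference ET.  Result clamped to [0, 1]."""
    etf = (t_hot - ts) / (t_hot - t_cold)
    return min(max(etf, 0.0), 1.0)
```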
Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem
2016-01-01
In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear
Simplified method of calculating residual stress in circumferential welding of piping
International Nuclear Information System (INIS)
Umemoto, Tadahiro
1984-01-01
Many circumferential joints of piping are used in the as-welded state, but in these welded joints residual stress as high as the yield stress of the material arises, accelerating stress corrosion cracking and corrosion fatigue. Experiments or finite element analyses to determine welding residual stress require much time and labor and are expensive; therefore, the author proposes a simplified calculation method. The heating and cooling process of welding is very complex and cannot be modeled as it is. It was therefore assumed that in multi-layer welding the welding condition of the last layer determines the residual stress, that material constants are invariable regardless of temperature, that the temperature distribution and residual stress are axisymmetric, and that there is a repeated stress-strain relation in the vicinity of the welded parts. The temperature distribution at the time of welding, the thermal stress, and the welding residual stress are analyzed, and the material constants used for the calculation of residual stress are given. As an example of the calculation, the effect of welding heat input and materials is shown. The extension of the method to a thick-walled pipe is discussed. (Kako, I.)
Simplified formulae for the estimation of offshore wind turbines clutter on marine radars.
Grande, Olatz; Cañizo, Josune; Angulo, Itziar; Jenn, David; Danoon, Laith R; Guerra, David; de la Vega, David
2014-01-01
The potential impact that offshore wind farms may cause on nearby marine radars should be considered before the wind farm is installed. Strong radar echoes from the turbines may degrade radars' detection capability in the area around the wind farm. Although conventional computational methods provide accurate results of scattering by wind turbines, they are not directly implementable in software tools that can be used to conduct the impact studies. This paper proposes a simple model to assess the clutter that wind turbines may generate on marine radars. This method can be easily implemented in the system modeling software tools for the impact analysis of a wind farm in a real scenario.
A simplified model of natural and mechanical removal to estimate cleanup equipment efficiency
International Nuclear Information System (INIS)
Lehr, W.
2001-01-01
Oil spill response organizations rely on modelling to make decisions in offshore response operations. Models are used to test different cleanup strategies and to measure the expected cost of cleanup and the reduction in environmental impact. The oil spill response community has traditionally used the concept of the worst-case scenario in developing contingency plans for spill response. However, there are many drawbacks to this approach. The Hazardous Materials Response Division of the National Oceanic and Atmospheric Administration, in cooperation with the U.S. Navy Supervisor of Salvage and Diving, has developed a Trajectory Analysis Planner (TAP) which will give planners the tool to try out different cleanup strategies and equipment configurations based upon historical wind and current conditions instead of worst-case scenarios. The spill trajectory model is a classic example in oil spill modelling of using advanced non-linear three-dimensional hydrodynamical sub-models to estimate surface currents under conditions where oceanographic initial conditions are not accurately known and forecasts of wind stress are unreliable. In order to get better answers, it is often necessary to refine input values rather than increase the sophistication of the hydrodynamics. This paper described another spill example where the level of complexity of the algorithms needs to be evaluated with regard to the reliability of the input, the sensitivity of the answers to input and model parameters, and the comparative reliability of other algorithms in the model. 9 refs., 1 fig.
Simplified rotor load models and fatigue damage estimates for offshore wind turbines.
Muskulus, M
2015-02-28
The aim of rotor load models is to characterize and generate the thrust loads acting on an offshore wind turbine. Ideally, the rotor simulation can be replaced by time series from a model with a few parameters and state variables only. Such models are used extensively in control system design and, as a potentially new application area, structural optimization of support structures. Different rotor load models are here evaluated for a jacket support structure in terms of fatigue lifetimes of relevant structural variables. All models were found to be lacking in accuracy, with differences of more than 20% in fatigue load estimates. The most accurate models were the use of an effective thrust coefficient determined from a regression analysis of dynamic thrust loads, and a novel stochastic model in state-space form. The stochastic model explicitly models the quasi-periodic components obtained from rotational sampling of turbulent fluctuations. Its state variables follow a mean-reverting Ornstein-Uhlenbeck process. Although promising, more work is needed on how to determine the parameters of the stochastic model and before accurate lifetime predictions can be obtained without comprehensive rotor simulations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
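The stochastic model described above lets its state variables follow a mean-reverting Ornstein-Uhlenbeck process. A minimal Euler-Maruyama simulation of such a process is sketched below; all parameter values (reversion rate, mean thrust, volatility) are invented for demonstration and are not the paper's fitted values.

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama simulation of a mean-reverting Ornstein-Uhlenbeck
    process dX = theta*(mu - X)*dt + sigma*dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
# Illustrative values only: mean thrust 600 kN, reversion rate 0.5 1/s
thrust = simulate_ou(theta=0.5, mu=600.0, sigma=40.0, x0=600.0,
                     dt=0.05, n_steps=2000, rng=rng)
```

The stationary standard deviation of such a process is sigma/sqrt(2*theta); a full rotor load model would superimpose the quasi-periodic components from rotational sampling on top of this baseline.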
Bayesian estimation methods in metrology
International Nuclear Information System (INIS)
Cox, M.G.; Forbes, A.B.; Harris, P.M.
2004-01-01
In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization in 1995 published the Guide to the Expression of Uncertainty in Measurement, and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods
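As a one-line instance of the Bayesian estimation idea: with a Gaussian prior for the measurand and a Gaussian likelihood for a new measurement, the posterior is again Gaussian, and its standard uncertainty is never larger than either input uncertainty. The numbers below are illustrative only, not from any key comparison.

```python
def gaussian_posterior(prior_mean, prior_sd, obs_mean, obs_sd):
    """Conjugate update for a normal prior and a normal likelihood:
    returns posterior mean and standard deviation of the measurand.
    Means are weighted by the inverse variances (precisions)."""
    w_prior = 1.0 / prior_sd**2
    w_obs = 1.0 / obs_sd**2
    post_var = 1.0 / (w_prior + w_obs)
    post_mean = post_var * (w_prior * prior_mean + w_obs * obs_mean)
    return post_mean, post_var**0.5

# Example: prior knowledge 100.0 +/- 0.5 (k=1), new measurement 100.4 +/- 0.3
m, s = gaussian_posterior(100.0, 0.5, 100.4, 0.3)
```

The posterior mean lands between prior and observation, closer to the more precise of the two, which is the mechanism behind Bayesian evaluation of comparison data.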
A simplified method for low-level tritium measurement in the environmental water samples
International Nuclear Information System (INIS)
Sakuma, Yoichi; Yamanishi, Hirokuni; Ogata, Yoshimune
2004-01-01
Low-level liquid scintillation counting used to require much time and labor to distill off the impurities in the sample water before mixing the sample with the liquid scintillation cocktail. In light of this, we investigated the possibility of an alternative filtration method for sample purification. The tritium concentration in environmental water has become very low, and samples have to be enriched by electrolysis before measurement with a liquid scintillation analyzer. Using the solid polymer electrolyte enriching device, there is no need to add any electrolyte or to neutralize the sample after enrichment. If we could replace the distillation process with filtration, the procedure would be simplified very much. We investigated the procedure and were able to prove that reverse osmosis (RO) filtration is a viable alternative. Moreover, in order to streamline the entire measurement method, we examined the following: (1) improvement of the enriching apparatus; (2) easier measurement of heavy water concentration using a density meter, instead of a mass spectrometer. The concentration of the water samples was measured to determine the enrichment rate of tritium during the electrolysis enrichment. (author)
Simplified method for the determination of N-nitrosamines in rubber vulcanizates
Energy Technology Data Exchange (ETDEWEB)
Incavo, Joseph A [Goodyear Tire and Rubber Company, Akron, OH (United States); Schafer, Melvin A [Goodyear Tire and Rubber Company, Akron, OH (United States)
2006-01-31
A simplified method for the trace determination of N-nitrosamines in carbon black-loaded rubber compounds is described. The extraction of volatile nitrosamines is accomplished by thermal desorption rather than the traditional solvent extraction procedure. The analytes are trapped on Thermosorb/N sorbent and subsequently analyzed by gas chromatography with thermal energy analyzer detection (GC/TEA). Conditions that provide full extraction of nitrosamines from actual rubber compounds were determined to be 30 min at 150 deg. C in vessels dynamically purged with N{sub 2}. Method precision was found to be 10% for NDMA at 71 ng/g and 7.3% for NMOR at 248 ng/g. Recoveries for the seven common N-nitrosamines ranged from 94 to 117%. Limits of detection in the rubber matrix are 6.3-13 ng/g. The technique is found to offer improved recovery of lower molecular weight nitrosamines and it is shown to be simpler and faster than previous techniques.
A Simplified Method for Laboratory Preparation of Organ Specific Indium 113m Compounds
Energy Technology Data Exchange (ETDEWEB)
Adatepe, M H; Potchen, E James [Washington University School of Medicine, St. Louis (United States)
1969-03-15
Generator systems producing short-lived nuclides from longer-lived parents have distinct clinical advantages. They are more economical, result in a lower radiation dose, and can make short-lived scanning readily available even in areas remote from rapid radiopharmaceutical delivery services. The {sup 113}Sn-{sup 113m}In generator has the additional advantage that, as a transition metal, indium can be readily complexed into organ specific preparations. {sup 113}Sn, a reactor-produced nuclide with a 118-day half-life, is adsorbed on a zirconium or silica gel column. The generator is eluted with 5 to 8 ml of 0.05 N HCl solution at pH 1.3-1.4. The daughter nuclide, {sup 113m}In, has a half-life of 1.7 hours and emits a 393 keV monoenergetic gamma ray. Previous methods for labeling organ specific complexes with {sup 113m}In required terminal autoclaving before injection. With the recent introduction of sterile, apyrogenic {sup 113}Sn-{sup 113m}In generators, we have developed a simplified technique for the laboratory preparation of indium labeled compounds. This method eliminates autoclaving and titration, enabling us to pre-prepare organ specific complexes for blood pool, liver, spleen, brain, kidney and lung scanning.
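The regrowth of daughter activity in such a generator after elution follows the two-member Bateman equation. A sketch using the half-lives quoted above (118 d for 113Sn, 1.7 h for 113mIn), assuming no daughter is present immediately after elution:

```python
import math

def daughter_activity(a_parent0, t_half_parent, t_half_daughter, t):
    """Bateman growth of daughter activity from a freshly eluted generator
    (no daughter present at t = 0); all times in the same unit (hours)."""
    lp = math.log(2) / t_half_parent
    ld = math.log(2) / t_half_daughter
    return a_parent0 * ld / (ld - lp) * (math.exp(-lp * t) - math.exp(-ld * t))

# 113Sn parent (118 d = 2832 h half-life), 113mIn daughter (1.7 h):
# after ~5 daughter half-lives the column is close to equilibrium.
a0 = 100.0  # parent activity in arbitrary units
regrowth = daughter_activity(a0, 118 * 24, 1.7, 8.5)
```

Because the parent half-life is vastly longer than the daughter's, the column effectively returns to equilibrium within a working day, which is what makes daily elution practical.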
A simplified spherical harmonic method for coupled electron-photon transport calculations
International Nuclear Information System (INIS)
Josef, J.A.
1996-12-01
In this thesis we have developed a simplified spherical harmonic method (SPN method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SPN method has never before been applied to charged-particle transport. We have performed a first-time Fourier analysis of the source iteration scheme and the P1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SPN equations. Our theoretical analyses indicate that the source iteration and P1 DSA schemes are as effective for the 2-D SPN equations as for the 1-D SN equations. Previous analyses have indicated that the P1 DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D SN equations, yet is very effective for the 1-D SN equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SPN equations as for the 1-D SN equations. It has previously been shown for 1-D SN calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SPN approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SPN method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SPN method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations
A simplified approach to estimating reference source terms for LWR designs
International Nuclear Information System (INIS)
1999-12-01
systems. The publication of this IAEA technical document represents the conclusion of a task, initiated in 1996, devoted to the estimation of the radioactive source term in nuclear reactors. It focuses mainly on light water reactors (LWRs)
Directory of Open Access Journals (Sweden)
N. A. Siddiqui
2011-06-01
Full Text Available Underground concrete barriers are frequently used to protect strategic structures like nuclear power plants (NPP) deep under the soil against any possible high-velocity missile impact. For a given range and type of missile (or projectile), it is of paramount importance to examine the reliability of underground concrete barriers under the expected uncertainties in the missile, concrete, and soil parameters. In this paper, a simple procedure for the reliability assessment of underground concrete barriers against normal missile impact is presented using the First Order Reliability Method (FORM). The presented procedure is illustrated by applying it to a concrete barrier that lies at a certain depth in the soil. Some parametric studies are also conducted to obtain the design values which make the barrier as reliable as desired.
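For the special case of a linear limit state g = R - S with independent normal variables, FORM reduces to a closed-form reliability index (and is exact). The sketch below uses invented barrier and missile parameters purely for illustration; the paper's limit state involves penetration formulas and more variables.

```python
import math

def form_beta_linear(mu_r, sd_r, mu_s, sd_s):
    """Reliability index for the linear limit state g = R - S with
    independent normal resistance R and load effect S; in this case
    FORM is exact and beta is the Cornell/Hasofer-Lind index."""
    return (mu_r - mu_s) / math.hypot(sd_r, sd_s)

def failure_probability(beta):
    """Pf = Phi(-beta) via the standard normal CDF."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Hypothetical barrier: penetration resistance 1.2 m (sd 0.15 m)
# vs. predicted missile penetration depth 0.7 m (sd 0.12 m)
beta = form_beta_linear(1.2, 0.15, 0.7, 0.12)
pf = failure_probability(beta)
```

For nonlinear limit states or non-normal variables, FORM instead iterates to the most probable failure point in standard normal space, but the beta-to-Pf mapping is the same.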
A simplified method for generation of pseudo natural colours from colour infrared aerial photos
DEFF Research Database (Denmark)
Knudsen, Thomas; Olsen, Brian Pilemann
mapping methods. The method presented is a dramatic simplification of a recently published method, going from a 7-step to a 2-step procedure. The first step is a classification of the input image into 4 domains, based on simple thresholding of a vegetation index and a saturation measure for each pixel....... In the second step the blue colour component is estimated using tailored models for each domain. Green and red colour components are taken directly from the CIR photo. The visual impression of the results from the 2-step method is only slightly inferior to the original 7-step method. The implementation, however...
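The two-step idea (classification by vegetation index and saturation, then per-domain blue estimation) can be sketched roughly as below. The thresholds, the number of explicit domains, and the linear coefficients here are invented placeholders, not the published models.

```python
import numpy as np

def pseudo_blue(nir, red, green, ndvi_thresh=0.2, sat_thresh=0.15):
    """Two-step sketch: classify each pixel by a vegetation index and a
    saturation measure, then estimate the missing blue band with a
    simple per-domain linear model (illustrative coefficients only)."""
    # Step 1: classification by NDVI and a crude saturation measure
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    stack = np.stack([nir, red, green])
    sat = stack.max(axis=0) - stack.min(axis=0)
    veg = ndvi > ndvi_thresh
    gray = sat < sat_thresh
    # Step 2: per-domain linear estimates of blue from green and red
    blue = 0.8 * green + 0.1 * red             # default (man-made, soil)
    blue = np.where(veg, 0.5 * green, blue)    # vegetation: darker blue
    blue = np.where(gray & ~veg, green, blue)  # achromatic: blue ~ green
    return np.clip(blue, 0.0, 1.0)
```

Green and red would be passed through from the CIR photo unchanged; only the missing blue band is synthesized.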
Energy Technology Data Exchange (ETDEWEB)
Beresford, N.A. [Lancaster Environment Centre, NERC Centre for Ecology and Hydrology, Lancaster (United Kingdom); Vives i Batlle, J. [Belgian Nuclear Research Centre, Mol (Belgium)
2013-11-15
The application of allometric, or mass-dependent, relationships within radioecology has increased with the evolution of models to predict the exposure of organisms other than man. Allometry presents a method of addressing the lack of empirical data on radionuclide transfer and metabolism for the many radionuclide-species combinations which may need to be considered. However, sufficient data across a range of species with different masses are required to establish allometric relationships and this is not always available. Here, an alternative allometric approach to predict the biological half-life of radionuclides in homoeothermic vertebrates which does not require such data is derived. Biological half-life values are predicted for four radionuclides and compared to available data for a range of species. All predictions were within a factor of five of the observed values when the model was parameterised appropriate to the feeding strategy of each species. This is an encouraging level of agreement given that the allometric models are intended to provide broad approximations rather than exact values. However, reasons why some radionuclides deviate from what would be anticipated from Kleiber's law need to be determined to allow a more complete exploitation of the potential of allometric extrapolation within radioecological models. (orig.)
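The underlying allometric form is a simple power law T_b = a * M^b; by analogy with Kleiber's law, metabolic rates scale roughly as M^0.75, so biological half-lives tend to scale as roughly M^0.25. The coefficients below are placeholders, not the paper's derived values.

```python
def biological_half_life(mass_kg, a, b):
    """Allometric (mass-dependent) scaling T_b = a * M**b for the
    biological half-life of a radionuclide; a and b are radionuclide
    and feeding-strategy specific (placeholder values used here)."""
    return a * mass_kg ** b

# Hypothetical radiocaesium-like parameters: a = 20 d, b = 0.25
small = biological_half_life(0.02, 20.0, 0.25)   # 20 g vole
large = biological_half_life(500.0, 20.0, 0.25)  # 500 kg cow
```

On a log-log plot such a relationship is a straight line, which is why a handful of species spanning a wide mass range suffices to fit a and b when data are available.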
Directory of Open Access Journals (Sweden)
Ülker Bekir
2006-10-01
Full Text Available Abstract Background The Agrobacterium vacuum infiltration (Bechtold et al. 1993) and floral-dip (Clough and Bent 1998) methods are very efficient for generating transgenic Arabidopsis plants. These methods allow plant transformation without the need for tissue culture. Large volumes of bacterial cultures grown in liquid media are necessary for both of these transformation methods. This limits the number of transformations that can be done at a given time due to the need for expensive large shakers and limited space on them. Additionally, the bacterial colonies derived from solid media necessary for starting these liquid cultures often fail to grow in such large volumes. Therefore the optimum stage of plant material for transformation is often missed and new plant material needs to be grown. Results To avoid problems associated with large bacterial liquid cultures, we investigated whether bacteria grown on plates are also suitable for plant transformation. We demonstrate here that bacteria grown on plates can be used with similar efficiency for transforming plants even after one week of storage at 4°C. This makes it much easier to synchronize Agrobacterium and plants for transformation. DNA gel blot analysis was carried out on the T1 plants surviving the herbicide selection and demonstrated that the surviving plants are indeed transgenic. Conclusion The simplified method works as efficiently as the previously reported protocols and significantly reduces the workload, cost and time. Additionally, the protocol reduces the risk of large scale contaminations involving GMOs. Most importantly, many more independent transformations per day can be performed using this modified protocol.
A Simplified Control Method for Tie-Line Power of DC Micro-Grid
Directory of Open Access Journals (Sweden)
Yanbo Che
2018-04-01
Full Text Available Compared with the AC micro-grid, the DC micro-grid has low energy loss and no issues of frequency stability, which makes it more accessible for distributed energy. Thus, the DC micro-grid has good potential for development. A variety of renewable energy sources are included in the DC micro-grid; these are easily affected by the environment, causing fluctuation of the DC voltage. For a grid-connected DC micro-grid with a droop control strategy, the tie-line power is affected by fluctuations in the DC voltage, which sets higher requirements for coordinated control of the DC micro-grid. This paper presents a simplified control method to maintain a constant tie-line power that is suitable for the DC micro-grid with the droop control strategy. By coordinating the designs of the droop control characteristics of generators, energy storage units and the grid-connected inverter, a dead band is introduced to the droop control to improve the system performance. The tie-line power in the steady state is constant. When a large disturbance occurs, the AC power grid can provide power support to the micro-grid in time. The simulation example verifies the effectiveness of the proposed control strategy.
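The dead-band droop idea can be sketched as follows. The voltage set point, droop gain and band width are illustrative values, not the paper's design parameters: inside the band the grid-connected converter exchanges no power (constant tie-line power), while a large deviation triggers grid support.

```python
def droop_power(v_dc, v_ref=400.0, k_droop=5.0, dead_band=4.0):
    """Voltage-droop characteristic with a dead band (kW per volt gain
    and all set points are illustrative). Inside the band the converter
    exchanges no power, so the tie-line power stays constant; outside
    it, power grows linearly with the voltage deviation."""
    dv = v_dc - v_ref
    if abs(dv) <= dead_band:
        return 0.0
    sign = 1.0 if dv > 0 else -1.0
    return k_droop * (dv - sign * dead_band)

# Small voltage fluctuations are absorbed locally by storage/generators...
inside = droop_power(402.0)   # within the dead band -> no tie-line change
# ...while a large disturbance draws support from the AC grid
outside = droop_power(390.0)  # 10 V sag -> power import
```

In a full scheme the storage units and generators would use droop curves without this dead band, so they react first and the tie-line converter only acts on large excursions.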
Simplified methods for in vivo measurement of acetylcholinesterase activity in rodent brain
International Nuclear Information System (INIS)
Kilbourn, Michael R.; Sherman, Phillip S.; Snyder, Scott E.
1999-01-01
Simplified methods for in vivo studies of acetylcholinesterase (AChE) activity in rodent brain were evaluated using N-[11C]methylpiperidinyl propionate ([11C]PMP) as an enzyme substrate. Regional mouse brain distributions were determined at 1 min (representing initial brain uptake) and 30 min (representing trapped product) after intravenous [11C]PMP administration. Single time point tissue concentrations (percent injected dose/gram at 30 min), tissue concentration ratios (striatum/cerebellum and striatum/cortex ratios at 30 min), and regional tissue retention fractions (defined as percent injected dose 30 min/percent injected dose 1 min) were evaluated as measures of AChE enzymatic activity in mouse brain. Studies were carried out in control animals and after dosing with phenserine, a selective centrally active AChE inhibitor; neostigmine, a peripheral cholinesterase inhibitor; and a combination of the two drugs. In control and phenserine-treated animals, absolute tissue concentrations and regional retention fractions provide good measures of dose-dependent inhibition of brain AChE; tissue concentration ratios, however, provide erroneous conclusions. Peripheral inhibition of cholinesterases, which changes the blood pharmacokinetics of the radiotracer, diminishes the sensitivity of all measures to detect changes in central inhibition of the enzyme. We conclude that certain simple measures of AChE hydrolysis rates for [11C]PMP are suitable for studies where alterations of the peripheral blood metabolism of the tracer are kept to a minimum
Finite element method solution of simplified P3 equation for flexible geometry handling
International Nuclear Information System (INIS)
Ryu, Eun Hyun; Joo, Han Gyu
2011-01-01
In order to efficiently obtain core flux solutions which would be much closer to the transport solution than the diffusion solution is, without being limited by the geometry of the core, the simplified P3 (SP3) equation is solved with the finite element method (FEM). A generic mesh generator, GMSH, is used to generate linear and quadratic mesh data. The linear system resulting from the SP3 FEM discretization is solved by Krylov subspace methods (KSM). A symmetric form of the SP3 equation is derived to apply the conjugate gradient method rather than the KSMs for nonsymmetric linear systems. An optional iso-parametric quadratic mapping scheme, which selectively models nonlinear shapes with a quadratic mapping to prevent significant mismatch in local domain volume, is also implemented for efficient handling of arbitrary geometry. The gain in accuracy attainable by the SP3 solution over the diffusion solution is assessed by solving numerous benchmark problems having various core geometries, including the IAEA PWR problems involving rectangular fuels and the Takeda fast reactor problems involving hexagonal fuels. The reference transport solution is produced by the McCARD Monte Carlo code, and the multiplication factor and power distribution errors are assessed. In addition, the effect of quadratic mapping is examined for circular cell problems. It is shown that significant accuracy gain is possible with the SP3 solution for the fast reactor problems, whereas only marginal improvement is noted for thermal reactor problems. The quadratic mapping is also quite effective for handling geometries with curvature. (author)
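As a 1-D stand-in for the symmetric FEM system described above, the sketch below assembles linear finite elements for a diffusion-removal equation and solves the resulting symmetric positive-definite system with a hand-written conjugate gradient loop. This is illustrative only; the paper's solver operates on 2-D multigroup SP3 systems over unstructured meshes.

```python
import numpy as np

def fem_diffusion_1d(n_elems, length, d_coef, sigma, source):
    """Linear-FEM discretization of -D u'' + sigma*u = S on (0, L)
    with u = 0 at both ends. The assembled matrix is symmetric positive
    definite, so a plain conjugate gradient iteration applies."""
    h = length / n_elems
    n = n_elems - 1  # interior nodes
    # Tridiagonal system: stiffness D/h*[1,-1;-1,1] plus consistent mass
    main = np.full(n, 2 * d_coef / h + sigma * 2 * h / 3)
    off = np.full(n - 1, -d_coef / h + sigma * h / 6)
    b = np.full(n, source * h)  # consistent load for constant source

    def matvec(x):
        y = main * x
        y[:-1] += off * x[1:]
        y[1:] += off * x[:-1]
        return y

    # Conjugate gradient iteration on the SPD tridiagonal system
    x = np.zeros(n)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(10 * n):
        ap = matvec(p)
        alpha = rs / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

flux = fem_diffusion_1d(n_elems=40, length=1.0, d_coef=1.0, sigma=0.5, source=1.0)
```

Symmetrizing the discretized operator is exactly what makes CG applicable; a nonsymmetric formulation would force GMRES-type Krylov methods instead.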
Onken, Reiner
2017-11-01
A relocatable ocean prediction system (ROPS) was applied to an observational data set collected in June 2014 in the waters west of Sardinia (western Mediterranean) in the framework of the REP14-MED experiment. The observational data, comprising more than 6000 temperature and salinity profiles from a fleet of underwater gliders and shipborne probes, were assimilated in the Regional Ocean Modeling System (ROMS), which is the heart of ROPS, and verified against independent observations from ScanFish tows by means of the forecast skill score as defined by Murphy (1993). A simplified objective analysis (OA) method was utilised for assimilation, taking account of only those profiles which were located within a predetermined time window W. As a result of a sensitivity study, the highest skill score was obtained for a correlation length scale C = 12.5 km, W = 24 h, and r = 1, where r is the ratio between the error of the observations and the background error, both for temperature and salinity. Additional ROPS runs showed that (i) the skill score of assimilation runs was mostly higher than the score of a control run without assimilation, (ii) the skill score increased with increasing forecast range, and (iii) the skill score for temperature was higher than the score for salinity in the majority of cases. Furthermore, it is demonstrated that the vast number of observations can be managed by the applied OA method without data reduction, enabling timely operational forecasts even on a commercially available personal computer or a laptop.
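The Murphy (1993) skill score used for verification is simply one minus the ratio of forecast to reference mean-square errors. A toy example with invented temperature values (the reference playing the role of the control run without assimilation):

```python
import numpy as np

def murphy_skill_score(forecast, reference, observed):
    """Murphy (1993) skill score: SS = 1 - MSE_f / MSE_ref.
    SS = 1 is a perfect forecast; SS > 0 beats the reference."""
    mse_f = np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2)
    mse_r = np.mean((np.asarray(reference) - np.asarray(observed)) ** 2)
    return 1.0 - mse_f / mse_r

# Toy verification against independent temperature observations (degC)
obs = np.array([15.2, 15.0, 14.8, 14.5])
assim_run = np.array([15.1, 15.1, 14.7, 14.6])    # with assimilation
control_run = np.array([14.6, 14.7, 14.3, 14.0])  # without assimilation
score = murphy_skill_score(assim_run, control_run, obs)
```

A negative score would mean the assimilation run is worse than the control, which is how the sensitivity study above discriminates between parameter choices.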
Directory of Open Access Journals (Sweden)
Hassan Sarailoo
2013-10-01
Full Text Available Objectives: The aim of this study was to extract suitable spatiotemporal and kinematic parameters to determine how Total Knee Replacement (TKR) alters patients' knee kinematics during gait, using a rapid and simplified quantitative two-dimensional gait analysis procedure. Methods: Two-dimensional kinematic gait patterns of 10 participants were collected before and after the TKR surgery, using a 60 Hz camcorder in the sagittal plane. Then, the kinematic parameters were extracted using the gait data. A student t-test was used to compare the group-average of spatiotemporal and peak kinematic characteristics in the sagittal plane. The knee condition was also evaluated using the Oxford Knee Score (OKS) questionnaire to ensure that each subject was placed in the right group. Results: The results showed a significant improvement in knee flexion during stance and swing phases after TKR surgery. The walking speed was increased as a result of stride length and cadence improvement, but this increment was not statistically significant. Both post-TKR and control groups showed an increment in spatiotemporal and peak kinematic characteristics between comfortable and fast walking speeds. Discussion: The objective kinematic parameters extracted from 2D gait data were able to show significant improvements of the knee joint after TKR surgery. The patients with TKR surgery were also able to improve their knee kinematics during fast walking speed equal to the control group. These results provide a good insight into the capabilities of the presented method to evaluate knee functionality before and after TKR surgery and to define a more effective rehabilitation program.
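The pre/post comparison described can be sketched as a paired t statistic on an extracted kinematic parameter. The flexion values below are hypothetical, invented purely to show the calculation; the study's actual data are not reproduced here.

```python
from statistics import mean, stdev
from math import sqrt

def paired_t_statistic(before, after):
    """Paired t statistic for a pre-/post-surgery comparison of one
    kinematic parameter (e.g. peak swing-phase knee flexion, degrees)."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical peak swing-phase knee flexion (degrees), n = 10 patients
pre = [42, 45, 40, 38, 44, 41, 39, 43, 37, 46]
post = [55, 58, 50, 49, 57, 54, 48, 56, 47, 59]
t_stat = paired_t_statistic(pre, post)
# Compare against the two-tailed t(9) critical value, about 2.262 at alpha = 0.05
```

With 10 subjects the statistic has 9 degrees of freedom; a value beyond the critical threshold indicates a significant kinematic change.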
International Nuclear Information System (INIS)
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-01-01
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the Compute Unified Device Architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30 to 16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning. (note)
Simplified calculation method for radiation dose under normal condition of transport
International Nuclear Information System (INIS)
Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.
1993-01-01
In order to estimate radiation dose during transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN and J-TRAN. Because these codes include functions for estimating doses not only under normal conditions but also in the case of accidents, when radioactive nuclides may leak and spread into the environment by atmospheric diffusion, the user needs special knowledge and experience. In this presentation, we describe how, with a view to preparing a method by which a person in charge of transportation can calculate doses under normal conditions, the main parameters upon which the dose depends were extracted and the dose for a unit of transportation was estimated. (J.P.N.)
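The simplest building block of such a normal-condition estimate is the unshielded point-source approximation D = Gamma * A / d^2. The gamma constant is nuclide specific and must be taken from authoritative tables; the Cs-137-like value below is an assumed illustration, not a reference value.

```python
def dose_rate_usv_h(activity_gbq, gamma_constant, distance_m):
    """Unshielded point-source approximation: dose rate = Gamma * A / d^2.
    `gamma_constant` in (uSv/h)*m^2/GBq is nuclide specific -- consult
    authoritative dose-rate-constant tables for real assessments."""
    return gamma_constant * activity_gbq / distance_m ** 2

# Illustrative only: 10 GBq source, assumed Cs-137-like Gamma ~ 92.7
rate = dose_rate_usv_h(10.0, 92.7, 2.0)
```

Real transport codes add shielding by the package, occupancy factors and exposure durations on top of this inverse-square baseline.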
Eloff, J N; Ntloedibe, D T; van Brummelen, R
2011-01-01
Three of the factors limiting the rational use of herbal medicine are uncertainty about effectivity, uncertainty about safety and variation in the quality of the product. Because many herbal medicines have been used over centuries by indigenous peoples, safety and effectivity are frequently not such a big concern. With more people collecting and distributing herbal medicine, however, the offered product is frequently not what the label indicates, either through a genuine mistake or through fraud, especially where expensive herbal medicine is concerned. Some wrong identifications have already led to serious side effects and deaths. Planar chromatography or thin layer chromatography [TLC] is widely used to verify the identity of plant extracts by determining the chemical fingerprint of the extracts. In a leading publication 17 different extractants, 41 solvent systems and 44 spray reagents have been used to verify the identity of important herbal preparations. We investigated whether a simplified system could be developed to aid small laboratories in identifying different herbal medicines. We compared the efficacy of different extractants, identified and developed three TLC solvent systems that would separate compounds with low, medium and high polarity, and then also investigated the use of several spray reagents. With acetone as extractant, benzene:ethanol:ammonia [9:1:0.1], chloroform:ethylacetate:formic acid [5:4:1] and ethylacetate:methanol:water [10:1.35:1] as TLC solvent systems, and vanillin-sulphuric acid as spray reagent, the identity of 81 samples of more than 50 herbal preparations could be verified on the basis of the chromatograms. The same product from different suppliers usually gave similar chromatograms. More importantly, in several cases it was clear that products with the same label were so different that a mistake must have occurred in the labelling. This method has found application in the quality control of the most important African medicinal
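The basic descriptor read off such chromatograms when comparing fingerprints is the retardation factor Rf of each spot; a trivial helper illustrates the calculation:

```python
def rf_value(compound_mm, solvent_front_mm):
    """Retardation factor of a spot on a TLC plate: migration distance
    of the compound divided by that of the solvent front, both measured
    from the origin. Always between 0 and 1."""
    if not 0 < compound_mm <= solvent_front_mm:
        raise ValueError("spot must lie between origin and solvent front")
    return compound_mm / solvent_front_mm

# Spot at 34 mm with the solvent front at 80 mm
rf = rf_value(34.0, 80.0)
```

Matching a sample's set of Rf values and spot colours against a reference extract, run on the same plate, is what underpins the identity verification described above.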
International Nuclear Information System (INIS)
Hübel, Hartwig; Willuweit, Adrian; Rudolph, Jürgen; Ziegler, Rainer; Lang, Hermann; Rother, Klemens; Deller, Simon
2014-01-01
As elastic–plastic fatigue analyses are still time consuming, the simplified elastic–plastic analysis (e.g. ASME Section III, NB-3228.5, the French RCC-M code, paragraphs B 3234.3, B 3234.5 and B 3234.6, and the German KTA rule 3201.2, paragraph 7.8.4) is often applied. Besides linearly elastic analyses with factorial plasticity correction (Ke factors), direct methods are an option. In fact, calculation effort and accuracy of results grow in the following graded scheme: a) linearly elastic analysis along with Ke correction, b) direct methods for the determination of stabilized elastic–plastic strain ranges and c) incremental elastic–plastic methods for the determination of stabilized elastic–plastic strain ranges. The paper concentrates on option b) by substantiating the practical applicability of the simplified theory of plastic zones (STPZ, based on Zarka's method) and, for comparison, the established Twice-Yield method. The Twice-Yield method is explicitly addressed in ASME Code, Section VIII, Div. 2. Application-relevant aspects are particularly addressed. Furthermore, the applicability of the STPZ for arbitrary load time histories in connection with an appropriate cycle counting method is discussed. Note that the STPZ is applicable both for the determination of (fatigue relevant) elastic–plastic strain ranges and (ratcheting relevant) locally accumulated strains. This paper concentrates on the performance of the method in terms of the determination of elastic–plastic strain ranges and fatigue usage factors. The additional performance in terms of locally accumulated strains and ratcheting will be discussed in a future publication. - Highlights: • Simplified elastic–plastic fatigue analyses. • Simplified theory of plastic zones. • Thermal cyclic loading. • Twice-Yield method. • Practical application examples
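The Ke plasticity correction mentioned above has, in the ASME Section III NB-3228.5 formulation, a piecewise form in the primary-plus-secondary stress range Sn. The sketch below reproduces that general shape only; edition-specific details and the material parameters m and n must be taken from the governing code edition, and the example values are assumed.

```python
def ke_factor(sn, sm3, m, n):
    """Simplified elastic-plastic plasticity correction in the
    ASME III NB-3228.5 form (sketch only -- verify against the
    governing code edition and its material tables).
    sn:  primary-plus-secondary stress intensity range
    sm3: the 3*Sm limit; m, n: material parameters."""
    if sn <= sm3:
        return 1.0                                       # elastic regime
    if sn < m * sm3:
        return 1.0 + (1.0 - n) / (n * (m - 1.0)) * (sn / sm3 - 1.0)
    return 1.0 / n                                       # plateau value

# Assumed austenitic-stainless-style parameters: m = 1.7, n = 0.3
ke = ke_factor(sn=500.0, sm3=360.0, m=1.7, n=0.3)
# The elastically computed alternating stress is then amplified by Ke
```

The conservatism of this factor for thermal cyclic loading is precisely what motivates the direct methods (STPZ, Twice-Yield) compared in the paper.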
International Nuclear Information System (INIS)
Dupas, P.
1996-01-01
EDF has developed a software package of simplified methods (proprietary ones or from the literature) in order to study the thermal and mechanical behaviour of a PWR pressure vessel during a severe accident involving corium localization in the vessel lower head. Using a part of this package, we can evaluate, for instance, successively: the heat flux at the inner surface of the vessel (conductive or convective pool of corium); the thermal exchange coefficient between the vessel and the outside (dry pit or flooded pit, with or without watertight thermal insulation); the complete thermal evolution of the vessel (temperature profile, melting); the possible global plastic failure of the vessel; and the creep behaviour in the vessel. These simplified methods are a low-cost alternative to finite element calculations, which are nevertheless used to validate them while experimental results are awaited. (authors)
International Nuclear Information System (INIS)
Dupas, P.; Schneiter, J.R.
1996-01-01
EDF has developed a software package of simplified methods (proprietary ones or from the literature) in order to study the thermal and mechanical behavior of a PWR pressure vessel during a severe accident involving corium localization in the vessel lower head. Using a part of this package, the authors can evaluate, for instance, successively: the heat flux at the inner surface of the vessel (conductive or convective pool of corium); the thermal exchange coefficient between the vessel and the outside (dry pit or flooded pit, with or without watertight thermal insulation); the complete thermal evolution of the vessel (temperature profile, melting); the possible global plastic failure of the vessel; and the creep behavior in the thickness of the vessel. These simplified methods are a cost-effective alternative to finite element calculations, which are nevertheless used to validate them while experimental results are awaited
An assessment of simplified methods to determine damage from ship-to-ship collisions
International Nuclear Information System (INIS)
Parks, M.B.; Ammerman, D.J.
1996-01-01
Sandia National Laboratories (SNL) is studying the safety of shipping radioactive materials (RAM) by sea in the SeaRAM project (McConnell, et al. 1995), which is sponsored by the US Department of Energy (DOE). The project is concerned with the potential effects of ship collisions and fires on onboard RAM packages. Existing methodologies are being assessed to determine their adequacy to predict the effect of ship collisions and fires on RAM packages and to estimate whether or not a given accident might lead to a release of radioactivity. The eventual goal is to develop a set of validated methods, checked by comparison with test data and/or detailed finite element analyses, for predicting the consequences of ship collisions and fires. These methods could then be used to provide input for overall risk assessments of RAM sea transport. The emphasis of this paper is on methods for predicting effects of ship collisions
Simplified Method for Preliminary EIA of WE Installations based on New Technology Classification
DEFF Research Database (Denmark)
Margheritini, Lucia
2010-01-01
The Environmental Impact Assessment (EIA) is an environmental management instrument implemented worldwide. Full scale WECs are expected to be subject to EIA. The consents application process can be very demanding for Wave Energy Converter (WEC) developers. The process is possibly aggravated...... depending on a few strategic parameters to simplify and speed up the scoping procedure and to provide an easier understanding of the technologies to the authorities and bodies involved in the EIA of WECs....
Directory of Open Access Journals (Sweden)
Julia Chernova
2016-07-01
Full Text Available Abstract Background Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming, and the precision of MC estimates depends on the size of the simulated data, which can hinder reproducibility of results. Methods We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out to be non-significant at the 5% significance level (p-value 0.062), but it was estimated as significant in the correctly specified model (p-value 0.016). Conclusions The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. A novel Dual-Level Seismic
Simplified Method of Optimal Sizing of a Renewable Energy Hybrid System for Schools
Directory of Open Access Journals (Sweden)
Jiyeon Kim
2016-11-01
Full Text Available Schools are a suitable public building for renewable energy systems. Renewable energy hybrid systems (REHSs have recently been introduced in schools following a new national regulation that mandates renewable energy utilization. An REHS combines the common renewable-energy sources such as geothermal heat pumps, solar collectors for water heating, and photovoltaic systems with conventional energy systems (i.e., boilers and air-source heat pumps. Optimal design of an REHS by adequate sizing is not a trivial task because it usually requires intensive work including detailed simulation and demand/supply analysis. This type of simulation-based approach for optimization is difficult to implement in practice. To address this, this paper proposes simplified sizing equations for renewable-energy systems of REHSs. A conventional optimization process is used to calculate the optimal combinations of an REHS for cases of different numbers of classrooms and budgets. On the basis of the results, simplified sizing equations that use only the number of classrooms as the input are proposed by regression analysis. A verification test was carried out using an initial conventional optimization process. The results show that the simplified sizing equations predict similar sizing results to the initial process, consequently showing similar capital costs within a 2% error.
Dose estimation by biological methods
International Nuclear Information System (INIS)
Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.
1997-01-01
Human beings are exposed to artificial radiation sources mainly in two ways: the first concerns occupationally exposed personnel (POE), and the second, persons who require radiological treatment. A third, less common, way is through accidents. In all these conditions it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows analysis of chromosome aberrations. (Author)
International Nuclear Information System (INIS)
Eide, S.A.; Smith, T.H.; Peatross, R.G.; Stepan, I.E.
1996-09-01
This report presents a simplified method to assess the health and safety risk of Environmental Management activities of the US Department of Energy (DOE). The method applies to all types of Environmental Management activities including waste management, environmental restoration, and decontamination and decommissioning. The method is particularly useful for planning or tradeoff studies involving multiple conceptual options because it combines rapid evaluation with a quantitative approach. The method is also potentially applicable to risk assessments of activities other than DOE Environmental Management activities if rapid quantitative results are desired
Internal Dosimetry Intake Estimation using Bayesian Methods
International Nuclear Information System (INIS)
Miller, G.; Inkret, W.C.; Martz, H.F.
1999-01-01
New methods for the inverse problem of internal dosimetry are proposed based on evaluating expectations of the Bayesian posterior probability distribution of intake amounts, given bioassay measurements. These expectation integrals are normally of very high dimension and hence impractical to use. However, the expectations can be algebraically transformed into a sum of terms representing different numbers of intakes, with a Poisson distribution of the number of intakes. This sum often rapidly converges, when the average number of intakes for a population is small. A simplified algorithm using data unfolding is described (UF code). (author)
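The core of the approach above, evaluating a posterior expectation of intake amount given a bioassay measurement, can be illustrated with a deliberately tiny numerical-integration sketch. The prior (exponential), measurement model (Gaussian) and all parameter values below are invented for illustration; this does not reproduce the authors' UF code or their Poisson sum over numbers of intakes.

```python
import numpy as np

# Toy posterior-mean intake for a single intake event, by grid integration.
# retention: hypothetical fraction of intake seen in the bioassay sample.
def posterior_mean_intake(measurement, retention=0.1, sigma=0.5, prior_scale=10.0):
    intakes = np.linspace(0.0, 100.0, 2001)            # integration grid
    prior = np.exp(-intakes / prior_scale)             # exponential prior (unnormalised)
    like = np.exp(-0.5 * ((measurement - retention * intakes) / sigma) ** 2)
    post = prior * like                                # unnormalised posterior
    return np.trapz(intakes * post, intakes) / np.trapz(post, intakes)

print(round(posterior_mean_intake(1.0), 2))
```

In the full problem this expectation is taken over many possible intakes, which is what makes the algebraic reduction to a Poisson-weighted sum of low-dimensional terms valuable.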
Ochoa-Avendaño, J.; Garzon-Alvarado, D. A.; Linero, Dorian L.; Cerrolaza, M.
2017-01-01
This paper presents the formulation, implementation, and validation of a simplified qualitative model to determine the crack path of solids under static loads, infinitesimal strain, and plane stress conditions. The model is based on the finite element method with a special meshing technique, in which nonlinear link elements are included between the faces of the linear triangular elements. The stiffness loss of some link elements represents the crack opening. Three experimental tests of bending...
A simplified method for random vibration analysis of structures with random parameters
International Nuclear Information System (INIS)
Ghienne, Martin; Blanzé, Claude
2016-01-01
Piezoelectric patches with adapted electrical circuits or viscoelastic dissipative materials are two solutions particularly adapted to reduce vibration of light structures. To accurately design these solutions, it is necessary to describe precisely the dynamical behaviour of the structure. It may quickly become computationally intensive to describe this behaviour robustly for a structure with nonlinear phenomena, such as contact or friction for bolted structures, and uncertain variations of its parameters. The aim of this work is to propose a non-intrusive reduced stochastic method to characterize robustly the vibrational response of a structure with random parameters. Our goal is to characterize the eigenspace of linear systems with dynamic properties considered as random variables. This method is based on a separation of random aspects from deterministic aspects and allows us to estimate the first central moments of each random eigenfrequency with a single deterministic finite element computation. The method is applied to a frame with several Young's moduli modeled as random variables. This example could be expanded to a bolted structure including piezoelectric devices. The method needs to be enhanced when random eigenvalues are closely spaced. An indicator with no additional computational cost is proposed to characterize the "proximity" of two random eigenvalues. (paper)
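The idea of separating random from deterministic aspects can be made concrete on a toy case: when the stiffness matrix is linear in a single random Young's modulus, one eigensolve of the mean model yields the first two moments of every random eigenvalue. The 2-DOF spring-mass system below is invented for illustration and is far simpler than the paper's frame model with several random moduli.

```python
import numpy as np

# Toy 2-DOF system with unit masses: E = E0*(1 + eps), eps zero-mean with
# relative standard deviation sigma_eps. K is linear in E, so each
# eigenvalue lambda_i scales with E, and a single deterministic eigensolve
# of the mean model gives the mean and std of every random eigenvalue.
E0, sigma_eps = 210e9, 0.05
K0 = E0 * np.array([[2.0, -1.0], [-1.0, 1.0]])   # mean stiffness matrix

lam, phi = np.linalg.eigh(K0)     # single deterministic computation
mean_lam = lam                    # E[lambda_i] to first order
std_lam = lam * sigma_eps         # std[lambda_i], since lambda_i is linear in E
print(np.round(std_lam / mean_lam, 3))   # → [0.05 0.05]
```

The uniform 5% relative scatter is an artifact of the single shared modulus; with several independent random moduli, as in the paper, the per-mode sensitivities differ and closely spaced eigenvalues require the extra care the abstract mentions.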
Wang, Xinwei; Chen, Zhe; Sun, Fangyuan; Zhang, Hang; Jiang, Yuyan; Tang, Dawei
2018-03-01
Heat transfer in nanostructures is of critical importance for a wide range of applications such as functional materials and thermal management of electronics. Time-domain thermoreflectance (TDTR) has been proved to be a reliable measurement technique for the thermal property determinations of nanoscale structures. However, it is difficult to determine more than three thermal properties at the same time. Heat transfer model simplifications can reduce the fitting variables and provide an alternative way for thermal property determination. In this paper, two simplified models are investigated and analyzed by the transform matrix method and simulations. TDTR measurements are performed on Al-SiO2-Si samples with different SiO2 thickness. Both theoretical and experimental results show that the simplified tri-layer model (STM) is reliable and suitable for thin film samples with a wide range of thickness. Furthermore, the STM can also extract the intrinsic thermal conductivity and interfacial thermal resistance from serial samples with different thickness.
DEFF Research Database (Denmark)
Vedel-Larsen, Esben; Fuglø, Jacob; Channir, Fouad
2010-01-01
, are variable and depend on cognitive function. This study compares the performance of a simplified Kalman filter with Sliding Window Averaging in tracking dynamical changes in single trial P300. The comparison is performed on simulated P300 data with added background noise consisting of both simulated and real...... background EEG in various input signal to noise ratios. While both methods can be applied to track dynamical changes, the simplified Kalman filter has an advantage over the Sliding Window Averaging, most notable in a better noise suppression when both are optimized for faster changing latency and amplitude...
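The comparison in the study above can be sketched with a minimal scalar example: a random-walk Kalman filter versus a sliding-window average, both tracking a slowly drifting single-trial amplitude in noise. The drift shape, noise levels and tuning constants below are invented; a real P300 tracker would also model latency.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
true_amp = 5 + 2 * np.sin(np.linspace(0, 3 * np.pi, n_trials))  # drifting amplitude
obs = true_amp + rng.normal(0, 2.0, n_trials)                    # noisy single trials

# Simplified scalar Kalman filter with a random-walk state model.
q, r = 0.05, 4.0               # process and measurement noise variances (tuning)
x, p = obs[0], 1.0
kalman = np.empty(n_trials)
for i, z in enumerate(obs):
    p += q                     # predict: state uncertainty grows
    k = p / (p + r)            # Kalman gain
    x += k * (z - x)           # update toward the new observation
    p *= (1 - k)
    kalman[i] = x

# Sliding-window average over 20 trials for comparison.
window = 20
sliding = np.convolve(obs, np.ones(window) / window, mode="same")

print(round(np.mean((kalman - true_amp) ** 2), 2),
      round(np.mean((sliding - true_amp) ** 2), 2))
```

The trade-off the study measures shows up directly in the tuning: a larger process variance q lets the Kalman filter follow faster latency/amplitude changes at the cost of less noise suppression, whereas the window average can only trade the two off through its window length.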
A Method of Nuclear Software Reliability Estimation
International Nuclear Information System (INIS)
Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol
2011-01-01
A method of estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects from a small set of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects, which are of on-demand type and directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed
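A minimal sketch of the NHPP-based SRGM idea above, using the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) as a standard NHPP instance. The failure data are hypothetical, and the fit below is plain least squares; the paper's Bayesian parameter estimation and its specific modeling schemes are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto mean value function: expected cumulative defects found by
# test time t, where a = total defect content and b = detection rate.
def mean_failures(t, a, b):
    return a * (1.0 - np.exp(-b * t))

t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)                # test weeks
failures = np.array([8, 14, 19, 22, 25, 26, 27, 28], dtype=float)  # hypothetical data

(a_hat, b_hat), _ = curve_fit(mean_failures, t, failures, p0=(30.0, 0.3))
remaining = a_hat - failures[-1]    # estimated residual defect count
print(round(a_hat, 1), round(b_hat, 2), round(remaining, 1))
```

The estimated residual defect count is the quantity of interest for on-demand safety functions: it feeds directly into a probability-of-failure-on-demand style reliability statement.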
Method-related estimates of sperm vitality.
Cooper, Trevor G; Hellenkemper, Barbara
2009-01-01
Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.
Fukuda, David H; Smith-Ryan, Abbie E; Kendall, Kristina L; Moon, Jordan R; Stout, Jeffrey R
2013-12-01
The purpose of this investigation was to determine body composition classification using field-based testing measurements in healthy elderly men and women. The use of isoperformance curves is presented as a method for this determination. Baseline values from 107 healthy Caucasian men and women, over the age of 65 years, who participated in a separate longitudinal study, were used for this investigation. Field-based measurements of age, height, weight, body mass index (BMI), and handgrip strength were recorded on an individual basis. Relative skeletal muscle index (RSMI) and body fat percentage (FAT%) were determined by dual-energy X-ray absorptiometry (DXA) for each participant. Sarcopenia cut-off values for RSMI of 7.26 kg·m⁻² for men and 5.45 kg·m⁻² for women and elderly obesity cut-off values for FAT% of 27% for men and 38% for women were used. Individuals above the RSMI cut-off and below the FAT% cut-off were classified in the normal phenotype category, while individuals below the RSMI cut-off and above the FAT% cut-off were classified in the sarcopenic-obese phenotype category. Prediction equations for RSMI and FAT% from sex, BMI, and handgrip strength values were developed using multiple regression analysis. The prediction equations were validated using double cross-validation. The final regression equation developed to predict FAT% from sex, BMI, and handgrip strength resulted in a strong relationship (adjusted R² = 0.741) to DXA values with a low standard error of the estimate (SEE = 3.994%). The final regression equation developed to predict RSMI from the field-based testing measures also resulted in a strong relationship (adjusted R² = 0.841) to DXA values with a low standard error of the estimate (SEE = 0.544 kg·m⁻²). Isoperformance curves were developed from the relationship between BMI and handgrip strength for men and women with the aforementioned clinical phenotype classification criteria. These visual representations were used to aid in the
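The regression step above can be sketched as ordinary least squares on the three field-based predictors. The data below are synthetic (the coefficients and noise level are invented, not the study's DXA-referenced values); the point is the shape of the fit and the standard error of the estimate the abstract reports.

```python
import numpy as np

# Synthetic stand-in for the study sample: predict FAT% from sex
# (0 = male, 1 = female), BMI and handgrip strength via least squares.
rng = np.random.default_rng(0)
n = 107
sex = rng.integers(0, 2, n)
bmi = rng.normal(26, 3, n)
grip = rng.normal(30, 8, n) - 5 * sex                 # women grip lower on average
fat = 10 + 8 * sex + 0.9 * bmi - 0.2 * grip + rng.normal(0, 2, n)  # synthetic truth

X = np.column_stack([np.ones(n), sex, bmi, grip])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, fat, rcond=None)
resid = fat - X @ beta
see = np.sqrt(resid @ resid / (n - X.shape[1]))       # standard error of estimate
print(np.round(beta, 2), round(see, 2))
```

An isoperformance curve then follows by fixing a predicted FAT% (or RSMI) threshold and tracing the BMI-handgrip combinations that satisfy it, which is what makes the classification usable from field measurements alone.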
International Nuclear Information System (INIS)
Ruivo, C.R.; Vaz, D.C.
2015-01-01
Highlights: • The transient thermal behaviour of external multilayer walls of buildings is studied. • Reference results for four representative walls, obtained with a numerical model, are provided. • Shortcomings of approaches based on the Mackey-and-Wright method are identified. • Handling full-feature excitations with Fourier series decomposition improves accuracy. • A simpler, yet accurate, promising novel approach to predict heat gain is proposed. - Abstract: Nowadays, simulation tools are available for calculating the thermal loads of multiple rooms of buildings, for given inputs. However, due to inaccuracies or uncertainties in some of the input data (e.g., thermal properties, air infiltration flow rates, building occupancy), the evaluated thermal load may represent no more than just an estimate of the actual thermal load of the spaces. Accordingly, in certain practical situations, simplified methods may offer a more reasonable trade-off between effort and results accuracy than advanced software. Hence, despite the advances in computing power over the last decades, simplified methods for the evaluation of thermal loads are still of great interest nowadays, for both the practicing engineer and the graduating student, since these can be readily implemented or developed in common computational tools, like a spreadsheet. The method of Mackey and Wright (M&W) is a simplified method that, from values of the decrement factor and time lag of a wall (or roof), estimates the instantaneous rate of heat transfer through its indoor surface. It assumes cyclic behaviour and shows good accuracy when the excitation and response have matching shapes, but involves non-negligible error otherwise, for example in the case of walls of high thermal inertia. The aim of this study is to develop a simplified procedure that considerably improves the accuracy of the M&W method, particularly for excitations that noticeably depart from the sinusoidal shape, while not
Directory of Open Access Journals (Sweden)
Tweya Hannock
2012-07-01
Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among
Ikigai, H; Seki, K; Nishihara, S; Masuda, S
1988-01-01
A simplified method for the preparation of concentrated exoproteins, including protein A and alpha-toxin produced by Staphylococcus aureus, was successfully devised. The concentrated proteins were obtained by cultivating S. aureus organisms on the surface of a cellophane bag containing liquid medium, enclosed in a sterilized glass flask. With the same amount of medium, the total amount of protein obtained by the method presented here was identical to that obtained by conventional liquid culture. The concentration of proteins obtained by the method, however, was high enough to observe their distinct stained bands on polyacrylamide gel electrophoresis. This method was considered quite useful not only for large-scale cultivation for the purification of staphylococcal proteins but also for small-scale studies using the proteins. A precise description of the method is presented and its possible usefulness discussed.
The PEMFC-integrated CO oxidation — a novel method of simplifying the fuel cell plant
Rohland, Bernd; Plzak, Vojtech
Natural gas and methanol are the most economical fuels for residential fuel cell power generators as well as for mobile PEM fuel cells. However, they have to be reformed with steam into hydrogen, which must be cleaned of CO by the shift reaction and by partial oxidation to a level of no more than 30 ppm CO. This level is set by the Pt/Ru-C anode of the PEMFC. A higher partial oxidation reaction rate for CO than that of Pt/Ru-C can be achieved in an oxidic Au catalyst system. In the Fe₂O₃-Au system, a reaction rate of 2·10⁻³ mol CO/(s·g Au) at 1000 ppm CO and 5% "air bleed" at 80°C is achieved. This high rate allows the construction of a catalyst sheet for each cell within a PEMFC stack. Practical and theoretical current/voltage characteristics of PEMFCs with the catalyst sheet are presented at 1000 ppm CO in hydrogen with 5% "air bleed". This gives the possibility of simplifying the gas processor of the plant.
A method of estimating log weights.
Charles N. Mann; Hilton H. Lysons
1972-01-01
This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
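The method above reduces to one multiplication once the local density index has been measured. A tiny sketch, with illustrative numbers that are not from the paper:

```python
# Estimate a log's weight from its volume and a local density index
# (pounds per cubic foot) obtained from weighed, measured truckloads.
def log_weight_lb(volume_ft3, density_index_lb_per_ft3):
    return volume_ft3 * density_index_lb_per_ft3

# Hypothetical density index from one weighed truckload:
# 42,000 lb of logs scaling 700 cubic feet.
density_index = 42000 / 700                  # 60 lb/ft^3
print(log_weight_lb(35.0, density_index))    # a 35 ft^3 log → 2100.0 lb
```

In practice the index is kept per species and locality, since wood density varies with both.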
Directory of Open Access Journals (Sweden)
Wei Li
2012-01-01
Full Text Available An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme of the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis function of the finite element scheme. Therefore, the finite element calculation can be carried out without the time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, with its excess time cost, can be avoided. XFEM facilitates application to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging.
Nonparametric methods for volatility density estimation
Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.
2009-01-01
Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
To guarantee the safety, high efficiency and long lifetime of lithium-ion batteries, an advanced battery management system requires a physics-meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computation burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced-order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced-order transfer function is computationally light and preserves physical meaning through the presence of parameters such as the solid/electrolyte diffusion coefficients (Ds & De) and the particle radius. Simulation illustrates that the proposed simplified model maintains high accuracy for electrolyte-phase concentration (Ce) predictions, with 0.8% and 0.24% modeling error respectively, when compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, this simplified model yields a significantly reduced computational burden, which benefits its real-time application.
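The Padé step alone can be shown on a stand-in function: reduce a transcendental function, via its Taylor coefficients, to a low-order rational approximant. The example uses exp(x) rather than the paper's electrolyte-diffusion transfer function, which is not reproduced here.

```python
import numpy as np
from scipy.interpolate import pade

# Padé [2/2] approximant of exp(x) from its Taylor series coefficients
# (coefficients in increasing order: 1 + x + x^2/2 + x^3/6 + x^4/24).
taylor = [1, 1, 1/2, 1/6, 1/24]
p, q = pade(taylor, 2)            # denominator order 2 → numerator order 2

x = 0.5
approx = p(x) / q(x)              # rational approximation of exp(0.5)
print(round(approx, 6), round(np.exp(x), 6))
```

The same mechanics apply to the battery model: the transcendental impedance is expanded in the Laplace variable and replaced by a low-order rational transfer function, which is what makes the model cheap enough for a real-time battery management system.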
Spectrum estimation method based on marginal spectrum
International Nuclear Information System (INIS)
Cai Jianhua; Hu Weiwen; Wang Xianchun
2011-01-01
The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The process of obtaining the marginal spectrum in the HHT method was given and the linear property of the marginal spectrum was demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum were further analyzed. Then the Hilbert spectrum estimation algorithm was discussed in detail, and the simulation results were given at last. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
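The marginal-spectrum idea can be sketched for a single-component signal: take the analytic signal, accumulate instantaneous amplitude into instantaneous-frequency bins. A full HHT would first decompose the signal into intrinsic mode functions via EMD, which is omitted here.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)                  # 50 Hz test tone

analytic = hilbert(x)                           # analytic signal
amp = np.abs(analytic)                          # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs   # instantaneous frequency (Hz)

# Marginal spectrum: total amplitude contributed at each frequency bin.
bins = np.arange(0, 500, 5.0)
marginal, _ = np.histogram(inst_freq, bins=bins, weights=amp[:-1])
peak_freq = bins[np.argmax(marginal)]
print(peak_freq)
```

Because the frequency axis comes from instantaneous frequency rather than from a fixed-length transform window, the resolution does not degrade for short records in the way an FFT periodogram's does, which is the comparison the abstract draws.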
A simplified parsimonious higher order multivariate Markov chain model
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.
A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching
Mittag, Nikolas
2016-01-01
Models with high dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assum...
Directory of Open Access Journals (Sweden)
Tatsuhiro Gotanda
2016-01-01
Full Text Available Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
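The calibration-curve step common to both methods above is a linear fit of film density against known absorbed dose, inverted to read dose from a measured density. The readings below are invented; the study's own gradients are in its units, not these.

```python
import numpy as np

# Hypothetical net optical densities of film strips at known doses.
dose_mGy = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
density = np.array([1.95, 1.80, 1.66, 1.35, 0.74])

# Linear density-absorbed dose calibration curve.
slope, intercept = np.polyfit(dose_mGy, density, 1)

def dose_from_density(d):
    # invert the calibration line to estimate absorbed dose
    return (d - intercept) / slope

print(round(slope, 3), round(dose_from_density(1.50), 1))
```

The simplified method's gain is in acquisition, not arithmetic: a step-shaped filter produces all calibration points in one exposure instead of one exposure per dose level.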
Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans
The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map of the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values >0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, which yield lower correlation values. Nevertheless, the simplified method can still be useful in such a situation to get an overview of the spatial distribution of the emissions generated by traffic activities.
Tsuboyama, Shoko; Kodama, Yutaka
2014-01-01
The liverwort Marchantia polymorpha L. is being developed as an emerging model plant, and several transformation techniques were recently reported. Examples are biolistic- and Agrobacterium-mediated transformation methods. Here, we report a simplified method for Agrobacterium-mediated transformation of sporelings, and it is termed Agar-utilized Transformation with Pouring Solutions (AgarTrap). The procedure of the AgarTrap was carried out by simply exchanging appropriate solutions in a Petri dish, and completed within a week, successfully yielding sufficient numbers of independent transformants for molecular analysis (e.g. characterization of gene/protein function) in a single experiment. The AgarTrap method will promote future molecular biological study in M. polymorpha.
Stress estimation in reservoirs using an integrated inverse method
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate this initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm so as to fit wellbore stress data deduced from leak-off tests and breakouts. Because the geological history is disregarded and the rheological assumptions are simplified, only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
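The inversion idea above can be sketched in miniature: fit a boundary stress profile that is linear with depth to wellbore stress observations by minimising the misfit. The paper uses CMA-ES on piecewise-linear profiles; the exhaustive grid search and all numbers below are purely illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: fit a linear-with-depth stress profile
# sigma(z) = a + b*z to synthetic wellbore stress data (e.g. from
# leak-off tests) by minimising the sum of squared misfits.

def misfit(a, b, data):
    """Sum of squared differences between model a + b*z and observations."""
    return sum((a + b * z - s) ** 2 for z, s in data)

def grid_search_fit(data, a_range, b_range, n=101):
    """Exhaustive search over (a, b); stands in for CMA-ES here."""
    best = None
    for i in range(n):
        a = a_range[0] + (a_range[1] - a_range[0]) * i / (n - 1)
        for j in range(n):
            b = b_range[0] + (b_range[1] - b_range[0]) * j / (n - 1)
            m = misfit(a, b, data)
            if best is None or m < best[0]:
                best = (m, a, b)
    return best[1], best[2]

# Invented leak-off-test data: (depth [m], minimum stress [MPa])
data = [(1000, 18.0), (1500, 26.5), (2000, 35.0), (2500, 43.5)]
a, b = grid_search_fit(data, (0.0, 5.0), (0.01, 0.03))
```

A real application would replace the grid search with a stochastic optimizer such as CMA-ES and evaluate the misfit through the finite element model rather than a closed-form profile.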
Methods for risk estimation in nuclear energy
Energy Technology Data Exchange (ETDEWEB)
Gauvenet, A [CEA, 75 - Paris (France)
1979-01-01
The author presents methods for estimating the different risks related to nuclear energy: immediate or delayed risks, individual or collective risks, risks of accidents and long-term risks. These methods have attained a highly valid level of elaboration and their application to other industrial or human problems is currently under way, especially in English-speaking countries.
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...
Khalind, Omed Saleem
2015-01-01
Unlike other security methods, steganography hides the very existence of secret messages rather than just their content. Steganography and steganalysis are strongly related to each other: new steganographic methods should be evaluated with current steganalysis methods and vice versa. Since steganography is considered broken once the stego object is recognised, undetectability is the most important property of any steganographic system. Digital image files are excellent media fo...
Comparison of methods for estimating premorbid intelligence
Bright, Peter; van der Linde, Ian
2018-01-01
To evaluate impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or ‘premorbid’) estimate of a patient’s general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded ‘hold/no hold’ tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...
Gupta, Preeti; Sidhartha, Elizabeth; Girard, Michael J. A.; Mari, Jean Martial; Wong, Tien-Yin; Cheng, Ching-Yu
2014-01-01
Purpose To evaluate a simplified method to measure choroidal thickness (CT) using commercially available enhanced depth imaging (EDI) spectral domain optical coherence tomography (SD-OCT). Methods We measured CT in 31 subjects without ocular diseases using Spectralis EDI SD-OCT. The choroid-scleral interface of the acquired images was first enhanced using a post-processing compensation algorithm. The enhanced images were then analysed using Photoshop. Two graders independently graded the images to assess inter-grader reliability. One grader re-graded the images after 2 weeks to determine intra-grader reliability. Statistical analysis was performed using intra-class correlation coefficient (ICC) and Bland-Altman plot analyses. Results Using adaptive compensation, both the intra-grader reliability (ICC: 0.95 to 0.97) and inter-grader reliability (ICC: 0.93 to 0.97) were perfect for all five locations of CT. However, with the conventional technique of manual CT measurements using the built-in callipers provided with the Heidelberg Explorer software, the intra-grader (ICC: 0.87 to 0.94) and inter-grader reliability (ICC: 0.90 to 0.93) for all the measured locations was lower. Using adaptive compensation, the mean differences (95% limits of agreement) for intra- and inter-grader sub-foveal CT measurements were −1.3 (−3.33 to 30.8) µm and −1.2 (−36.6 to 34.2) µm, respectively. Conclusions The measurement of CT obtained from EDI SD-OCT using our simplified method was highly reliable and efficient. Our method is an easy and practical approach to improve the quality of choroidal images and the precision of CT measurement. PMID:24797674
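The Bland-Altman statistics reported above (mean difference and 95% limits of agreement) are straightforward to compute. This is a generic sketch of the standard calculation, not the study's code; the measurement values are invented for illustration.

```python
# Bland-Altman agreement statistics: mean difference between two sets of
# paired measurements and 95% limits of agreement (mean +/- 1.96 SD).

from statistics import mean, stdev

def bland_altman(x, y):
    """Return (mean difference, lower LoA, upper LoA) for paired data."""
    d = [a - b for a, b in zip(x, y)]
    md = mean(d)
    sd = stdev(d)  # sample standard deviation of the differences
    return md, md - 1.96 * sd, md + 1.96 * sd

# Hypothetical repeated CT measurements (micrometres) by two graders
g1 = [310.0, 295.0, 342.0, 288.0, 301.0]
g2 = [312.0, 290.0, 345.0, 286.0, 298.0]
md, lo, hi = bland_altman(g1, g2)
```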
Simplified computational methods for elastic and elastic-plastic fracture problems
Atluri, Satya N.
1992-01-01
An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.
The implementation of a simplified spherical harmonics semi-analytic nodal method in PANTHER
International Nuclear Information System (INIS)
Hall, S.K.; Eaton, M.D.; Knight, M.P.
2013-01-01
Highlights: ► An SP N nodal method is proposed. ► A consistent CMFD scheme is derived and tested. ► Mark vacuum boundary conditions are applied. ► Benchmarked against other diffusion and transport codes. - Abstract: In this paper an SP N nodal method is proposed which can utilise existing multi-group neutron diffusion solvers to obtain the solution. The semi-analytic nodal method is used in conjunction with a coarse mesh finite difference (CMFD) scheme to solve the resulting set of equations. This is compared against various nuclear benchmarks to show that the method is capable of computing an accurate solution for practical cases. A few different CMFD formulations are implemented and their performance compared. It is found that the effective diffusion coefficient (EDC) can provide additional stability and require fewer power iterations on a coarse mesh. A re-arrangement of the EDC is proposed that allows the iteration matrix to be computed at the beginning of a calculation. Successive nodal updates only modify the source term, unlike existing CMFD methods, which update the iteration matrix. A set of Mark vacuum boundary conditions is also derived which can be applied to the SP N nodal method, extending its validity. This is possible due to a similarity transformation of the angular coupling matrix, which is used when applying the nodal method. It is found that the Marshak vacuum condition can also be derived, but this would require significant modification of existing neutron diffusion codes to implement.
Kansal, Rohit; Talwar, Sangeeta; Yadav, Seema; Chaudhary, Sarika; Nawal, Ruchika
2014-01-01
The preparation of the root canal system is essential for a successful outcome in root canal treatment. The development of rotary nickel-titanium instruments is considered an important innovation in the field of endodontics. During the last few years, several new instrument systems have been introduced, but the quest to simplify the endodontic instrumentation sequence has been ongoing for almost 20 years, resulting in more than 70 different engine-driven endodontic instrumentation system...
Simplified web-based decision support method for traffic management and work zone analysis.
2015-06-01
Traffic congestion mitigation is one of the key challenges that transportation planners and operations engineers face when planning for construction and maintenance activities. There is a wide variety of approaches and methods that address work z...
Simplified propagation of standard uncertainties
International Nuclear Information System (INIS)
Shull, A.H.
1997-01-01
An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use, or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software, and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined, or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. The calculations are simplified by dividing the uncertainties into subgroups of absolute and relative uncertainties. These methods also incorporate the International Organization for Standardization (ISO) concepts for combining systematic and random uncertainties as published in its Guide to the Expression of Uncertainty in Measurement. Details of the simplified methods and examples of their use are included in the paper.
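The subgrouping shortcut described above can be sketched as follows: for uncorrelated inputs, absolute standard uncertainties combine in quadrature when quantities are added or subtracted, and relative standard uncertainties combine in quadrature when quantities are multiplied or divided (the standard GUM rules). The function names and numbers are illustrative, not from the paper.

```python
# Shortcut propagation of standard uncertainties without partial
# derivatives, assuming uncorrelated inputs.

from math import sqrt

def combine_absolute(*u):
    """Combined absolute uncertainty for a sum or difference of inputs."""
    return sqrt(sum(x ** 2 for x in u))

def combine_relative(*u_rel):
    """Combined relative uncertainty for a product or quotient of inputs."""
    return sqrt(sum(x ** 2 for x in u_rel))

# Example: a standard prepared as concentration = mass / volume.
mass, u_mass = 10.000, 0.002       # g
volume, u_volume = 1.0000, 0.0005  # L
conc = mass / volume
u_rel = combine_relative(u_mass / mass, u_volume / volume)
u_conc = conc * u_rel  # back to an absolute uncertainty
```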
Evaluating polymer degradation with complex mixtures using a simplified surface area method.
Steele, Kandace M; Pelham, Todd; Phalen, Robert N
2017-09-01
Chemical-resistant gloves, designed to protect workers from chemical hazards, are made from a variety of polymer materials such as plastic, rubber, and synthetic rubber. One material does not provide protection against all chemicals, thus proper polymer selection is critical. Standardized tests, such as chemical degradation tests, are used to aid in the selection process. The current methods of degradation ratings based on changes in weight or tensile properties can be expensive, and data often do not exist for complex chemical mixtures. There are hundreds of thousands of chemical products on the market that do not have chemical resistance data for polymer selection. The method described in this study provides an inexpensive alternative to gravimetric analysis. This method uses surface area change to evaluate degradation of a polymer material. Degradation tests for 5 polymer types against 50 complex mixtures were conducted using both gravimetric and surface area methods. The percent change data were compared between the two methods. The resulting regression line was y = 0.48x + 0.019, in units of percent, and the Pearson correlation coefficient was r = 0.9537 (p ≤ 0.05), which indicated a strong correlation between percent weight change and percent surface area change. On average, the percent change for surface area was about half that of the weight change. Using this information, an equivalent rating system was developed for determining the chemical degradation of polymer gloves using surface area.
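The correlation and the equivalence idea above are simple to reproduce: compute the Pearson correlation between paired percent-change measurements, and invert the reported fit y = 0.48x + 0.019 to map a surface-area change onto an equivalent weight change. The data below are invented; only the regression coefficients come from the abstract.

```python
# Pearson correlation for paired percent-change data, plus conversion of
# a surface-area change into an equivalent weight change by inverting
# the reported fit y = 0.48*x + 0.019 (both in percent).

from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def equivalent_weight_change(area_change_pct):
    """Invert y = 0.48x + 0.019 to estimate % weight change."""
    return (area_change_pct - 0.019) / 0.48

weight = [2.0, 10.0, 25.0, 60.0, 120.0]       # % weight change (made up)
area = [0.48 * w + 0.019 for w in weight]     # lying exactly on the fit
r = pearson_r(weight, area)
```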
A simplified computing method of pile group to seismic loads using thin layer element
International Nuclear Information System (INIS)
Masao, T.; Hama, I.
1995-01-01
In the calculation of pile groups, the thin layer method is said to give the correct solution for isotropic, homogeneous soil material in each layer; on the other hand, the procedure requires huge computing time. The dynamic stiffness matrix of the thin layer method is obtained by inverting the flexibility matrix between pile i and pile j. This flexibility matrix is full, and its size increases in proportion to the number of piles and thin layers. The greater part of the run time is spent inverting the flexibility matrix for point loading. We propose a method of decreasing the computing time by reducing the flexibility matrix to a banded matrix. (author)
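The saving from banded storage can be illustrated in the extreme case of bandwidth one: a tridiagonal system is solved in O(n) operations by the Thomas algorithm, whereas inverting a full matrix costs O(n³). The pile-group flexibility matrix is not itself tridiagonal; this is only a generic illustration of why reducing bandwidth cuts run time.

```python
# Thomas algorithm: O(n) solver for a tridiagonal system.
# a = sub-diagonal (a[0] unused), b = main diagonal, c = super-diagonal
# (c[-1] unused), d = right-hand side.

def thomas_solve(a, b, c, d):
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):            # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the classic [2, -1] finite-difference matrix, solution [1, 1, 1]
x = thomas_solve([0.0, -1.0, -1.0], [2.0, 2.0, 2.0],
                 [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
```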
Development of a simplified method for Tritium measurement in the environmental water
International Nuclear Information System (INIS)
Sakuma, Y.; Yamanishi, H.; Iida, T.; Koganezawa, T.; Kakiuchi, M.; Satake, H.
2002-01-01
In Japan the tritium concentrations in environmental water are approximately 0.5-2 Bq/kg-H2O and tend to be decreasing. The detection limit attainable with a liquid scintillation counter is only about 0.4 Bq/kg-H2O, so it is practically impossible to measure the tritium concentration in environmental water directly by the liquid scintillation method. Although some alternative methods exist, liquid scintillation counting combined with electrolytic enrichment is the most effective measurement because it does not require changes to existing management practices. We already reported that, for immediate counting by the liquid scintillation method of environmental samples such as rain, river and tap waters, membrane filtration was an available alternative to distillation of the low-level water samples.
A simplified method for determination of radioactive iron in whole-blood samples
DEFF Research Database (Denmark)
Bukhave, Klaus; Sørensen, Anne Dorthe; Hansen, M.
2001-01-01
For studies on iron absorption in man, radioisotopes represent an easy and simple tool. However, measurement of the orbital electron emitting radioiron, Fe-55, in blood is difficult and insufficiently described in the literature. The present study describes a relatively simple method for simultaneous determination of Fe-55 and Fe-59 in blood, using a dry-ashing procedure and recrystallization of the remaining iron. The detection limit of the method permits measurements of 0.1 Bq/ml blood, thus allowing detection of less than 1% absorption from a 40 kBq dose, which is ethically acceptable in humans. The overall recovery of radioiron from blood is more than 90%, and the coefficient of variation, as judged by the variation in the ratio Fe-55/Fe-59, is in the order of 4%. Combined with whole-body counting of Fe-59 and direct gamma-counting of Fe-59 on blood samples, this method represents...
Method for estimating road salt contamination of Norwegian lakes
Kitterød, Nils-Otto; Wike Kronvall, Kjersti; Turtumøygaard, Stein; Haaland, Ståle
2013-04-01
Consumption of road salt in Norway, used to improve winter road conditions, has tripled during the last two decades, and there is a need to quantify limits for optimal use of road salt to avoid further environmental harm. The purpose of this study was to implement a methodology to estimate the chloride concentration in any given water body in Norway. This goal is feasible if the complexity of solute transport in the landscape is simplified. The idea was to keep computations as simple as possible so as to increase the spatial resolution of input functions. The first simplification we made was to treat all roads exposed to regular salt application as steady-state sources of sodium chloride. This is valid if new road salt is applied before previous contamination is removed through precipitation. The main reasons for this assumption are the significant retention capacity of vegetation, organic matter, and soil. The second simplification we made was that the groundwater table is close to the surface. This assumption is valid for the major part of Norway, which means that topography is sufficient to delineate the catchment area at any location in the landscape. Given these two assumptions, we applied spatial functions of mass load (mass NaCl per time unit) and conditional estimates of normal water balance (volume of water per time unit) to calculate the steady-state chloride concentration along the lake perimeter. The spatial resolution of mass load and estimated concentration along the lake perimeter was 25 m x 25 m, while the water balance had 1 km x 1 km resolution. The method was validated for a limited number of Norwegian lakes and estimation results were compared with observations. Initial results indicate significant overlap between measurements and estimations, but only for lakes where road salt is the major contribution to chloride contamination. For lakes in catchments with high subsurface transmissivity, the groundwater table is not necessarily following the
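Under the two simplifications above, the steady-state chloride concentration in a catchment reduces to mass load divided by water flux. The sketch below shows that mixing calculation; the load and runoff figures are invented, and only the mass fraction of chloride in NaCl is a physical constant.

```python
# Steady-state mixing: constant NaCl mass load M (kg/yr) diluted by the
# normal water balance Q (m^3/yr) gives C = f_Cl * M / Q.

F_CL = 35.45 / (35.45 + 22.99)  # mass fraction of Cl in NaCl, ~0.607

def chloride_concentration(load_kg_per_yr, runoff_m3_per_yr):
    """Steady-state Cl concentration in mg/L from a constant NaCl load."""
    cl_kg = load_kg_per_yr * F_CL
    # kg -> mg is *1e6; m^3 -> L is *1e3
    return cl_kg * 1e6 / (runoff_m3_per_yr * 1000.0)

# Hypothetical example: 20 t NaCl/yr into a catchment with 2 million m^3/yr runoff
c = chloride_concentration(20000.0, 2.0e6)
```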
Directory of Open Access Journals (Sweden)
Daniel Inns
2007-01-01
A simplified nanosphere lithography process has been developed which allows fast and low-waste masking of Si surfaces for subsequent reactive ion etching (RIE) texturing. Initially, a positive surface charge is applied to a wafer surface by dipping it in a solution of aluminum nitrate. Dipping the positively coated wafer into a solution of negatively charged silica beads (nanospheres) results in the spheres becoming electrostatically attracted to the wafer surface. These nanospheres form an etch mask for RIE. After RIE texturing, the reflection of the surface is reduced as effectively as with any other nanosphere lithography method, while the batch process used for masking is much faster, making it more industrially relevant.
A simple method to estimate interwell autocorrelation
Energy Technology Data Exchange (ETDEWEB)
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
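Two of the semivariogram models considered above have standard closed forms. The definitions below are the textbook geostatistics expressions (unit sill, range parameter a), given for illustration only; they are not the paper's code, and the truncated fractal model is omitted.

```python
# Standard semivariogram models with unit sill and range parameter a.

from math import exp

def spherical(h, a):
    """Spherical model: reaches the sill exactly at h = a."""
    if h >= a:
        return 1.0
    r = h / a
    return 1.5 * r - 0.5 * r ** 3

def exponential(h, a):
    """Exponential model: approaches the sill asymptotically;
    the practical range is about 3a."""
    return 1.0 - exp(-h / a)
```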
Simplified Method for the Characterization of Rectangular Straw Bales (RSB) Thermal Conductivity
Conti, Leonardo; Goli, Giacomo; Monti, Massimo; Pellegrini, Paolo; Rossi, Giuseppe; Barbari, Matteo
2017-10-01
This research aims to design and implement tools and methods focused on the assessment of the thermal properties of full-size Rectangular Straw Bales (RSB) of various nature and origin, because their thermal behaviour is one of the key topics in the market development of sustainable building materials. As a first approach, a method based on a Hot-Box in agreement with the ASTM C1363-11 standard was adopted. This method was found to make accurate measurement of the energy flows difficult. Instead, a method based on a constant energy input was developed. With this approach the thermal conductivity of a Rectangular Straw Bale (RSB λ) can be determined by knowing the thermal conductivity of the materials used to build the chamber and the internal and external temperatures of the samples and of the chamber. A metering chamber was built and placed inside a climate chamber maintained at constant temperature. A known quantity of energy was introduced inside the metering chamber. A series of thermopiles detects the temperatures of the internal and external surfaces of the metering chamber and of the specimens, allowing calculation of the thermal conductivity of the RSB in its natural shape. Samples of different cereals were tested. The values were found to be consistent with those published in the scientific literature.
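The constant-energy-input balance described above amounts to a steady-state heat budget: the power fed into the metering chamber leaves partly through the chamber walls (of known conductivity) and partly through the specimens, so the specimen conductivity follows by difference. The geometry and all numbers below are invented to illustrate the arithmetic.

```python
# Steady-state heat balance for a constant-power metering chamber:
# P = Q_wall + Q_bale, with each Q given by 1-D Fourier conduction.

def conduction(k, area, thickness, dT):
    """1-D steady-state conductive heat flow in watts."""
    return k * area * dT / thickness

def bale_conductivity(P, k_wall, A_wall, t_wall, dT_wall,
                      A_bale, t_bale, dT_bale):
    """Solve P = Q_wall + Q_bale for the bale conductivity."""
    q_wall = conduction(k_wall, A_wall, t_wall, dT_wall)
    q_bale = P - q_wall
    return q_bale * t_bale / (A_bale * dT_bale)

# Hypothetical numbers: 33.5 W input, insulated chamber walls
k = bale_conductivity(P=33.5, k_wall=0.035, A_wall=6.0, t_wall=0.10,
                      dT_wall=15.0, A_bale=1.0, t_bale=0.45, dT_bale=15.0)
```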
Simplified methods of evaluating colonies for levels of Varroa Sensitive Hygiene (VSH)
Varroa sensitive hygiene (VSH) is a trait of honey bees, Apis mellifera, that supports resistance to varroa mites, Varroa destructor. Components of VSH were evaluated to identify simple methods for selection of the trait. Varroa mite population growth was measured in colonies with variable levels of...
A simplified method to recover urinary vesicles for clinical applications, and sample banking.
Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry
2014-12-23
Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases. Current isolation protocols include laborious, sequential centrifugation steps, which hampers their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g. for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique to yield easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate their full utilization in less privileged environments and maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference from soluble proteins or vesicle loss. Large volumes of urine were concentrated up to 1/100 of the original volume, and the dialysis step allowed equalization of urine physico-chemical characteristics. The vesicle fractions were found suitable for all applications, including RNA analysis. In yield, our hydrostatic filtration dialysis system outperforms the conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous ultracentrifugation steps are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method qualifies as a method for laboratories working with urinary vesicles and for biobanking.
Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes
Piatek, Marek J.; Schramm, Michael C.; Burra, Dharani Dhar; BinShbreen, Abdulaziz; Jankovic, Boris R.; Chowdhary, Rajesh; Archer, John A.C.; Bajic, Vladimir B.
2013-01-01
initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only
A Simplified Method for Upscaling Composite Materials with High Contrast of the Conductivity
Ewing, R.; Iliev, O.; Lazarov, R.; Rybak, I.; Willems, J.
2009-01-01
A large class of industrial composite materials, such as metal foams, fibrous glass materials, mineral wools, and the like, are widely used in insulation and advanced heat exchangers. These materials are characterized by a substantial difference between the thermal properties of the highly conductive material (glass or metal) and the insulator (air), as well as low volume fractions and complex network-like structures of the highly conductive components. In this paper we address an issue important for engineering practice: the development of fast, reliable, and accurate methods for computing the macroscopic (upscaled) thermal conductivities of such materials. We assume that the materials have constant macroscopic thermal conductivity tensors, which can be obtained by upscaling techniques based on the postprocessing of a number of linearly independent solutions of the steady-state heat equation on representative elementary volumes (REVs). We propose, theoretically justify, and computationally study a numerical method for computing the effective conductivities of materials for which the ratio δ of low and high conductivities satisfies δ ≪ 1. We show that in this case one needs to solve the heat equation in the region occupied by the highly conductive media only. Further, we prove that under certain conditions on the microscale geometry the proposed method gives an approximation that is O(δ)-close to the upscaled conductivity. Finally, we illustrate the accuracy and the limitations of the method on a number of numerical examples. © 2009 Society for Industrial and Applied Mathematics.
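The high-contrast limit the method exploits can be illustrated with the simplest possible composite, a 1-D laminate, where the upscaled conductivity along the layers is the arithmetic mean and across the layers the harmonic mean. As δ = k_low/k_high → 0, the along-layer value tends to the contribution of the conductive phase alone, which is the intuition behind solving the heat equation only there. This is a textbook illustration under invented numbers, not the paper's algorithm.

```python
# Upscaled conductivities of a two-phase 1-D laminate.

def arithmetic_mean(k_high, k_low, frac_high):
    """Effective conductivity along the layers."""
    return frac_high * k_high + (1.0 - frac_high) * k_low

def harmonic_mean(k_high, k_low, frac_high):
    """Effective conductivity across the layers."""
    return 1.0 / (frac_high / k_high + (1.0 - frac_high) / k_low)

k_high, frac = 40.0, 0.1  # e.g. metal fibres at 10 % volume fraction
# As delta shrinks, the along-layer value approaches frac * k_high = 4.0
results = [(d, arithmetic_mean(k_high, d * k_high, frac))
           for d in (1e-1, 1e-2, 1e-3)]
```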
A Qualitative Method to Estimate HSI Display Complexity
International Nuclear Information System (INIS)
Hugo, Jacques; Gertman, David
2013-01-01
There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increase. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation
International Nuclear Information System (INIS)
Detroux, P.; Lafaille, J.P.
1991-01-01
After ten years of operation, the Belgian Nuclear Power Plants had to be seismically reassessed; in particular, new requirements were imposed on the oldest units. The method presented in this paper is based on the principle that all the piping connected to the equipment is replaced by a clamped-hinged beam, with or without a concentrated mass, of a characteristic length depending on the diameter, schedule and mass per unit length of the connected piping, and on the floor response spectra applicable at the location of the equipment. A theoretical justification of the method is presented for the simplest cases. The case of an added concentrated mass is investigated. Finally, several comparisons with a full modal spectral analysis are presented.
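The clamped-hinged beam idealization above can be illustrated with the standard Euler-Bernoulli result for a uniform clamped-hinged (propped cantilever) beam. This is only a sketch of the underlying mechanics, not the paper's actual procedure, and the pipe properties used are invented for illustration:

```python
import math

# First eigenvalue (beta * L) of a uniform clamped-hinged beam
# (standard Euler-Bernoulli value, dimensionless).
BETA_L_1 = 3.92660231

def fundamental_frequency_hz(E, I, mass_per_length, L):
    """First natural frequency (Hz) of a uniform clamped-hinged beam.

    E: Young's modulus [Pa], I: second moment of area [m^4],
    mass_per_length: [kg/m], L: span [m].
    """
    omega = (BETA_L_1 / L) ** 2 * math.sqrt(E * I / mass_per_length)
    return omega / (2.0 * math.pi)

# Illustrative values (hypothetical steel pipe, 3 m characteristic length):
f1 = fundamental_frequency_hz(E=2.0e11, I=3.0e-6, mass_per_length=16.0, L=3.0)
```

Since the frequency scales as 1/L², the choice of characteristic length directly sets the frequency at which the floor response spectrum is read.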
International Nuclear Information System (INIS)
Barbieri, R.S.; Rocha, J.C.; Terra, V.R.; Marques Netto, A.
1989-01-01
The conditions for the gravimetric determination of zirconium or hafnium with glycolic acid derivatives were studied by thermogravimetric analysis. The method showed that, after precipitation, washing and drying of the precipitates at 150 °C, the resulting solid could be weighed in the form of [M(RCH(OH)COO)4] (M = Zr, Hf; R = C6H5, β-C10H7, p-BrC6H4). (author)
International Nuclear Information System (INIS)
Iijima, Tadashi
2005-01-01
We applied the improved capacity spectrum method (ICSM) to a piping system with an asymmetric load-deformation relationship in a piping elbow. The capacity spectrum method can predict an inelastic response by balancing the structural capacity obtained from the load-deformation relationship with the seismic demand defined by an acceleration-displacement response spectrum. The ICSM employs (1) an effective damping ratio and period based on a statistical methodology, and (2) practical procedures for obtaining a balance between the structural capacity and the seismic demand. The effective damping ratio and period are defined so as to maximize the probability that predicted response errors lie within the -10 to 20% range. Without taking asymmetry into consideration, the displacement calculated using the load-deformation relationship on the stiffer side was 39% larger than that of a time history analysis by a direct integration method. On the other hand, when asymmetry was taken into account, the calculated displacement was only 14% larger than that of the time history analysis. Thus, we verified that the ICSM could predict the inelastic response with errors lying within the -10 to 20% range by taking into account the asymmetric load-deformation relationship of the piping system. (author)
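The balancing step of a capacity spectrum method can be sketched as a root-finding problem: find the spectral displacement at which the capacity curve meets the damped demand curve. The bilinear capacity and hyperbolic demand curves below are invented placeholders, not the ICSM's actual curves:

```python
def performance_point(capacity, demand, sd_lo=1e-6, sd_hi=1.0, tol=1e-9):
    """Spectral displacement where capacity and demand curves balance.

    capacity(sd) is non-decreasing and demand(sd) is non-increasing,
    so g(sd) = capacity(sd) - demand(sd) has a single sign change,
    which we locate by bisection.
    """
    g = lambda sd: capacity(sd) - demand(sd)
    lo, hi = sd_lo, sd_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical bilinear capacity (yield at Sd = 0.02 m) and damped demand:
cap = lambda sd: min(sd / 0.02, 1.0) * 2.0       # spectral acceleration
dem = lambda sd: 4.0 / (1.0 + 50.0 * sd)         # reduced elastic demand
sd_pp = performance_point(cap, dem)
```

In a full procedure the demand curve itself would be re-reduced with the effective damping at each trial displacement, iterating this balance until convergence.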
International Nuclear Information System (INIS)
Shultis, J.K.; Thompson, K.R.; Faw, R.E.
1986-01-01
Approximate computational models are developed to describe the spatial variation in the radiation field transmitted through a straight rectangular duct obliquely illuminated by monoenergetic gamma photons. These models account for single and multiple scattering from the duct walls and lips, as well as for direct penetration by the photons. Results of the calculations are compared with results from a recent benchmark duct-streaming experiment, and empirical correction factors are obtained which enable the models to predict the transmitted exposure rates to within 20% of the experimental values.
Efficient Methods of Estimating Switchgrass Biomass Supplies
Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...
Coalescent methods for estimating phylogenetic trees.
Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V
2009-10-01
We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, and supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.
The use of maturity method in estimating concrete strength
International Nuclear Information System (INIS)
Salama, A.E.; Abd El-Baky, S.M.; Ali, E.E.; Ghanem, G.M.
2005-01-01
Prediction of the early-age strength of concrete is essential for modern concrete construction as well as for the manufacturing of structural parts. Safe and economic scheduling of such critical operations as form removal and reshoring, application of post-tensioning or other mechanical treatment, and in-process transportation and rapid delivery of products should all be based on a good grasp of the strength development of the concrete in use. For many years it has been proposed that the strength of concrete can be related to a simple mathematical function of time and temperature, so that strength could be assessed by calculation without mechanical testing. Such functions are used to compute what is called the 'maturity' of concrete, and the computed value is believed to correlate with the strength of the concrete. With its simplicity and low cost, the maturity concept has received wide attention as an in situ testing method and has found use in engineering practice. This research work investigates the use of the maturity method in estimating concrete strength. An experimental program was designed to estimate concrete strength using the maturity method, employing different concrete mixes made with locally available materials: ordinary Portland cement, crushed stone, silica fume, fly ash and admixtures at different contents. All specimens were exposed to different curing temperatures (10, 25 and 40 °C) in order to obtain a simplified expression of maturity that accounts for the influence of temperature. The mix designs and charts obtained from this research can be used as guide information for estimating concrete strength by the maturity method.
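The abstract does not reproduce its maturity function, but the classical Nurse-Saul form, M = Σ (T − T₀)·Δt, is a minimal sketch of how such a time-temperature function is computed; the curing history below is hypothetical:

```python
def nurse_saul_maturity(records, datum_temp=0.0):
    """Nurse-Saul maturity index M = sum((T - T0) * dt), in degC-hours.

    records: iterable of (temperature_degC, interval_hours) pairs.
    Intervals at or below the datum temperature contribute nothing.
    """
    return sum(max(temp - datum_temp, 0.0) * dt for temp, dt in records)

# Hypothetical curing history: 24 h each at the study's three temperatures
history = [(10.0, 24.0), (25.0, 24.0), (40.0, 24.0)]
maturity = nurse_saul_maturity(history)   # 1800 degC-hours
```

Strength is then read from a calibration curve (such as the charts produced in this research) relating measured strength to the computed maturity value.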
Simplified Method to Produce Human Bioactive Leukemia Inhibitory Factor in Escherichia coli
Directory of Open Access Journals (Sweden)
Houman Kahroba
2016-07-01
Full Text Available Background: Human leukemia inhibitory factor (hLIF) is a polyfunctional cytokine with numerous regulatory effects on different cells. The main application of hLIF is maintaining the pluripotency of embryonic stem cells; hLIF has also proven effective in improving the implantation rate of fertilized eggs and in multiple sclerosis (MS) treatment. The low production of hLIF in eukaryotic cells, and the problems of prokaryotic hosts for human protein production, motivated us to develop a simple way to obtain large amounts of this widely used clinical and research factor. Objectives: In this study we aimed to purify recombinant human leukemia inhibitory factor in a single, simple step. Materials and Methods: This is an experimental study. The human LIF gene was codon-optimized for expression in Escherichia coli and fused to a His-tag to make it extractable. After construction and transformation of the vector into E. coli, isopropyl β-D-1-thiogalactopyranoside (IPTG) was used for induction. Single-step immobilized metal affinity chromatography (IMAC) was used for purification, confirmed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and western blotting. The bioactivity of the hLIF was tested by MTT assay with TF-1 cells and by CISH gene stimulation in monocytes and TF-1 cells measured by real-time PCR. Induction with 0.4 mM IPTG at 25°C for 3 hours gave the best result for soluble expression; differences were considered significant at P < 0.05. Results: Cloning, expression, and extraction of bioactive rhLIF were successfully achieved, according to the MTT assay and real-time PCR after treatment of the TF-1 and monocyte cell lines. Conclusions: We developed an effective single-step purification method to produce bioactive recombinant hLIF in E. coli, and for the first time used CISH gene stimulation as a bioactivity test for qualifying recombinant hLIF.
Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes
Piatek, Marek J.
2013-07-12
Background: Initiation of transcription is essential for most of the cellular responses to environmental conditions and for cell and tissue specificity. This process is regulated through numerous proteins, their ligands and mutual interactions, as well as interactions with DNA. The key such regulatory proteins are transcription factors (TFs) and transcription co-factors (TcoFs). TcoFs are important since they modulate the transcription initiation process through interaction with TFs. In eukaryotes, transcription requires that TFs form different protein complexes with various nuclear proteins. To better understand transcription regulation, it is important to know the functional class of proteins interacting with TFs during transcription initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only sequence composition of the interacting proteins, the functional class of human TF binding partners to be (i) TF, (ii) TcoF, or (iii) other nuclear protein. This allows for complementing the annotation of the currently known pool of nuclear proteins. Since only the knowledge of protein sequences is required in addition to protein interaction, the method should be easily applicable to many species. Results: Based on experimentally validated interactions between human TFs with different TFs, TcoFs and other nuclear proteins, our two classification systems (implemented as a web-based application) achieve high accuracies in distinguishing TFs and TcoFs from other nuclear proteins, and TFs from TcoFs, respectively. Conclusion: As demonstrated, given the fact that two proteins are capable of forming direct physical interactions and using only information about their sequence composition, we have developed a completely new method for predicting a functional class of TF interacting protein partners.
Simplified method for the determination of strontium-90 in large amounts of bone-ash
International Nuclear Information System (INIS)
Patti, F.; Jeanmaire, L.
1966-06-01
The principle of the determination is based on a 3-step process: 1) concentrating the strontium by attacking the ash with nitric acid; 2) eliminating residual phosphoric ions by a double precipitation of strontium oxalate; and 3) extracting yttrium-90, which is counted in the oxalate form. The advantages of the method are that, using simple techniques, it makes it possible to process 50 g of ash, and that the initial concentration of strontium considerably reduces the volume of the solutions as well as the size of the precipitates handled. Fuming nitric acid is used in a specially designed burette. (authors) [fr
Kung, Woon-Man; Lin, Muh-Shi
2012-01-01
Polymethyl methacrylate (PMMA) is one of the most frequently used cranioplasty materials. However, limitations exist with PMMA cranioplasty, including longer operative time, greater blood loss and a higher infection rate. To reduce these disadvantages, a new surgical method for PMMA cranioplasty is proposed. We retrospectively reviewed nine patients who received nine PMMA implants placed using a combined cotton-stacking and finger-fracture method from January 2008 to July 2011. The definitive height of the skull defect was quantified by computer-based image analysis of computed tomography (CT) scans. Aesthetic outcomes, as measured by post-reduction radiographs and the cranial index of symmetry (CIS), cranial nerve V and VII function, and complications (wound infection, hardware extrusion, meningitis, osteomyelitis and brain abscess) were evaluated. The mean operation time was 24.56 ± 4.6 minutes for implant moulding and 178.0 ± 53 minutes skin-to-skin. Average blood loss was 169 mL. All post-operative radiographs revealed excellent reduction. The mean CIS score was 95.86 ± 1.36%, indicating excellent symmetry. These results indicate the safety, practicability, excellent cosmesis, craniofacial symmetry and stability of this new surgical technique.
An optimized and simplified method for analysing urea and ammonia in freshwater aquaculture systems
DEFF Research Database (Denmark)
Larsen, Bodil Katrine; Dalsgaard, Anne Johanne Tang; Pedersen, Per Bovbjerg
2015-01-01
This study presents a simple urease method for the analysis of ammonia and urea in freshwater aquaculture systems. Urea is hydrolysed into ammonia using urease, followed by analysis of the released ammonia using the salicylate-hypochlorite method. The hydrolysis of urea is performed at room temperature and without addition of a buffer. A number of tests were performed on water samples obtained from a commercial rainbow trout farm to determine the optimal urease concentration and time for complete hydrolysis. One mL of water sample was spiked with 1.3 mL of urea at three different concentrations: 50 µg L⁻¹, 100 µg L⁻¹ and 200 µg L⁻¹ urea-N. In addition, five concentrations of urease were tested, ranging from 0.1 U mL⁻¹ to 4 U mL⁻¹. Samples were hydrolysed for various time periods ranging from 5 to 120 min. A urease concentration of 0.4 U mL⁻¹ and a hydrolysis period of 120 min gave the best results, with 99...
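The urease difference principle described above amounts to simple arithmetic: urea-N is the total ammonia nitrogen measured after hydrolysis minus the free ammonia measured without urease. A sketch, with invented readings (the function name and values are not from the study):

```python
def urea_n_by_difference(tan_after_urease, tan_without_urease):
    """Urea-N concentration from the urease difference method.

    Both inputs are total ammonia nitrogen (same units, e.g. ug N/L)
    measured with the salicylate-hypochlorite assay, with and without
    urease treatment. Negative differences are clipped to zero.
    """
    return max(tan_after_urease - tan_without_urease, 0.0)

# Hypothetical readings from a sample (ug N/L):
free_nh3_n = 120.0     # without urease
total_nh3_n = 218.0    # after 120 min hydrolysis with 0.4 U/mL urease
urea_n = urea_n_by_difference(total_nh3_n, free_nh3_n)   # 98.0 ug N/L
```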
A simplified method for active-site titration of lipases immobilised on hydrophobic supports.
Nalder, Tim D; Kurtovic, Ivan; Barrow, Colin J; Marshall, Susan N
2018-06-01
The aim of this work was to develop a simple and accurate protocol to measure the functional active site concentration of lipases immobilised on highly hydrophobic supports. We used the potent lipase inhibitor methyl 4-methylumbelliferyl hexylphosphonate to titrate the active sites of Candida rugosa lipase (CrL) bound to three highly hydrophobic supports: octadecyl methacrylate (C18), divinylbenzene crosslinked methacrylate (DVB) and styrene. The method uses correction curves to take into account the binding of the fluorophore (4-methylumbelliferone, 4-MU) by the support materials. We showed that the uptake of the detection agent by the three supports is not linear relative to the weight of the resin, and that the uptake occurs in an equilibrium that is independent of the total fluorophore concentration. Furthermore, the percentage of bound fluorophore varied among the supports, with 50 mg of C18 and styrene resins binding approximately 64 and 94%, respectively. When the uptake of 4-MU was calculated and corrected for, the total 4-MU released via inhibition (i.e. the concentration of functional lipase active sites) could be determined via a linear relationship between immobilised lipase weight and total inhibition. It was found that the functional active site concentration of immobilised CrL varied greatly among different hydrophobic supports, with 56% for C18, compared with 14% for DVB. The described method is a simple and robust approach to measuring functional active site concentration in immobilised lipase samples. Copyright © 2018 Elsevier Inc. All rights reserved.
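The support-uptake correction described above can be sketched as follows. The helper name and the example concentrations are hypothetical; only the roughly 64% and 94% uptake fractions for 50 mg of C18 and styrene echo the abstract, and the sketch assumes a fixed equilibrium uptake fraction independent of total fluorophore concentration, as reported:

```python
def corrected_4mu_released(measured_fluorophore, support_uptake_fraction):
    """Correct measured free 4-MU for the fraction bound by the support.

    If the support binds a fixed fraction of the fluorophore at
    equilibrium, the total released 4-MU (= functional active-site
    concentration) is the measured free amount scaled up accordingly.
    """
    if not 0.0 <= support_uptake_fraction < 1.0:
        raise ValueError("uptake fraction must be in [0, 1)")
    return measured_fluorophore / (1.0 - support_uptake_fraction)

# Hypothetical titration on 50 mg of C18 resin (concentrations in uM):
measured = 1.8
total_released = corrected_4mu_released(measured, 0.64)  # ~5.0 uM
```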
Schmeier, Sebastian
2011-07-05
Background: Physical interactions between transcription factors (TFs) are necessary for forming regulatory protein complexes and thus play a crucial role in gene regulation. Currently, knowledge about the mechanisms of these TF interactions is incomplete and the number of known TF interactions is limited. Computational prediction of such interactions can help identify potential new TF interactions as well as contribute to better understanding the complex machinery involved in gene regulation. Methodology: We propose here such a method for the prediction of TF interactions. The method uses only the primary sequence information of the interacting TFs, resulting in a much greater simplicity of the prediction algorithm. Through an advanced feature selection process, we determined a subset of 97 model features that constitute the optimized model in the subset we considered. The model, based on quadratic discriminant analysis, achieves a prediction accuracy of 85.39% on a blind set of interactions. This result is achieved despite selecting for the negative data set only those TFs from the same type of proteins, i.e. TFs that function in the same cellular compartment (nucleus) and in the same type of molecular process (transcription initiation). Such selection poses significant challenges for developing models with high specificity, but at the same time better reflects real-world problems. Conclusions: The performance of our predictor compares well to those of much more complex approaches for predicting TF and general protein-protein interactions, particularly when taking the reduced complexity of model utilisation into account. © 2011 Schmeier et al.
A simplified method for obtaining high-purity perchlorate from groundwater for isotope analyses.
Energy Technology Data Exchange (ETDEWEB)
vonKiparski, G; Hillegonds, D
2011-04-04
Investigations into the occurrence and origin of perchlorate (ClO₄⁻) found in groundwater across North America have been sparse until recent years, and there is mounting evidence that natural formation mechanisms are important. New opportunities for identifying groundwater perchlorate and its origin have arisen with the utilization of improved detection methods and sampling techniques. Additionally, application of the forensic potential of isotopic measurements has begun to elucidate sources, potential formation mechanisms and natural attenuation processes. The procedures developed appear amenable to high-precision stable isotopic analyses, as well as to lower-precision AMS analyses of ³⁶Cl. Immediate work lies in analyzing perchlorate isotope standards and developing full analytical accuracy and uncertainty expectations. Field samples have also been collected, and will be analyzed when the final QA/QC samples are deemed acceptable.
Thompson, James H.; Apel, Thomas R.
1990-07-01
A technique for modeling microstrip discontinuities is presented which is derived from the transmission line matrix method of solving three-dimensional electromagnetic problems. In this technique the microstrip patch under investigation is divided into an integer number of square and half-square (triangular) subsections. An equivalent lumped-element model is calculated for each subsection. These individual models are then interconnected as dictated by the geometry of the patch. The matrix of lumped elements is then solved using either of two microwave CAD software interfaces, with each port properly defined. Closed-form expressions for the lumped-element representation of the individual subsections are presented and experimentally verified through the X-band frequency range. A model demonstrating the use of symmetry and block construction of a circuit element is discussed, along with computer program development and the CAD software interface.
Energy Technology Data Exchange (ETDEWEB)
MacAlpine, Sara; Deline, Chris
2015-09-15
It is often difficult to model the effects of partial shading conditions on PV array performance, as shade losses are nonlinear and depend heavily on a system's particular configuration. This work describes and implements a simple method for modeling shade loss: a database of shade impact results (loss percentages), generated using a validated, detailed simulation tool and encompassing a wide variety of shading scenarios. The database is intended to predict shading losses in crystalline silicon PV arrays and is accessed using basic inputs generally available in any PV simulation tool. Performance predictions using the database are within 1-2% of measured data for several partially shaded PV systems, and within 1% of those predicted by the full, detailed simulation tool on an annual basis. The shade loss database shows potential to considerably improve performance prediction for partially shaded PV systems.
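A minimal sketch of the database-lookup idea, assuming a nearest-scenario query over basic inputs; the table entries, key structure and loss values below are invented and far coarser than the actual database:

```python
# Toy stand-in for a shade-impact database: keys are (shaded_fraction,
# module_orientation) scenarios, values are shade loss percentages.
SHADE_DB = {
    (0.00, "portrait"): 0.0,
    (0.25, "portrait"): 12.0,
    (0.50, "portrait"): 31.0,
    (0.25, "landscape"): 8.0,
    (0.50, "landscape"): 22.0,
}

def shade_loss_percent(shaded_fraction, orientation):
    """Return the loss stored for the closest shaded-fraction scenario."""
    candidates = [k for k in SHADE_DB if k[1] == orientation]
    key = min(candidates, key=lambda k: abs(k[0] - shaded_fraction))
    return SHADE_DB[key]

loss = shade_loss_percent(0.3, "portrait")   # nearest scenario is 0.25
```

A production version would interpolate between scenarios and key on the full set of configuration inputs available in the host simulation tool.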
Fusion rule estimation using vector space methods
International Nuclear Information System (INIS)
Rao, N.S.V.
1997-01-01
In a system of N sensors, sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ ℝ according to an unknown probability distribution P(Y^(j)|X), corresponding to an input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) and Y_i^(j) is the output of S_j in response to the input X_i. The problem is to estimate a fusion rule f : ℝ^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitutes a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation f to f* is feasible. We estimate the sample size sufficient to ensure that f provides a close approximation to f* with high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate f can easily be computed by well-known least-squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks.
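The least-squares estimation over a finite-dimensional function family can be sketched by solving the normal equations directly. The affine-plus-quadratic basis and the deterministic two-sensor training sample below are invented for illustration; the paper's family F and sensor model are more general:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_fusion_rule(samples):
    """Least-squares fusion rule f(y) = w0 + w1*y1 + ... over a basis.

    samples: list of (y, x) pairs, y a tuple of sensor outputs and x the
    scalar input. Solves the normal equations (Phi^T Phi) w = Phi^T x
    for the basis phi(y) = (1, y1, ..., yN).
    """
    phi = [[1.0] + list(y) for y, _ in samples]
    targets = [x for _, x in samples]
    d = len(phi[0])
    A = [[sum(p[i] * p[j] for p in phi) for j in range(d)] for i in range(d)]
    b = [sum(p[i] * t for p, t in zip(phi, targets)) for i in range(d)]
    return solve(A, b)

# Hypothetical two-sensor sample: sensor 1 reports x with a bias of 0.1,
# sensor 2 reports x squared (so the design matrix has full rank).
data = [((x + 0.1, x * x), x) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
w = fit_fusion_rule(data)
fused = w[0] + w[1] * 0.6 + w[2] * 0.36   # estimate for y = (0.6, 0.36)
```

Here the fit recovers the exact rule f(y) = y1 - 0.1, illustrating point (b) of the abstract: the estimate reduces to a polynomial-time linear solve.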
2010-07-01
... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at the arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
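The chaining of section-length equations, short of one equation until a single point is fixed externally, can be sketched as follows; the artefact dimensions are hypothetical and the real method interpolates the resulting points with a spline:

```python
def error_curve(measured_sections, nominal_sections, e0=0.0):
    """Recover position errors at successive artefact positions by chaining.

    Each measured section length differs from nominal by the difference
    of the position errors at its two endpoints:
        measured_i - nominal_i = e[i+1] - e[i]
    That chain is short of one equation, so e[0] must be supplied
    externally (e.g. a single-point laser-interferometer calibration).
    """
    errors = [e0]
    for m, n in zip(measured_sections, nominal_sections):
        errors.append(errors[-1] + (m - n))
    return errors

# Hypothetical artefact: four nominally 50 mm sections, e0 fixed at 0 mm
measured = [50.002, 49.999, 50.001, 49.998]
nominal = [50.0] * 4
errs = error_curve(measured, nominal)
```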
Reinforcing the role of the conventional C-arm - a novel method for simplified distal interlocking
Directory of Open Access Journals (Sweden)
Windolf Markus
2012-01-01
Full Text Available Abstract Background: The common practice for insertion of distal locking screws of intramedullary nails is a freehand technique under fluoroscopic control. The process is technically demanding, time-consuming and subject to considerable radiation exposure of the patient and the surgical personnel. A new concept is introduced that utilizes information from within conventional radiographic images to help accurately guide the surgeon in placing the interlocking bolt into the interlocking hole. The newly developed technique was compared to the conventional freehand technique in an operating-room-like setting on human cadaveric lower legs, in terms of operating time and radiation exposure. Methods: The proposed concept (guided freehand), generally based on the freehand gold standard, additionally guides the surgeon by means of visible landmarks projected into the C-arm image. A computer program plans the correct drilling trajectory by processing the lens-shaped projections of the interlocking holes from a single image. Holes can be drilled by visually aligning the drill to the planned trajectory. Besides a conventional C-arm, no additional tracking or navigation equipment is required. Ten fresh-frozen human below-knee specimens were instrumented with an Expert Tibial Nail (Synthes GmbH, Switzerland). The implants were distally locked by performing both the newly proposed technique and the conventional freehand technique on each specimen. An orthopedic resident surgeon inserted four distal screws per procedure. Operating time, number of images and radiation time were recorded and statistically compared between the interlocking techniques using non-parametric tests. Results: A 58% reduction in the number of images taken per screw was found for the guided freehand technique (7.4 ± 3.4, mean ± SD) compared to the freehand technique (17.6 ± 10.3) (p = 0.001). Operating time per screw (from first shot to screw tightened) was on average 22% reduced by guided freehand (p = 0.018).
Formation of Au nano-patterns on various substrates using simplified nano-transfer printing method
Kim, Jong-Woo; Yang, Ki-Yeon; Hong, Sung-Hoon; Lee, Heon
2008-06-01
For future device applications, fabrication of metal nano-patterns on various substrates, such as Si wafers, non-planar glass lenses and flexible plastic films, has become important. Among the various nano-patterning technologies, the nano-transfer printing method is one of the simplest techniques for fabricating metal nano-patterns. In the nano-transfer printing process, a thin Au layer is deposited on a flexible PDMS mold containing surface protrusion patterns, and the Au layer is transferred from the PDMS mold to various substrates owing to the difference in the bonding strength of the Au layer to the PDMS mold and to the substrate. For effective transfer of the Au layer, a self-assembled monolayer (SAM), which bonds strongly to Au, is deposited on the substrate as a glue layer. In this study, the complicated SAM coating process was replaced by a simple UV/ozone treatment, which activates the surface and forms -OH radicals. Using simple UV/ozone treatments on both the Au and the substrate, Au nano-patterns were successfully transferred to Si wafers as large as 6 in. in diameter, without any SAM coating process. High-fidelity transfer of Au nano-patterns to a non-planar glass lens and a flexible PET film was also demonstrated.
Sharuga, S. M.; Reams, M.
2016-12-01
Traditional approaches to marine conservation and management are increasingly found to be inadequate; consequently, more complex ecosystem-based approaches to protecting marine ecosystems are growing in popularity. Ecosystem-based approaches, however, can be particularly challenging at a local level, where resources and knowledge of specific marine conservation components may be limited. Marine conservation areas are known by a variety of names globally, but can be divided into four general types: Marine Protected Areas (MPAs), Marine Reserves, Fishery Reserves, and Ecological Reserves (i.e. "no take zones"). Each type of conservation area involves specific objectives, program elements and likely socioeconomic consequences. As an aid to community stakeholders and decision makers considering establishment of a marine conservation area, a simple method to compare and score the objectives and attributes of these four approaches is presented. A range of evaluation criteria are considered, including conservation of biodiversity and habitat, effective fishery management, overall cost-effectiveness, fairness to current users, enhancement of recreational activities, fairness to taxpayers, and conservation of genetic diversity. Environmental and socioeconomic costs and benefits of each type of conservation area are also considered. When exploring options for managing the marine environment, particular resource conservation needs must be evaluated individually on a case-by-case basis, and the type of conservation area established must be tailored accordingly. However, MPAs are often more successful than other conservation areas because they offer a compromise between the needs of society and the environment, and therefore represent a viable option for ecosystem-based management.
Directory of Open Access Journals (Sweden)
Santanu Panda
Full Text Available In farm animals, there is no suitable cell line available to understand liver-specific functions. This has limited our understanding of liver function and metabolism in farm animals. Culturing and maintenance of functionally active hepatocytes is difficult, since they survive no more than a few days. Establishing primary cultures of hepatocytes can help in studying cellular metabolism, drug toxicity, and hepatocyte-specific gene function and regulation. Here we provide a simple in vitro method for the isolation and short-term culture of functionally active buffalo hepatocytes. Buffalo hepatocytes were isolated from caudate lobes by using manual enzymatic perfusion and mechanical disruption of liver tissue. The hepatocyte yield was (5.3 ± 0.66) × 10⁷ cells per gram of liver tissue, with a viability of 82.3 ± 3.5%. Freshly isolated hepatocytes were spherical with well-contrasted borders. After 24 hours of seeding onto a fibroblast feeder layer and different extracellular matrices, such as dry collagen, Matrigel and sandwich collagen-coated plates, the hepatocytes formed a confluent monolayer with frequent clusters. Cultured hepatocytes exhibited a typical cuboidal and polygonal shape with restored cellular polarity. Cells expressed hepatocyte-specific marker genes or proteins such as albumin, hepatocyte nuclear factor 4α, glucose-6-phosphatase, tyrosine aminotransferase, cytochromes, cytokeratin and α1-antitrypsin. Hepatocytes could be immunostained with anti-cytokeratin, anti-albumin and anti-α1-antitrypsin antibodies. Abundant lipid droplets were detected in the cytosol of hepatocytes using oil red stain. In vitro cultured hepatocytes could be grown for five days and maintained for up to nine days on a buffalo skin fibroblast feeder layer. Cultured hepatocytes remained viable for functional studies.
Reinforcing the role of the conventional C-arm--a novel method for simplified distal interlocking.
Windolf, Markus; Schroeder, Josh; Fliri, Ladina; Dicht, Benno; Liebergall, Meir; Richards, R Geoff
2012-01-25
shot to screw tightened) was on average 22% reduced by guided freehand (p = 0.018). In an experimental setting, the newly developed guided freehand technique for distal interlocking has proven to markedly reduce radiation exposure when compared to the conventional freehand technique. The method utilizes established clinical workflows and does not require cost intensive add-on devices or extensive training. The underlying principle carries potential to assist implant positioning in numerous other applications within orthopedics and trauma from screw insertions to placement of plates, nails or prostheses.
Reliability of Estimation Pile Load Capacity Methods
Directory of Open Access Journals (Sweden)
Yudhi Lastiasih
2014-04-01
For none of the numerous existing methods of predicting pile capacity is it known how accurate the prediction is when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Of these 130 data sets, only 44 could be analysed, of which 15 involved piles loaded until they actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's extrapolation method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the quadratic hyperbolic method proposed by Lastiasih et al. (2012). All the above methods were found to be sufficiently reliable when applied to data from pile loading tests in which the piles were loaded to failure. However, when applied to data from tests that did not reach failure, the methods yielding lower values of the correction factor N are recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to estimate the Qult of a pile foundation based on soil data only.
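Chin's (1970) extrapolation, one of the methods compared above, can be sketched in a few lines: the load-settlement curve is assumed to be hyperbolic, so plotting s/Q against s gives a straight line whose inverse slope is the ultimate capacity. The load-test data below are synthetic, for illustration only.

```python
import numpy as np

def chin_ultimate_capacity(settlement_mm, load_kn):
    """Estimate ultimate pile capacity via Chin's (1970) hyperbolic method.

    Chin assumes the load-settlement curve is hyperbolic, so plotting
    s/Q against settlement s gives a straight line; the inverse of its
    slope is the ultimate capacity Q_ult.
    """
    s = np.asarray(settlement_mm, dtype=float)
    q = np.asarray(load_kn, dtype=float)
    slope, _intercept = np.polyfit(s, s / q, 1)  # linear fit of s/Q vs s
    return 1.0 / slope

# Synthetic load test following an exact hyperbola with Q_ult = 2000 kN:
s = np.linspace(1.0, 30.0, 15)
q = s / (0.01 + s / 2000.0)   # Q = s / (c + s/Q_ult)
print(round(chin_ultimate_capacity(s, q), 1))  # → 2000.0
```

Because the synthetic data are exactly hyperbolic, the fit recovers the assumed capacity; real load-test data would scatter around the line.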
Update and Improve Subsection NH - Simplified Elastic and Inelastic Design Analysis Methods
Energy Technology Data Exchange (ETDEWEB)
Jeries J. Abou-Hanna; Douglas L. Marriott; Timothy E. McGreevy
2009-06-27
The objective of this subtask is to develop a template for the 'Ideal' high-temperature design Code, in which individual topics can be identified and worked on separately in order to provide the detail necessary to comprise a comprehensive Code. Like all ideals, this one may not be attainable as a practical matter. The purpose is to set a goal for what the 'Ideal' design Code is believed to need to address, recognizing that some elements are not mutually exclusive and that the same objectives can be achieved in different ways. Most, if not all, existing Codes may therefore be found lacking in some respects, but this does not necessarily mean that they are not comprehensive. While this subtask does attempt to list the elements that, individually or in combination, are considered essential in such a Code, the authors do not presume to recommend how these elements should be implemented, or even that they should be implemented at all. The scope of this subtask is limited to compiling the list of elements thought to be necessary or, at a minimum, useful in such an 'Ideal' Code; suggestions are provided as to their relationship to one another. Except for brief descriptions where needed for clarification, neither this report nor Task 9 as a whole attempts to address the details of the contents of all these elements. Some, namely primary load limits (elastic, limit load, reference stress) and ratcheting (elastic, e-p, reference stress), are dealt with specifically in other subtasks of Task 9. All others are merely listed; the expectation is that they will either be the focus of other active DOE-ASME GenIV Materials Tasks, e.g., creep-fatigue, or be considered in future DOE-ASME GenIV Materials Tasks. Since the focus of this Task is specifically approximate methods, the authors have deemed it necessary to include some discussion of what is meant by 'approximate'. However, the topic will be addressed in one or
Update and Improve Subsection NH - Simplified Elastic and Inelastic Design Analysis Methods
International Nuclear Information System (INIS)
Abou-Hanna, Jeries J.; Marriott, Douglas L.; McGreevy, Timothy E.
2009-01-01
The objective of this subtask is to develop a template for the 'Ideal' high-temperature design Code, in which individual topics can be identified and worked on separately in order to provide the detail necessary to comprise a comprehensive Code. Like all ideals, this one may not be attainable as a practical matter. The purpose is to set a goal for what the 'Ideal' design Code is believed to need to address, recognizing that some elements are not mutually exclusive and that the same objectives can be achieved in different ways. Most, if not all, existing Codes may therefore be found lacking in some respects, but this does not necessarily mean that they are not comprehensive. While this subtask does attempt to list the elements that, individually or in combination, are considered essential in such a Code, the authors do not presume to recommend how these elements should be implemented, or even that they should be implemented at all. The scope of this subtask is limited to compiling the list of elements thought to be necessary or, at a minimum, useful in such an 'Ideal' Code; suggestions are provided as to their relationship to one another. Except for brief descriptions where needed for clarification, neither this report nor Task 9 as a whole attempts to address the details of the contents of all these elements. Some, namely primary load limits (elastic, limit load, reference stress) and ratcheting (elastic, e-p, reference stress), are dealt with specifically in other subtasks of Task 9. All others are merely listed; the expectation is that they will either be the focus of other active DOE-ASME GenIV Materials Tasks, e.g., creep-fatigue, or be considered in future DOE-ASME GenIV Materials Tasks. Since the focus of this Task is specifically approximate methods, the authors have deemed it necessary to include some discussion of what is meant by 'approximate'. However, the topic will be addressed in one or more later subtasks. This report describes
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
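The classical Hill estimator that the Monte Carlo method bootstraps can be sketched as follows; the paper's bootstrap wrapper is not reproduced here, and the Pareto sample is purely illustrative.

```python
import numpy as np

def hill_estimator(sample, k):
    """Classical Hill estimator of the tail index alpha of a heavy-tailed sample.

    Uses the k largest order statistics X_(1) >= ... >= X_(k): the mean of
    log X_(i) - log X_(k+1) estimates 1/alpha. The abstract's Monte Carlo
    method can be viewed as a bootstrap built around this estimator
    (sketch, not the authors' exact procedure).
    """
    x = np.sort(np.asarray(sample, dtype=float))[::-1]  # descending order
    log_excesses = np.log(x[:k]) - np.log(x[k])
    return 1.0 / log_excesses.mean()

# Illustrative sample: standard Pareto with tail index alpha = 2
rng = np.random.default_rng(0)
pareto = rng.pareto(a=2.0, size=100_000) + 1.0
print(hill_estimator(pareto, k=2_000))  # roughly 2 for this sample
```

The choice of k trades bias against variance; for an exact Pareto sample the estimator is unbiased and its spread shrinks like alpha/sqrt(k).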
Methods to estimate the genetic risk
International Nuclear Information System (INIS)
Ehling, U.H.
1989-01-01
The estimation of the radiation-induced genetic risk to human populations is based on the extrapolation of results from animal experiments. Radiation-induced mutations are stochastic events: the probability of the event depends on the dose; the degree of the damage does not. There are two main approaches to making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of the expected frequencies of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The advantage of the indirect method is that not only Mendelian mutations but also other types of genetic disorders can be quantified. The disadvantages of the method are the uncertainties in determining the current incidence of genetic disorders in humans and, in addition, in estimating the genetic component of congenital anomalies, anomalies expressed later, and constitutional and degenerative diseases. Using the direct method, we estimated that 20-50 dominant radiation-induced mutations would be expected in 19 000 offspring born to parents exposed in Hiroshima and Nagasaki, but only a small proportion of these mutants would have been detected with the techniques used for the population study. These methods were used to predict the genetic damage from the fallout of the Chernobyl reactor accident in Southern Germany. The lack of knowledge of the interaction of chemicals with ionizing radiation, and the discrepancy between the high safety standards for radiation protection and the low level of knowledge for the toxicological evaluation of chemical mutagens, are emphasized. (author)
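The indirect (doubling dose) approach described above reduces to a simple proportionality; the sketch below uses illustrative placeholder values for the doubling dose and mutational component, not figures from the paper.

```python
def doubling_dose_risk(baseline_incidence, dose_gy, doubling_dose_gy=1.0,
                       mutational_component=1.0):
    """Indirect (doubling dose) estimate of induced genetic disease risk.

    risk ≈ baseline incidence × (dose / doubling dose) × mutational component.
    The 1 Gy doubling dose and mutational component of 1 are illustrative
    placeholders, not values taken from the abstract.
    """
    return baseline_incidence * (dose_gy / doubling_dose_gy) * mutational_component

# e.g. a 1% baseline incidence and a 0.5 Gy exposure → 0.5% induced risk:
print(doubling_dose_risk(0.01, 0.5))  # → 0.005
```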
Energy Technology Data Exchange (ETDEWEB)
Logue, Jennifer M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Turner, William J. N. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Trinity College Dublin, Dublin (Ireland); Walker, Iain S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Singer, Brett C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-01-19
Changing the air exchange rate of a home (the sum of the infiltration and mechanical ventilation airflow rates) affects the annual thermal conditioning energy. Large-scale changes to the air exchange rates of the housing stock can significantly alter the residential sector's energy consumption. However, the complexity of existing residential energy models is a barrier to accurately quantifying the impact of policy changes at a state or national level. The Incremental Ventilation Energy (IVE) model developed in this study combines the output of simple air exchange models with a limited set of housing characteristics to estimate the associated change in the energy demand of homes. The IVE model was designed specifically to enable modellers to use existing databases of housing characteristics to determine the impact of ventilation policy change on a population scale. The IVE model's estimates of energy change, when applied to US homes with limited parameterisation, are shown to be comparable to the estimates of a well-validated, complex residential energy model.
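The idea of combining an air-exchange change with a few housing characteristics can be illustrated with a back-of-the-envelope sensible-load estimate; the coefficients and efficiency default below are our illustrative assumptions, not the published IVE model formulation.

```python
def incremental_ventilation_energy_kwh(delta_ach, volume_m3, degree_hours_kh,
                                       efficiency=0.8):
    """Extra annual conditioning energy (kWh) from a change in air exchange.

    Sensible load ≈ rho*cp of air × ΔAER × house volume × degree-hours,
    divided by the HVAC system efficiency. Illustrative sketch only, not
    the published IVE model.
    """
    rho_cp = 1.2 * 1.006  # volumetric heat capacity of air, kJ per m3 per K
    load_kj = rho_cp * delta_ach * volume_m3 * degree_hours_kh
    return load_kj / 3600.0 / efficiency  # kJ → kWh, then apply efficiency

# +0.1 ACH in a 300 m3 home over 60,000 K·h of heating degree-hours:
print(round(incremental_ventilation_energy_kwh(0.1, 300.0, 60_000.0), 1))  # → 754.5
```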
Larsen, Cand.scient Thomas; Ravn, Senior scientist Helle; Axelsen, Senior Scientist Jørgen
2004-01-01
A new and simplified method for extraction of ergosterol (ergoste-5,7,22-trien-3-beta-ol) from fungi in soil and litter was developed using pre-soaking extraction and paraffin oil for recovery. Recoveries of ergosterol were in the range of 94 - 100% depending on the solvent to oil ratio. Extraction efficiencies equal to heat-assisted extraction treatments were obtained with pre-soaked extraction. Ergosterol was detected with thin-layer chromatography (TLC) using fluorodensitometry with a quan...
International Nuclear Information System (INIS)
Aoki, Takayuki; Takagi, Toshiyuki; Kodama, Noriko
2014-01-01
The safety risk importance of components in nuclear power plants has been evaluated based on probabilistic risk assessment and used for decisions in various aspects of plant management, but the economic risk importance of components has not been discussed much. This paper therefore discusses the risk importance of components from the viewpoint of plant economic efficiency and proposes a simplified evaluation method for the economic risk importance (or economic maintenance importance). The following results were obtained. (1) The unit cost of power generation is selected as a performance indicator and can be related to the failure rate of components in a nuclear power plant, which is an outcome of maintenance. (2) The economic maintenance importance has two major factors, i.e., the repair cost at component failure and the production loss associated with the plant outage caused by the failure. (3) The developed method enables easy understanding of the economic impacts of plant shutdown or power reduction due to component failures on a plane with the repair cost on the vertical axis and the production loss on the horizontal axis. (author)
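Result (2) suggests a simple annualised measure; the formula below is our reading of how the two factors combine, not necessarily the authors' exact expression, and the numbers are illustrative.

```python
def economic_risk_importance(failure_rate_per_yr, repair_cost, production_loss):
    """Expected annual economic impact of a component failure (sketch).

    Following the abstract, the two major factors are the repair cost at
    failure and the production loss from the resulting plant outage;
    weighting their sum by the failure rate gives a simple annualised
    importance measure (our reading, not the authors' exact formula).
    """
    return failure_rate_per_yr * (repair_cost + production_loss)

# A component failing once per 20 years, $2M repair, $10M outage loss:
print(round(economic_risk_importance(0.05, 2e6, 10e6), 2))  # → 600000.0
```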
Directory of Open Access Journals (Sweden)
Lu Jia
2011-10-01
Background: Although a variety of methods and expensive kits are available, molecular cloning can be a time-consuming and frustrating process. Results: Here we report a highly simplified, reliable, and efficient PCR-based cloning technique to insert any DNA fragment into a plasmid vector, or into a gene (cDNA) in a vector, at any desired position. With this method, the vector and insert are PCR-amplified separately, with only 18 cycles, using a high-fidelity DNA polymerase. The ends of the amplified insert overlap the ends of the amplified vector by ~16 bases. After DpnI digestion of the mixture of amplified vector and insert to eliminate the DNA templates used in the PCR reactions, the mixture is transformed directly into competent E. coli cells to obtain the desired clones. This technique has several advantages over other cloning methods. First, it does not need gel purification of the PCR product or linearized vector. Second, no cloning kit or specialized enzyme is needed. Furthermore, the reduced number of PCR cycles decreases the chance of random mutations. In addition, this method is highly effective and reproducible. Finally, since this cloning method is sequence-independent, we demonstrated that it can be used for chimera construction, insertion, and multiple mutations spanning a stretch of DNA of up to 120 bp. Conclusion: Our FastCloning technique provides a very simple, effective, reliable, and versatile tool for molecular cloning, chimera construction, insertion of any DNA sequence of interest, and multiple mutations in a short stretch of a cDNA.
International Nuclear Information System (INIS)
Al-Ejeh, Fares; Darby, Jocelyn M.; Thierry, Benjamin; Brown, Michael P.
2009-01-01
Introduction: Antibodies covalently conjugated with chelators such as 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) are required for radioimmunoscintigraphy and radioimmunotherapy, which are of growing importance in cancer medicine. Method: Here, we report a suite of simple methods that provide a preclinical assessment package for evaluating the effects of DOTA conjugation on the in vitro and in vivo performance of monoclonal antibodies. We exemplify the use of these methods by investigating the effects of DOTA conjugation on the biochemical properties of the DAB4 clone of the La/SSB-specific murine monoclonal autoantibody APOMAB®, which is a novel malignant cell death ligand. Results: We have developed a 96-well microtiter-plate assay to measure directly the concentration of DOTA and other chelators in antibody-chelator conjugate solutions. Coupled with a commercial assay for measuring protein concentration, the dual microtiter-plate method can rapidly determine chelator/antibody ratios in the same plate. The biochemical properties of DAB4 immunoconjugates were altered as the DOTA/antibody ratio increased, so that: (i) the mass/charge ratio decreased; (ii) the hydrodynamic radius increased; (iii) antibody immunoactivity decreased; (iv) the rate of chelation of metal ions and the specific radioactivity both increased; and, in vivo, (v) tumor uptake decreased as nonspecific uptake by liver and spleen increased. Conclusion: This simplified suite of methods readily identifies biochemical characteristics of DOTA immunoconjugates, such as hydrodynamic diameter and decreased mass/charge ratio, that are associated with compromised immunotargeting efficiency, and thus may prove useful for optimizing conjugation procedures to maximize immunoconjugate-mediated radioimmunoscintigraphy and radioimmunotherapy.
Energy Technology Data Exchange (ETDEWEB)
Duerigen, Susan
2013-05-15
The superior advantage of a nodal method for reactor cores with hexagonal fuel assemblies discretized as cells consisting of equilateral triangles is its mesh refinement capability. In this thesis, a diffusion and a simplified P3 (or SP3) neutron transport nodal method are developed based on trigonal geometry. Both models are implemented in the reactor dynamics code DYN3D. As yet, no other well-established nodal core analysis code comprises an SP3 transport theory model based on trigonal meshes. The development of two methods based on different neutron transport approximations but using an identical underlying spatial trigonal discretization allows a profound comparative analysis of both methods with regard to their mathematical derivations, nodal expansion approaches, solution procedures, and physical performance. The developed nodal approaches can be regarded as a hybrid NEM/AFEN form. They are based on the transverse-integration procedure, which renders them computationally efficient, and they use a combination of polynomial and exponential functions to represent the neutron flux moments of the SP3 and diffusion equations, which guarantees high accuracy. The SP3 equations are derived in within-group form, thus being of diffusion type. On this basis, the conventional diffusion solver structure can be retained also for the solution of the SP3 transport problem. The verification analysis provides proof of the methodological reliability of both trigonal DYN3D models. By means of diverse hexagonal academic benchmarks and realistic detailed-geometry full-transport-theory problems, the superiority of the SP3 transport model over the diffusion model is demonstrated in cases with pronounced anisotropy effects, which is, e.g., highly relevant to the modeling of fuel assemblies comprising absorber material.
International Nuclear Information System (INIS)
Teng Gaojun; He Shicheng; Deng Gang; Guo Jinhe; Fang Wen; Zhu Guangyu
2005-01-01
The objective of this study was to simplify the opacifying mixing process of the bone cement and contrast agent used for percutaneous vertebroplasty (PVP). We performed a biomechanical study of polymethyl methacrylate (PMMA) (Corinplast™ 3) using three different mixtures of PMMA powder, monomer, and contrast: group I, 2:1; group II, 3:2; group III, 3:2:1 ratio of powder/monomer/iodinated contrast (Omnipaque). In vitro biomechanical testing of ultimate compressive strength was carried out on all samples. Following the bone cement mixture regimen established in the in vitro study, PVP was performed in 125 patients: 58 with cancer, 12 with hemangioma, and 54 with osteoporotic fracture. The ultimate compressive strength in group III was decreased by 38% compared to groups I and II. Proper fluoroscopic visualization was achieved in all PVP procedures using this mixture. There were no major complications associated with injection of the cement mixture. Complete response (CR) and partial response (PR) were obtained in 64% and 32.8% of patients, respectively. No further vertebral collapse occurred during follow-up. The regimen using iodinated contrast for cement visualization during PVP provides a simple and convenient new mixing method. Although the biomechanical strength is altered by the added contrast medium, this appears insignificant in clinical practice based on the authors' limited experience.
Directory of Open Access Journals (Sweden)
J. Ochoa-Avendaño
2017-01-01
This paper presents the formulation, implementation, and validation of a simplified qualitative model to determine the crack path in solids under static loads, infinitesimal strain, and plane stress conditions. The model is based on the finite element method with a special meshing technique, in which nonlinear link elements are included between the faces of the linear triangular elements; the stiffness loss of some link elements represents the crack opening. Three experimental bending-beam tests are simulated, and the cracking pattern calculated with the proposed numerical model is similar to the experimental results. The advantages of the proposed model compared to discrete crack approaches with interface elements include its implementation simplicity, numerical stability, and very low computational cost. Simulations with larger values of the initial stiffness of the link elements do not affect the discontinuity path or the stability of the numerical solution. The exploded-mesh procedure presented in this model avoids a complex nonlinear analysis and regenerative or adaptive meshes.
Møller, Pål; Clark, Neal; Mæhle, Lovise
2011-05-01
A method for SImplified rapid Segregation Analysis (SISA), to assess the penetrance and expression of genetic variants in pedigrees of any complexity, is presented. The method assumes that the probability of recombination between the variant and the gene is zero, and that the variant of undetermined significance (VUS) is introduced into the family only once. If so, all family members between two members demonstrated to carry the VUS are obligate carriers. Probabilities for cosegregation of disease and VUS by chance, penetrance, and expression may then be calculated. SISA return values do not include person identifiers and need no explicit informed consent, so there are no ethical complications in submitting SISA return values to central databases. Values for several families may be combined, and the values for a family may be updated by the contributor. SISA is used to consider penetrance whenever sequencing demonstrates a VUS in the known cancer-predisposing genes. Any family structure at hand in a genetic clinic may be used. One may include an extended lineage in a family by demonstrating the same VUS in a distant relative, thereby identifying all obligate carriers in between. Such extension is a way of escaping selection bias by expanding the families beyond the clusters used to select them. © 2011 Wiley-Liss, Inc.
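Under the zero-recombination, single-introduction assumptions above, the probability of chance cosegregation halves with each informative meiosis; the one-liner below is a common rapid-segregation approximation, not necessarily the exact SISA formula.

```python
def cosegregation_by_chance(informative_meioses):
    """Probability that a variant cosegregates with disease purely by chance.

    With zero recombination assumed (as in the abstract) and a single
    introduction of the VUS into the pedigree, each informative meiosis
    halves the chance probability: p = 0.5 ** m. This is a standard
    rapid-segregation approximation, not necessarily the exact SISA formula.
    """
    return 0.5 ** informative_meioses

# Five informative meioses between demonstrated carriers:
print(cosegregation_by_chance(5))  # → 0.03125
```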
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
2010-01-01
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and nume...
Directory of Open Access Journals (Sweden)
Gladia Toledo Mayarí
2010-06-01
A cross-sectional technological innovation study was conducted to present a simplified method for determining growth potential in patients requiring orthodontic treatment, in a sample of 150 patients aged 8 to 16 years admitted to the Orthodontics Clinic of the Faculty of Stomatology of Havana between 2004 and 2006. Each patient underwent a radiograph of the left hand and, for the first time in Cuba, three methods of assessing growth potential were studied in the same sample: the TW2 method, the Grave and Brown method, and determination of the maturation stages of the middle phalanx of the third finger. Once these were determined, the correlation and concordance among the methods were calculated. High coefficients of correlation (females rho = 0.888, males rho = 0.921) and of concordance (females Kappa = 1.000, males Kappa = 0.964) were obtained. It was concluded that the growth potential of orthodontic patients can be assessed by means of a radiograph of the middle phalanx of the third finger of the left hand, which constitutes a useful simplified assessment method.
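The correlation and concordance statistics reported above (Spearman's rho and Cohen's kappa) can be reproduced for any pair of staging methods; the ten patients and stage labels below are hypothetical, for illustration only.

```python
import numpy as np
from scipy.stats import spearmanr

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels (plain NumPy)."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                    # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (po - pe) / (1.0 - pe)

# Two hypothetical skeletal-maturation staging methods on 10 patients:
m1 = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
m2 = [1, 2, 2, 3, 3, 4, 4, 4, 5, 5]
rho, _p = spearmanr(m1, m2)
print(round(rho, 3), round(cohen_kappa(m1, m2), 3))  # → 0.968 0.873
```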
Directory of Open Access Journals (Sweden)
Hong Chen
2013-01-01
Based on a decomposition of the evolution of urban expressway capacity after traffic accidents and an analysis of the influencing factors, an approach for estimating the capacity is proposed. First, the approach introduces the Decision Tree ID algorithm, solves for the accident delay time of different accident types using the information gain value, and determines the congestion dissipation time using traffic flow wave theory. Second, taking the accident delay time as the observation cycle, the maximum number of vehicles passing the accident section per unit time is considered its capacity. Finally, the capacity attenuation for different accident types was simulated with the VISSIM software. The simulation results suggest that the capacity attenuation for vehicle breakdown is the smallest, at 30.074%; next come vehicle fire, rear-end collision, and roll-over, at 38.389%, 40.204%, and 43.130%, respectively; and the capacity attenuation for vehicle collision is the largest, at 50.037%. Moreover, further analysis shows that the accident delay time is proportional to the congestion dissipation time, the time difference, and the ratio between them, but inversely related to the residual capacity of the urban expressway.
Ribeiro Fontoura, Jessica; Allasia, Daniel; Herbstrith Froemming, Gabriel; Freitas Ferreira, Pedro; Tassi, Rutineia
2016-04-01
Evapotranspiration is a key process of the hydrological cycle and the sole term linking the land surface water balance and the land surface energy balance. Due to the higher data requirements of the Penman-Monteith method and the uncertainty of existing data, simplified empirical methods for calculating potential and actual evapotranspiration are widely used in hydrological models. This is especially important in Brazil, where the monitoring of meteorological data is precarious. In this study, different methods for estimating evapotranspiration were compared for Rio Grande do Sul, the southernmost state of Brazil, aiming to suggest alternatives to the recommended method (Penman-Monteith FAO-56) for estimating daily reference evapotranspiration (ETo) when meteorological data are missing or unavailable. The input dataset included daily and hourly observed data from conventional and automatic weather stations, respectively, maintained by the National Weather Institute of Brazil (INMET) for the period 1 January 2007 to 31 January 2010. The dataset included maximum temperature (Tmax, °C), minimum temperature (Tmin, °C), mean relative humidity (%), wind speed at 2 m height (u2, m s-1), daily solar radiation (Rs, MJ m-2), and atmospheric pressure (kPa), grouped at a daily time step. The Food and Agriculture Organization of the United Nations (FAO) Penman-Monteith method (PM) was tested in its full form against PM with several variables, not normally available in Brazil, assumed missing, in order to calculate daily reference ETo. Missing variables were estimated as suggested in the FAO-56 publication or from climatological means. Furthermore, PM was also compared against the following simplified empirical methods: Hargreaves-Samani, Priestley-Taylor, McCloud, McGuinness-Bordne, Romanenko, Radiation-Temperature, and Tanner-Pelton. The statistical analysis indicates that even if just Tmin and Tmax are available, it is better to use PM estimating missing variables from synthetic data than
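Of the simplified methods listed, Hargreaves-Samani is the least data-hungry, needing only temperature extremes and extraterrestrial radiation. A minimal sketch with the standard 1985 coefficients and purely illustrative inputs:

```python
import math

def hargreaves_samani_eto(tmax_c, tmin_c, ra_mj_m2_day):
    """Daily reference evapotranspiration (mm/day) by Hargreaves-Samani (1985).

    Needs only the temperature extremes and extraterrestrial radiation Ra,
    which is why it is a candidate when full Penman-Monteith inputs are
    missing. Ra is converted from MJ m-2 day-1 to mm/day of equivalent
    evaporation via the standard 0.408 factor.
    """
    tmean = (tmax_c + tmin_c) / 2.0
    ra_mm = 0.408 * ra_mj_m2_day
    return 0.0023 * ra_mm * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

# Illustrative warm summer day (Tmax 30 °C, Tmin 18 °C, Ra 40 MJ m-2 day-1):
print(round(hargreaves_samani_eto(30.0, 18.0, 40.0), 2))  # → 5.44
```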
Chesson, Harrell W.; Markowitz, Lauri E.; Hariri, Susan; Ekwueme, Donatus U.; Saraiya, Mona
2016-01-01
Introduction: The objective of this study was to assess the incremental costs and benefits of the 9-valent HPV vaccine (9vHPV) compared with the quadrivalent HPV vaccine (4vHPV). Like 4vHPV, 9vHPV protects against HPV types 6, 11, 16, and 18; 9vHPV also protects against 5 additional HPV types: 31, 33, 45, 52, and 58. Methods: We adapted a previously published model of the impact and cost-effectiveness of 4vHPV to include the 5 additional HPV types in 9vHPV. The vaccine strategies we examined w...
International Nuclear Information System (INIS)
Liu Qing; Zhu Jiamin; Hong Bihai
2008-01-01
A modified variable-coefficient projective Riccati equation method is proposed and applied to a (2 + 1)-dimensional simplified and generalized Broer-Kaup system. It is shown that the method presented by Huang and Zhang [Huang DJ, Zhang HQ. Chaos, Solitons and Fractals 2005;23:601] is a special case of our method. The results obtained in the paper include many new formal solutions besides all the solutions found by Huang and Zhang.
Directory of Open Access Journals (Sweden)
Mary J Warrell
2008-04-01
The need for economical rabies post-exposure prophylaxis (PEP) is increasing in developing countries. Implementation of the two currently approved economical intradermal (ID) vaccine regimens is restricted due to confusion over different vaccines, regimens and dosages, lack of confidence in intradermal technique, and pharmaceutical regulations. We therefore compared a simplified 4-site economical PEP regimen with standard methods. Two hundred and fifty-four volunteers were randomly allocated to a single-blind controlled trial. Each received purified vero cell rabies vaccine by one of four PEP regimens: the currently accepted 2-site ID regimen; the 8-site regimen using 0.05 ml per ID site; a new 4-site ID regimen (on day 0, approximately 0.1 ml at 4 ID sites, using the whole 0.5 ml ampoule of vaccine; on day 7, 0.1 ml ID at 2 sites; and at one site on days 28 and 90); or the standard 5-dose intramuscular regimen. All ID regimens required the same total amount of vaccine, 60% less than the intramuscular method. Neutralising antibody responses were measured five times over a year in the 229 people for whom complete data were available. All ID regimens showed similar immunogenicity. The intramuscular regimen gave the lowest geometric mean antibody titres. Using the rapid fluorescent focus inhibition test, some sera had unexpectedly high antibody levels that were not attributable to previous vaccination. The results were confirmed using the fluorescent antibody virus neutralisation method. This 4-site PEP regimen proved as immunogenic as current regimens, and has the advantages of requiring fewer clinic visits, being more practicable, and having a wider margin of safety, especially in inexperienced hands, than the 2-site regimen. It is more convenient than the 8-site method, and can be used economically with vaccines formulated in 1.0 or 0.5 ml ampoules. The 4-site regimen now meets all requirements of immunogenicity for PEP and can be introduced without further studies.
Warrell, Mary J; Riddell, Anna; Yu, Ly-Mee; Phipps, Judith; Diggle, Linda; Bourhy, Hervé; Deeks, Jonathan J; Fooks, Anthony R; Audry, Laurent; Brookes, Sharon M; Meslin, François-Xavier; Moxon, Richard; Pollard, Andrew J; Warrell, David A
2008-04-23
The need for economical rabies post-exposure prophylaxis (PEP) is increasing in developing countries. Implementation of the two currently approved economical intradermal (ID) vaccine regimens is restricted due to confusion over different vaccines, regimens and dosages, lack of confidence in intradermal technique, and pharmaceutical regulations. We therefore compared a simplified 4-site economical PEP regimen with standard methods. Two hundred and fifty-four volunteers were randomly allocated to a single blind controlled trial. Each received purified vero cell rabies vaccine by one of four PEP regimens: the currently accepted 2-site ID; the 8-site regimen using 0.05 ml per ID site; a new 4-site ID regimen (on day 0, approximately 0.1 ml at 4 ID sites, using the whole 0.5 ml ampoule of vaccine; on day 7, 0.1 ml ID at 2 sites and at one site on days 28 and 90); or the standard 5-dose intramuscular regimen. All ID regimens required the same total amount of vaccine, 60% less than the intramuscular method. Neutralising antibody responses were measured five times over a year in 229 people, for whom complete data were available. All ID regimens showed similar immunogenicity. The intramuscular regimen gave the lowest geometric mean antibody titres. Using the rapid fluorescent focus inhibition test, some sera had unexpectedly high antibody levels that were not attributable to previous vaccination. The results were confirmed using the fluorescent antibody virus neutralisation method. This 4-site PEP regimen proved as immunogenic as current regimens, and has the advantages of requiring fewer clinic visits, being more practicable, and having a wider margin of safety, especially in inexperienced hands, than the 2-site regimen. It is more convenient than the 8-site method, and can be used economically with vaccines formulated in 1.0 or 0.5 ml ampoules. The 4-site regimen now meets all requirements of immunogenicity for PEP and can be introduced without further studies. Controlled
3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy
Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan
2017-01-01
High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3′-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of the acquired data while also preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm2. The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Being low cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings.
Method of estimation of scanning system quality
Larkin, Eugene; Kotov, Vladislav; Kotova, Natalya; Privalov, Alexander
2018-04-01
Estimation of scanner parameters is an important part of developing an electronic document management system. This paper suggests considering the scanner as a system that contains two main channels: a photoelectric conversion channel and a channel for measuring the spatial coordinates of objects. Although both channels consist of the same elements, the testing of their parameters should be executed separately. A special structure of the two-dimensional reference signal is offered for this purpose. In this structure, the fields for testing various parameters of the scanner are spatially separated. Characteristics of the scanner are associated with the loss of information when a document is digitized. Methods to test grayscale transmitting ability, resolution, and aberration level are offered.
Energy Technology Data Exchange (ETDEWEB)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
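The control-variates idea summarized above can be sketched in a few lines: correct a crude Monte Carlo mean with a correlated quantity whose expectation is known exactly. The integrand, distribution, and control variate below are illustrative stand-ins, not the repository transport model.

```python
import numpy as np

rng = np.random.default_rng(0)

def control_variate_mean(f_samples, g_samples, g_mean):
    """Improved estimate of E[f] using control variate g with known mean E[g]."""
    cov = np.cov(f_samples, g_samples)
    c = cov[0, 1] / cov[1, 1]          # optimal coefficient c* = Cov(f,g)/Var(g)
    return np.mean(f_samples) - c * (np.mean(g_samples) - g_mean)

# Toy PQI: f(u) = exp(u) with u ~ U(0,1); control g(u) = u with E[g] = 0.5.
u = rng.uniform(0.0, 1.0, 2000)
plain = np.exp(u).mean()                          # crude Monte Carlo estimate
improved = control_variate_mean(np.exp(u), u, 0.5)
# True value is e - 1; the corrected estimate typically has far smaller variance.
```

Because exp(u) and u are highly correlated, the correction removes most of the sampling noise at no extra simulation cost, which is the mechanism behind the reduced number of simulations mentioned in the abstract.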
Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.
2015-12-01
We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
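The incubation-constrained decomposition element of the approach can be sketched as three first-order carbon pools whose rates scale with soil temperature. All pool sizes, base rates, and the Q10 value below are illustrative placeholders, not the fitted parameters of the study.

```python
import numpy as np

def three_pool_decomposition(c0, k, q10, temp_c, years, dt=0.1):
    """Carbon remaining after thaw: three pools, first-order decay,
    Q10 temperature scaling (all parameter values are illustrative)."""
    c = np.array(c0, dtype=float)
    f = q10 ** ((temp_c - 5.0) / 10.0)    # rate multiplier vs a 5 degC reference
    for _ in range(int(years / dt)):
        c -= np.array(k) * f * c * dt     # frozen soil would have f = 0 (no decay)
    return c

# Fast / slow / passive pools (kgC per m^2) with per-year base rates.
c_end = three_pool_decomposition(
    c0=[1.0, 10.0, 40.0], k=[0.5, 0.05, 0.001], q10=2.5, temp_c=5.0, years=90)
# The fast pool is essentially gone after 90 years; the passive pool barely changes.
```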
A Method for Estimating Surveillance Video Georeferences
Directory of Open Access Journals (Sweden)
Aleksandar Milosavljević
2017-07-01
Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEMs) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
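The pose-fitting step can be sketched with a toy pinhole camera and SciPy's Levenberg–Marquardt solver. This is a generic illustration, not the authors' implementation: the focal length, Euler-angle parameterization, and synthetic point matches are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, pts3d, focal):
    """Pinhole projection: params = [x, y, z, yaw, pitch, roll (rad)]."""
    pos, angles = params[:3], params[3:]
    r = Rotation.from_euler("zyx", angles)
    cam = r.apply(pts3d - pos, inverse=True)     # world -> camera frame
    return focal * cam[:, :2] / cam[:, 2:3]      # image-plane coordinates

def estimate_pose(pts3d, pts2d, guess, focal=1000.0):
    """Levenberg-Marquardt fit of camera position/orientation to matched points."""
    resid = lambda p: (project(p, pts3d, focal) - pts2d).ravel()
    return least_squares(resid, guess, method="lm").x

# Synthetic check: recover a known camera pose from six matched points.
rng = np.random.default_rng(1)
true = np.array([10.0, -5.0, 2.0, 0.3, 0.1, -0.2])
world = rng.uniform(-50, 50, (6, 3)) + np.array([0, 0, 200.0])
image = project(true, world, 1000.0)
est = estimate_pose(world, image, guess=true + 0.05)
```

With noise-free matches and a reasonable initial guess the solver recovers the pose essentially exactly; in practice the 2D coordinates would come from the matched orthophoto/DEM features.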
DEFF Research Database (Denmark)
Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per
2014-01-01
The research aims to develop a simplified calculation method for an intelligent glazed facade under different control conditions (night shutter, solar shading and natural ventilation) to simulate the energy performance and indoor environment of an office room fitted with the intelligent facade. The method takes the angle dependence of the solar characteristics into account and includes the simplified hourly building model developed according to EN 13790 to evaluate the influence of the controlled facade on the indoor environment (indoor air temperature, solar transmittance through the facade). With the method it is possible to calculate the whole-year performance of a room or building with an intelligent glazed facade, which makes it a less time-consuming tool for investigating the performance of the intelligent facade under different control strategies in the design stage with acceptable accuracy. Results showed good …
Energy Technology Data Exchange (ETDEWEB)
Eripret, C.; Franco, C.; Gilles, P.
1995-12-31
The J-based criteria give reasonable predictions of the failure behaviour of ductile cracked metallic structures, even if the material characterization may be sensitive to the size of the specimens. In cracked welds, however, this phenomenon, due to stress triaxiality effects, can be enhanced. Furthermore, the application of conventional methods of toughness measurement (ESIS or ASTM standards) has evidenced a strong influence of the portion of the weld metal in the specimen. Several authors have shown the inadequacy of the simplified J-estimation methods developed for homogeneous materials. These heterogeneity effects are mainly related to the mismatch ratio (ratio of weld metal yield strength to base metal yield strength) as well as to the geometrical parameter h/(W-a) (weld width upon ligament size). In order to make decisive progress in this field, the Atomic Energy Commission (CEA), the PWR manufacturer FRAMATOME, and the French utility (EDF) have launched a large research program on the behaviour of cracked piping welds. As part of this program, a new J-estimation scheme, the so-called ARAMIS, has been developed to account for the influence of both materials, i.e. base metal and weld metal, on the structural resistance of cracked welds. It has been shown that, when the mismatch is high and the ligament size is small compared to the weld width, a classical J-based method using the softer material's properties is very conservative. By contrast, the ARAMIS method provides a good estimate of J, because it predicts quite well the shift of the cracked weld limit load due to the presence of the weld. The influence of geometrical parameters such as crack size, weld width, or specimen length is properly accounted for. (authors). 23 refs., 8 figs., 1 tab., 1 appendix.
The MIRD method of estimating absorbed dose
International Nuclear Information System (INIS)
Weber, D.A.
1991-01-01
The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution and clearance data of radiopharmaceuticals with the physical properties of radionuclides to obtain dose estimates. This tutorial reviews the MIRD schema, derives the equations used to calculate absorbed dose, and shows how the schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
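The core MIRD computation is a sum over source organs of cumulated activity times an S value (mean absorbed dose to a target per unit cumulated activity in a source). The numbers below are illustrative placeholders, not published MIRD tabulations.

```python
# Cumulated activities (uCi*h) and S values (rad per uCi*h) are invented
# for illustration; real values come from biokinetic data and MIRD tables.
cumulated_activity = {"liver": 500.0, "spleen": 80.0, "remainder": 1200.0}
s_factor_to_kidneys = {"liver": 8.5e-6, "spleen": 3.2e-6, "remainder": 1.1e-6}

# D(target) = sum over sources of A_tilde(source) * S(target <- source)
dose_kidneys = sum(cumulated_activity[src] * s_factor_to_kidneys[src]
                   for src in cumulated_activity)   # rad
```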
Psychological methods of subjective risk estimates
International Nuclear Information System (INIS)
Zimolong, B.
1980-01-01
Reactions to situations involving risks can be divided into the following parts: perception of danger, subjective estimation of the risk, and risk taking with respect to action. Several investigations have compared subjective estimates of the risk with an objective measure of that risk. In general there was a mismatch between subjective and objective measures of risk; in particular, the objective risk involved in routine activities is most commonly underestimated. This implies, for accident prevention, that attempts must be made to induce accurate subjective risk estimates by technical and behavioural measures. (orig.) [de]
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
DEFF Research Database (Denmark)
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
PERFORMANCE ANALYSIS OF METHODS FOR ESTIMATING ...
African Journals Online (AJOL)
2014-12-31
Dec 31, 2014 … speed is the most significant parameter of the wind energy. … wind-powered generators and applied to estimate potential power output at various … Wind and Solar Power Systems, U.S. Merchant Marine Academy Kings.
Estimation methods for special nuclear materials holdup
International Nuclear Information System (INIS)
Pillay, K.K.S.; Picard, R.R.
1984-01-01
The potential value of statistical models for the estimation of residual inventories of special nuclear materials was examined using holdup data from processing facilities and through controlled experiments. Although the measurement of hidden inventories of special nuclear materials in large facilities is a challenging task, reliable estimates of these inventories can be developed through a combination of good measurements and the use of statistical models. 7 references, 5 figures
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
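Taylor's Rule, in its commonly quoted form, estimates mine life from ore tonnage, from which an implied operating rate follows. A quick sketch (the 350 operating days per year is an assumption, and the rule is an empirical regularity, not a physical law):

```python
def taylors_rule_life(ore_tonnage_mt):
    """Taylor's Rule (common form): mine life in years ~ 6.5 * tonnage(Mt)^0.25."""
    return 6.5 * ore_tonnage_mt ** 0.25

def daily_capacity_tonnes(ore_tonnage_mt, operating_days=350):
    """Ore production rate implied by the Taylor life estimate."""
    life_years = taylors_rule_life(ore_tonnage_mt)
    return ore_tonnage_mt * 1e6 / (life_years * operating_days)

life = taylors_rule_life(100.0)      # roughly 20.6 years for a 100 Mt deposit
rate = daily_capacity_tonnes(100.0)  # on the order of 14,000 t/d
```

The quarter-power form captures the empirical observation that larger deposits are mined at proportionally higher rates but over longer lives.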
DEFF Research Database (Denmark)
Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per
2013-01-01
The study aims to develop a simplified calculation method to simulate the performance of a double glazing facade with night insulation. This paper describes the method to calculate the thermal properties (U-value) and comfort performance (internal surface temperature of glazing) of the double glazing facade; the performance with night insulation is calculated and compared with that of the facade without the night insulation. Based on standards EN 410 and EN 673, the method takes the thermal mass of the glazing and the infiltration between the insulation layer and the glazing into account. Furthermore, it is capable of implementing whole …
System and method for traffic signal timing estimation
Dumazert, Julien; Claudel, Christian G.
2015-01-01
A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
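One plausible scoring function of the kind described (a sketch, not necessarily the patented one) folds the observed post-green start times by a candidate cycle length and scores how tightly the resulting phases cluster:

```python
import numpy as np

def estimate_cycle(start_times, candidates):
    """Pick the cycle length whose folding concentrates the observed
    post-green start times most tightly (circular mean resultant length)."""
    t = np.asarray(start_times, dtype=float)
    scores = [np.abs(np.exp(2j * np.pi * t / c).mean()) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Probe vehicles start moving shortly after each green onset of a 90 s cycle.
rng = np.random.default_rng(2)
starts = 90.0 * rng.integers(0, 40, 60) + rng.uniform(0.0, 8.0, 60)
cycle = estimate_cycle(starts, candidates=np.arange(30.0, 151.0, 1.0))
```

The complex-exponential score is maximal when all start times fall at the same phase of the candidate cycle, which is exactly the signature of starts clustered just after green onsets.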
A new rapid method for rockfall energies and distances estimation
Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric
2016-04-01
Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope; they are generally based on a probabilistic rockfall modelling approach that allows for taking into account the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed finding empirical laws that relate impact energies
DEFF Research Database (Denmark)
Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per
2014-01-01
The research aims to develop a simplified calculation method for a double glazing facade to calculate its thermal and solar properties (U and g value) together with comfort performance (internal surface temperature of the glazing). The double glazing is defined as a 1D model with nodes representing …, taking the thermal mass of the glazing into account. In addition, the angle and spectral dependency of the solar characteristics is also considered during the calculation. By using the method, it is possible to calculate whole-year performance at different time steps, which makes it a time-economical and accurate …
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
…based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins, NAWCWD. …Office by Carnegie Mellon University. SMPSP and SMTSP are service marks of Carnegie Mellon University. 1. Rickets, Chris A., "A TSP Software Maintenance Life Cycle", CrossTalk, March 2005. 2. Koch, Alan S., "TSP Can Be the Building Blocks for CMMI", CrossTalk, March 2005. 3. Hodgins, Brad; Rickets, …
International Nuclear Information System (INIS)
Carter, J.A.; Walker, R.L.; Eby, R.E.; Pritchard, C.A.
1976-01-01
In this simplified technique a basic anion resin is employed to selectively adsorb plutonium and uranium from 8M HNO3 solutions containing dissolved spent reactor fuels. After a few beads of the resin are equilibrated with the solution, a single bead is used for establishing the isotopic composition of plutonium and uranium. The resin-bead separation essentially removes all possible isobaric interference from such elements as americium and curium and at the same time eliminates most fission-product contamination in the mass spectrometer. Small aliquots of dissolver solution that contain 10⁻⁶ g U and 10⁻⁸ g Pu are adequate for preparing about ten resin beads. By employing a single-focusing tandem magnet-type mass spectrometer, equipped with pulse counting for ion detection, simultaneous plutonium and uranium assays are obtained. The quantity of each element per bead may be as low as 10⁻⁹ to 10⁻¹⁰ g. The carburized bead, which forms as the filament is heated, acts as a reducing point source and emits a predominance of metallic ions as compared with oxide ion emission from direct solution loadings. In addition to isotopic abundance, the technique of isotope dilution can be coupled with the ion-exchange bead separation and used effectively for measuring the total quantity of U and Pu. The technique possesses many advantages: reduced radiation hazards from the far smaller samples, and thus less shielding and transport cost for sample handling; greatly simplified chemical preparations that eliminate fission products and actinide isobaric interferences; and more precise establishment of the minor isotopes. (author)
Bin mode estimation methods for Compton camera imaging
International Nuclear Information System (INIS)
Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.
2014-01-01
We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
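The flavor of the EM iteration can be shown with a generic MLEM update for Poisson count data, here on a toy two-pixel system rather than a Compton-camera response (the system matrix below is invented for illustration):

```python
import numpy as np

def mlem(system_matrix, counts, n_iter=200):
    """Maximum-likelihood EM for Poisson emission data: y ~ Poisson(A @ lam)."""
    a = np.asarray(system_matrix, dtype=float)
    lam = np.ones(a.shape[1])            # flat initial image
    sens = a.sum(axis=0)                 # per-pixel sensitivity
    for _ in range(n_iter):
        expected = a @ lam
        lam *= (a.T @ (counts / expected)) / sens   # multiplicative EM update
    return lam

# Tiny 2-pixel, 3-bin toy: recover intensities from noiseless projections.
a = np.array([[0.8, 0.1], [0.1, 0.8], [0.1, 0.1]])
true_lam = np.array([4.0, 2.0])
lam_hat = mlem(a, a @ true_lam)
```

The same multiplicative structure underlies accelerated MLE and MAP variants; only the system matrix (and, for MAP, a prior term) changes.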
Empirical methods for estimating future climatic conditions
International Nuclear Information System (INIS)
Anon.
1990-01-01
Applying the empirical approach permits the derivation of estimates of the future climate that are nearly independent of conclusions based on theoretical (model) estimates. This creates an opportunity to compare these results with those derived from the model simulations of the forthcoming changes in climate, thus increasing confidence in areas of agreement and focusing research attention on areas of disagreement. The premise underlying this approach for predicting anthropogenic climate change is based on associating the conditions of the climatic optimums of the Holocene, Eemian, and Pliocene with corresponding stages of the projected increase of mean global surface air temperature. Provided that certain assumptions are fulfilled in matching the value of the increased mean temperature for a certain epoch with the model-projected change in global mean temperature in the future, the empirical approach suggests that relationships leading to the regional variations in air temperature and other meteorological elements could be deduced and interpreted based on the use of empirical data describing climatic conditions for past warm epochs. Considerable care must be taken, of course, in making use of these spatial relationships, especially in accounting for possible large-scale differences that might, in some cases, result from different factors contributing to past climate changes than to future changes and, in other cases, might result from the possible influences of changes in orography and geography on regional climatic conditions over time.
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods.
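A much simpler single-channel relative of such pitch estimators, harmonic summation over an FFT grid, illustrates the idea of maximizing a spectral match over candidate fundamentals (the signal and all parameter choices below are illustrative):

```python
import numpy as np

def harmonic_summation_pitch(x, fs, f_range=(80.0, 400.0), n_harm=5):
    """Estimate pitch by maximizing summed spectral power at harmonic bins."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    cands = np.arange(*f_range, 0.5)
    def score(f0):
        bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
        return spec[bins].sum()
    return cands[np.argmax([score(f) for f in cands])]

# Synthetic periodic source: 220 Hz fundamental with two decaying harmonics.
fs = 8000.0
t = np.arange(4096) / fs
x = sum(np.sin(2 * np.pi * 220.0 * k * t) / k for k in range(1, 4))
f0 = harmonic_summation_pitch(x, fs)
```

Joint DOA/pitch methods extend this kind of criterion across microphone channels so that both parameters are found from one likelihood surface.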
Sensitivity Analysis of a Simplified Fire Dynamic Model
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...
portfolio optimization based on nonparametric estimation methods
Directory of Open Access Journals (Sweden)
mahsa ghandehari
2017-03-01
Full Text Available One of the major issues investors face in capital markets is deciding on an appropriate stock exchange for investing and selecting an optimal portfolio. This process is done through risk and expected-return assessment. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. But expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper, introducing conditional value at risk (CVaR) as a risk measure in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange during the winter of 1392, with returns considered from April of 1388 to June of 1393 (Iranian calendar). The results of this study show the superiority of the nonparametric method over the linear programming method; the nonparametric method is also much faster.
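The CVaR portfolio problem itself reduces to a linear program via the Rockafellar–Uryasev formulation; a sketch with SciPy's `linprog` on synthetic returns (the return model and constraint levels are assumptions, not the paper's data):

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(returns, beta=0.95, target=0.0):
    """Rockafellar-Uryasev LP: minimize CVaR_beta of losses subject to
    a minimum expected return, with long-only weights summing to one."""
    s, n = returns.shape
    # Decision vector z = [x (n weights), alpha (VaR level), u (s excess losses)]
    c = np.concatenate([np.zeros(n), [1.0], np.full(s, 1.0 / ((1 - beta) * s))])
    a_ub = np.zeros((s + 1, n + 1 + s))
    a_ub[:s, :n] = -returns                  # u_s >= -r_s.x - alpha
    a_ub[:s, n] = -1.0
    a_ub[:s, n + 1:] = -np.eye(s)
    a_ub[s, :n] = -returns.mean(axis=0)      # mean return >= target
    b_ub = np.concatenate([np.zeros(s), [-target]])
    a_eq = np.zeros((1, n + 1 + s))
    a_eq[0, :n] = 1.0                        # fully invested
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * s
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n], res.fun                # weights, optimal CVaR

rng = np.random.default_rng(3)
monthly = rng.normal([0.01, 0.008, 0.012], [0.06, 0.02, 0.09], size=(60, 3))
w, cvar = min_cvar_portfolio(monthly, beta=0.95, target=0.0)
```

The auxiliary variables `u` linearize the tail-loss average, which is why CVaR, unlike VaR, fits naturally into an LP.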
Simplified scheme for radioactive plume calculations
International Nuclear Information System (INIS)
Gibson, T.A.; Montan, D.N.
1976-01-01
A simplified mathematical scheme to estimate external whole-body γ radiation exposure rates from gaseous radioactive plumes was developed for the Rio Blanco Gas Field Nuclear Stimulation Experiment. The method enables one to calculate swiftly, in the field, downwind exposure rates knowing the meteorological conditions and γ radiation exposure rates measured by detectors positioned near the plume source. The method is straightforward and easy to use under field conditions without the help of mini-computers. It is applicable to a wide range of radioactive plume situations. It should be noted that the Rio Blanco experiment was detonated on May 17, 1973, and no seep or release of radioactive material occurred
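A modern equivalent of such field estimates is the standard Gaussian plume formula with ground reflection; the release rate, dispersion parameters, and effective height below are illustrative, and the external exposure rate would then follow from concentration via dose-conversion factors:

```python
import numpy as np

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration (Bq/m^3) with ground reflection.
    q: release rate (Bq/s), u: wind speed (m/s), h: effective release height (m),
    sigma_y, sigma_z: dispersion parameters (m) at the downwind distance of interest."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level, centerline concentration with sigmas taken at ~1 km downwind.
c = plume_concentration(q=1e9, u=5.0, y=0.0, z=0.0, h=50.0,
                        sigma_y=80.0, sigma_z=40.0)
```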
Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'
International Nuclear Information System (INIS)
Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi
1996-01-01
To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated keff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated keff. (author)
Directory of Open Access Journals (Sweden)
Zhong-ye Tian
2014-01-01
Full Text Available The seismic responses of a long-span cable-stayed bridge under uniform excitation and traveling wave excitation in the longitudinal direction are, respectively, computed. The numerical results show that the bridge’s peak seismic responses vary significantly as the apparent wave velocity decreases. Therefore, the traveling wave effect must be considered in the seismic design of long-span bridges. The bridge’s peak seismic responses do not vary monotonously with the apparent wave velocity due to the traveling wave resonance. A new traveling wave excitation method that can simplify the multisupport excitation process into a two-support excitation process is developed.
Thermodynamic properties of organic compounds estimation methods, principles and practice
Janz, George J
1967-01-01
Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the
Directory of Open Access Journals (Sweden)
B. Merckx
2012-01-01
Full Text Available The thermal conductivity measurement by a simplified transient hot-wire technique is applied to geomaterials in order to show the relationships which can exist between effective thermal conductivity, texture, and moisture of the materials. After a validation of the "one hot-wire" technique used, in water, toluene, and glass-bead assemblages, the investigations were performed (1) in glass-bead assemblages of different diameters in dried, water-saturated, and acetone-saturated states, in order to observe the role of grain size and saturation on the effective thermal conductivity, (2) in a compacted earth brick at different moisture states, and (3) in a lime-hemp concrete during 110 days following its manufacture. The lime-hemp concrete allows measurements during the setting, desiccation and carbonation steps. The recorded ΔT versus ln(t) diagrams allow the calculation of one effective thermal conductivity in the continuous and homogeneous fluids and two effective thermal conductivities in the heterogeneous solids. The first one, measured in short-time acquisitions (<1 s), mainly depends on the contact between the wire and the grains and thus on the microtexture and hydrated state of the material. The second one, measured for longer acquisition times, characterizes the mean effective thermal conductivity of the material.
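The conductivity extraction from the ΔT versus ln(t) line is the transient hot-wire working equation λ = q / (4π · slope), where q is the heating power per unit wire length. A sketch with synthetic data (the power and conductivity values are illustrative):

```python
import numpy as np

Q_PER_LENGTH = 2.0          # W/m, heating power per unit wire length (assumed)
TRUE_LAMBDA = 0.6           # W/(m*K), e.g. water, used to synthesize data

t = np.linspace(1.0, 10.0, 50)                       # s, long-time window
dT = Q_PER_LENGTH / (4 * np.pi * TRUE_LAMBDA) * np.log(t) + 0.05

# Effective conductivity from the slope of the dT-versus-ln(t) line.
slope = np.polyfit(np.log(t), dT, 1)[0]
lam = Q_PER_LENGTH / (4 * np.pi * slope)             # W/(m*K)
```

The intercept (here 0.05 K) absorbs the wire and contact properties, which is why the short-time and long-time slopes can yield the two different conductivities described above.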
System and method for correcting attitude estimation
Josselson, Robert H. (Inventor)
2010-01-01
A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.
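A minimal structural sketch of the feed-forward arrangement described above (integrator, fixed-gain compensator, summer) may clarify the signal flow; the sample period, gain value, and rate profile below are invented for illustration and are not taken from the patent:

```python
# Structural sketch only, NOT the patented implementation: an integrator
# forms non-compensated attitudes from angular rates, and a fixed-gain
# compensator in a feed-forward path (no error-signal feedback) adds a
# scaled copy of the rates before the summer.
def estimate_attitude(rates, dt, gain):
    """rates: angular-rate samples [rad/s]; returns estimated attitudes."""
    attitude = 0.0                    # integrator state (non-compensated)
    estimates = []
    for w in rates:
        attitude += w * dt            # integrator: rate -> attitude
        compensated = gain * w        # feed-forward compensator (fixed gain)
        estimates.append(attitude + compensated)  # summer
    return estimates

# Constant rate of 0.1 rad/s for 10 s at 1 Hz, illustrative gain of 0.5 s.
est = estimate_attitude([0.1] * 10, dt=1.0, gain=0.5)
```

With a constant rate the integrator output grows linearly while the feed-forward term stays constant, which makes the two contributions easy to separate when inspecting the output.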
Control and estimation methods over communication networks
Mahmoud, Magdi S
2014-01-01
This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features: · an overall assessment of recent and current fault-tolerant control algorithms; · treatment of several issues arising at the junction of control and communications; · key concepts followed by their proofs and efficient computational methods for their implementation; and · simulation examples (including TrueTime simulations) to...
Bayesian methods to estimate urban growth potential
Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.
2017-01-01
Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.
Simplified tritium permeation model
International Nuclear Information System (INIS)
Longhurst, G.R.
1993-01-01
In this model I seek to provide a simplified approach to solving permeation problems addressed by TMAP4. I will assume that there are m one-dimensional segments with thicknesses L_i, i = 1, 2, …, m, joined in series with an implantation flux, J_i, implanting at a single depth, δ, in the first segment. From material properties and heat transfer considerations, I calculate temperatures at each face of each segment, and from those temperatures I find local diffusivities and solubilities. I assume recombination coefficients K_r1 and K_r2 are known at the upstream and downstream faces, respectively, but the model will generate Baskes recombination coefficient values on demand. Here I first develop the steady-state concentration equations and then show how trapping considerations can lead to good estimates of permeation transient times.
Comparison of methods for estimating carbon in harvested wood products
International Nuclear Information System (INIS)
Claudia Dias, Ana; Louro, Margarida; Arroja, Luis; Capela, Isabel
2009-01-01
There is a great diversity of methods for estimating carbon storage in harvested wood products (HWP) and, therefore, it is extremely important to agree internationally on the methods to be used in national greenhouse gas inventories. This study compares three methods for estimating carbon accumulation in HWP: the method suggested by Winjum et al. (Winjum method), the tier 2 method proposed by the IPCC Good Practice Guidance for Land Use, Land-Use Change and Forestry (GPG LULUCF) (GPG tier 2 method), and a method consistent with GPG LULUCF tier 3 methods (GPG tier 3 method). Carbon accumulation in HWP was estimated for Portugal under three accounting approaches: stock-change, production, and atmospheric-flow. The uncertainty in the estimates was also evaluated using Monte Carlo simulation. The estimates of carbon accumulation in HWP obtained with the Winjum method differed substantially from those obtained with the other methods, because this method tends to overestimate carbon accumulation with the stock-change and production approaches and to underestimate it with the atmospheric-flow approach. The estimates of carbon accumulation provided by the GPG methods were similar, but the GPG tier 3 method reported the lowest uncertainties. For the GPG methods, the atmospheric-flow approach produced the largest estimates of carbon accumulation, followed by the production approach and the stock-change approach, in that order. A sensitivity analysis showed that using the 'best' available data on production and trade of HWP produces larger estimates of carbon accumulation than using data from the Food and Agriculture Organization. (author)
Simplified elastoplastic fatigue analysis
International Nuclear Information System (INIS)
Autrusson, B.; Acker, D.; Hoffmann, A.
1987-01-01
Oligocyclic fatigue behaviour is a function of the local strain range. The design codes ASME Section III, RCC-M, Code Case N47, RCC-MR, and the Guide issued by PNC propose simplified methods to evaluate the local strain range. After briefly describing these simplified methods, we tested them by comparing experimental strains with those predicted by these rules. The experiments conducted for this study involved perforated plates under tensile stress, notched or reinforced beams under four-point bending stress, grooved specimens under tensile-compressive stress, and embedded grooved beams under bending stress. They display a relative conservatism depending on each case. The evaluation of the strains is rather inaccurate and sometimes lacks conservatism. So far, the proposal is to use finite element codes with a simple model. The isotropic model with the cyclic consolidation curve offers a good representation of the real equivalent strain. There is obviously no question of representing the cycles and the entire loading history, but merely of calculating the maximum variation in elastoplastic equivalent strain with constant-rate loading. The results presented testify to the good prediction of the strains with this model. The maximum equivalent strain will be employed to evaluate fatigue damage.
New methods of testing nonlinear hypothesis using iterative NLLS estimator
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using the iterative NLLS estimator based on nonlinear studentized residuals has also been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus the nonlinear regression model with heteroscedastic errors, and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
Novel method for quantitative estimation of biofilms
DEFF Research Database (Denmark)
Syal, Kirtimaan
2017-01-01
Biofilm protects bacteria from stress and hostile environments. The crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, the biofilm layer formed at the liquid-air interface, known as a pellicle, is extremely sensitive to its washing… and staining steps. Early-phase biofilms are also prone to damage by the latter steps. In bacteria like mycobacteria, biofilm formation occurs largely at the liquid-air interface, which is susceptible to loss. In the proposed protocol, loss of such a biofilm layer was prevented. In place of inverting… and discarding the media, which can lead to the loss of the aerobic biofilm layer in the CV assay, media was removed from the formed biofilm with the help of a syringe and the biofilm layer was allowed to dry. The staining and washing steps were avoided, and an organic solvent, tetrahydrofuran (THF), was deployed…
Novel Method for 5G Systems NLOS Channels Parameter Estimation
Directory of Open Access Journals (Sweden)
Vladeta Milenkovic
2017-01-01
For the development of new 5G systems to operate in mm-wave bands, there is a need for accurate radio propagation modelling at these bands. In this paper a novel approach for NLOS channel parameter estimation is presented. Estimation is performed based on the LCR performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of ML and moment-method estimation approaches.
Investigation on method of estimating the excitation spectrum of vibration source
International Nuclear Information System (INIS)
Zhang Kun; Sun Lei; Lin Song
2010-01-01
In practical engineering, it is hard to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measurement points is obtained through experiment. This matrix, combined with the response spectrum at the measurement points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. Then a simplified method is proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. This method treats the centroid as the excitation point, and the dynamic transfer matrix is derived using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measurement points. Based on the above method, a computing example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation with the estimated excitation, the reliability of this method is verified. (authors)
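The inversion step described above can be sketched in a few lines. The matrix sizes and values below are invented, and a pseudo-inverse stands in for the matrix inverse so the sketch also covers the overdetermined case (more measurement points than excitation degrees of freedom):

```python
import numpy as np

# If X(f) is the response spectrum at the measurement points and H(f) the
# measured transfer matrix from the (virtual) excitation points, the
# excitation spectrum at one frequency is recovered as F(f) = H(f)^+ X(f).
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 2))    # 4 measurement points, 2 excitation DOFs
F_true = np.array([2.0, -1.0])     # "unknown" excitation at one frequency
X = H @ F_true                     # responses that would be measured
F_est = np.linalg.pinv(H) @ X      # estimated excitation spectrum
```

In the noise-free, full-column-rank case the pseudo-inverse recovers the excitation exactly; with measurement noise it returns the least-squares estimate, which is why indirect methods of this kind benefit from extra measurement points.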
VHTRC experiment for verification test of H∞ reactivity estimation method
International Nuclear Information System (INIS)
Fujii, Yoshio; Suzuki, Katsuo; Akino, Fujiyoshi; Yamane, Tsuyoshi; Fujisaki, Shingo; Takeuchi, Motoyoshi; Ono, Toshihiko
1996-02-01
This experiment was performed at VHTRC to acquire data for verifying the H∞ reactivity estimation method. In this report, the experimental method, the measuring circuits, and the data processing software are described in detail. (author)
Carbon footprint: current methods of estimation.
Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker
2011-07-01
Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing grievous global warming and associated consequences. Following the rule that only the measurable is manageable, measurement of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving, and the carbon footprint is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in the definitions and calculations of carbon footprints among studies. There are disagreements in the selection of gases and the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.
DEFF Research Database (Denmark)
Stensballe, A; Jensen, Ole Nørregaard
2001-01-01
Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF-MS) is used as the first protein screening method in many laboratories because of its inherent simplicity, mass accuracy, sensitivity, and relatively high sample throughput. We present a simplified sample preparation method for MALDI-MS that enables in-gel digestion… for protein identification similar to that obtained by the traditional protocols for in-gel digestion and MALDI peptide mass mapping of human proteins, i.e. approximately 60%. The overall performance of the novel on-probe digestion method is comparable with that of the standard in-gel sample preparation… protocol while being less labour intensive and more cost-effective due to minimal consumption of reagents, enzymes, and consumables. Preliminary data obtained on a MALDI quadrupole-TOF tandem mass spectrometer demonstrated the utility of the on-probe digestion protocol for peptide mass mapping and peptide…
THE METHODS FOR ESTIMATING REGIONAL PROFESSIONAL MOBILE RADIO MARKET POTENTIAL
Directory of Open Access Journals (Sweden)
Y.À. Korobeynikov
2008-12-01
The paper presents the author's methods of estimating the regional professional mobile radio market potential, which belongs to high-tech B2B markets. These methods take into consideration such market peculiarities as the great range and complexity of products, technological constraints, and the infrastructure development required for the operation of the technological systems. The paper gives an estimation of the professional mobile radio potential in Perm region. This estimation is already used by one of the system integrators for its strategy development.
Evaluation and reliability of bone histological age estimation methods
African Journals Online (AJOL)
Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
A Fast LMMSE Channel Estimation Method for OFDM Systems
Directory of Open Access Journals (Sweden)
Zhou Wen
2009-01-01
A fast linear minimum mean square error (LMMSE) channel estimation method has been proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require statistical knowledge of the channel in advance and avoids the inversion of a large-dimension matrix by using the fast Fourier transform (FFT) operation. Therefore, the computational complexity can be reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and the conventional LMMSE estimation have been derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
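The idea of replacing a large matrix inverse by FFTs can be illustrated under a simplifying assumption the abstract itself does not make: if the channel correlation matrix were circulant, the LMMSE filter W = R(R + σ²I)⁻¹ would diagonalize under the FFT, so explicit inversion and FFT-domain filtering give identical results. All values below are invented:

```python
import numpy as np

N, s2 = 8, 0.1
# Symmetric circulant first row, so the correlation matrix R is circulant
# and its eigenvalues are the (real) FFT of this row.
r = 0.9 ** np.minimum(np.arange(N), N - np.arange(N))
R = np.array([[r[(j - i) % N] for j in range(N)] for i in range(N)])
h_ls = np.ones(N) + 0.3 * np.random.default_rng(1).standard_normal(N)

# Conventional LMMSE smoothing of the LS estimate: explicit N x N solve.
h1 = R @ np.linalg.solve(R + s2 * np.eye(N), h_ls)

# FFT-based filtering: multiply per-bin by lam/(lam + s2), no inverse.
lam = np.fft.fft(r).real
h2 = np.fft.ifft(lam / (lam + s2) * np.fft.fft(h_ls)).real
```

The per-bin division costs O(N log N) via the FFT instead of O(N³) for the solve, which is the kind of saving the paper's method targets (by a different construction).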
Investigation of MLE in nonparametric estimation methods of reliability function
International Nuclear Information System (INIS)
Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo
2001-01-01
There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed, the major point of which is how to use censored data efficiently. Generally there are three kinds of approach to estimating a reliability function in a nonparametric way: the Reduced Sample Method, the Actuarial Method, and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. So the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator. The difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not.
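For reference, the Product-Limit (Kaplan-Meier) estimator named above can be written compactly. This is the textbook estimator, not the paper's modified method, and the sample data are invented:

```python
# Product-Limit (Kaplan-Meier) estimator for right-censored data: at each
# observed failure time t, S(t) is multiplied by (1 - d/n), where d is the
# number of failures at t and n the number of units still at risk.
def product_limit(times, failed):
    """times: observation times; failed: 1 = failure, 0 = censored.
    Returns a list of (time, S(t)) at each distinct failure time."""
    data = sorted(zip(times, failed))
    n = len(data)
    s, out, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        d = sum(1 for tt, f in data if tt == t and f == 1)
        at_risk = n - i                  # units still under observation at t
        if d > 0:
            s *= 1.0 - d / at_risk
            out.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)
    return out

# Failures at t = 1, 3, 4; censored observations at t = 2, 5.
est = product_limit([1, 2, 3, 4, 5], [1, 0, 1, 1, 0])
```

Censored units leave the risk set without forcing a step in S(t), which is exactly the "efficient use of censored data" the nonparametric methods above compete on.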
Joint Pitch and DOA Estimation Using the ESPRIT method
DEFF Research Database (Denmark)
Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom
2015-01-01
In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and the ESPRIT method, based on subspace techniques that exploit… the invariance property in the time domain, is first used to estimate the pitch frequencies of multiple harmonic signals. Using the estimated pitch frequencies, DOA estimates based on the ESPRIT method are then obtained by exploiting the shift-invariance structure in the spatial domain. Compared… to the existing state-of-the-art algorithms, the proposed method, based on ESPRIT without 2-D searching, is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed…
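The shift-invariance idea behind ESPRIT can be shown in its simplest one-dimensional form: estimating a single frequency from noise-free data. This is a didactic reduction of the multi-pitch/DOA method above, with invented signal parameters:

```python
import numpy as np

# 1-D ESPRIT sketch: for a signal-subspace basis U of complex exponentials,
# the rows satisfy U[1:] = U[:-1] * phi with phi = exp(j*2*pi*f), so the
# frequency falls out of a small least-squares problem -- no grid search.
f_true = 0.12                        # cycles/sample
x = np.exp(2j * np.pi * f_true * np.arange(64))

m = 8                                # subvector (window) length
X = np.array([x[i:i + m] for i in range(len(x) - m + 1)]).T  # m x snapshots
U = np.linalg.svd(X)[0][:, :1]       # one exponential -> rank-1 subspace
phi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0][0, 0]
f_est = np.angle(phi) / (2 * np.pi)
```

The pitch step in the paper applies the same invariance along time for each harmonic set, and the DOA step applies it along the array, which is why neither requires 2-D searching.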
Directory of Open Access Journals (Sweden)
Julio R Gómez Sarduy
2011-06-01
In this paper, a method for estimating the conductances and capacitances of a simplified thermal model of the asynchronous motor is presented, using a low-invasiveness technique. The procedure allows prediction of the stator temperature rise of the asynchronous motor, both in dynamic regimes and under conditions of thermal stability. It is based on parametric estimation with a reference model, using a genetic algorithm (GA) as the optimizer. The thermal model parameters are thus obtained with a simpler test than that required by other complex experimental methods or by analytical calculations based on design data. The proposed procedure can be carried out under conditions typical of industry, and its use is attractive for the heating analysis of these machines. The method is validated on a case study reported in the literature and applied to a real industrial case, achieving good precision.
Asiri, Sharefa M.
2016-10-20
In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations that is linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
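The essence of the approach, moving derivatives off the data and onto a smooth modulating function by integration by parts, can be shown on a toy ODE rather than the paper's PDEs. Everything below (the equation y' = a·y, the window function, the grid) is an invented illustration:

```python
import numpy as np

# Multiply y' = a*y by a modulating function phi that vanishes (with its
# derivative) at both endpoints and integrate by parts:
#     -int(phi' * y) = a * int(phi * y),
# so a is obtained from integrals of the data alone -- no differentiation
# of the (possibly noisy) signal y, and an equation linear in the unknown.
a_true = -0.7
t = np.linspace(0.0, 2.0, 2001)
y = np.exp(a_true * t)
phi = (t * (2.0 - t)) ** 2           # vanishes at t = 0 and t = 2
dphi = np.gradient(phi, t)           # phi is smooth and known analytically

def trap(f, tt):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return float(np.dot((f[1:] + f[:-1]) / 2.0, np.diff(tt)))

a_est = -trap(dphi * y, t) / trap(phi * y, t)
```

Because only phi is differentiated, noise on y is averaged by the integrals rather than amplified, which is the property the paper exploits in the noisy PDE cases.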
Reverse survival method of fertility estimation: An evaluation
Directory of Open Access Journals (Sweden)
Thomas Spoorenberg
2014-07-01
Background: For the most part, demographers have relied on the ever-growing body of sample surveys collecting full birth histories to derive total fertility estimates in less statistically developed countries. Yet alternative methods of fertility estimation can return very consistent total fertility estimates using only basic demographic information. Objective: This paper evaluates the consistency and sensitivity of the reverse survival method -- a fertility estimation method based on population data by age and sex collected in one census or a single-round survey. Methods: A simulated population was first projected over 15 years using a set of fertility and mortality age and sex patterns. The projected population was then reverse survived using the Excel template FE_reverse_4.xlsx provided with Timæus and Moultrie (2012). Reverse survival fertility estimates were then compared for consistency with the total fertility rates used to project the population. The sensitivity was assessed by introducing a series of distortions in the projection of the population and comparing the differences implied in the resulting fertility estimates. Results: The reverse survival method produces total fertility estimates that are very consistent and hardly affected by erroneous assumptions on the age distribution of fertility or by the use of incorrect mortality levels, trends, and age patterns. The quality of the age and sex population data that are 'reverse survived' determines the consistency of the estimates. The contribution of the method to the estimation of past and present trends in total fertility is illustrated through its application to the population data of five countries characterized by distinct fertility levels and data quality issues. Conclusions: Notwithstanding its simplicity, the reverse survival method of fertility estimation has seldom been applied. The method can be applied to a large body of existing and easily available population data.
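The back-projection step at the core of reverse survival can be sketched as follows. The population counts and survivorship ratios below are invented, and are not taken from the paper or from the FE_reverse_4.xlsx template:

```python
# Reverse survival in one line per cohort: the population aged a at the
# census is "reverse survived" to the births a years earlier by dividing
# out the proportion surviving from birth to age a.
pop_by_age = [95_000, 93_000, 92_000]   # ages 0, 1, 2 at the census
survival = [0.96, 0.95, 0.94]           # assumed survivorship to age a
births = [p / s for p, s in zip(pop_by_age, survival)]
# births[a] estimates the birth cohort a years before the census; dividing
# each by the number of women of reproductive age in that year would turn
# these counts into the fertility rates the method reports.
```

Since the census count is divided by a survivorship ratio close to one, modest errors in the assumed mortality perturb the estimates only slightly, consistent with the robustness reported above.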
Phase-Inductance-Based Position Estimation Method for Interior Permanent Magnet Synchronous Motors
Directory of Open Access Journals (Sweden)
Xin Qiu
2017-12-01
This paper presents a phase-inductance-based position estimation method for interior permanent magnet synchronous motors (IPMSMs). From the characteristics of the phase inductance of IPMSMs, the corresponding relationship between the rotor position and the phase inductance is obtained. In order to eliminate the effect of the zero-sequence component of the phase inductance and reduce the rotor position estimation error, the phase inductance difference is employed. With the iterative computation of inductance vectors, the position plane is further subdivided, and the rotor position is extracted by comparing the amplitudes of the inductance vectors. To decrease the consumption of computing resources and increase practicability, a simplified implementation is also investigated. In this method, the rotor position information is obtained easily, with several basic math operations and logical comparisons of phase inductances, without any coordinate transformation or trigonometric function calculation. Based on this position estimation method, a field-oriented control (FOC) strategy is established, and a detailed implementation is also provided. A series of experimental results from a prototype demonstrates the correctness and feasibility of the proposed method.
International Nuclear Information System (INIS)
Zimmerman, T.
1990-10-01
Historically a variety of methods have been used to measure the equivalent noise charge (ENC) of amplifier/shaper systems for high energy physics. Some of these methods require several pieces of special test equipment and a fair amount of effort. The advent of digitizing oscilloscopes with statistics capabilities makes it possible to perform certain types of noise measurements accurately with very little effort. This paper describes the noise measurement method for a time-invariant amplifier/shaper and for a time-variant correlated sampling system, using a Tektronix DSA602 Digitizing Signal Analyzer. 4 figs
DEFF Research Database (Denmark)
Jendresen, Christian Bille; Kilstrup, Mogens; Martinussen, Jan
2011-01-01
…phosphoribosyl-pyrophosphate (PRPP), and inorganic pyrophosphate (PPi) in cell extracts. The method uses one-dimensional thin-layer chromatography (TLC) and radiolabeled biological samples. Nucleotides are resolved at the level of ionic charge in an optimized acidic ammonium formate and chloride solvent, permitting… quantification of NTPs. The method is significantly simpler and faster than both current two-dimensional methods and high-performance liquid chromatography (HPLC)-based procedures, allowing a higher throughput while common sources of inaccuracy and technical problems are avoided. For determination of PPi…
Consumptive use of upland rice as estimated by different methods
International Nuclear Information System (INIS)
Chhabda, P.R.; Varade, S.B.
1985-01-01
The consumptive use of upland rice (Oryza sativa Linn.) grown during the wet season (kharif), as estimated by the modified Penman, radiation, pan-evaporation, and Hargreaves methods, showed variation from the consumptive use computed by the gravimetric method. The variability increased with an increase in the irrigation interval, and decreased with an increase in the level of N applied. The average variability was lowest for the pan-evaporation method, which could reliably be used for estimating the water requirement of upland rice if percolation losses are considered.
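As a sketch of how a pan-evaporation estimate of consumptive use is typically formed (the coefficient values here are invented for illustration and are not taken from the study):

```python
# Pan-evaporation estimate: crop consumptive use is the measured pan
# evaporation scaled by a pan coefficient and a crop coefficient.
def consumptive_use(e_pan_mm, k_pan, k_crop):
    """Daily consumptive use (mm/day) from Class-A pan evaporation."""
    return e_pan_mm * k_pan * k_crop

# 6 mm/day pan evaporation, assumed K_pan = 0.7 and rice K_c = 1.1:
cu = consumptive_use(6.0, 0.7, 1.1)   # about 4.62 mm/day
```

As the abstract notes, for flooded or upland rice the percolation losses must then be added separately to obtain the total water requirement.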
Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.
2017-04-01
In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented, in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in the relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is approximately 100 times more accurate than uniform refinement for the same amount of computational effort for a 67-group deep penetration shielding problem.
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labour Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to preestablished sampling criteria, a certain number of dwellings across the nation and records the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to
A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE
Directory of Open Access Journals (Sweden)
GEE-YONG PARK
2014-02-01
Full Text Available A method for estimating software reliability for nuclear safety software is proposed in this paper. The method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects from very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters, incorporating software test cases as a covariate in the model. These models were found to be capable of reasonably estimating the remaining number of software defects, which directly affects the reactor trip functions. Software reliability can be estimated from these modeling equations, and one approach to obtaining a software reliability value is proposed in this paper.
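The abstract describes an SRGM in which cumulative failures follow a non-homogeneous Poisson process. A common concrete instance (not necessarily the authors' exact model) is the Goel-Okumoto mean value function m(t) = a(1 - exp(-bt)). The sketch below fits it to hypothetical failure-count data by a coarse grid search rather than the Bayesian inference used in the paper:

```python
import math

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t: m(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def fit_go(times, counts):
    """Coarse grid-search least-squares fit of (a, b); illustrative only."""
    best = None
    for ai in range(100, 401):            # a in [10.0, 40.0], step 0.1
        a = ai / 10.0
        for bi in range(1, 301):          # b in [0.001, 0.300]
            b = bi / 1000.0
            sse = sum((goel_okumoto(t, a, b) - n) ** 2
                      for t, n in zip(times, counts))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical cumulative defect counts at successive test times
times = [10, 20, 30, 40, 50, 60]
counts = [5, 9, 12, 14, 15.5, 16.5]
a, b = fit_go(times, counts)
remaining = a - counts[-1]    # estimated residual defects
```

The parameter a is the expected total number of defects, so a minus the defects already found gives the remaining-defect estimate the abstract refers to.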
Population Estimation with Mark and Recapture Method Program
International Nuclear Information System (INIS)
Limohpasmanee, W.; Kaewchoung, W.
1998-01-01
Population estimation provides important information required for insect control planning, especially control with the sterile insect technique (SIT). Moreover, it can be used to evaluate the efficiency of a control method. Due to the complexity of the calculations, population estimation with mark-and-recapture methods has not been used widely. This program was therefore developed in QBasic with the aim of making the calculations more accurate and easier. The program implements six methods: Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's and Yamamura's. The results were compared with those of the original methods and found to be accurate and easier to apply
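The simplest member of the mark-and-recapture family (a special case underlying Seber's methods) is the Lincoln-Petersen estimate with Chapman's bias correction; a minimal sketch with made-up numbers:

```python
def chapman_estimate(marked, captured, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen population estimate:
    N_hat = (M+1)(C+1)/(R+1) - 1, where M insects were marked and released,
    a later sample of C contained R marked individuals."""
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

# 200 insects marked and released; a later sample of 150 contains 30 marked ones
n_hat = chapman_estimate(200, 150, 30)
```

The Jolly-Seber and related multi-sample methods in the program extend this idea to repeated capture occasions with births, deaths and migration.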
Ore reserve estimation: a summary of principles and methods
International Nuclear Information System (INIS)
Marques, J.P.M.
1985-01-01
The mining industry has experienced substantial improvements with the increasing utilization of computerized and electronic devices throughout the last few years. In the ore reserve estimation field, the main methods have undergone recent advances intended to improve their overall efficiency. This paper presents the three main groups of ore reserve estimation methods presently used worldwide, Conventional, Statistical and Geostatistical, and elaborates a detailed description and comparative analysis of each. The Conventional Methods are the oldest, least complex and most widely employed. The Geostatistical Methods are the most recent, most precise and most complex. The Statistical Methods are intermediate between the other two in complexity, diffusion and chronological order. (D.J.M.) [pt
Ruan, Yuhui; Lin, Hong; Yao, Jinrong; Chen, Zhengrong; Shao, Zhengzhong
2011-03-10
In this work, we developed a simple and flexible method to manufacture a 3D porous scaffold based on a blend of regenerated silk fibroin (RSF) and chitosan (CS). No crosslinker or other toxic reagents were used in this method. The pores of the resulting 3D scaffolds were interconnected, and their sizes could be easily controlled by the concentration of the mixed solution. Compared with pure RSF scaffolds, these RSF/CS blend scaffolds showed significantly enhanced mechanical properties and greatly increased water absorptivity. The results of MTT and RT-PCR tests indicated that chondrocytes grew very well in these blend RSF/CS porous scaffolds. This suggests that the RSF/CS blend scaffold prepared by this new method could be a promising candidate for applications in tissue engineering. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Methods for design flood estimation in South Africa | Smithers ...
African Journals Online (AJOL)
The estimation of design floods is necessary for the design of hydraulic structures and to quantify the risk of failure of the structures. Most of the methods used for design flood estimation in South Africa were developed in the late 1960s and early 1970s and are in need of updating with more than 40 years of additional data ...
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Performance of sampling methods to estimate log characteristics for wildlife.
Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton
2004-01-01
Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...
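One widely used log-sampling design is line-intersect (transect) sampling; a minimal sketch of Van Wagner's estimator is given below as background, not necessarily one of the methods compared in the study:

```python
import math

def line_intersect_volume(diameters_cm, transect_m):
    """Van Wagner's line-intersect estimator for downed log (CWD) volume:
    V (m^3/ha) = pi^2 * sum(d_i^2) / (8 * L), with the diameters d_i (cm)
    of logs crossing the transect and transect length L (m)."""
    return math.pi ** 2 * sum(d ** 2 for d in diameters_cm) / (8.0 * transect_m)

# Three logs intersected along a 100 m transect (illustrative diameters)
v = line_intersect_volume([10.0, 20.0, 15.0], 100.0)
```

Wildlife-oriented sampling additionally records attributes such as log length, decay class and hollowness, which volume-only designs developed for fire and silviculture tend to omit.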
International Nuclear Information System (INIS)
Cutrim, J.H.; Kizivat, V.
1984-01-01
A simplified method to calculate the stresses in straight pipes due to laminar flow of a stratified medium with two different temperatures is presented. It is based on the equilibrium equations and on conservative assumptions, as is usual in practice. Numerical results are obtained for the 'banana' and 'pera' modes of deformation due to thermal stratification; the former case appears to be the most important. In order to be able to perform such a fatigue damage analysis in practice under several complex load conditions, an existing program for fatigue damage analysis was extended in substantial detail. All the assumptions crucial for the use of the ASME code were retained. The inclusion of stresses due to stratification in the fatigue damage analysis is accomplished through an extension of ASME NB 3650. (Author) [pt
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
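A pump model with adjustable parameters, as described above, can estimate flow from the motor's power and speed estimates alone. The sketch below is a simplified illustration, not the authors' method: the curve points and function are hypothetical, and the affinity laws P ∝ n³ and Q ∝ n are used to map the operating point onto a stored nominal-speed characteristic curve:

```python
def flow_from_power(power_w, speed_rpm, qp_nominal, n_nominal=1450.0):
    """Estimate pump flow rate (m3/h) from motor power and speed estimates.

    qp_nominal: list of (flow m3/h, power W) points on the pump's QP curve at
    nominal speed. The measured power is scaled to the nominal-speed curve via
    P ~ n^3, the curve is inverted by linear interpolation, and the resulting
    flow is scaled back via Q ~ n. Hypothetical curve data for illustration.
    """
    ratio = n_nominal / speed_rpm
    p_nom = power_w * ratio ** 3              # scale power to nominal speed
    pts = sorted(qp_nominal, key=lambda qp: qp[1])
    for (q0, p0), (q1, p1) in zip(pts, pts[1:]):
        if p0 <= p_nom <= p1:
            q_nom = q0 + (q1 - q0) * (p_nom - p0) / (p1 - p0)
            return q_nom / ratio              # scale flow back: Q ~ n
    raise ValueError("operating point outside stored curve")

curve = [(0, 2000), (10, 3000), (20, 3800), (30, 4300)]   # assumed QP points
q = flow_from_power(power_w=1900.0, speed_rpm=1160.0, qp_nominal=curve)
```

In practice the frequency converter supplies the power and speed estimates, so no external flow or pressure sensor is needed, which is the point made in the abstract.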
Liu, Lihui; Shang, Wenjuan; Han, Chao; Zhang, Qing; Yao, Yao; Ma, Xiaoqian; Wang, Minghao; Yu, Hongtao; Duan, Yu; Sun, Jie; Chen, Shufen; Huang, Wei
2018-02-28
Graphene, as one of the most promising transparent electrode materials, has been successfully applied in organic light-emitting diodes (OLEDs). However, the traditional poly(methyl methacrylate) (PMMA) transfer method usually leaves polymeric residues on the graphene surface that are hard to remove, which induces unwanted leakage current, poor diode behavior, and even device failure. In this work, we propose a facile and efficient two-in-one method to obtain clean graphene and fabricate OLEDs, in which a poly(9,9-di-n-octylfluorene-alt-(1,4-phenylene-(4-sec-butylphenyl)imino)-1,4-phenylene) (TFB) layer is inserted between the graphene and the PMMA film, acting both as a protector during the graphene transfer and as a hole-injection layer in the OLEDs. Green OLED devices were successfully fabricated on the PMMA-free graphene/TFB film, and the device luminous efficiency increased from 64.8 to 74.5 cd/A with the two-in-one method. The proposed two-in-one graphene transfer method therefore realizes a highly efficient graphene transfer and device fabrication process, which is also compatible with roll-to-roll manufacturing. It is expected that this work can inform the design and fabrication of graphene-based optoelectronic devices.
Directory of Open Access Journals (Sweden)
Jose Carlos Bernedo Alcazar
Full Text Available Cathodic polarization appears to be an electrochemical method capable of modifying titanium surfaces and coating them with biomolecules, improving surface activity and promoting better biological responses. The aim of this systematic review is to assess the scientific literature evaluating the cellular response produced by treating titanium surfaces with the cathodic polarization technique. The literature search was performed in several databases, including PubMed, Web of Science, Scopus, Science Direct, Scielo and EBSCO Host, up to June 2016, with no limits applied. Eligibility criteria were used and quality assessment was performed following slightly modified ARRIVE and SYRCLE guidelines for cellular studies and animal research. Thirteen studies met the inclusion criteria and were considered in the review. The quality of reporting was low for the animal-model studies and high for the in vitro studies. The in vitro and in vivo results reported that the use of cathodic polarization promoted hydride surfaces and effective deposition and adhesion of the coated biomolecules. In the experimental groups that used the electrochemical method, cellular viability, proliferation, adhesion, differentiation and bone growth were better than or comparable with the control groups. The use of the cathodic polarization method to modify titanium surfaces seems to be an interesting approach that could produce active layers and consequently enhance the cellular response, in in vitro and in vivo animal-model studies.
A Fast Soft Bit Error Rate Estimation Method
Directory of Open Access Journals (Sweden)
Ait-Idir Tarik
2010-01-01
Full Text Available In a previous publication we suggested a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the well-known Monte Carlo (MC) simulation. That method was based on estimating the probability density function (pdf) of soft observed samples, using the kernel method for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model instead. The Expectation-Maximisation algorithm is used to estimate the parameters of this mixture, and the optimal number of Gaussians is computed using mutual information theory. The analytical expression of the BER is then given simply in terms of the estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or kernel-aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
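Once a Gaussian mixture has been fitted to the soft samples (e.g. by EM), the BER follows in closed form. Assuming antipodal signalling where the '+1' bit is decided by the sign of the sample (an assumed convention, not stated in the abstract), the sketch below evaluates BER = Σ_k w_k Φ(−μ_k/σ_k):

```python
import math

def gm_ber(weights, means, sigmas):
    """Analytic BER from a Gaussian-mixture fit of soft samples for a '+1' bit:
    an error occurs when a sample falls below the threshold 0, so
    BER = sum_k w_k * Phi(-mu_k / sigma_k), Phi being the standard normal CDF."""
    def phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(w * phi(-m / s) for w, m, s in zip(weights, means, sigmas))

# Hypothetical two-component mixture fitted to soft demodulator outputs
ber = gm_ber([0.7, 0.3], [1.2, 0.8], [0.5, 0.6])
```

This is why far fewer samples are needed than with Monte Carlo: the tail probability is evaluated analytically from a handful of fitted parameters rather than counted from rare error events.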
Methods of multicriterion estimations in system total quality management
Directory of Open Access Journals (Sweden)
Nikolay V. Diligenskiy
2011-05-01
Full Text Available In this article, a method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis) and the possibility of its application in a total quality management system are considered.
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
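The first approach above, partitioning the state-space into a finite grid and filtering as an HMM, can be sketched for a theta-logistic population model on log-abundance; the parameter values and noise levels are illustrative:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def hmm_filter(obs, grid, r=0.5, K=100.0, theta=1.0, sig_p=0.1, sig_o=0.2):
    """Grid-based HMM filter for a theta-logistic model on log-abundance x:
    x[t+1] = x[t] + r * (1 - (exp(x[t]) / K) ** theta) + process noise.
    Returns the filtered posterior mean of x at each time step."""
    n = len(grid)
    prior = [1.0 / n] * n
    means = []
    for y in obs:
        # Update: weight each grid state by the Gaussian observation likelihood
        post = [p * gaussian(y, x, sig_o) for p, x in zip(prior, grid)]
        z = sum(post) or 1e-300
        post = [p / z for p in post]
        means.append(sum(p * x for p, x in zip(post, grid)))
        # Predict: push the posterior through the theta-logistic transition
        prior = [0.0] * n
        for p, x in zip(post, grid):
            mu = x + r * (1.0 - (math.exp(x) / K) ** theta)
            w = [gaussian(g, mu, sig_p) for g in grid]
            s = sum(w) or 1e-300
            for j in range(n):
                prior[j] += p * w[j] / s
    return means

grid = [i * 0.05 for i in range(120)]        # log-abundance grid, 0 to ~6
obs = [3.0, 3.3, 3.6, 3.9, 4.1, 4.3]         # simulated log counts
est = hmm_filter(obs, grid)
```

The grid approach trades accuracy against grid resolution; ADMB and BUGS avoid the discretization but rely on Laplace approximation and MCMC, respectively.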
Methods for design flood estimation in South Africa
African Journals Online (AJOL)
2012-07-04
Jul 4, 2012 ... 1970s and are in need of updating with more than 40 years of additional data ... This paper reviews methods used for design flood estimation in South Africa and ... transposition of past experience, or a deterministic approach ...
A simple method for estimating the convection- dispersion equation ...
African Journals Online (AJOL)
Jane
2011-08-31
Aug 31, 2011 ... approach of modeling solute transport in porous media uses the deterministic ... Methods of estimating CDE transport parameters can be divided into statistical ... diffusion-type model for longitudinal mixing of fluids in flow.
Lynd, Amy; Ranson, Hilary; McCall, P J; Randle, Nadine P; Black, William C; Walker, Edward D; Donnelly, Martin J
2005-01-01
Background A single base pair mutation in the sodium channel confers knock-down resistance to pyrethroids in many insect species. Its occurrence in Anopheles mosquitoes may have important implications for malaria vector control especially considering the current trend for large scale pyrethroid-treated bednet programmes. Screening Anopheles gambiae populations for the kdr mutation has become one of the mainstays of programmes that monitor the development of insecticide resistance. The screening is commonly performed using a multiplex Polymerase Chain Reaction (PCR) which, since it is reliant on a single nucleotide polymorphism, can be unreliable. Here we present a reliable and potentially high throughput method for screening An. gambiae for the kdr mutation. Methods A Hot Ligation Oligonucleotide Assay (HOLA) was developed to detect both the East and West African kdr alleles in the homozygous and heterozygous states, and was optimized for use in low-tech developing world laboratories. Results from the HOLA were compared to results from the multiplex PCR for field and laboratory mosquito specimens to provide verification of the robustness and sensitivity of the technique. Results and Discussion The HOLA assay, developed for detection of the kdr mutation, gives a bright blue colouration for a positive result whilst negative reactions remain colourless. The results are apparent within a few minutes of adding the final substrate and can be scored by eye. Heterozygotes are scored when a sample gives a positive reaction to the susceptible probe and the kdr probe. The technique uses only basic laboratory equipment and skills and can be carried out by anyone familiar with the Enzyme-linked immunosorbent assay (ELISA) technique. A comparison to the multiplex PCR method showed that the HOLA assay was more reliable, and scoring of the plates was less ambiguous. Conclusion The method is capable of detecting both the East and West African kdr alleles in the homozygous and
Methods for the estimation of uranium ore reserves
International Nuclear Information System (INIS)
1985-01-01
The Manual is designed mainly to provide assistance in uranium ore reserve estimation methods to mining engineers and geologists with limited experience in estimating reserves, especially to those working in developing countries. This Manual deals with the general principles of evaluation of metalliferous deposits but also takes into account the radioactivity of uranium ores. The methods presented have been generally accepted in the international uranium industry
Evaluation of three paediatric weight estimation methods in Singapore.
Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong
2013-04-01
Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: Broselow-Luten (BL) tape, Advanced Paediatric Life Support (APLS) (estimated weight (kg) = 2 (age + 4)) and Luscombe (estimated weight (kg) = 3 (age) + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while patient's round-down age (in years) was used to derive estimated weights using APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
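The two age-based formulas and the Bland-Altman-style summary statistic are easy to reproduce; a minimal sketch (the sign convention for the percentage difference is an assumption):

```python
def apls_weight(age_years):
    """APLS formula: estimated weight (kg) = 2 * (age + 4)."""
    return 2 * (age_years + 4)

def luscombe_weight(age_years):
    """Luscombe formula: estimated weight (kg) = 3 * age + 7."""
    return 3 * age_years + 7

def mean_percentage_difference(true_w, est_w):
    """Mean percentage difference, here 100 * (true - estimated) / true, so a
    positive value indicates underestimation (assumed sign convention)."""
    diffs = [100.0 * (t - e) / t for t, e in zip(true_w, est_w)]
    return sum(diffs) / len(diffs)

w_apls = apls_weight(5)        # 18 kg for a 5-year-old
w_lusc = luscombe_weight(5)    # 22 kg for a 5-year-old
```

The 4 kg gap between the formulas at the same age illustrates why one underestimated and the other overestimated against the same reference weights.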
International Nuclear Information System (INIS)
Menke, K.H.; Kohlberger, G.; Koenemund, A.
1979-01-01
A modified method for the radiometric determination of vitamin B12 is described which, in contrast to known methods, is based on measurement of free B12 after adsorption to albumin-coated charcoal instead of measurement of the intrinsic factor-B12 complex. The conditions for extraction from serum, milk, rumen liquor and urine have been investigated, and the effect of pH on IF-B12 binding in the presence of these body fluids examined. Parallel microbiological determinations (O.m.- and L.1.-tests) correlated well (r = 0.93-0.97) with the radiometrically determined B12 contents in milk and rumen liquor, but not with those in the serum of dairy cows (r = 0.54-0.82). The analytical procedures are given in detail. (orig.) [de
Czech Academy of Sciences Publication Activity Database
Bulant, P.; Klimeš, L.; Pšenčík, Ivan; Vavryčuk, Václav
2004-01-01
Roč. 48, č. 4 (2004), s. 675-688 ISSN 0039-3169 R&D Projects: GA ČR GA205/04/1104; GA AV ČR IAA3012309; GA AV ČR KSK3012103 Institutional research plan: CEZ:AV0Z3012916 Keywords: coupling ray theory * quasi-isotropic approximation * ray methods Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 0.447, year: 2004
A Channelization-Based DOA Estimation Method for Wideband Signals
Directory of Open Access Journals (Sweden)
Rui Guo
2016-07-01
Full Text Available In this paper, we propose a novel direction of arrival (DOA estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR using direct wideband radio frequency (RF digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
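The incoherent (ISM-style) idea, estimating the DOA independently in each frequency sub-channel and then averaging, can be illustrated with a toy two-sensor array; this is a deliberately simplified sketch, far simpler than the subspace methods in the paper:

```python
import cmath
import math

def doa_incoherent(sub_pairs, freqs_hz, spacing_m, c=3.0e8):
    """Toy incoherent wideband DOA for a two-sensor array: in each frequency
    sub-channel the cross-spectrum phase equals 2*pi*f*d*sin(theta)/c, so each
    channel yields an angle estimate; the estimates are then averaged."""
    angles = []
    for (s1, s2), f in zip(sub_pairs, freqs_hz):
        cross = sum(b * a.conjugate() for a, b in zip(s1, s2))
        phase = cmath.phase(cross)
        angles.append(math.asin(phase * c / (2.0 * math.pi * f * spacing_m)))
    return math.degrees(sum(angles) / len(angles))

# Synthetic sub-channel signals for a source at 30 degrees, 0.1 m spacing
d, theta = 0.1, math.radians(30.0)
freqs = [1.0e9, 1.2e9]
pairs = []
for f in freqs:
    s1 = [cmath.exp(2j * math.pi * f * n / 1.0e10) for n in range(64)]
    shift = cmath.exp(2j * math.pi * f * d * math.sin(theta) / 3.0e8)
    pairs.append((s1, [x * shift for x in s1]))
est_deg = doa_incoherent(pairs, freqs, d)
```

With multi-element arrays, each sub-channel would instead run a narrowband subspace method (e.g. MUSIC), which is what Channelization-ISM does before combining the per-channel results.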
Methods for Estimation of Market Power in Electric Power Industry
Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.
2012-01-01
The article addresses the topical issue of the newly arisen market power phenomenon in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power in a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, a proposal for determining the relevant market, taking into account the specific features of the power system, and a theoretical example of estimating the residual supply index (RSI) in the electricity market are given.
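The residual supply index mentioned above measures how much of demand the market could serve without a given supplier; a minimal sketch with illustrative capacity and demand figures:

```python
def residual_supply_index(total_capacity, supplier_capacity, demand):
    """RSI = (total capacity - supplier's capacity) / demand.
    Values below 1.0 mean the supplier is pivotal; RSI >= 1.1 is a
    commonly cited competitiveness benchmark."""
    return (total_capacity - supplier_capacity) / demand

# Illustrative figures in MW
rsi = residual_supply_index(total_capacity=12000.0,
                            supplier_capacity=3000.0,
                            demand=10000.0)
```

Here RSI = 0.9 < 1, so demand cannot be met without this supplier, signalling potential market power.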
Directory of Open Access Journals (Sweden)
Szwed Łukasz P.
2014-09-01
Full Text Available Malt extracts and malt concentrates have a broad range of applications in the food industry. These products are obtained by methods similar to those used for brewing worts. A possible reduction of cost can be achieved by the use of malt substitutes, as in the brewing industry. Since malt concentrates for the food industry do not have to fulfil the strict norms for beer production, it is possible to produce much cheaper products. It was shown that, by means of mathematical optimization, it is possible to determine the optimal share of unmalted material for cheap yet effective production of wort.
Stock price estimation using ensemble Kalman Filter square root method
Karya, D. F.; Katias, P.; Herlambang, T.
2018-04-01
Shares are securities that evidence the ownership or equity of an individual or corporation in an enterprise, especially in public companies whose stock is traded. Investment in stock trading is often the option of choice for investors, as it offers attractive profits. In choosing a safe investment in stocks, investors require a way of assessing the prices of the stocks they buy so as to help optimize their profits. An effective method of analysis that reduces the risk investors may bear is predicting or estimating the stock price. Estimation is carried out because a problem can sometimes be solved using previous information or data related to it. The contribution of this paper is that the estimates of stock prices in the high, low, and close categories can be used in investors' decision making on investments. In this paper, stock prices were estimated using the Ensemble Kalman Filter Square Root method (EnKF-SR) and the Ensemble Kalman Filter method (EnKF). The simulation results showed that the estimates obtained with the EnKF method were more accurate than those obtained with EnKF-SR, with an estimation error of about 0.2% for EnKF and 2.6% for EnKF-SR.
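A basic scalar EnKF analysis step for a directly observed price, with a random-walk forecast model, can be sketched as follows; the forecast model and noise levels are assumptions, and the paper's EnKF-SR variant instead propagates a square-root factor of the covariance:

```python
import random

def enkf_analysis(ensemble, obs, obs_sigma):
    """One analysis step of a basic scalar ensemble Kalman filter with the
    stock price observed directly (perturbed-observation form)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_sigma ** 2)          # Kalman gain
    return [x + gain * (obs + random.gauss(0.0, obs_sigma) - x)
            for x in ensemble]

random.seed(1)
ensemble = [100.0 + random.gauss(0.0, 5.0) for _ in range(200)]
for price in [102.0, 103.5, 104.0]:              # observed closing prices
    ensemble = [x + random.gauss(0.0, 1.0) for x in ensemble]  # random-walk forecast
    ensemble = enkf_analysis(ensemble, price, obs_sigma=1.0)
estimate = sum(ensemble) / len(ensemble)         # filtered price estimate
```

The square-root variant avoids the observation perturbations used here, which is one source of the accuracy difference the paper measures.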
Xie, Xianchuan; Gong, Shu; Wang, Xiaorong; Wu, Yinxing; Zhao, Li
2011-01-01
A rapid, reliable and sensitive reverse-phase high-performance liquid chromatography method with fluorescence detection (RP-FLD-HPLC) was developed and validated for the simultaneous analysis of abamectin (ABA), emamectin (EMA) benzoate and ivermectin (IVM) residues in rice. After extraction with acetonitrile/water (2:1) with sonication, the avermectin (AVM) residues were directly derivatised with N-methylimidazole (N-NMIM) and trifluoroacetic anhydride (TFAA) and then analysed by RP-FLD-HPLC. A good linear relationship (r² > 0.99) was obtained for the three AVMs over the range 0.01-5 µg ml⁻¹, i.e. 0.01-5.0 µg g⁻¹ in the rice matrix. The limit of detection (LOD) and the limit of quantification (LOQ) were between 0.001 and 0.002 µg g⁻¹ and between 0.004 and 0.006 µg g⁻¹, respectively. Recoveries ranged from 81.9% to 105.4%, with precision better than 12.4%. The proposed method was successfully applied to routine analysis of AVM residues in rice.
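Limits of detection and quantification such as those reported above are commonly derived from calibration data; a minimal sketch using the ICH-style 3.3σ/S and 10σ/S criteria (the abstract does not state which criterion was used, so this convention and the input values are assumptions):

```python
def lod_loq(sigma, slope):
    """Limits of detection and quantification from calibration data:
    LOD = 3.3 * sigma / S and LOQ = 10 * sigma / S, where sigma is the
    standard deviation of the response (e.g. of the blank or of the
    calibration residuals) and S is the calibration-curve slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Illustrative response SD and slope (peak area per microgram/gram)
lod, loq = lod_loq(sigma=0.006, slope=12.0)
```

With these illustrative inputs the sketch yields LOD 0.00165 and LOQ 0.005 in the slope's concentration units, the same order as the values the abstract reports.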
A Computationally Efficient Method for Polyphonic Pitch Estimation
Directory of Open Access Journals (Sweden)
Ruohua Zhou
2009-01-01
Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
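The harmonic-grouping and peak-picking stage can be illustrated with a toy discrete spectrum, a simplified stand-in for the RTFI energy spectrum:

```python
def pitch_energy_spectrum(energy, candidates, n_harm=5):
    """Harmonic grouping: the energy of each pitch candidate f0 (Hz) is the
    sum of the spectrum energy at its first n_harm harmonics. A simplified
    sketch of the grouping principle described in the abstract."""
    return {f0: sum(energy.get(h * f0, 0.0) for h in range(1, n_harm + 1))
            for f0 in candidates}

def pick_peaks(pitch_energy, threshold):
    """Preliminary estimation: keep candidates whose grouped energy is high."""
    return sorted(f0 for f0, e in pitch_energy.items() if e >= threshold)

# Synthetic frame containing two notes at 100 Hz and 150 Hz
energy = {100: 1.0, 200: 0.6, 300: 0.9, 400: 0.2, 150: 0.8, 450: 0.4, 600: 0.3}
pe = pitch_energy_spectrum(energy, candidates=[100, 150, 200, 300])
notes = pick_peaks(pe, threshold=2.0)
```

Note that the spurious candidates at 200 Hz and 300 Hz still accumulate some harmonic energy; this is exactly the kind of incorrect estimate the paper's second stage removes using spectral irregularity and instrument knowledge.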
Directory of Open Access Journals (Sweden)
Cullotta S
2006-01-01
Full Text Available Methodological approaches that integrate data from sample plots with cartographic processing are widely applied. Based on mathematical-statistical techniques, spatial analysis allows the exploration and spatialization of geographic data. Starting from the point information on land use types obtained from the dataset of the first phase of the ongoing new Italian NFI (INFC), a spatialization of land cover classes was carried out using the Inverse Distance Weighting (IDW) method. In order to validate the obtained results, an overlay with other vectorial land use data was carried out. In particular, the overlay compared data at different scales, evaluating differences in terms of the degree of correspondence between the interpolated and reference land cover.
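IDW itself is a short algorithm; a minimal numeric sketch follows (the NFI application spatializes land cover classes, but the weighting scheme is the same):

```python
def idw(x, y, samples, power=2.0):
    """Inverse Distance Weighting: interpolate a value at (x, y) from
    (xi, yi, vi) sample points, with weights 1 / distance**power."""
    num = den = 0.0
    for xi, yi, vi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return vi                     # exact at a sample point
        w = d2 ** (-power / 2.0)
        num += w * vi
        den += w
    return num / den

samples = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
v = idw(0.5, 0.5, samples)
```

For categorical land cover, the spatialization would interpolate class indicator values or probabilities rather than the raw class codes; the numeric sketch shows only the weighting itself.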
Cosmological helium production simplified
International Nuclear Information System (INIS)
Bernstein, J.; Brown, L.S.; Feinberg, G.
1988-01-01
We present a simplified model of helium synthesis in the early universe. The purpose of the model is to explain clearly the physical ideas relevant to the cosmological helium synthesis, in a manner that does not overlay these ideas with complex computer calculations. The model closely follows the standard calculation, except that it neglects the small effect of Fermi-Dirac statistics for the leptons. We also neglect the temperature difference between photons and neutrinos during the period in which neutrons and protons interconvert. These approximations allow us to express the neutron-proton conversion rates in a closed form, which agrees to 10% accuracy or better with the exact rates. Using these analytic expressions for the rates, we reduce the calculation of the neutron-proton ratio as a function of temperature to a simple numerical integral. We also estimate the effect of neutron decay on the helium abundance. Our result for this quantity agrees well with precise computer calculations. We use our semi-analytic formulas to determine how the predicted helium abundance varies with such parameters as the neutron lifetime, the baryon-to-photon ratio, the number of neutrino species, and a possible electron-neutrino chemical potential. 19 refs., 1 fig., 1 tab
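The chain of reasoning in the abstract (equilibrium n/p at weak freeze-out, free-neutron decay until nucleosynthesis, then locking the surviving neutrons into helium-4) can be reproduced as a back-of-the-envelope estimate; the parameter values below are illustrative, not the paper's:

```python
import math

def helium_mass_fraction(t_freeze_mev=0.8, t_nuc_s=180.0, tau_n_s=880.0):
    """Rough primordial helium-4 mass fraction:
    - n/p freezes out at its equilibrium value exp(-dm / kT) at the weak
      freeze-out temperature (dm = 1.293 MeV is the n-p mass difference),
    - free neutrons decay until nucleosynthesis begins,
    - essentially all surviving neutrons end up in 4He, giving
      Y = 2 * (n/p) / (1 + n/p)."""
    delta_m_mev = 1.293
    x = math.exp(-delta_m_mev / t_freeze_mev)    # equilibrium n/p at freeze-out
    x *= math.exp(-t_nuc_s / tau_n_s)            # neutron decay before BBN
    return 2.0 * x / (1.0 + x)

y_p = helium_mass_fraction()
```

This crude estimate lands in the vicinity of the observed Y ≈ 0.25; the paper's semi-analytic rates replace the instantaneous-freeze-out assumption with an actual integration of the conversion rates.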
Comparing Methods for Estimating Direct Costs of Adverse Drug Events.
Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas
2017-12-01
To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Ying, Zhiqin
2016-05-20
A wet-chemical surface texturing technique, comprising a two-step metal-catalyzed chemical etching (MCCE) and an extra alkaline treatment, has been proven an efficient way to fabricate high-efficiency black multicrystalline (mc) silicon solar cells, but its complicated process limits production capacity and cost reduction. Here, we demonstrated that, with careful control of the composition of the etching solution, low-aspect-ratio bowl-like nanostructures with atomically smooth surfaces can be achieved directly by an improved one-step MCCE, with no post-treatment such as an alkaline solution. The doublet surface texture obtained by implementing this nanobowl structure upon the industrialized acidic-textured surface showed concurrent improvement in optical and electrical properties, realizing 18.23% efficiency mc-Si solar cells (156 mm × 156 mm), appreciably higher than the 17.7% of the solely acidic-textured cells in the same batch. The one-step MCCE method demonstrated in this study may provide a cost-effective way to manufacture high-performance mc-Si solar cells for the present photovoltaic industry. © 2016 IEEE.
Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.
Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben
2018-02-22
This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many of the previous vision-based methods that have used SPAD as a reference device. Moreover, the accuracy reached 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in chlorophyll content estimation by using an optical arrangement that yields both reflectance and transmittance information, while the required hardware is cheap.
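The regression step described above is ordinary least squares with two predictors. A minimal sketch, with invented calibration data standing in for the paper's spectrophotometer readings:

```python
import numpy as np

# Hypothetical calibration data: leaf reflectance R and transmittance T
# (fractions) paired with chlorophyll content from a reference device.
# All numbers below are illustrative, not from the paper.
R = np.array([0.10, 0.12, 0.15, 0.18, 0.22])
T = np.array([0.30, 0.28, 0.25, 0.21, 0.18])
chl = np.array([42.0, 38.5, 33.0, 27.5, 21.0])  # e.g. µg/cm²

# Fit chl ≈ b0 + b1*R + b2*T by ordinary least squares
X = np.column_stack([np.ones_like(R), R, T])
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)

def estimate_chlorophyll(r, t):
    """Predict chlorophyll content from one reflectance/transmittance pair."""
    return coef[0] + coef[1] * r + coef[2] * t

print(round(float(estimate_chlorophyll(0.15, 0.25)), 1))
```

A new leaf measurement is then a single evaluation of the fitted plane, which is why the method can run at camera frame rates.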
Training Methods for Image Noise Level Estimation on Wavelet Components
Directory of Open Access Journals (Sweden)
A. De Stefano
2004-12-01
The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed the superiority of the training-based methods for the images and the range of noise levels considered.
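The MAD baseline that the proposed methods are compared against is easy to sketch: take the diagonal detail of a one-level wavelet transform and divide the median absolute coefficient by 0.6745 (the median form common in wavelet denoising; the Haar kernel here is an illustrative choice, not necessarily the wavelet used in the paper):

```python
import numpy as np

def estimate_noise_sigma(img):
    """MAD-style noise estimate: compute the diagonal (HH) detail of a
    one-level Haar transform, then sigma ≈ median(|HH|) / 0.6745."""
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    h = min(a.shape[0], b.shape[0], c.shape[0], d.shape[0])
    w = min(a.shape[1], b.shape[1], c.shape[1], d.shape[1])
    hh = (a[:h, :w] - b[:h, :w] - c[:h, :w] + d[:h, :w]) / 2.0
    return np.median(np.abs(hh)) / 0.6745

rng = np.random.default_rng(0)
noisy = 100.0 + rng.normal(0.0, 5.0, size=(256, 256))  # flat image + noise
print(round(estimate_noise_sigma(noisy), 2))  # close to the true sigma of 5
```

On a flat image the estimate is nearly exact; on real images the HH band also carries texture, which is the model assumption the paper's training-based methods try to relax.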
A Group Contribution Method for Estimating Cetane and Octane Numbers
Energy Technology Data Exchange (ETDEWEB)
Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group
2016-07-28
Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups and range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
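In its simplest (linear) form, a group contribution estimate is just a weighted count of functional groups; the paper replaces the linear sum with an artificial neural network over such counts. A sketch with invented contribution values, for illustration only:

```python
# Hypothetical group-contribution sketch: a cetane-number estimate as a
# linear sum of functional-group counts times fitted contributions.
# The contribution values and base term below are invented for
# illustration; they are not the paper's fitted parameters.
CONTRIB = {'CH3': 5.0, 'CH2': 4.0, 'CH': -3.0, 'OH': -10.0}
BASE = 10.0

def estimate_cetane(groups):
    """groups: dict mapping group name -> count in the molecule."""
    return BASE + sum(CONTRIB[g] * n for g, n in groups.items())

# n-heptane decomposes into 2 CH3 + 5 CH2 groups
print(estimate_cetane({'CH3': 2, 'CH2': 5}))  # → 40.0
```

A neural-network variant keeps the same input (group counts) but replaces the linear sum with a learned nonlinear map, which is what lets it cover a wider range of structures.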
Estimation of arsenic in nail using silver diethyldithiocarbamate method
Directory of Open Access Journals (Sweden)
Habiba Akhter Bhuiyan
2015-08-01
The spectrophotometric method of arsenic estimation in nails has four steps: (a) washing of the nails, (b) digestion of the nails, (c) arsenic generation, and finally (d) reading the absorbance using a spectrophotometer. Although the method is the cheapest one, widely used, and effective, it is time-consuming and laborious, and requires caution while handling the four acids involved.
Comparison of estimation methods for fitting Weibull distribution to ...
African Journals Online (AJOL)
Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
A simple and rapid method to estimate radiocesium in man
International Nuclear Information System (INIS)
Kindl, P.; Steger, F.
1990-09-01
A simple and rapid method for monitoring internal contamination with radiocesium in man was developed. This method is based on measurements of the γ-rays emitted from the muscular parts between the thighs by a simple NaI(Tl) system. The experimental procedure, the calibration, the estimation of the body activity, and results are explained and discussed. (Authors)
On the Methods for Estimating the Corneoscleral Limbus.
Jesus, Danilo A; Iskander, D Robert
2017-08-01
The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric ANOVA (analysis of variance) test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and the joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
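The Kalman filtering step added before the PCT can be illustrated with a minimal linear filter. The constant-velocity model, the noise levels, and the data below are assumptions made for this sketch, not values from the study:

```python
import numpy as np

def kalman_smooth(z, dt=0.01, q=1e-6, r=0.0025):
    """Minimal linear Kalman filter with a constant-velocity model:
    state = [angle, angular rate], z = noisy angle measurements.
    q (process noise) and r (measurement variance) are assumed values."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe the angle only
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    est = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + q * np.eye(2)
        # update with the new measurement
        S = (H @ P @ H.T).item() + r
        K = (P @ H.T).ravel() / S
        x = x + K * (zk - x[0])
        P = (np.eye(2) - np.outer(K, H.ravel())) @ P
        est.append(x[0])
    return np.array(est)

# Noisy measurements of a steadily rotating segment (illustrative data)
rng = np.random.default_rng(1)
t = np.arange(0.0, 2.0, 0.01)
truth = 0.1 + 0.2 * t                          # true angle, rad
noisy = truth + rng.normal(0.0, 0.05, t.size)  # marker-like noise
smoothed = kalman_smooth(noisy)
print(np.std(smoothed[100:] - truth[100:]) < np.std(noisy[100:] - truth[100:]))  # → True
```

After the transient, the filtered error is well below the raw measurement noise, which mirrors the smoother PCT estimates reported above.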
An improved method for estimating the frequency correlation function
Chelli, Ali; Pätzold, Matthias
2012-01-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial for both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
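The frequency averaging technique that the proposed kernel method improves upon can be sketched directly: correlate the sampled transfer function with a frequency-shifted copy of itself and average over frequency. The two-path channel below is synthetic, chosen only to make the zero-lag value easy to check:

```python
import numpy as np

def fcf_frequency_averaging(H, max_lag):
    """Estimate the frequency correlation function of a sampled transfer
    function H[f] by averaging H[f] * conj(H[f + lag]) over frequency."""
    N = len(H)
    return np.array([np.mean(H[:N - lag] * np.conj(H[lag:]))
                     for lag in range(max_lag)])

# Synthetic two-path transfer function: path delays taus (s) and gains
f = np.arange(0, 1e6, 1e3)            # 1 MHz band sampled every 1 kHz
taus = np.array([0.5e-6, 1.5e-6])
gains = np.array([1.0, 0.6])
H = (gains[:, None] * np.exp(-2j * np.pi * taus[:, None] * f[None, :])).sum(axis=0)

fcf = fcf_frequency_averaging(H, 200)
print(round(abs(fcf[0]), 2))  # → 1.36, the total path power 1.0² + 0.6²
```

At nonzero lags the estimate mixes ATs with the CTs described above; the paper's kernel weighting is precisely an attempt to suppress the latter.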
The estimation of the measurement results with using statistical methods
International Nuclear Information System (INIS)
Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)
2015-01-01
A number of international standards and guides describe various statistical methods that apply to the management, control, and improvement of processes for the purpose of analyzing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is presented. To carry out the analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.
Mateos, José Carlos Pachón; Mateos, Enrique I Pachón; Peña, Tomas G Santillana; Lobo, Tasso Julio; Mateos, Juán Carlos Pachón; Vargas, Remy Nelson A; Pachón, Carlos Thiene C; Acosta, Juán Carlos Zerpa
2015-01-01
Introduction: Although rare, the atrioesophageal fistula is one of the most feared complications in radiofrequency catheter ablation of atrial fibrillation due to the high risk of mortality. Objective: This is a prospective controlled study, performed during regular radiofrequency catheter ablation of atrial fibrillation, to test whether esophageal displacement by handling the transesophageal echocardiography transducer could be used for esophageal protection. Methods: Seven hundred and four patients (158 F/546 M [22.4%/77.6%]; 52.8±14 [17-84] years old), with a mean EF of 0.66±0.8 and drug-refractory atrial fibrillation, were submitted to hybrid radiofrequency catheter ablation (conventional pulmonary vein isolation plus AF-Nests and background tachycardia ablation) with displacement of the esophagus as far as possible from the radiofrequency target by transesophageal echocardiography transducer handling. The esophageal luminal temperature was monitored without and with displacement in 25 patients. Results: The mean esophageal displacement ranged from 4 to 9.1 cm (5.9±0.8 cm). In 680 of the 704 patients (96.6%), it was enough to allow complete and safe radiofrequency delivery (30 W/40ºC/irrigated catheter or 50 W/60ºC/8 mm catheter) without esophagus overlapping. The mean esophageal luminal temperature change with versus without esophageal displacement was 0.11±0.13ºC versus 1.1±0.4ºC, respectively, P<0.01. The radiofrequency had to be halted in 68% of the patients without esophageal displacement because of an esophageal luminal temperature increase. There was no incidence of suspected or confirmed atrioesophageal fistula. Only two cases of superficial bleeding caused by transesophageal echocardiography transducer insertion were observed. Conclusion: Mechanical esophageal displacement by the transesophageal echocardiography transducer during radiofrequency catheter ablation was able to prevent a rise in esophageal luminal temperature, helping to avoid esophageal thermal lesion. In most
Evaluation of non cyanide methods for hemoglobin estimation
Directory of Open Access Journals (Sweden)
Vinaya B Shah
2011-01-01
Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the disposal of large volumes of waste reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: alkaline hematin detergent (AHD-575) using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting-diode (LED) based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. The statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation between these methods was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of the data, and cost less than the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe, quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated in hemoglobinometers.
Adaptive Methods for Permeability Estimation and Smart Well Management
Energy Technology Data Exchange (ETDEWEB)
Lien, Martha Oekland
2005-04-01
The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis, and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave exposure in the 2-GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
Simple method for quick estimation of aquifer hydrogeological parameters
Ma, C.; Li, Y. Y.
2017-08-01
Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
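For context, the Theis well function W(u) that the proposed regression approximates can be evaluated from its series expansion, W(u) = -γ - ln u + Σ (-1)ⁿ⁺¹ uⁿ/(n·n!). The pumping-test values below are illustrative, not from the paper's data sets:

```python
import math

EULER_GAMMA = 0.5772156649015329

def theis_w(u, terms=30):
    """Theis well function W(u) = E1(u) via its series expansion,
    accurate for the small u typical of pumping tests (u < 1)."""
    s = -EULER_GAMMA - math.log(u)
    sign, fact = 1.0, 1.0
    for n in range(1, terms + 1):
        fact *= n
        s += sign * u ** n / (n * fact)
        sign = -sign
    return s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q/(4*pi*T) * W(u) with u = r^2*S/(4*T*t).
    Illustrative units: Q [m^3/d], T [m^2/d], S [-], r [m], t [d]."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * theis_w(u)

# Hypothetical test: a well pumped at 500 m^3/d, observed at r = 50 m
print(round(theis_drawdown(Q=500, T=200, S=1e-4, r=50, t=0.5), 3))  # ≈ 1.353 m
```

Fitting the proposed regression to observed (t, s) pairs then amounts to inverting this forward model for T and S.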
Application of Density Estimation Methods to Datasets from a Glider
2014-09-30
humpback and sperm whales as well as different dolphin species. OBJECTIVES The objective of this research is to extend existing methods for cetacean ... estimation from single sensor datasets. Required steps for a cue counting approach, where a cue has been defined as a clicking event (Küsel et al., 2011), to
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics, and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is the fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Assessment of Methods for Estimating Risk to Birds from ...
The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16
Plant-available soil water capacity: estimation methods and implications
Directory of Open Access Journals (Sweden)
Bruno Montoani Silva
2014-04-01
The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in planning land use. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10, and 33 kPa and from the inflection point of the water retention curve, calculated by the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than by the other methods, and that even at the inflection point the estimates differed according to the model used. The water content found by the WP4-T psychrometer was significantly lower than the estimate of the permanent wilting point. We concluded that the estimation of the available water capacity is markedly influenced by the estimation method, which has to be taken into consideration because of the practical importance of this parameter.
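The van Genuchten retention model mentioned above makes the field-capacity/wilting-point arithmetic concrete. A sketch using textbook-style loam parameters (assumed, not the paper's Oxisol values), with field capacity taken at 33 kPa and the wilting point at 1500 kPa:

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content at suction head h [cm] from the van Genuchten
    retention model, with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Illustrative parameters for a loam (assumed, not from the paper)
theta_r, theta_s, alpha, n = 0.078, 0.43, 0.036, 1.56

theta_fc = van_genuchten_theta(336.0, theta_r, theta_s, alpha, n)    # ~33 kPa
theta_wp = van_genuchten_theta(15296.0, theta_r, theta_s, alpha, n)  # ~1500 kPa
awc = theta_fc - theta_wp
print(round(awc, 3))  # plant-available water capacity, cm3/cm3
```

The paper's point is visible even in this toy: the answer depends directly on which suction is declared "field capacity" and on the retention model fitted, so the estimation method must always be reported alongside the number.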
An Estimation Method for number of carrier frequency
Directory of Open Access Journals (Sweden)
Xiong Peng
2015-01-01
This paper proposes a method that utilizes AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, the pulse signal forms of radar are complex and changeable, among which the single pulse with multiple carrier frequencies is the most typical, examples being the frequency shift keying (FSK) signal, the frequency shift keying with linear frequency modulation (FSK-LFM) hybrid modulation signal, and the frequency shift keying with bi-phase shift keying (FSK-BPSK) hybrid modulation signal. For this kind of single pulse with multiple carrier frequencies, this paper adopts a method that transforms the complex signal into an AR model and then computes the power spectrum with the Burg algorithm. Experimental results show that the estimation method can determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
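The underlying task, counting the spectral peaks of a multi-carrier pulse, can be sketched with a plain FFT periodogram standing in for the paper's Burg-based AR spectrum (a deliberate simplification; the AR approach is preferred precisely because it resolves peaks better at low SNR):

```python
import numpy as np

def count_carriers(signal, threshold_ratio=0.5):
    """Count distinct carrier frequencies in a pulse by locating local
    maxima of its power spectrum above a fraction of the global peak.
    (Stand-in for the paper's Burg AR spectrum.)"""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    thresh = threshold_ratio * spec.max()
    peaks = 0
    for i in range(1, len(spec) - 1):
        if spec[i] > thresh and spec[i] >= spec[i - 1] and spec[i] > spec[i + 1]:
            peaks += 1
    return peaks

# Synthetic FSK-like pulse: two carriers transmitted in consecutive halves
fs = 1e6
t = np.arange(0, 1e-3, 1 / fs)
half = t.size // 2
sig = np.concatenate([np.sin(2 * np.pi * 100e3 * t[:half]),
                      np.sin(2 * np.pi * 250e3 * t[half:])])
print(count_carriers(sig))  # → 2
```

With noise added, the raw periodogram becomes unreliable well before an AR/Burg spectrum does, which is the motivation for the paper's choice.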
New methods for estimating follow-up rates in cohort studies
Directory of Open Access Journals (Sweden)
Xiaonan Xue
2017-12-01
Full Text Available Abstract Background The follow-up rate, a standard index of the completeness of follow-up, is important for assessing the validity of a cohort study. A common method for estimating the follow-up rate, the “Percentage Method”, defined as the fraction of all enrollees who developed the event of interest or had complete follow-up, can severely underestimate the degree of follow-up. Alternatively, the median follow-up time does not indicate the completeness of follow-up, and the reverse Kaplan-Meier based method and Clark’s Completeness Index (CCI) also have limitations. Methods We propose a new definition for the follow-up rate, the Person-Time Follow-up Rate (PTFR), which is the observed person-time divided by the total person-time assuming no dropouts. The PTFR cannot be calculated directly since the event times for dropouts are not observed. Therefore, two estimation methods are proposed: a formal person-time method (FPT) in which the expected total follow-up time is calculated using the event rate estimated from the observed data, and a simplified person-time method (SPT) that avoids estimation of the event rate by assigning full follow-up time to all events. Simulations were conducted to measure the accuracy of each method, and each method was applied to a prostate cancer recurrence study dataset. Results Simulation results showed that the FPT has the highest accuracy overall. In most situations, the computationally simpler SPT and CCI methods are only slightly biased. When applied to a retrospective cohort study of cancer recurrence, the FPT, CCI and SPT showed substantially greater 5-year follow-up than the Percentage Method (92%, 92% and 93% vs 68%). Conclusions The person-time methods correct a systematic error in the standard Percentage Method for calculating follow-up rates. The easy-to-use SPT and CCI methods can be used in tandem to obtain an accurate and tight interval for the PTFR. However, the FPT is recommended when event rates and
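The exact estimators are defined in the paper; the sketch below encodes one plausible reading of the SPT description (subjects with events are credited with the full follow-up period tau), alongside the Percentage Method for comparison. The function names and the tiny cohort in the test are invented:

```python
import numpy as np

def percentage_method(event, complete):
    """Standard Percentage Method: fraction of enrollees who had the event
    of interest or had complete follow-up."""
    return np.mean(np.asarray(event) | np.asarray(complete))

def spt_follow_up_rate(time, event, tau):
    """Simplified person-time (SPT) follow-up rate, as read from the abstract:
    subjects with events are credited with the full follow-up period tau,
    and the denominator assumes every subject could have contributed tau."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    credited = np.where(event, tau, time)       # events count as fully followed
    return credited.sum() / (len(time) * tau)
```

For a cohort with an early event and a dropout, the SPT rate credits the event subject with complete follow-up, while the Percentage Method simply drops partial follow-up from the numerator.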
A New Method for Estimation of Velocity Vectors
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Munk, Peter
1998-01-01
The paper describes a new method for determining the velocity vector of a remotely sensed object using either sound or electromagnetic radiation. The movement of the object is determined from a field with spatial oscillations in both the axial direction of the transducer and in one or two...... directions transverse to the axial direction. By using a number of pulse emissions, the inter-pulse movement can be estimated and the velocity found from the estimated movement and the time between pulses. The method is based on the principle of using transverse spatial modulation for making the received...
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the usual procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
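The heating-coefficient idea is standard thermodynamic integration: run tempered MCMC chains at several values of a coefficient beta in [0, 1] and integrate the mean log-likelihood over beta. The sketch below applies it to a toy conjugate-normal model whose marginal likelihood is known in closed form; the model, schedule and tuning constants are illustrative, not the groundwater setup of the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1); the marginal
# likelihood has a closed form, so the thermodynamic estimate can be checked.
n = 20
y = rng.normal(0.5, 1.0, n)

def log_lik(theta):
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - theta) ** 2)

def log_prior(theta):
    return -0.5 * np.log(2 * np.pi) - 0.5 * theta ** 2

def mean_loglik_at(beta, iters=4000, burn=1000, step=0.6):
    """Mean log-likelihood under the tempered posterior ~ L^beta * prior."""
    theta = 0.0
    cur = beta * log_lik(theta) + log_prior(theta)
    kept = []
    for i in range(iters):
        prop = theta + step * rng.standard_normal()
        cand = beta * log_lik(prop) + log_prior(prop)
        if np.log(rng.uniform()) < cand - cur:      # Metropolis accept/reject
            theta, cur = prop, cand
        if i >= burn:
            kept.append(log_lik(theta))
    return np.mean(kept)

# Heating schedule dense near beta = 0, where the integrand changes fastest.
betas = np.linspace(0.0, 1.0, 21) ** 3
means = np.array([mean_loglik_at(b) for b in betas])
# ln Z = integral over beta of E_beta[ln L]  (trapezoidal rule)
log_Z_ti = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(betas))

# Closed-form log marginal likelihood for this conjugate model.
S = y.sum()
log_Z_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1.0)
               - 0.5 * (np.sum(y ** 2) - S ** 2 / (n + 1.0)))
```

The geometric-mean (harmonic-style) estimators the abstract criticizes would use only the beta = 1 chain; the tempered chains at small beta are what stabilize the estimate.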
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamical system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler and less biased than, the “multiple shooting” method proposed previously. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for estimating the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
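As a simple baseline (not the segmentation-fitting ML method of the paper), the structural parameter r of the logistic map can be estimated by one-step conditional least squares when the observational noise is small; the simulation settings below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate the logistic map x_{t+1} = r * x_t * (1 - x_t) in its chaotic
# regime, observed with small additive noise.
r_true, n = 3.8, 500
x = np.empty(n)
x[0] = 0.3
for t in range(n - 1):
    x[t + 1] = r_true * x[t] * (1.0 - x[t])
obs = x + 0.001 * rng.standard_normal(n)

# One-step conditional least squares: regress obs[t+1] on obs[t] * (1 - obs[t]).
g = obs[:-1] * (1.0 - obs[:-1])
r_hat = np.dot(obs[1:], g) / np.dot(g, g)
```

With larger observational noise this estimator becomes biased (the noise enters the regressor), which is exactly the regime motivating the piece-wise ML approach discussed in the abstract.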
Wave Velocity Estimation in Heterogeneous Media
Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem
2016-01-01
In this paper, a modulating-functions-based method is proposed for estimating the space-time-dependent unknown velocity in the wave equation. The proposed method reduces the identification problem to a system of linear algebraic equations. Numerical
Power system frequency estimation based on an orthogonal decomposition method
Lee, Chih-Hung; Tsai, Men-Shen
2018-06-01
In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
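The paper's two-stage scheme is not reproduced here, but the sliding discrete Fourier transform building block it relies on can be sketched as a one-bin recurrence; the buffer-based implementation below is a generic textbook form, not the authors' code:

```python
import numpy as np

def sliding_dft(x, N, k):
    """Track DFT bin k of the latest N samples with the sliding-DFT recurrence
    X_k(n) = (X_k(n-1) + x[n] - x[n-N]) * exp(j*2*pi*k/N)."""
    w = np.exp(2j * np.pi * k / N)
    X = 0.0 + 0.0j
    buf = np.zeros(N)                 # circular buffer holding the last N samples
    out = np.empty(len(x), dtype=complex)
    for n, xn in enumerate(x):
        oldest = buf[n % N]           # x[n-N], or 0 during the start-up transient
        X = (X + xn - oldest) * w     # one complex multiply-add per sample
        buf[n % N] = xn
        out[n] = X
    return out
```

After the first N samples the recurrence reproduces the DFT bin of the latest window exactly; in long-running use, floating-point errors accumulate, and practical implementations usually multiply by a damping factor slightly below one to keep the filter stable.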
International Nuclear Information System (INIS)
2002-01-01
This report describes the simplified models for predicting the response of high-damping natural rubber bearings (HDNRB) to earthquake ground motions and benchmark problems for assessing the accuracy of finite element analyses in designing base-isolators. (author)
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimating ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (one-way coupled to PIHM) and a fixed seasonal LAI method. From these two approaches, simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty, owing to its plant-physiology-based approach. The implication of this research is that the overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... frequency domain methods can be applied. To non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...
Estimating building energy consumption using extreme learning machine method
International Nuclear Information System (INIS)
Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey
2016-01-01
The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand of energy, as well as the construction materials used in buildings, are becoming increasingly problematic for the earth's sustainable future, and thus have led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so, their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. This method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose up to 180 simulations are carried out for different material thicknesses and insulation properties, using the EnergyPlus software application. The estimation and prediction obtained by the ELM model are compared with GP (genetic programming) and ANNs (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
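The core of the ELM approach is that the hidden layer is random and fixed, so training reduces to a single linear least-squares solve for the output weights. The sketch below fits a smooth synthetic surface standing in for the study's EnergyPlus training data (insulation thickness and K-value as inputs); all names and numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_fit(X, y, hidden=50):
    """ELM training: random fixed hidden layer, least squares for output weights."""
    W = rng.standard_normal((X.shape[1], hidden))   # random input weights (never trained)
    b = rng.standard_normal(hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                          # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # single linear solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for the study's data: two inputs (say, insulation
# thickness and K-value) mapped to a smooth "energy use" surface.
X = rng.uniform(0.0, 1.0, (180, 2))
y = 3.0 - 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * np.sin(3.0 * X[:, 0])
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because only the output weights are solved for, training is orders of magnitude faster than backpropagation or genetic programming, which is the practical argument the abstract makes for ELM.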
Conventional estimating method of earthquake response of mechanical appendage system
International Nuclear Information System (INIS)
Aoki, Shigeru; Suzuki, Kohei
1981-01-01
Generally, for estimating the earthquake response of an appendage structure system installed in a main structure system, floor response analysis using the response spectra at the point where the appendage system is installed has been used. In addition, research on estimating the earthquake response of appendage systems by statistical procedures based on probability process theory has been reported. Developing a practical method for simply estimating the response is an important subject in aseismatic engineering. In this study, a method of estimating the earthquake response of the appendage system in the general case where the natural frequencies of the two structure systems differ was investigated. First, it was shown that the floor response amplification factor (FRAF) can be estimated simply from the ratio of the natural frequencies of the two structure systems, and its statistical properties were clarified. Next, it was shown that the procedure of expressing acceleration, velocity and displacement responses simultaneously with tri-axial response spectra can also be applied to the FRAF. The applicability of this procedure to nonlinear systems was examined. (Kako, I.)
Error Estimation and Accuracy Improvements in Nodal Transport Methods
International Nuclear Information System (INIS)
Zamonsky, O.M.
2000-01-01
The accuracy of the solutions produced by the discrete ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and yield a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, makes it possible to quantify the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposing the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges in which the proposed approximations are valid
Correction of Misclassifications Using a Proximity-Based Estimation Method
Directory of Open Access Journals (Sweden)
Shmulevich Ilya
2004-01-01
Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
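The core operation can be sketched as a sliding-window relabeling driven by a proximity matrix; with a 0/1 proximity matrix it reduces to a windowed mode (majority) filter over purely nominal classes. The window size and example sequence below are invented, not taken from the case studies:

```python
import numpy as np

def correct_labels(labels, proximity, half_window=2):
    """Relabel each sample with the class that minimizes the summed proximity
    (distance) to all labels inside a sliding window around it."""
    labels = np.asarray(labels)
    out = labels.copy()
    for i in range(len(labels)):
        lo = max(0, i - half_window)
        hi = min(len(labels), i + half_window + 1)
        # cost of assigning each candidate class, given the window's labels
        costs = proximity[:, labels[lo:hi]].sum(axis=1)
        out[i] = np.argmin(costs)
    return out

# With a 0/1 (identity-based) proximity matrix the filter reduces to a
# windowed mode filter: isolated misclassifications are voted out.
prox = 1.0 - np.eye(3)
noisy = np.array([0, 0, 2, 0, 0, 1, 1, 1, 0, 1, 1])
clean = correct_labels(noisy, prox)
```

A non-trivial proximity matrix (e.g. one derived from music-perception key distances, as in the first case study) makes some confusions cheaper than others, so the correction is no longer a plain majority vote.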
Comparison of methods used for estimating pharmacist counseling behaviors.
Schommer, J C; Sullivan, D L; Wiederholt, J B
1994-01-01
To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.
Improvement of Source Number Estimation Method for Single Channel Signal.
Directory of Open Access Journals (Sweden)
Zhi Dong
Full Text Available Source number estimation methods for single-channel signals are investigated and improvements to each method are suggested in this work. Firstly, the single-channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the number of sources in the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), outperforms GDE at low SNR, but it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work improves both methods. A diagonal loading technique is employed to improve the MDL method, and a jackknife technique is applied to optimize the data covariance matrix and thereby improve the performance of the GDE method. Simulation results show that the performance of both original methods is improved substantially.
Method to Locate Contaminant Source and Estimate Emission Strength
Directory of Open Access Journals (Sweden)
Qu Hongquan
2013-01-01
Full Text Available Air quality in confined spaces such as spacecraft, aircraft and submarines is a matter of great concern. As residence time in such confined spaces increases, contaminant pollution becomes a main factor endangering life, so it is urgent to identify a contaminant source rapidly so that prompt remedial action can be taken. A source identification procedure should be able to locate the position of the contaminant source and to estimate its emission strength. In this paper, an identification method was developed to achieve these two aims. The method is based on a discrete concentration stochastic model. With this model, a sensitivity analysis algorithm is used to locate the source position, and a Kalman filter is used to further estimate the contaminant emission strength. The method can track and predict the source strength dynamically, and it can also predict the distribution of contaminant concentration. Simulation results demonstrate the effectiveness of the method.
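The emission-strength part can be illustrated with a standard augmented-state Kalman filter: append the unknown source strength q to the state and let the filter estimate it from noisy concentration readings. The scalar concentration model below is a simplified stand-in for the paper's discrete concentration stochastic model, with invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simplified stand-in for a discrete concentration model:
#   c_{k+1} = a*c_k + g*q + w_k,   measurement z_k = c_k + v_k
# The unknown emission strength q is appended to the state vector and
# estimated jointly with the concentration c.
a, g = 0.95, 0.05
q_true = 4.0

F = np.array([[a,   g  ],
              [0.0, 1.0]])            # q modelled as (nearly) constant
H = np.array([[1.0, 0.0]])            # only the concentration is measured
Q = np.diag([1e-4, 1e-6])             # process noise (slow random walk on q)
R = np.array([[0.01]])                # measurement noise variance

x = np.array([0.0, 0.0])              # initial guess: no contaminant, no source
P = np.diag([1.0, 10.0])              # large initial uncertainty on q

c = 0.0
for _ in range(400):
    # simulate the true system and a noisy sensor reading
    c = a * c + g * q_true + 0.01 * rng.standard_normal()
    z = c + 0.1 * rng.standard_normal()
    # Kalman predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Kalman update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

q_hat = x[1]                          # estimated emission strength
```

Because q is part of the state, the filter tracks slowly drifting source strengths as well, which matches the dynamic tracking ability claimed in the abstract.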
Improved stove programs need robust methods to estimate carbon offsets
Johnson, Michael; Edwards, Rufus; Masera, Omar
2010-01-01
Current standard methods result in significant discrepancies in carbon offset accounting compared to approaches based on representative community based subsamples, which provide more realistic assessments at reasonable cost. Perhaps more critically, neither of the currently approved methods incorporates uncertainties inherent in estimates of emission factors or non-renewable fuel usage (fNRB). Since emission factors and fNRB contribute 25% and 47%, respectively, to the overall uncertainty in ...
Estimation of water percolation by different methods using TDR
Directory of Open Access Journals (Sweden)
Alisson Jadavi Pereira da Silva
2014-02-01
Full Text Available Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate percolation estimated by time domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods, and the change in water storage in the soil profile at 16 moisture measurement points over different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was installed in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated from continuous TDR monitoring, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation over short time intervals and the exemption from predetermining soil hydraulic properties such as water retention and hydraulic conductivity. The estimates obtained by the Darcy-Buckingham equation using the K(θ) function predicted by the method of Hillel et al. (1972) provided water percolation estimates compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
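The storage-change approach the abstract favors can be sketched directly: during internal drainage with evaporation prevented, the percolation flux across the bottom of the monitored profile equals the rate of decrease of the water storage above it. The depth grid and moisture profiles below are synthetic, not the study's TDR data:

```python
import numpy as np

# During internal drainage with evaporation prevented, the percolation flux
# across depth L equals the rate of decrease of water storage above L:
#   q(L, t) = -dS/dt,   with S(t) = integral of theta(z, t) over 0..L
z = np.linspace(0.0, 50.0, 11)            # depth grid in cm (e.g. TDR probes)
t = np.array([0.0, 1.0, 2.0])             # observation times in hours

# synthetic moisture profiles: uniform drawdown of 0.002 cm3/cm3 per hour
theta = 0.35 - 0.002 * t[:, None] + 0.0 * z[None, :]

# trapezoidal storage (cm of water) at each time, then flux per interval
S = np.sum(0.5 * (theta[:, 1:] + theta[:, :-1]) * np.diff(z), axis=1)
q = -np.diff(S) / np.diff(t)              # percolation rate in cm/h
```

Note that this route needs no predetermined retention curve or hydraulic conductivity function, which is exactly the advantage over the K(θ)-based methods compared in the study.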
Song, R B; Oldach, M S; Basso, D M; da Costa, R C; Fisher, L C; Mo, X; Moore, S A
2016-04-01
The purpose of this study was to evaluate a simplified method of walking track analysis to assess treatment outcome in canine spinal cord injury. Measurements of stride length (SL) and base of support (BS) were made using a 'finger painting' technique for footprint analysis in all limbs of 20 normal dogs and 27 dogs with 28 episodes of acute thoracolumbar spinal cord injury (SCI) caused by spontaneous intervertebral disc extrusion. Measurements were determined at three separate time points in normal dogs and on days 3, 10 and 30 following decompressive surgery in dogs with SCI. Values for SL, BS and coefficient of variance (COV) for each parameter were compared between groups at each time point. Mean SL was significantly shorter in all four limbs of SCI-affected dogs at days 3, 10, and 30 compared to normal dogs. SL gradually increased toward normal in the 30 days following surgery. As measured by this technique, the COV-SL was significantly higher in SCI-affected dogs than normal dogs in both thoracic limbs (TL) and pelvic limbs (PL) only at day 3 after surgery. BS-TL was significantly wider in SCI-affected dogs at days 3, 10 and 30 following surgery compared to normal dogs. These findings support the use of footprint parameters to compare locomotor differences between normal and SCI-affected dogs, and to assess recovery from SCI. Additionally, our results underscore important changes in TL locomotion in thoracolumbar SCI-affected dogs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Comparing different methods for estimating radiation dose to the conceptus
Energy Technology Data Exchange (ETDEWEB)
Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)
2017-02-15
To compare different methods available in the literature for estimating radiation dose to the conceptus (D{sub conceptus}) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D{sub conceptus} was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, dose to the uterus, D{sub uterus}, was calculated as an alternative for D{sub conceptus}, with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D{sub uterus} and D{sub conceptus} was studied. Dose to the conceptus and percent error with respect to D{sub conceptus} was also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5% but applicability was limited. Estimating an accurate D{sub conceptus} requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)
Stability estimates for hp spectral element methods for general ...
Indian Academy of Sciences (India)
We establish basic stability estimates for a non-conforming h-p spectral element method which allows for simultaneous mesh refinement and variable polynomial degree. The spectral element functions are non-conforming if the boundary conditions are Dirichlet. For problems with mixed boundary conditions they are ...
A simple method for estimating thermal response of building ...
African Journals Online (AJOL)
This paper develops a simple method for estimating the thermal response of building materials in the tropical climatic zone using the basic heat equation. The efficacy of the developed model has been tested with data from three West African cities, namely Kano (lat. 12.1 ºN) Nigeria, Ibadan (lat. 7.4 ºN) Nigeria and Cotonou ...
Methods to estimate breeding values in honey bees
Brascamp, E.W.; Bijma, P.
2014-01-01
Background Efficient methodologies based on animal models are widely used to estimate breeding values in farm animals. These methods are not applicable in honey bees because of their mode of reproduction. Observations are recorded on colonies, which consist of a single queen and thousands of workers
Sampling point selection for energy estimation in the quasicontinuum method
Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.
2010-01-01
The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of
Benefits of EMU Participation : Estimates using the Synthetic Control Method
Verstegen, Loes; van Groezen, Bas; Meijdam, Lex
2017-01-01
This paper investigates quantitatively the benefits from participation in the Economic and Monetary Union for individual Euro area countries. Using the synthetic control method, we estimate how real GDP per capita would have developed for the EMU member states, if those countries had not joined the
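The synthetic control method can be sketched as constrained least squares: find non-negative weights summing to one so that a weighted average of donor countries matches the treated country's pre-treatment outcomes, then use those weights to build the counterfactual. The projected-gradient implementation and the standardized data below are an illustration, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(6)

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1} (sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def synthetic_control_weights(X_donors, x_treated, iters=5000):
    """Simplex-constrained least squares fit of the treated unit's
    pre-treatment outcomes, by projected gradient descent."""
    n_donors = X_donors.shape[1]
    w = np.full(n_donors, 1.0 / n_donors)
    lr = 1.0 / (2.0 * np.linalg.norm(X_donors, 2) ** 2)   # 1 / Lipschitz constant
    for _ in range(iters):
        grad = 2.0 * X_donors.T @ (X_donors @ w - x_treated)
        w = project_simplex(w - lr * grad)
    return w

# Standardized pre-treatment outcomes: 10 periods x 5 donor countries
# (synthetic data; the treated unit is an exact convex combination of donors).
X = rng.normal(0.0, 1.0, (10, 5))
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
x_treated = X @ w_true
w_hat = synthetic_control_weights(X, x_treated)
counterfactual = X @ w_hat        # would be applied to post-treatment periods
```

In the EMU application, the post-treatment gap between the treated country's actual GDP path and this weighted donor path is read as the effect of membership.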
Lidar method to estimate emission rates from extended sources
Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...
Comparing four methods to estimate usual intake distributions
Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.
2011-01-01
Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data
The relative efficiency of three methods of estimating herbage mass ...
African Journals Online (AJOL)
The methods involved were randomly placed circular quadrats; randomly placed narrow strips; and disc meter sampling. Disc meter and quadrat sampling appear to be more efficient than strip sampling. In a subsequent small plot grazing trial the estimates of herbage mass, using the disc meter, had a consistent precision ...
Dual ant colony operational modal analysis parameter estimation method
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Simple method for the estimation of glomerular filtration rate
Energy Technology Data Exchange (ETDEWEB)
Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden); Tengstroem, B [District General Hospital, Skoevde (Sweden)
1977-02-01
A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (¹²⁵I)sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve-fitting techniques, with regard to predictive force and clinical applicability.
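The normalized single-point calculation described above can be sketched as follows; the regression coefficients a and b are hypothetical placeholders, since the abstract does not give the fitted values.

```python
import math

def gfr_single_point(c_early, c_late, a=-170.0, b=60.0):
    """Estimate GFR (ml/min/1.73 m^2) from two plasma samples.

    c_early: activity concentration at ~5 min (normalisation sample)
    c_late:  activity concentration at 2-4 h post-injection
    a, b:    regression coefficients (hypothetical placeholders; the
             paper fits these against inulin clearance)
    """
    normalized = c_late / c_early          # removes dependence on exact dose
    return a * math.log10(normalized) + b  # linear regression on the log value
```

Because only the ratio of the two samples enters, the result is independent of the injected dose, which is the point of the normalization step.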
Accurate position estimation methods based on electrical impedance tomography measurements
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivate-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
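Since the abstract highlights the new LHS module, a minimal stratified-sampling sketch may help illustrate how LHS differs from plain Monte Carlo; this is the generic textbook construction, not the NESSUS implementation.

```python
import random

def latin_hypercube(n, d, rng=random):
    """Generate n Latin hypercube samples in [0,1)^d.

    Each dimension is split into n equal strata, and every stratum is
    hit exactly once per dimension, which typically reduces estimator
    variance compared with plain Monte Carlo sampling.
    """
    # one independent random permutation of strata per dimension
    perms = [rng.sample(range(n), n) for _ in range(d)]
    samples = []
    for i in range(n):
        # place one point uniformly inside the assigned stratum
        point = [(perms[k][i] + rng.random()) / n for k in range(d)]
        samples.append(point)
    return samples
```

Each marginal is perfectly stratified, which is why LHS can match Monte Carlo accuracy with far fewer model evaluations for smooth responses.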
Methods for Measuring and Estimating Methane Emission from Ruminants
Directory of Open Access Journals (Sweden)
Jørgen Madsen
2012-04-01
This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods, chambers/respiration chambers, the SF6 technique and the in vitro gas production technique, as well as the newer CO2 methods, are described. Model estimations, which are used to calculate national budgets and single-cow enteric emission from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim, equipment, knowledge, time and money available, but interpretation of results obtained with a given method can be improved if knowledge about its disadvantages and advantages is used in the planning of experiments.
A simple method for estimating the entropy of neural activity
International Nuclear Information System (INIS)
Berry II, Michael J; Tkačik, Gašper; Dubuis, Julien; Marre, Olivier; Da Silveira, Rava Azeredo
2013-01-01
The number of possible activity patterns in a population of neurons grows exponentially with the size of the population. Typical experiments explore only a tiny fraction of the large space of possible activity patterns in the case of populations with more than 10 or 20 neurons. It is thus impossible, in this undersampled regime, to estimate the probabilities with which most of the activity patterns occur. As a result, the corresponding entropy, which is a measure of the computational power of the neural population, cannot be estimated directly. We propose a simple scheme for estimating the entropy in the undersampled regime, which bounds its value from both below and above. The lower bound is the usual 'naive' entropy of the experimental frequencies. The upper bound results from a hybrid approximation of the entropy which makes use of the naive estimate, a maximum entropy fit, and a coverage adjustment. We apply our simple scheme to artificial data in order to check its accuracy; we also compare its performance to that of several previously defined entropy estimators. We then apply it to actual measurements of neural activity in populations with up to 100 cells. Finally, we discuss the similarities and differences between the proposed simple estimation scheme and various earlier methods.
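The lower bound mentioned above, the naive plug-in entropy of the observed pattern frequencies, is straightforward to compute:

```python
import math
from collections import Counter

def naive_entropy(patterns):
    """Plug-in ('naive') entropy estimate in bits from a list of
    observed activity patterns. In the undersampled regime this
    systematically underestimates the true entropy, which is why it
    serves as the scheme's lower bound."""
    counts = Counter(patterns)
    n = len(patterns)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

The bias arises because unseen patterns contribute nothing to the sum; the paper's upper bound compensates with a maximum entropy fit and a coverage adjustment.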
Seasonal adjustment methods and real time trend-cycle estimation
Bee Dagum, Estela
2016-01-01
This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...
Limitations of the time slide method of background estimation
International Nuclear Information System (INIS)
Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis
2010-01-01
Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.
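The time-slide procedure itself can be sketched as follows; this is a generic illustration of circularly shifting one detector's event list against the other, not the authors' analysis pipeline.

```python
def timeslide_background(t1, t2, obs_len, window, shifts):
    """Estimate the accidental-coincidence background by circularly
    time-shifting one detector's event times relative to the other.

    t1, t2:  event time lists from the two detectors
    obs_len: total observation span (used for circular shifting)
    window:  coincidence window
    shifts:  list of time offsets to apply (nonzero for background)
    Returns the mean number of coincidences per slide.
    """
    def n_coinc(a, b):
        # count events in a with at least one partner in b within the window
        return sum(1 for x in a if any(abs(x - y) <= window for y in b))

    total = 0
    for s in shifts:
        shifted = [(y + s) % obs_len for y in t2]
        total += n_coinc(t1, shifted)
    return total / len(shifts)
```

The paper's caveat is visible in this framing: if the event rates fluctuate on time scales comparable to the shifts, the shifted data no longer sample the same background, biasing the estimate.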
A new DOD and DOA estimation method for MIMO radar
Gong, Jian; Lou, Shuntian; Guo, Yiduo
2018-04-01
The battlefield electromagnetic environment is becoming increasingly complex, and MIMO radar will inevitably be affected by coherent and non-stationary noise. To solve this problem, an angle estimation method based on the oblique projection operator and Toeplitz matrix reconstruction is proposed. Through the Toeplitz reconstruction, non-stationary noise is transformed into Gaussian white noise, and then the oblique projection operator is used to separate independent and correlated sources. Finally, simulations are carried out to verify the performance of the proposed algorithm in terms of angle estimation accuracy and source overload.
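Toeplitz reconstruction of a covariance matrix is commonly done by averaging along its diagonals; the sketch below illustrates that generic operation (the paper's exact reconstruction may differ).

```python
import numpy as np

def toeplitz_reconstruct(R):
    """Rebuild a covariance matrix as Hermitian Toeplitz by averaging
    along its diagonals. Terms that are not constant along diagonals,
    such as contributions from non-stationary noise, are suppressed."""
    n = R.shape[0]
    # mean of each superdiagonal k = 0..n-1
    r = np.array([np.mean(np.diagonal(R, offset=k)) for k in range(n)])
    T = np.empty_like(R, dtype=complex)
    for i in range(n):
        for j in range(n):
            T[i, j] = r[j - i] if j >= i else np.conj(r[i - j])
    return T
```

The reconstructed matrix has the structure expected of a stationary process, which is what lets the subsequent oblique-projection step treat the residual noise as white.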
Review of methods for level density estimation from resonance parameters
International Nuclear Information System (INIS)
Froehner, F.H.
1983-01-01
A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)
Dental age estimation using Willems method: A digital orthopantomographic study
Directory of Open Access Journals (Sweden)
Rezwana Begum Mohammed
2014-01-01
In recent years, age estimation of living people has become increasingly important for a variety of reasons, including establishing criminal and legal responsibility, and for many other social events such as obtaining a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the seven left mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who met the study criteria were obtained. Development of the mandibular teeth in the left quadrant (from central incisor to second molar) was assessed and DA was estimated using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P > 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years, and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years, which is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular tooth development can be used to estimate mean DA using the Willems method, as well as an estimated age range for an individual of unknown CA.
NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION
International Nuclear Information System (INIS)
Brown, Robert A.; Soummer, Remi
2010-01-01
We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade fuel.
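The expected number of discoveries and the probability of zero discoveries can be illustrated with a simple independence approximation over per-observation completeness values; this is a schematic sketch, not the authors' full mission simulation.

```python
def mission_outcome(completenesses, eta):
    """Expected number of discoveries and probability of zero
    discoveries for a list of per-observation completeness values c_i,
    under the simplifying assumption that each observation
    independently yields a discovery with probability eta * c_i
    (eta = planet occurrence rate)."""
    expected = sum(eta * c for c in completenesses)
    p_zero = 1.0
    for c in completenesses:
        p_zero *= (1.0 - eta * c)
    return expected, p_zero
```

This captures the trade-off the abstract describes: a mission can be specified by a target expected yield together with an acceptably small probability of coming up empty.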
COMPARATIVE ANALYSIS OF ESTIMATION METHODS OF PHARMACY ORGANIZATION BANKRUPTCY PROBABILITY
Directory of Open Access Journals (Sweden)
V. L. Adzhienko
2014-01-01
The purpose of this study was to determine the probability of bankruptcy by various methods in order to predict the financial crisis of a pharmacy organization. Estimation of the probability of pharmacy organization bankruptcy was conducted using W. Beaver's method as adopted in the Russian Federation, together with an integrated assessment of financial stability based on scoring analysis. The results obtained by the different methods are comparable and show that the risk of bankruptcy of the pharmacy organization is small.
ACCELERATED METHODS FOR ESTIMATING THE DURABILITY OF PLAIN BEARINGS
Directory of Open Access Journals (Sweden)
Myron Czerniec
2014-09-01
The paper presents methods for determining the durability of slide bearings. The developed methods accelerate the calculation process by a factor of up to 100,000 compared to the accurate solution obtained with the generalized cumulative model of wear. The paper determines the accuracy of results for estimating the durability of bearings depending on the size of the blocks of constant conditions of contact interaction between a shaft with small out-of-roundness and a bush with a circular contour. The paper also gives an approximate dependence for determining accurate durability using either a more accurate or an additional method.
Method to Estimate the Dissolved Air Content in Hydraulic Fluid
Hauser, Daniel M.
2011-01-01
In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubilities of nitrogen and oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could be theoretically carried out at higher pressures and elevated
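The calculation chain described above (measured O2 partial pressure, Henry's law, then nitrogen inferred from the atmospheric solubility ratio) can be sketched as follows; the Henry coefficients are placeholders to be replaced by values from a lubricant solubility estimate.

```python
def dissolved_air_fraction(p_o2_meas, h_o2, h_n2,
                           p_o2_atm=0.21, p_n2_atm=0.79):
    """Estimate total dissolved air (volume of gas per volume of
    fluid) from a measured oxygen partial pressure.

    h_o2, h_n2: Henry's-law solubility coefficients (gas volume per
    fluid volume per unit partial pressure); placeholder values, to be
    taken from a standard lubricant solubility estimate.
    p_o2_atm, p_n2_atm: atmospheric partial pressures (atm).
    """
    v_o2 = h_o2 * p_o2_meas  # dissolved oxygen via Henry's law
    # nitrogen inferred from the solubility ratio at atmospheric conditions
    ratio_n2_o2 = (h_n2 * p_n2_atm) / (h_o2 * p_o2_atm)
    v_n2 = ratio_n2_o2 * v_o2
    return v_o2 + v_n2
```

For air-saturated fluid the estimate reduces, as it should, to the sum of the equilibrium oxygen and nitrogen solubilities.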
Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.
2013-12-01
In flood risk assessment, methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community, the probabilistic methods have historically been preferred to the deterministic ones. Presently a French research project named EXTRAFLO (RiskNat Program of the French National Research Agency, https://extraflo.cemagref.fr) deals with design values for extreme rainfall and floods. The objective of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, to obtain a better grasp of their respective fields of application. In this framework we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), the Agregee method (Margoum, 1992) and the Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013, Garavaglia et al., 2010) and the Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than standard flood frequency analysis. Another interesting result is that the differences between the various extreme flood quantile estimations of the compared methods increase with return period, remaining relatively moderate up to 100-year return levels. Results and discussions are here illustrated throughout with the example
Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation
Directory of Open Access Journals (Sweden)
Sharad Damodar Gore
2009-10-01
Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
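The relationship the abstract builds on, ORR obtained from OLS by adding a ridge penalty k to the normal equations, can be sketched as:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: solve (X'X) b = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ordinary_ridge(X, y, k):
    """Ordinary ridge regression (ORR): solve (X'X + kI) b = X'y.
    The penalty k > 0 shrinks the OLS solution toward zero, trading a
    little bias for a large variance reduction under multicollinearity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```

MUR applies the same modification starting from the unbiased ridge estimator (URR) instead of OLS; that construction is not reproduced here.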
Methods to estimate irrigated reference crop evapotranspiration - a review.
Kumar, R; Jat, M K; Shankar, V
2012-01-01
Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires accurate measurement of crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed ones, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools for estimating the evapotranspiration and water requirement of crops, which is essential information for designing or choosing the best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirement for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
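Among the empirical approaches typically covered by such reviews, the Hargreaves-Samani equation is one of the simplest, needing only temperature data and extraterrestrial radiation; a direct transcription:

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    t_mean, t_max, t_min: daily mean/max/min air temperature (deg C)
    ra: extraterrestrial radiation expressed as equivalent
        evaporation (mm/day), tabulated from latitude and day of year
    """
    return 0.0023 * (t_mean + 17.8) * math.sqrt(t_max - t_min) * ra
```

The diurnal temperature range acts as a proxy for cloudiness, which is why the method degrades in humid or coastal climates where that proxy breaks down.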
A simple method for estimation of phosphorous in urine
International Nuclear Information System (INIS)
Chaudhary, Seema; Gondane, Sonali; Sawant, Pramilla D.; Rao, D.D.
2016-01-01
Following internal contamination with ³²P, it is preferentially eliminated from the body in urine. It is estimated by in-situ precipitation of ammonium molybdo-phosphate (AMP) in urine followed by gross beta counting. The amount of AMP formed in situ depends on the amount of stable phosphorus (P) present in the urine; hence, it was essential to generate information regarding the urinary excretion of stable P. If the amount of P excreted is significant, the amount of AMP formed would correspondingly increase, leading to absorption of some of the β particles. The present study was taken up for the estimation of daily urinary excretion of P using the phospho-molybdate spectrophotometry method. A few urine samples received from radiation workers were analyzed, and based on the observed range of stable P in urine, the volume of sample required for ³²P estimation was finalized
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Directory of Open Access Journals (Sweden)
Julius Hannink
2017-08-01
Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
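A minimal double-integration step with linear drift correction, one common ingredient in such pipelines, might look like this; it is a generic sketch assuming zero velocity at both stride ends, not one of the three schemes evaluated in the paper.

```python
def integrate_stride(acc, dt):
    """Doubly integrate one stride of (gravity-removed) acceleration
    samples. A linear dedrifting forces velocity back to zero at the
    stride end, exploiting the zero-velocity assumption commonly used
    for foot-mounted sensors during stance."""
    # trapezoidal integration to velocity
    vel = [0.0]
    for i in range(1, len(acc)):
        vel.append(vel[-1] + 0.5 * (acc[i - 1] + acc[i]) * dt)
    # linear drift correction: subtract a ramp ending at the residual velocity
    n = len(vel) - 1
    vel = [v - vel[-1] * i / n for i, v in enumerate(vel)]
    # integrate velocity to position
    pos = [0.0]
    for i in range(1, len(vel)):
        pos.append(pos[-1] + 0.5 * (vel[i - 1] + vel[i]) * dt)
    return vel, pos
```

Without the dedrifting step, sensor bias grows quadratically in position, which is the main reason stride-wise integration with zero-velocity updates is standard practice.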
A method to estimate stellar ages from kinematical data
Almeida-Fernandes, F.; Rocha-Pinto, H. J.
2018-05-01
We present a method to build a probability density function (PDF) for the age of a star based on its peculiar velocities U, V, and W and its orbital eccentricity. The sample used in this work comes from the Geneva-Copenhagen Survey (GCS), which contains the spatial velocities, orbital eccentricities, and isochronal ages for about 14 000 stars. Using the GCS stars, we fitted the parameters that describe the relations between the distributions of kinematical properties and age. This parametrization allows us to obtain an age probability from the kinematical data. From this age PDF, we estimate an individual average age for the star using the most likely age and the expected age. We have obtained the stellar age PDF for 9102 stars from the GCS and have shown that the distribution of individual ages derived from our method is in good agreement with the distribution of isochronal ages. We also observe a decline in the mean metallicity with our ages for stars younger than 7 Gyr, similar to the one observed for isochronal ages. This method can be useful for the estimation of rough stellar ages for those stars that fall in areas of the Hertzsprung-Russell diagram where isochrones are tightly crowded. As an example of this method, we estimate the age of Trappist-1, an M8V star, obtaining t(UVW) = 12.50 (+0.29, -6.23) Gyr.
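The idea of turning a kinematical quantity into an age PDF can be sketched with a toy age-velocity relation; the dispersion model and its parameters below are illustrative assumptions, not the fit to the GCS data.

```python
import math

def age_pdf(v_pec, ages, sigma0=10.0, beta=0.35):
    """Posterior over a grid of ages (Gyr) for a star with peculiar
    speed v_pec (km/s), assuming a flat prior on the grid and an
    age-velocity relation sigma(t) = sigma0 * (t + 0.1)**beta.
    Both the dispersion law and its parameters are assumed for
    illustration only."""
    post = []
    for t in ages:
        sigma = sigma0 * (t + 0.1) ** beta
        # Gaussian likelihood of the observed speed given age t
        like = math.exp(-0.5 * (v_pec / sigma) ** 2) / sigma
        post.append(like)
    norm = sum(post)
    return [p / norm for p in post]
```

Because older populations are kinematically hotter, a large peculiar velocity shifts the posterior toward old ages; this is why the method yields broad, asymmetric uncertainties like the Trappist-1 example above.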
Comparison of different methods for estimation of potential evapotranspiration
International Nuclear Information System (INIS)
Nazeer, M.
2010-01-01
Evapotranspiration can be estimated with different available methods. The aim of this research study is to compare and evaluate the originally measured potential evapotranspiration from a Class A pan with the Hargreaves equation, the Penman equation, the Penman-Monteith equation, and the FAO56 Penman-Monteith equation. The evaporation rate recorded from the pan was greater than that estimated by the stated methods. For each evapotranspiration method, results were compared against mean monthly potential evapotranspiration (PET) from pan data according to FAO (ET₀ = K_pan × E_pan), from daily recorded data of twenty-five years (1984-2008). On the basis of statistical analysis, the differences between the pan data and the FAO56 Penman-Monteith method are not considered to be very significant (R² = 0.98) at the 95% confidence and prediction intervals. All methods require accurate weather data for precise results; for the purpose of this study, the past twenty-five years of data were analyzed and used, including maximum and minimum air temperature, relative humidity, wind speed, sunshine duration and rainfall. Based on linear regression analysis results, the FAO56 PMM ranked first (R² = 0.98), followed by the Hargreaves method (R² = 0.96), the Penman-Monteith method (R² = 0.94) and the Penman method (R² = 0.93). Obviously, using the FAO56 Penman-Monteith method with precise climatic variables for ET₀ estimation is more reliable than the other alternative methods; Hargreaves is simpler, relies only on air temperature data, and can be used as an alternative to the FAO56 Penman-Monteith method if other climatic data are missing or unreliable. (author)
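The FAO pan conversion used above is a one-line formula; the default coefficient here is a typical mid-range value, since the abstract does not report the K_pan actually used.

```python
def et0_from_pan(e_pan, k_pan=0.7):
    """FAO pan method: reference evapotranspiration (mm/day) from
    Class A pan evaporation, ET0 = K_pan * E_pan. K_pan depends on pan
    siting, humidity and wind; 0.7 is a typical mid-range value used
    here only as an illustrative default."""
    return k_pan * e_pan
```

Since K_pan < 1, this also explains the observation above that raw pan evaporation exceeds the model-based estimates.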
Vegetation index methods for estimating evapotranspiration by remote sensing
Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.
2010-01-01
Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.
A generic method for estimating system reliability using Bayesian networks
International Nuclear Information System (INIS)
Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel
2009-01-01
This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BNs for estimating system reliability have been published, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies that eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect, K2, a data mining algorithm, is used for finding associations between system components and thus for building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and an evaluation of the approach with literature case examples.
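The K2 scoring metric underlying this approach can be sketched. This is a minimal version of the Cooper-Herskovits score for discrete data; the toy data and variable names are illustrative assumptions, not the authors' implementation:

```python
from itertools import product
from math import lgamma

def k2_log_score(data, child, parents, arity=2):
    """Log of the K2 (Cooper-Herskovits) score of `child` given a candidate
    `parents` list.  data: list of tuples of discrete states in 0..arity-1.
    K2 greedily adds the parent that most increases this score."""
    score = 0.0
    for config in product(range(arity), repeat=len(parents)):
        counts = [0] * arity            # child-state counts for this parent config
        for row in data:
            if all(row[p] == c for p, c in zip(parents, config)):
                counts[row[child]] += 1
        n_ij = sum(counts)
        score += lgamma(arity) - lgamma(n_ij + arity)
        score += sum(lgamma(c + 1) for c in counts)
    return score

# Toy history: two binary components; component 1 fails exactly when 0 does,
# so the score should favour an arc 0 -> 1 over no parent at all.
data = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1)]
```

Because the score decomposes per node, each component's parent set can be searched independently, which is what makes the automated construction tractable.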
A generic method for estimating system reliability using Bayesian networks
Energy Technology Data Exchange (ETDEWEB)
Doguc, Ozge [Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Ramirez-Marquez, Jose Emmanuel [Stevens Institute of Technology, Hoboken, NJ 07030 (United States)], E-mail: jmarquez@stevens.edu
2009-02-15
This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.
A Novel Nonlinear Parameter Estimation Method of Soft Tissues
Directory of Open Access Journals (Sweden)
Qianqian Tong
2017-12-01
The elastic parameters of soft tissues are important for medical diagnosis and virtual surgery simulation. In this study, we propose a novel nonlinear parameter estimation method for soft tissues. Firstly, an in-house data acquisition platform was used to obtain external forces and their corresponding deformation values. To provide highly precise data for estimating nonlinear parameters, the measured forces were corrected using the constructed weighted combination forecasting model based on a support vector machine (WCFM_SVM). Secondly, a tetrahedral finite element parameter estimation model was established to describe the physical characteristics of soft tissues, using the substitution parameters of Young's modulus and Poisson's ratio to avoid solving complicated nonlinear problems. To improve the robustness of our model and avoid poor local minima, the initial parameters solved by a linear finite element model were introduced into the parameter estimation model. Finally, a self-adapting Levenberg–Marquardt (LM) algorithm was presented, which is capable of adaptively adjusting iterative parameters to solve the established parameter estimation model. The maximum absolute error of our WCFM_SVM model was less than 0.03 N, resulting in more accurate forces in comparison with other correction models tested. The maximum absolute error between the calculated and measured nodal displacements was less than 1.5 mm, demonstrating that our nonlinear parameters are precise.
Hexographic Method of Complex Town-Planning Terrain Estimate
Khudyakov, A. Ju
2017-11-01
The article addresses the problem of complex town-planning analysis based on the “hexographic” graphic-analytic method, compares it with conventional terrain estimation methods, and gives examples of the method's application. It describes the author's procedure for estimating restrictions and for building a mathematical model that reflects not only conventional town-planning restrictions but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of a territory's potential, and an unlimited number of estimated factors can be used. It can serve for the integrated assessment of urban areas and also for a preliminary evaluation of a territory's commercial attractiveness when preparing investment projects. The technique produces simple, informative graphics whose interpretation is straightforward; a definite advantage is that the results can be readily understood by people without professional training. It thus becomes possible to build a dialogue between professionals and the public at a new level, taking the interests of the various parties into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in the Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a planned series developing and describing the methodology of hexographic analysis in urban and architectural practice. It is also
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.
Application of the Monte Carlo method to estimate doses in a radioactive waste drum environment
International Nuclear Information System (INIS)
Rodenas, J.; Garcia, T.; Burgos, M.C.; Felipe, A.; Sanchez-Mayoral, M.L.
2002-01-01
During refuelling operations in a nuclear power plant, filtration is used to remove non-soluble radionuclides contained in the water from the reactor pool. Filter cartridges accumulate high radioactivity, so they are usually placed into a drum. When the operation ends, the drum is filled with concrete and stored along with other drums containing radioactive wastes. Operators working in the refuelling plant near these radwaste drums can receive high dose rates. It is therefore convenient to estimate those doses in order to apply the ALARA criterion for dose reduction to workers. The Monte Carlo method has been applied, using the MCNP 4B code, to simulate the drum containing contaminated filters and to estimate the doses produced in the drum environment. In the paper, an analysis of the results obtained with the MCNP code is performed: the influence on the evaluated doses of the distance from the drum and of interposed shielding barriers is studied, and the source term is analysed to check the importance of the isotope composition. Two different geometric models were considered in order to simplify calculations. Results have been compared with dose measurements in the plant in order to validate the calculation procedure. This work was developed at the Nuclear Engineering Department of the Polytechnic University of Valencia in collaboration with IBERINCO, in the frame of an R&D project sponsored by IBERINCO
DEFF Research Database (Denmark)
Silva, Filipe Miguel Faria da
2015-01-01
The installation of HVAC underground cables became more common in recent years, a trend expected to continue in the future. Underground cables are more complex than overhead lines, and the calculation of their resistance and reactance can be challenging and time consuming for frequencies that are not power frequency. Software packages capable of performing exact calculations of these two parameters exist, but simple equations able to estimate the reactance and resistance of an underground cable for the frequencies associated with a transient or a resonance phenomenon would be helpful. This paper
Estimating the Capacity of Urban Transportation Networks with an Improved Sensitivity Based Method
Directory of Open Access Journals (Sweden)
Muqing Du
2015-01-01
The throughput of a given transportation network is always of interest to the traffic administration, so as to evaluate the benefit of a transportation construction or expansion project before its implementation. A model of transportation network capacity, formulated as a mathematical program with equilibrium constraints (MPEC), defines this problem well. For practical applications, a modified sensitivity-analysis-based (SAB) method is developed to estimate the solution of this bilevel model. The highly efficient origin-based (OB) algorithm is extended for the precise solution of the combined model integrated in the network capacity model. The sensitivity analysis approach is also modified to simplify the inversion of the Jacobian matrix in large-scale problems. The solution produced in every iteration of SAB is constrained to be feasible to guarantee the success of the heuristic search. The numerical experiments show that the accuracy of the derivatives used in the linear approximation can significantly affect the convergence of the SAB method. The results also show that the proposed method obtains good suboptimal solutions from different starting points in the test examples.
Interpretation of the method of images in estimating superconducting levitation
International Nuclear Information System (INIS)
Perez-Diaz, Jose Luis; Garcia-Prada, Juan Carlos
2007-01-01
Among the papers devoted to the superconducting levitation of a permanent magnet over a superconductor using the method of images, there is a discrepancy of a factor of two in the estimated lift force. This is not a minor matter but an interesting fundamental question that contributes to understanding the physical phenomenon of 'imaging' on a superconductor surface. We resolve it, clarify the underlying physical behavior, and suggest the reinterpretation of some previous experiments.
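For orientation, the two competing results differ only by a constant prefactor. A sketch under the simplest image assumption (a coaxial dipole and its image at separation d = 2z), with the caveat that which prefactor is correct is exactly the question the paper addresses:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T*m/A]

def image_lift_force(m, z, half_energy=False):
    """Lift on a magnetic dipole of moment m [A*m^2] at height z [m] above an
    ideal superconductor, modelled by an image dipole at depth z (so the
    dipole-image separation is d = 2z).  The coaxial dipole-dipole repulsion
    is F = 3*mu0*m^2 / (2*pi*d^4); whether an extra factor 1/2 applies
    (induced image vs. permanent source) is the factor-of-two discrepancy
    discussed in the paper, selectable here via `half_energy`."""
    d = 2.0 * z
    f = 3.0 * MU0 * m ** 2 / (2.0 * math.pi * d ** 4)
    return 0.5 * f if half_energy else f
```

The functional form (same m and z dependence) is common to both camps; only the constant differs, which is why careful experiments are needed to discriminate.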
New method to estimate the frequency stability of laser signals
International Nuclear Information System (INIS)
McFerran, J.J.; Maric, M.; Luiten, A.N.
2004-01-01
A frequent challenge in the scientific and commercial use of lasers is the need to determine the frequency stability of the output optical signal. In this article we present a new method to estimate this quantity while avoiding the complexity of the usual technique. The new technique displays the result in terms of the usual time-domain measure of frequency stability: the square root of the Allan variance.
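For comparison, the conventional time-domain measure that the method targets can be computed directly from fractional-frequency samples. A minimal overlapping Allan deviation sketch (a textbook reference computation, not the paper's estimation technique):

```python
import math

def allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y at
    averaging factor m (averaging time tau = m * tau0, where tau0 is the
    sample interval)."""
    n = len(y)
    # Overlapping averages of m consecutive samples (stride 1)
    avgs = [sum(y[i:i + m]) / m for i in range(n - m + 1)]
    # Squared first differences of averages separated by m samples
    diffs = [(avgs[i + m] - avgs[i]) ** 2 for i in range(len(avgs) - m)]
    return math.sqrt(sum(diffs) / (2.0 * len(diffs)))

# A signal alternating between two frequencies has adev sqrt(1/2) at m = 1
adev = allan_deviation([0.0, 1.0] * 5, m=1)
```

A perfectly stable signal gives zero; plotting the deviation against tau reveals the dominant noise type from the slope.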
Diffeomorphic Iterative Centroid Methods for Template Estimation on Large Datasets
Cury , Claire; Glaunès , Joan Alexis; Colliot , Olivier
2014-01-01
A common approach for the analysis of anatomical variability relies on the estimation of a template representative of the population. The Large Deformation Diffeomorphic Metric Mapping (LDDMM) is an attractive framework for that purpose. However, template estimation using LDDMM is computationally expensive, which is a limitation for the study of large datasets. This paper presents an iterative method which quickly provides a centroid of the population in the shape space. This centr...
Method for developing cost estimates for generic regulatory requirements
International Nuclear Information System (INIS)
1985-01-01
The NRC has established a practice of performing regulatory analyses, reflecting costs as well as benefits, of proposed new or revised generic requirements. A method has been developed to assist the NRC in preparing the types of cost estimates required for this purpose and for assigning priorities in the resolution of generic safety issues. The cost of a generic requirement is defined as the net present value of the total lifetime cost incurred by the public, industry, and government in implementing the requirement for all affected plants. The method described here is for commercial light-water-reactor power plants. Estimating the cost of a generic requirement involves several steps: (1) identifying the activities that must be carried out to fully implement the requirement, (2) defining the work packages associated with the major activities, (3) identifying the individual elements of cost for each work package, (4) estimating the magnitude of each cost element, (5) aggregating individual plant costs over the plant lifetime, and (6) aggregating all plant costs and generic costs to produce a total, national, present value of lifetime cost for the requirement. The method developed addresses all six steps. In this paper, we discuss the first three.
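Steps (5) and (6) amount to discounting and summing cost streams. A minimal sketch of that aggregation; the function names and example figures are illustrative assumptions, not values from the method:

```python
def present_value(annual_costs, rate, base_year=0):
    """Net present value of a stream of yearly costs.
    annual_costs: list of (year, cost) pairs; rate: real discount rate."""
    return sum(c / (1.0 + rate) ** (y - base_year) for y, c in annual_costs)

def requirement_cost(per_plant_streams, rate, generic_cost=0.0):
    """Total national present value for a generic requirement: one-time
    generic (industry/agency-level) costs plus the lifetime PV summed
    over every affected plant's cost stream."""
    return generic_cost + sum(
        present_value(stream, rate) for stream in per_plant_streams)
```

Keeping each plant's stream separate until the final sum mirrors steps (5) and (6): per-plant lifetime aggregation first, then the national total.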
Geometric estimation method for x-ray digital intraoral tomosynthesis
Li, Liang; Yang, Yao; Chen, Zhiqiang
2016-06-01
It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.
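The core geometric idea, locating the focal spot from rays through known markers and their shadows, can be sketched as a least-squares line intersection. This is a simplified illustration that assumes the marker centres sit at a small known height above the detector plane; it is not the paper's full calibration procedure:

```python
import numpy as np

def source_position(markers, shadows):
    """Least-squares intersection of the rays through each marker (a known
    3-D point above the detector) and its shadow on the detector plane z=0.
    Returns the estimated focal-spot coordinates."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, s in zip(np.asarray(markers, float), np.asarray(shadows, float)):
        d = (p - s) / np.linalg.norm(p - s)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to the ray
        A += P
        b += P @ s
    return np.linalg.solve(A, b)

# Synthetic check: a known source and two markers 5 mm above the detector
S = np.array([10.0, 5.0, 300.0])
markers = np.array([[0.0, 0.0, 5.0], [50.0, 20.0, 5.0]])
t = S[2] / (S[2] - markers[:, 2])        # ray parameter where z reaches 0
shadows = S + t[:, None] * (markers - S)
estimate = source_position(markers, shadows)
```

With two or more non-collinear markers the normal matrix is invertible, so each shot position can be solved independently, matching the paper's claim that the method works for any scanning trajectory.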
Influence function method for fast estimation of BWR core performance
International Nuclear Information System (INIS)
Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.
1993-01-01
The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)
[New non-volumetric method for estimating peroperative blood loss].
Tachoires, D; Mourot, F; Gillardeau, G
1979-01-01
The authors have developed a new method for the estimation of peroperative blood loss by measurement of the haematocrit of a fluid obtained by diluting the blood from swabs in a known volume of isotonic saline solution. This value, referred to a nomogram, may be used to assess the volume of blood impregnating the swabs, in relation to the pre-operative or current haematocrit of the patient, by direct reading. The precision of the method is discussed. The results obtained justify its routine application in surgery in children, in patients with cardiac failure, and in all cases requiring precise compensation of peroperative blood loss.
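The underlying mass balance is simple: the red cells recovered in the washings must equal the red cells in the absorbed blood. A sketch of that relation (the paper itself uses a nomogram for direct reading; the numbers here are illustrative assumptions):

```python
def swab_blood_loss(h_mix, v_saline, h_patient):
    """Blood volume [mL] absorbed by the swabs, from the haematocrit h_mix
    of the swab washings diluted in v_saline mL of isotonic saline.
    Red-cell conservation: h_mix * (v_saline + v_blood) = h_patient * v_blood,
    solved for v_blood.  h_patient is the patient's haematocrit (fraction)."""
    return h_mix * v_saline / (h_patient - h_mix)

# Illustrative case: washings at 4% haematocrit in 500 mL saline,
# patient haematocrit 40%
loss_ml = swab_blood_loss(h_mix=0.04, v_saline=500.0, h_patient=0.40)
```

A nomogram simply pre-tabulates this ratio so theatre staff can read the loss without calculation.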
A new method to estimate genetic gain in annual crops
Directory of Open Access Journals (Sweden)
Flávio Breseghello
1998-12-01
The genetic gain obtained by breeding programs to improve quantitative traits may be estimated by using data from regional trials. A new statistical method for this estimate is proposed, comprising four steps: (a) joint analysis of the regional trial data using a generalized linear model to obtain adjusted genotype means and the covariance matrix of these means for the whole studied period; (b) calculation of the arithmetic mean of the adjusted genotype means, exclusively for the group of genotypes evaluated each year; (c) direct comparison of the yearly arithmetic means; and (d) estimation of the mean genetic gain by regression. Using the generalized least squares method, a weighted estimate of the mean genetic gain during the period is calculated. This method permits better cancellation of genotype x year and genotype x trial/year interactions, thus producing more precise estimates. It can be applied to unbalanced data, allowing the estimation of genetic gain in series of multilocational trials.
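Step (d) reduces to regressing yearly means on year. A minimal sketch using ordinary least squares; the paper uses generalized least squares weighted by the covariance matrix from the joint analysis, so this simplification and the toy numbers are assumptions:

```python
import numpy as np

def genetic_gain(year_means):
    """Mean genetic gain per year: slope of a regression of the yearly mean
    of adjusted genotype means on year.  Ordinary least squares here;
    the full method weights by the covariance of the adjusted means."""
    years = sorted(year_means)
    x = np.array(years, dtype=float)
    y = np.array([year_means[k] for k in years])
    # Centre the years for numerical conditioning; the slope is unchanged
    slope = np.polyfit(x - x.mean(), y, 1)[0]
    return float(slope)

# Toy series of yearly mean yields (t/ha): a steady 0.1 t/ha/year gain
gain = genetic_gain({1990: 5.0, 1991: 5.1, 1992: 5.2})
```

Dividing the slope by the period's mean yield expresses the gain as a percentage per year, the figure usually reported by breeding programs.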
Morgante, Enrico
2018-01-01
I review the construction of Simplified Models for Dark Matter searches. After discussing the philosophy and some simple examples, I turn to the question of theoretical consistency and to the implications of the necessary extensions of these models.
Sunk, Ilse-Gerlinde; Amoyo-Minar, Love; Stamm, Tanja; Haider, Stefanie; Niederreiter, Birgit; Supp, Gabriela; Soleiman, Afschin; Kainberger, Franz; Smolen, Josef S; Bobacz, Klaus
2014-11-01
To develop a radiographic score for the assessment of hand osteoarthritis (OA) that is based on histopathological alterations of the distal (DIP) and proximal (PIP) interphalangeal joints. DIP and PIP joints were obtained from cadavers (n=40). Plain radiographs of these joints were taken. Joint samples were prepared for histological analysis; cartilage damage was graded according to the Mankin scoring system. A 2×2 Fisher's exact test was applied to identify the radiographic features most likely to be associated with histological alterations. Receiver operating characteristic curves were analysed to determine radiographic thresholds. Intraclass correlation coefficients (ICCs) estimated intra- and inter-reader variability. Spearman's correlation was applied to examine the relationship between our score and histopathological changes. Differences between groups were determined by Student's t test. The Interphalangeal Osteoarthritis Radiographic Simplified (iOARS) score is presented. The score is based on histopathological changes of DIP and PIP joints and follows a simple dichotomy of whether OA is present or not. The iOARS score relies on three equally ranked radiographic features (osteophytes, joint space narrowing and subchondral sclerosis). For both DIP and PIP joints, the presence of one of these radiographic features indicates interphalangeal OA. Sensitivity and specificity were 92.3% and 90.9%, respectively, for DIP joints, and 75% and 100% for PIP joints. All readers were able to reproduce their own readings in DIP and PIP joints after 4 weeks. The overall agreement between the three readers was good; ICCs ranged from 0.586 to 0.945. Additionally, application of the iOARS score in a hand OA cohort revealed a higher prevalence of interphalangeal joint OA than the Kellgren and Lawrence score. The iOARS score is uniquely based on histopathological alterations of the interphalangeal joints in order to reliably determine OA of the DIP and PIP joints radiographically. Its high
SCoPE: an efficient method of Cosmological Parameter Estimation
International Nuclear Information System (INIS)
Das, Santanu; Souradeep, Tarun
2014-01-01
Markov Chain Monte Carlo (MCMC) samplers are widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of MCMC sampling, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
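The delayed-rejection idea can be sketched in one dimension: after a first-stage rejection, a narrower second proposal is tried with a corrected acceptance ratio that preserves reversibility. This is a generic Tierney-Mira-style sketch under symmetric Gaussian proposals, not SCoPE's implementation:

```python
import math
import random

def dr_metropolis(logp, x0, steps, s1=1.0, s2=0.25, seed=0):
    """1-D random-walk Metropolis with one delayed-rejection stage:
    a rejected wide proposal (scale s1) triggers a narrower retry (scale s2)
    accepted with the delayed-rejection ratio, raising the overall
    acceptance rate without biasing the target distribution."""
    rng = random.Random(seed)

    def log_q(center, y, s):  # log density of the Gaussian proposal
        return -0.5 * ((y - center) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))

    x, out = x0, []
    for _ in range(steps):
        y1 = x + rng.gauss(0.0, s1)
        a1 = math.exp(min(0.0, logp(y1) - logp(x)))
        if rng.random() < a1:
            x = y1
        else:
            # Second-stage proposal with the Tierney-Mira correction
            y2 = x + rng.gauss(0.0, s2)
            a1_rev = math.exp(min(0.0, logp(y1) - logp(y2)))
            num = logp(y2) + log_q(y2, y1, s1) + math.log(max(1e-300, 1.0 - a1_rev))
            den = logp(x) + log_q(x, y1, s1) + math.log(max(1e-300, 1.0 - a1))
            if rng.random() < math.exp(min(0.0, num - den)):
                x = y2
        out.append(x)
    return out

# Sample a standard normal target
chain = dr_metropolis(lambda x: -0.5 * x * x, 0.0, 20000, seed=1)
```

The correction terms in `num`/`den` account for the rejected first proposal, which is what keeps the two-stage kernel reversible with respect to the target.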
Methods for estimating low-flow statistics for Massachusetts streams
Ries, Kernell G.; Friesz, Paul J.
2000-01-01
Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The
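The drainage-area ratio method described above is a one-line transfer; a sketch with the recommended ratio limits (the 0.3 to 1.5 guard range is from the report, while the function and variable names are illustrative):

```python
def drainage_area_ratio(q_index, a_index, a_ungaged):
    """Transfer a low-flow statistic from an index streamgaging station to
    an ungaged site on the same stream: Q_ungaged = Q_index * (A_ungaged /
    A_index).  The report finds this as accurate as regression when the
    area ratio lies roughly between 0.3 and 1.5."""
    ratio = a_ungaged / a_index
    if not 0.3 <= ratio <= 1.5:
        raise ValueError("area ratio %.2f outside recommended 0.3-1.5 range"
                         % ratio)
    return q_index * ratio
```

Outside that ratio range the report's regression equations on basin characteristics are the preferred estimator.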
Estimation methods for process holdup of special nuclear materials
International Nuclear Information System (INIS)
Pillay, K.K.S.; Picard, R.R.; Marshall, R.S.
1984-06-01
The US Nuclear Regulatory Commission sponsored a research study at the Los Alamos National Laboratory to explore the possibilities of developing statistical estimation methods for materials holdup at highly enriched uranium (HEU)-processing facilities. Attempts at using historical holdup data from processing facilities and selected holdup measurements at two operating facilities confirmed the need for high-quality data and reasonable control over process parameters in developing statistical models for holdup estimations. A major effort was therefore directed at conducting large-scale experiments to demonstrate the value of statistical estimation models from experimentally measured data of good quality. Using data from these experiments, we developed statistical models to estimate residual inventories of uranium in large process equipment and facilities. Some of the important findings of this investigation are the following: prediction models for the residual holdup of special nuclear material (SNM) can be developed from good-quality historical data on holdup; holdup data from several of the equipment used at HEU-processing facilities, such as air filters, ductwork, calciners, dissolvers, pumps, pipes, and pipe fittings, readily lend themselves to statistical modeling of holdup; holdup profiles of process equipment such as glove boxes, precipitators, and rotary drum filters can change with time; therefore, good estimation of residual inventories in these types of equipment requires several measurements at the time of inventory; although measurement of residual holdup of SNM in large facilities is a challenging task, reasonable estimates of the hidden inventories of holdup to meet the regulatory requirements can be accomplished through a combination of good measurements and the use of statistical models. 44 references, 62 figures, 43 tables
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems: a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the output of a hidden Markov model used in bioinformatics or speech recognition, or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than the already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation, from the mathematical fundamentals, through algorithm development, to the application of the method in, for instance, aeroplane flight dynamic...
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-01-01
Background: In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law...
Energy Technology Data Exchange (ETDEWEB)
Mishra, Srikanta; Jin, Larry; He, Jincong; Durlofsky, Louis
2015-06-30
Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO_{2} storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO_{2}-water systems simulated using a compositional procedure. Stanford's Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO_{2} injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between
METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS
Directory of Open Access Journals (Sweden)
V. Panteleev Andrei
2017-01-01
Full Text Available The article considers the use of metaheuristic methods of constrained global optimization ("Big Bang - Big Crunch", "Fireworks Algorithm", "Grenade Explosion Method") for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on the results of observing the behavior of the mathematical model. The parameter values are obtained by minimizing a criterion that describes the total squared deviation of the state vector coordinates from the precise values observed at different moments of time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving these problems do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each of the examples, calculation results from all three optimization methods are given, together with recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The deduced parameter approximations differ only slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and
Energy Technology Data Exchange (ETDEWEB)
Rozel, Ch
1998-06-01
In the frame of safety studies, it is useful to know the progression of a possible release of radionuclides in groundwater, in order to determine the radiological impact on man of water ingestion, ingestion of irrigated plants, and ingestion of animal products (such as milk or meat). The objectives of this report are to present the different physical phenomena encountered during migration, to list the different available methods (for determining radionuclide migration in soil), to choose one method, and to check the coherence of its results against field experience. (N.C.)
Ambit determination method in estimating rice plant population density
Directory of Open Access Journals (Sweden)
Abu Bakar, B.,
2017-11-01
Full Text Available Rice plant population density is a key indicator in determining the crop setting and fertilizer application rate. It is therefore essential that the population density is monitored to ensure that a correct crop management decision is taken. The conventional method of determining plant population is by manually counting the total number of rice plant tillers in a 25 cm x 25 cm square frame. Sampling is done by randomly choosing several different locations within a plot to perform tiller counting. This sampling method is time consuming, labour intensive and costly. An alternative fast estimating method was developed to overcome this issue. The method relies on measuring the outer circumference or ambit of the contained rice plants in a 25 cm x 25 cm square frame to determine the number of tillers within that square frame. Data samples of rice variety MR219 were collected from rice plots in the Muda granary area, Sungai Limau Dalam, Kedah. The data were taken at 50 days and 70 days after seeding (DAS). A total of 100 data samples were collected for each sampling day. A good correlation was obtained for both 50 DAS and 70 DAS. The model was then verified by taking 100 samples with the latching strap for 50 DAS and 70 DAS. As a result, this technique can be used as a fast, economical and practical alternative to manual tiller counting. The technique can potentially be used in the development of an electronic sensing system to estimate paddy plant population density.
The Software Cost Estimation Method Based on Fuzzy Ontology
Directory of Open Access Journals (Sweden)
Plecka Przemysław
2014-12-01
Full Text Available In the course of the sales process of Enterprise Resource Planning (ERP) systems, it turns out that the standard system must be extended or changed (modified) according to specific customer's requirements. Therefore, suppliers face the problem of determining the cost of the additional works. Most methods of cost estimation bring satisfactory results only at the stage of pre-implementation analysis. However, suppliers need to know the estimated cost as early as at the stage of trade talks. During contract negotiations, they expect not only information about the costs of the works, but also about the risk of exceeding these costs or the margin of safety. One method that gives more accurate results at the stage of trade talks is the method based on an ontology of implementation costs. This paper proposes a modification of the method involving the use of fuzzy attributes, classes, instances and relations in the ontology. The result provides not only information about the value of the work, but also about the minimum and maximum expected cost, and the most likely range of costs. This solution allows suppliers to negotiate the contract effectively and increases the chances of successful completion of the project.
Method for estimating modulation transfer function from sample images.
Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta
2018-02-01
The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
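The pipeline described in the abstract (Fourier transform, log of the squared norm plotted against squared frequency, linear fit, Gaussian-approximated PSF, MTF) can be sketched numerically. This is an illustration only, not the authors' code: the white-noise test image, the low-frequency fitting band, and the cycles-per-pixel frequency convention via `np.fft.fftfreq` are all assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
original = rng.standard_normal((n, n))        # roughly flat power spectrum

sigma_true = 2.0                              # Gaussian PSF width in pixels
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
f2 = fx**2 + fy**2                            # squared spatial frequency
otf_true = np.exp(-2 * np.pi**2 * sigma_true**2 * f2)  # FT of a Gaussian PSF

blurred = np.fft.ifft2(np.fft.fft2(original) * otf_true).real

# For a Gaussian PSF, log|I|^2 = const - 4*pi^2*sigma^2*f^2, so a linear
# fit of the log squared Fourier norm against f^2 recovers the PSF width.
power = np.abs(np.fft.fft2(blurred))**2
band = (f2 > 0) & (f2 < 0.05)                 # fit the low-frequency part
slope, _ = np.polyfit(f2[band].ravel(), np.log(power[band]).ravel(), 1)
sigma_est = np.sqrt(-slope / (4 * np.pi**2))

def mtf(f):
    """Gaussian-approximated MTF at spatial frequency f (cycles/pixel)."""
    return np.exp(-2 * np.pi**2 * sigma_est**2 * f**2)
```

Because the test image here is blurred with a known Gaussian, `sigma_est` can be checked against `sigma_true`; with real sample images the fit band would need to be chosen where the log-plot is linear.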
A projection and density estimation method for knowledge discovery.
Directory of Open Access Journals (Sweden)
Adam Stanski
Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It makes it possible to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
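As background for the 1d-decomposition idea, the sketch below shows only the simplest possible factorization, a fully independent product of 1-d kernel density estimates; the paper's framework is more flexible than this, and the toy data and bandwidth are invented:

```python
import math

def kde_1d(samples, bandwidth):
    """Return a 1-d Gaussian kernel density estimator over `samples`."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

def product_density(data, bandwidth=0.5):
    """Factorized estimate p(x) = prod_d p_d(x_d) from rows of `data`."""
    dims = list(zip(*data))                      # one list per dimension
    marginals = [kde_1d(col, bandwidth) for col in dims]
    def density(point):
        p = 1.0
        for f, x in zip(marginals, point):
            p *= f(x)
        return p
    return density

data = [(0.1, 1.9), (0.0, 2.1), (-0.2, 2.0), (0.2, 1.8)]  # toy 2-d sample
p = product_density(data)
```

Every estimation happens in one dimension, which is what keeps this approach outside the reach of the curse of dimensionality; the cost is the independence assumption, which the paper's richer decompositions relax.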
Methods for cost estimation in software project management
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed at which the processes used in the software development field have changed makes the task of forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but there is a group of scientists for whom it can be solved using already-known mathematical methods (e.g. multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of estimating overall project costs is presented, together with a description of the existing software development process models. In the last part, a basic mathematical model based on genetic programming is proposed, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking as a basis the software product life cycle and the current challenges and innovations in the software development area. Based on the authors' experience and an analysis of the existing models and product life cycles, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
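For background on the COCOMO 81 model the abstract starts from: the basic COCOMO 81 effort equation is effort = a * KLOC^b person-months, with published coefficient pairs per project class. The genetic-algorithm search over the PROMISE data that the paper describes is not reproduced here; this sketch only evaluates the classic published coefficients.

```python
# Basic COCOMO 81 coefficients (Boehm, 1981) per project mode.
COCOMO81_BASIC = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Estimated effort in person-months for a project of `kloc` KLOC."""
    a, b = COCOMO81_BASIC[mode]
    return a * kloc ** b
```

A calibration approach such as the paper's would, in effect, search for (a, b) pairs that minimize a fitness function (e.g. error against PROMISE project records) instead of using these fixed values.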
Cooper, James F
2011-01-01
The first two parts of the IJPC series on endotoxin testing explained the nature of pyrogenic contamination and described various Limulus amebocyte lysate methods for detecting and measuring endotoxin levels with the bacterial endotoxin test described in the United States Pharmacopeia. This third article in the series describes the endotoxin test that is simplest to perform for pharmacists who prefer to conduct an endotoxin assay at the time of compounding in the pharmacy setting.
International Nuclear Information System (INIS)
Tansho, Ryohei; Takada, Yoshihisa; Mizutani, Shohei; Kohno, Ryosuke; Hotta, Kenji; Akimoto, Tetsuo; Hara, Yousuke
2013-01-01
A beam delivery system using a single-radius-beam-wobbling method has been used to form a conformal irradiation field for proton radiotherapy in Japan. A proton beam broadened by the beam-wobbling system provides a non-Gaussian distribution of projection angle, different in two mutually orthogonal planes with a common beam central axis, at a certain position. However, the conventional initial beam model for dose calculations has used an approximation of symmetric Gaussian angular distribution with the same variance in both planes (called here a Gaussian model with symmetric variance (GMSV)), instead of the accurate one. We have developed a more accurate initial beam model, defined as a non-Gaussian model with asymmetric variance (NonGMAV), and applied it to dose calculations using the simplified Monte Carlo (SMC) method. The initial beam model takes into account the different distances of the two beam-wobbling magnets from the iso-center and also the different amplitudes of kick angle given by each magnet. We have confirmed that the calculation using the SMC with NonGMAV reproduced the measured dose distribution formed in air by a mono-energetic proton beam passing through a square aperture collimator better than with the GMSV and with a Gaussian model with asymmetric variance (GMAV), in which different variances of angular distributions are used in the two mutually orthogonal planes. Measured dose distributions in a homogeneous phantom formed by a modulated proton beam passing through a range shifter and an L-shaped range compensator were consistent with calculations using the SMC with GMAV and NonGMAV, but in disagreement with calculations using the SMC with GMSV. Measured penumbrae in the lateral direction were reproduced better by calculations using the SMC with NonGMAV than by those with GMAV, when an aperture collimator with a smaller opening was used. We found that such a difference can be attributed to the non-Gaussian angular distribution of the
Estimating Return on Investment in Translational Research: Methods and Protocols
Trochim, William; Dilts, David M.; Kirk, Rosalind
2014-01-01
Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706
Estimating return on investment in translational research: methods and protocols.
Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind
2013-12-01
Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.
Webometrics: Some Critical Issues of WWW Size Estimation Methods
Directory of Open Access Journals (Sweden)
Srinivasan Mohana Arunachalam
2018-04-01
Full Text Available The number of webpages on the Internet has increased tremendously over the last two decades; however, only a part of it is indexed by the various search engines. This small portion is the indexable web of the Internet and is usually reachable from a search engine. Search engines play a big role in making the World Wide Web accessible to the end user, and how much of the World Wide Web is accessible depends on the size of the search engine's index. Researchers have proposed several ways to estimate this size of the indexable web using search engines, with and without privileged access to the search engine's database. Our report provides a summary of the methods used in the last two decades to estimate the size of the World Wide Web, and describes how this knowledge can be used in other aspects/tasks concerning the World Wide Web.
Estimating bacterial diversity for ecological studies: methods, metrics, and assumptions.
Directory of Open Access Journals (Sweden)
Julia Birtel
Full Text Available Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds during sequence clustering and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition we also measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques.
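Two of the standard ecological metrics the abstract compares, species richness and Shannon diversity, are straightforward to compute from an OTU abundance table. The sketch below is background only, with invented counts; it does not reproduce the study's sequence-processing pipeline.

```python
import math

def richness(counts):
    """Species richness: the number of taxa with non-zero abundance."""
    return sum(1 for c in counts if c > 0)

def shannon(counts):
    """Shannon diversity H' = -sum p_i ln p_i over non-zero taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

otus = [120, 45, 30, 5, 0, 1]   # invented per-OTU read counts for one sample
```

Because both metrics depend directly on which OTUs are detected and at what relative abundance, it is clear why the choice of 16S variable region, which changes the observed abundance table, propagates into every downstream diversity estimate.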
Analytical Method to Estimate the Complex Permittivity of Oil Samples
Directory of Open Access Journals (Sweden)
Lijuan Su
2018-03-01
Full Text Available In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated by measuring the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.
Estimation of metallic impurities in uranium by carrier distillation method
International Nuclear Information System (INIS)
Page, A.G.; Godbole, S.V.; Deshkar, S.B.; Joshi, B.D.
1976-01-01
An emission spectrographic method has been standardised for the estimation of twenty-two metallic impurities in uranium using carrier-distillation technique. Silver chloride with a concentration of 5% has been used as the carrier and palladium and gallium are used as internal standards. Precision and accuracy determinations of the synthetic samples indicate 6-15% deviation for most of the elements. Using the method described here, five uranium reference samples received from C.E.A.-France were analysed. The detection limits obtained for Cd, Co and W are lower than those reported in the literature while limits for the remaining elements are comparable to the values reported. The method is suitable for the chemical quality control analysis of uranium used for the Fast Breeder Test Reactor (FBTR) fuel. (author)
Method for estimating boiling temperatures of crude oils
International Nuclear Information System (INIS)
Jones, R.K.
1996-01-01
Evaporation is often the dominant mechanism for mass loss during the first few days following an oil spill. The initial boiling point of the oil and the rate at which the boiling point changes as the oil evaporates are needed to initialize some computer models used in spill response. The lack of available boiling point data often limits the usefulness of these models in actual emergency situations. A new computational method was developed to estimate the temperature at which a crude oil boils as a function of the fraction evaporated using only standard distillation data, which are commonly available. This method employs established thermodynamic rules and approximations, and was designed to be used with automated spill-response models. Comparisons with measurements show a strong correlation between results obtained with this method and measured values
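The abstract's inputs and outputs are concrete: standard distillation data in, boiling temperature as a function of fraction evaporated out. The paper's thermodynamic rules are not reproduced here; as a hedged sketch of the data shape only, the fragment below simply interpolates an invented (fraction evaporated, temperature) distillation curve.

```python
from bisect import bisect_left

# Hypothetical distillation curve: (mass fraction evaporated, temperature in K).
CURVE = [(0.0, 310.0), (0.1, 350.0), (0.2, 385.0),
         (0.3, 420.0), (0.4, 455.0), (0.5, 490.0)]

def boiling_temperature(fraction: float) -> float:
    """Linearly interpolate the boiling temperature at `fraction` evaporated."""
    xs = [f for f, _ in CURVE]
    if not 0.0 <= fraction <= xs[-1]:
        raise ValueError("fraction outside the tabulated range")
    i = bisect_left(xs, fraction)
    if xs[i] == fraction:
        return CURVE[i][1]
    (f0, t0), (f1, t1) = CURVE[i - 1], CURVE[i]
    return t0 + (t1 - t0) * (fraction - f0) / (f1 - f0)
```

A spill-response model would call such a function repeatedly as its evaporation routine advances the fraction evaporated over the first days of a spill.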
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
Energy Technology Data Exchange (ETDEWEB)
Caine, S [Glasgow Royal Infirmary (UK)]; Fleck, A [Charing Cross Hospital, London (UK). Medical School]
1984-09-01
A method is described for obtaining the specific activity of ¹⁴C in urea, essential in the measurement of the synthesis rate of a plasma protein in vivo, which is simpler than the original procedure. The principle is the measurement of ¹⁴CO₂ and NH₄⁺ separately, after incubation with urease. A simple alteration gives samples of ¹³CO₂ for mass spectrometry. The 'recoveries' of ¹⁴C and ¹³C in urea were invariably between 90 and 96% and the CV was 3%.
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-07-17
In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
Directory of Open Access Journals (Sweden)
Sugimoto Masahiro
2006-07-01
Full Text Available Background: In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. Results: The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. Conclusion: The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
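For background, the S-system form the abstract refers to writes each metabolite's rate of change as a difference of two power-law terms, dX_i/dt = α_i ∏_j X_j^{g_ij} − β_i ∏_j X_j^{h_ij}. The Jacobian-based estimation itself is not reproduced here; the sketch below only simulates an invented two-step straight pathway in this form with forward Euler steps.

```python
def s_system_step(x, alpha, g, beta, h, dt=0.01):
    """One forward-Euler step of an S-system with state vector `x`."""
    def power_law(rate, exps):
        term = rate
        for xj, e in zip(x, exps):
            term *= xj ** e
        return term
    return [xi + dt * (power_law(a, gi) - power_law(b, hi))
            for xi, a, gi, b, hi in zip(x, alpha, g, beta, h)]

# Invented parameters: X1 is produced at a constant rate and converted to
# X2 (first order); X2 is degraded (first order).
alpha = [1.0, 0.5]; g = [[0.0, 0.0], [1.0, 0.0]]
beta = [0.5, 0.4]; h = [[1.0, 0.0], [0.0, 1.0]]

x = [0.1, 0.1]
for _ in range(5000):
    x = s_system_step(x, alpha, g, beta, h)
```

At steady state the production and degradation terms balance: α₁ = β₁X₁ gives X₁ = 2, and α₂X₁ = β₂X₂ gives X₂ = 2.5, which the simulated trajectory approaches.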
Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2
Energy Technology Data Exchange (ETDEWEB)
Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.
1994-07-01
of complex issues that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach; the others discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC)* on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of the study. They describe the (non-radiological) atmospheric dispersion modeling that the study uses; review much of the relevant literature on ecological and health effects and on the economic valuation of those impacts; discuss several of the more complex and contentious issues in estimating externalities; and describe a method for depicting the quality of the scientific information that a study uses. The analytical methods and issues discussed generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each focusing on a different subject area.
Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods
Directory of Open Access Journals (Sweden)
aboalhasan fathabadi
2017-02-01
Full Text Available Introduction: In order to implement watershed practices that decrease the effects of soil erosion, the sediment output of the watershed must be estimated. The sediment rating curve is the most conventional tool for estimating sediment. Owing to sampling errors and short records, there is some uncertainty in estimating sediment with a rating curve. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations, Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GHezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of correlation (R2). In the GLUE methodology, different parameter sets were sampled randomly from a prior probability distribution. For each station, suspended sediment concentration values were estimated many times (100000 to 400000 times) using the sampled parameter sets and the selected rating curve equation. With respect to the likelihood function and a certain subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, the observed suspended sediment and discharge vectors were resampled with replacement B (set to
Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method
Hadi Farhadian; Homayoon Katibeh
2015-01-01
In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method was also applied for qualitative and quantitative classification of the tunnel sections. The results of these methods were compared and show reasonable agreement for all but two sections of the tunnel. In these t...
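One classical analytical expression commonly used for such estimates is Goodman's equation for steady inflow into a circular tunnel below the water table; the sketch below is only illustrative, and the hydraulic conductivity, head, and radius values are assumptions, not parameters of the Amirkabir tunnel:

```python
import math

def goodman_inflow(k, h, r):
    """Steady inflow per unit tunnel length (m^3/s per m) from Goodman's
    equation, for hydraulic conductivity k (m/s), water-table height h
    above the tunnel axis (m), and tunnel radius r (m)."""
    return 2.0 * math.pi * k * h / math.log(2.0 * h / r)

# Assumed illustrative values, not site data.
q = goodman_inflow(k=1e-7, h=100.0, r=3.0)
print(f"{q:.2e} m^3/s per metre of tunnel")
```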
Methods on estimation of the evaporation from water surface
International Nuclear Information System (INIS)
Trajanovska, Lidija; Tanushevska, Dushanka; Aleksovska, Nina
2001-01-01
The world's water supply depends closely on the hydrological cycle, which connects water circulation along the Earth-atmosphere route through evaporation, precipitation, and runoff. Evaporation occurs wherever the atmosphere is unsaturated with water vapor (i.e., wherever humidity is in short supply) and depends on the climatic conditions of the region. The purpose of this paper is to determine a method for estimating evaporation from natural water surfaces in our areas, that is, to determine it as exactly as possible. (Original)
International Nuclear Information System (INIS)
Sorensen, H.; Nordskov, A.; Sass, B.; Visler, T.
1987-01-01
A simplified version of a deuterium pellet gun based on the pipe-gun principle is described. The pipe gun is made from a continuous tube of stainless steel, and gas is fed in from the muzzle end only. The pellet length is determined by the temperature gradient along the barrel just outside the freezing cell. Velocities of around 1000 m/s with a scatter of ±2% are obtained with a propellant gas pressure of 40 bar
UTILITY OF SIMPLIFIED LABANOTATION
Directory of Open Access Journals (Sweden)
Maria del Pilar Naranjo
2016-02-01
After using simplified Labanotation as a didactic tool for some years, the author concludes that it serves at least three main functions: efficient use of rehearsal time, social recognition, and a broadening of the dancer's choreographic consciousness. The dancing community's doubts about the issue of ‘to write or not to write’ are largely determined by context and by its own choreographic evolution, but the utility of Labanotation as a tool for knowledge is undeniable.
Directory of Open Access Journals (Sweden)
Carolyn M Higuchi
In vitro growth of follicles is a promising technology for generating large quantities of competent oocytes from immature follicles and could expand the potential of assisted reproductive technologies (ART). Isolated follicle culture is currently the primary method used to develop and mature follicles in vitro. However, this procedure typically requires complicated, time-consuming manipulations, as well as destruction of the normal ovarian microenvironment. Here we describe a simplified 3-D ovarian culture system that can be used to mature multilayered secondary follicles into antral follicles, generating developmentally competent oocytes in vitro. Ovaries recovered from mice at 14 days of age were cut into 8 pieces and placed onto a thick Matrigel drop (3-D culture) for 10 days of culture. As a control, ovarian pieces were cultured on a membrane filter without any Matrigel drop (Membrane culture). We also evaluated the effect of activin A treatment on follicle growth within the ovarian pieces with or without Matrigel support. Thus we tested four different culture conditions: C (Membrane/activin-), A (Membrane/activin+), M (Matrigel/activin-), and M+A (Matrigel/activin+). We found that the cultured follicles and oocytes steadily increased in size regardless of the culture condition used. However, antral cavity formation occurred only in the follicles grown in the 3-D culture system (M, M+A). Following ovarian tissue culture, full-grown GV oocytes were isolated from the larger follicles to evaluate their developmental competence by subjecting them to in vitro maturation (IVM) and in vitro fertilization (IVF). Maturation and fertilization rates were higher for oocytes grown in 3-D culture (M, M+A) than for those grown in membrane culture (C, A). In particular, activin A treatment further improved 3-D culture (M+A) success. Following IVF, two-cell embryos were transferred to recipients to generate full-term offspring. In summary, this simple and easy 3-D ovarian
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters of TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.
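The paper's parsimonious model and its convergence condition are not reproduced here, but the underlying notion of a higher-order Markov chain can be illustrated with a plain second-order frequency estimator on a toy sequence (all values made up):

```python
import numpy as np

# Toy categorical sequence; states and values are illustrative only.
seq = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0, 1, 2, 1, 1, 0]

def estimate_second_order(seq, n_states):
    """Estimate second-order transition probabilities
    P(x_t | x_{t-2}, x_{t-1}) by normalized frequency counts."""
    counts = np.zeros((n_states, n_states, n_states))
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[a, b, c] += 1.0
    totals = counts.sum(axis=2, keepdims=True)
    # Avoid division by zero for contexts never observed in the data.
    return np.divide(counts, totals, out=np.zeros_like(counts),
                     where=totals > 0)

P = estimate_second_order(seq, 3)
# Distribution of the next state after observing the context (0, 1).
print(P[0, 1])
```

A parsimonious higher-order model such as the one in the paper replaces this full transition tensor with a small set of mixture weights over lagged first-order transitions, which is what keeps the parameter count manageable.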
Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem
2017-01-01
In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations
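The modulating-function idea can be illustrated on a scalar toy problem rather than the damped wave equation used in the paper: multiplying the model by a function that vanishes at both ends of the interval and integrating by parts eliminates the derivative of the measured signal, so the unknown coefficient is recovered from integrals of the data alone. The first-order ODE and all values below are illustrative assumptions:

```python
import numpy as np

# Toy model y'(t) = -a * y(t); we pretend y is a measured signal and
# recover 'a' without differentiating the data.
a_true = 1.5
t = np.linspace(0.0, 1.0, 2001)
y = np.exp(-a_true * t)  # "measured" signal (noise-free here)

phi = t**2 * (1.0 - t)**2                    # modulating function, phi(0)=phi(1)=0
dphi = 2*t*(1.0 - t)**2 - 2*t**2*(1.0 - t)   # its analytic derivative

def integrate(f, t):
    """Trapezoidal rule on the sampling grid."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

# Integration by parts with vanishing boundary terms gives
#   integral(phi' * y) = -integral(phi * y') = a * integral(phi * y),
# so the coefficient follows from a ratio of two integrals.
a_est = integrate(dphi * y, t) / integrate(phi * y, t)
print(a_est)  # close to 1.5
```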
Estimation of creatinine in Urine sample by Jaffe's method
International Nuclear Information System (INIS)
Wankhede, Sonal; Arunkumar, Suja; Sawant, Pramilla D.; Rao, B.B.
2012-01-01
In-vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. A truly representative bioassay sample is one in which all voids are collected over a 24-h period; as this is technically difficult, however, overnight urine samples collected by the workers are analyzed instead. These overnight samples are collected over 10-16 h, but in the absence of any specific information a 12-h duration is assumed and the observed results are corrected accordingly to obtain the daily excretion rate. To reduce the uncertainty due to the unknown duration of sample collection, the IAEA has recommended two methods, viz., measurement of the specific gravity and of the creatinine excretion rate in the urine sample. Creatinine is a final metabolic product of creatine phosphate in the body and is excreted at a steady rate by people with normally functioning kidneys. It is therefore often used as a normalization factor for estimating the duration of sample collection. The present study reports the chemical procedure standardized for the estimation of creatinine and its application to urine samples collected from occupational workers. The procedure was standardized and applied successfully to bioassay samples collected from the workers. The creatinine excretion rate observed for these workers is lower than that reported in the literature. Further work is in progress to generate a databank of creatinine excretion rates for most of the workers and to study the variability in the creatinine coefficient for the same individual, based on the analysis of samples collected over different durations
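The creatinine normalization described above amounts to a simple scaling: the fraction of a day a sample represents is taken as the ratio of its creatinine content to the daily creatinine excretion. The sketch below uses an assumed round reference excretion rate, not a value measured in this study:

```python
# Assumed typical adult daily creatinine excretion (g/day); this is an
# illustrative round figure, not a result from the study.
REFERENCE_CREATININE_G_PER_DAY = 1.7

def daily_excretion_rate(activity_in_sample_bq, creatinine_in_sample_g):
    """Scale the measured activity in a partial-day urine sample to a
    24-h equivalent using its creatinine content."""
    fraction_of_day = creatinine_in_sample_g / REFERENCE_CREATININE_G_PER_DAY
    return activity_in_sample_bq / fraction_of_day

# A sample containing 0.85 g creatinine represents about half a day,
# so its measured activity is doubled to estimate the daily excretion.
print(daily_excretion_rate(10.0, 0.85))  # → 20.0
```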