Sample records for simplified estimation method

  1. Evaluating simplified methods for liquefaction assessment for loss estimation (United States)

    Kongar, Indranil; Rossetto, Tiziana; Giovinazzi, Sonia


    Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at
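
As a sketch of the binary-classification scoring used in this record, the snippet below forecasts liquefaction wherever LPI meets a threshold and reports the fraction of liquefied and non-liquefied sites forecast correctly. The LPI values and observations are invented sample data, not the study's dataset.

```python
def classification_rates(lpi_values, observed, threshold):
    """Forecast liquefaction wherever LPI >= threshold and return the
    fraction of liquefied sites forecast correctly (true positive rate)
    and of non-liquefied sites forecast correctly (true negative rate)."""
    tp = fn = tn = fp = 0
    for lpi, liquefied in zip(lpi_values, observed):
        forecast = lpi >= threshold
        if liquefied and forecast:
            tp += 1
        elif liquefied:
            fn += 1
        elif forecast:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Invented sample sites, not the study's data
lpi = [12.0, 8.5, 3.1, 6.9, 15.2, 0.4, 7.0, 2.2]
obs = [True, True, False, True, True, False, True, False]
tpr, tnr = classification_rates(lpi, obs, threshold=7.0)
```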

  2. A simplified method of estimating noise power spectra

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, K.M.


    A technique is proposed to estimate the radial dependence of the noise power spectrum of images, in which the calculations are conducted solely in the spatial domain of the noise image. The noise power spectrum averaged over a radial spatial-frequency interval is obtained from the variance of a noise image that has been convolved with a small kernel that approximates a Laplacian operator. Recursive consolidation of the image by factors of two in each dimension yields estimates of the noise power spectrum over the full range of spatial frequencies.
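
The procedure described above can be sketched as follows (up to normalization constants, which the abstract does not give): convolve the noise image with a small Laplacian-approximating kernel, take the variance as the band-averaged noise-power estimate, then consolidate the image by factors of two and repeat. The specific 3x3 kernel and number of levels are assumptions.

```python
import numpy as np

def laplacian_filter(img):
    """Convolve with a 3x3 kernel approximating a Laplacian operator;
    edges are dropped for simplicity."""
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    out = np.zeros_like(img)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(k * img[i - 1:i + 2, j - 1:j + 2])
    return out[1:-1, 1:-1]

def nps_estimates(noise_img, levels=3):
    """Variance of the Laplacian-filtered image at successively
    consolidated (2x2-binned) scales; each value is proportional to the
    noise power averaged over one radial spatial-frequency band."""
    estimates = []
    img = noise_img.astype(float)
    for _ in range(levels):
        estimates.append(laplacian_filter(img).var())
        h, w = img.shape
        img = img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return estimates
```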

  3. Estimating Agricultural Water Use using the Operational Simplified Surface Energy Balance Evapotranspiration Estimation Method (United States)

    Forbes, B. T.


    Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.

  4. Evaluation of a Simplified Method to Estimate the Peak Inter-Story Drift Ratio of Steel Frames with Hysteretic Dampers

    Directory of Open Access Journals (Sweden)

    Jae-Do Kang


    In this paper, a simplified method is proposed to estimate the peak inter-story drift ratios of steel frames with hysteretic dampers. The simplified method involves the following: (1) the inelastic spectral displacement is estimated using a single-degree-of-freedom (SDOF) system with multi-springs, which is equivalent to a steel frame with dampers and in which the multi-springs represent the hysteretic behavior of the dampers; (2) the first inelastic mode vector is estimated using a pattern of story drifts obtained from nonlinear static pushover analysis; and (3) the effects of modes higher than the first mode are estimated using the jth modal period, jth mode vector, and jth modal damping ratio obtained from eigenvalue analysis. The accuracy of the simplified method is evaluated against the results of nonlinear time history analysis (NTHA) on a series of three-story, six-story, and twelve-story steel moment-resisting frames with steel hysteretic dampers. Based on a comparison of the peak inter-story drift ratios estimated by the simplified method and those computed via NTHA using an elaborate analytical model, the accuracy of the simplified method is sufficient for evaluating seismic demands.

  5. Simplified in situ method for estimating ruminal dry matter and protein degradability of concentrates. (United States)

    Olaisen, V; Mejdell, T; Volden, H; Nesse, N


    In this study, dry matter and crude protein in situ degradation data from different concentrate feeds were used to test the accuracy of effective degradability (ED) measures when using reduced ruminal incubation times compared with models based on seven or eight incubation times. The ED was estimated both with and without correction for nylon bag particle loss. The crude protein ED corrected for particle loss of the calibration data set was widely distributed in a range from 16 to 90% with an overall mean value of 60.4%, and the dry matter ED was distributed in the range from 22.7 to 80.7%, with a mean value of 56.9%. The simplified method was developed based on bilinear regression models where all combinations of one to three disappearance values were tested to find the optimal time-point combinations to estimate ED. Bilinear regression models based on two and three ruminal incubation times gave estimates similar to the standard in situ method over a wide range of passage rates, both for the data set used to parameterize the models and for the independent data set used to evaluate the models. Using two incubation times, the bilinear model based on 4 and 24 h gave the most accurate estimates, and the models based on 2, 8, and 24 h for uncorrected data and 4, 8, and 24 h for corrected data were the most accurate of the three-time-point bilinear models. The number of nylon bags used by these models was reduced by 58 to 78% compared with the standard in situ method, and the total incubation time needed was substantially reduced.
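
A minimal sketch of a two-time-point regression of this kind, assuming a model that is linear in the 4 h and 24 h disappearance values; the calibration data below are hypothetical, and the paper's actual bilinear model form and coefficients differ.

```python
import numpy as np

# Hypothetical calibration data: dry matter disappearance (%) after 4 h
# and 24 h of ruminal incubation, and the reference ED computed from the
# full seven-to-eight-time-point in situ method.
d4 = np.array([25.0, 32.0, 18.0, 40.0, 28.0, 35.0])
d24 = np.array([70.0, 78.0, 55.0, 85.0, 68.0, 80.0])
ed_ref = np.array([52.0, 60.0, 40.0, 68.0, 54.0, 62.0])

# Fit ED ~ b0 + b1*D4 + b2*D24 by least squares.
X = np.column_stack([np.ones_like(d4), d4, d24])
coef, *_ = np.linalg.lstsq(X, ed_ref, rcond=None)

def ed_simplified(dis4, dis24):
    """Effective degradability estimated from only two incubation times."""
    return coef[0] + coef[1] * dis4 + coef[2] * dis24
```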

  6. Development, verification, and application of a simplified method to estimate total-streambed scour at bridge sites in Illinois (United States)

    Holmes, Robert R.; Dunn, Chad J.


    A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and a reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in larger values of 500-year flood total-streambed scour than the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, which are geographically distributed throughout Illinois, and 15 county highway bridges.

  7. A Simplified Method for Estimating Subsonic Lift-Curve Slope at Low Angles of Attack for Irregular Planform Wings (United States)

    Spencer, Bernard, Jr.


    A simplified method is presented for estimating the lift-curve slope of irregular planform wings at subsonic speeds and low angles of attack. The present method is an extension of that derived in NACA Technical Note 3911 and enables quick estimates of subsonic lift-curve slope to be made, whereas more refined procedures require considerable time and computation. Comparison of experimental and estimated values for a wide range of wing planforms having discontinuous spanwise sweep variation indicates good agreement. A comparison of the present procedure with a 20-step vortex method (NACA Research Memorandum L50L13) also indicated good agreement for a variable-sweep configuration.

  8. Simplified Analytical Method for Estimating the Resistance of Lock Gates to Ship Impacts

    Directory of Open Access Journals (Sweden)

    Loïc Buldgen


    The present paper is concerned with the design of lock gates subjected to ship impacts. A simplified analytical method is presented to evaluate the resistance of such structures under collision. The basic idea is to assume that the resistance is first provided through a local deforming mode, corresponding to a localized crushing of some impacted structural elements. For consecutive larger deformations, the resistance is then mostly provided through a global deforming mode, corresponding to an overall movement of the entire gate. For assessing the resistance in the case of the local deforming mode, the structure is divided into a given number of large structural entities called “superelements.” For each of them, a relation between the resistance of the gate and the penetration of the striking ship is established. However, as some results are already available in the literature, this subject is not treated extensively in this paper. On the contrary, the calculation of the resistance of the gate provided through the global mode is detailed, and the strategy to switch from local to global deformation is highlighted. Finally, we propose to validate our developments by making a comparison between results obtained numerically and those predicted by the present analytical approach.

  9. A comparison of consumptive-use estimates derived from the simplified surface energy balance approach and indirect reporting methods (United States)

    Maupin, Molly A.; Senay, Gabriel B.; Kenny, Joan F.; Savoca, Mark E.


    Recent advances in remote-sensing technology and Simplified Surface Energy Balance (SSEB) methods can provide accurate and repeatable estimates of evapotranspiration (ET) when used with satellite observations of irrigated lands. Estimates of ET are generally considered equivalent to consumptive use (CU) because they represent the part of applied irrigation water that is evaporated, transpired, or otherwise not available for immediate reuse. The U.S. Geological Survey compared ET estimates from SSEB methods to CU data collected for 1995 using indirect methods as part of the National Water Use Information Program (NWUIP). Ten-year (2000-2009) average ET estimates from SSEB methods were derived using Moderate Resolution Imaging Spectroradiometer (MODIS) 1-kilometer satellite land surface temperature and gridded weather datasets from the Global Data Assimilation System (GDAS). County-level CU estimates for 1995 were assembled and referenced to 1-kilometer grid cells to synchronize with the SSEB ET estimates. Both datasets were seasonally and spatially weighted to represent the irrigation season (June-September) and those lands that were identified in the county as irrigated. A strong relation (R2 greater than 0.7) was determined between NWUIP CU and SSEB ET data. Regionally, the relation is stronger in arid western states than in humid eastern states, and positive and negative biases are both present at state-level comparisons. SSEB ET estimates can play a major role in monitoring and updating county-based CU estimates by providing a quick and cost-effective method to detect major year-to-year changes at county levels, as well as providing a means to disaggregate county-based ET estimates to sub-county levels. More research is needed to identify the causes for differences in state-based relations.

  10. Proposal simplified estimation method for annual electricity generation of photovoltaic systems; Taiyoko hatsuden shisutemu no nenkan hatsudenryo no kan'i suiteihoshiki no teian

    Energy Technology Data Exchange (ETDEWEB)

    Unozawa, H.; Kurokawa, K. [Tokyo University of Agriculture and Technology, Tokyo (Japan); Sugibuchi, K. [Daido Hoxan Inc., Osaka (Japan)


    For designing photovoltaic (PV) systems appropriately, it is necessary to accurately estimate the solar radiation on an inclined surface at arbitrary azimuth and tilt angles. The authors propose a nomograph that graphically presents the inclined-surface irradiation database for 225 sites compiled by the Japan Weather Association, which allows solar radiation on an inclined surface to be estimated easily. In addition, by combining it with the solar radiation climatic zones of Japan, solar radiation on an inclined surface can be obtained by a simple procedure. In this paper, a simplified calculation sheet for annual electricity generation is also proposed. With this work, fundamental planning can be accomplished without any professional knowledge. (author)

  11. Simplified Life-Cycle Cost Estimation (United States)

    Remer, D. S.; Lorden, G.; Eisenberger, I.


    A simple method for life-cycle cost (LCC) estimation avoids pitfalls inherent in formulations that require separate estimates of inflation and interest rates. The method depends for its validity on the observation that interest and inflation rates closely track each other.
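
The idea can be illustrated as follows: if the nominal interest rate tracks inflation, the real discount factor is ~1 and the LCC reduces to the plain sum of constant-dollar costs. The cost stream below is hypothetical, and this sketch is an inference from the observation quoted above, not the paper's exact formulation.

```python
def lcc_full(costs, inflation, interest):
    """Present value of a constant-dollar cost stream: each year's cost is
    inflated and then discounted at the nominal interest rate."""
    return sum(c * ((1.0 + inflation) / (1.0 + interest)) ** t
               for t, c in enumerate(costs))

def lcc_simplified(costs):
    """When interest closely tracks inflation, the real discount factor is
    ~1 and the LCC reduces to the plain sum of constant-dollar costs."""
    return sum(costs)

costs = [1000, 200, 200, 200, 500]  # hypothetical yearly costs, year-0 dollars
```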

  12. Estimation of the potential evapotranspiration by a simplified Penman method

    Directory of Open Access Journals (Sweden)

    Nilson A. Villa Nova


    The numerous methods for calculating the potential or reference evapotranspiration (ETP or ETo) almost always do so for a 24-hour period, including values of climatic parameters from the nocturnal period (daily averages), which have a nil effect on transpiration, the main component of evaporative demand in cases of localized irrigation. The aim of the current manuscript was to develop a rather simplified model for the calculation of diurnal daily ETo. It is an alternative approach based on the theoretical background of the Penman method that does not require values of the aerodynamic conductance for latent and sensible heat fluxes, or data on wind speed and relative humidity of the air. The comparison between the diurnal values of ETo measured in high-precision weighing lysimeters and those estimated by either the Penman-Monteith method or the Simplified-Penman approach under study also points to a fairly consistent agreement among the potential-demand calculation criteria. The Simplified-Penman approach was a feasible alternative for estimating ETo under the local meteorological conditions of the two field trials. Given the availability of the required input data, the method could be employed in other climatic regions for scheduling irrigation.
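
As an illustration of a radiation-only ETo estimate in this spirit, the sketch below computes the radiation (equilibrium) term of the Penman equation, which needs neither wind speed nor relative humidity. This is a common stand-in form, not the authors' actual simplified formula, and the constants are standard textbook values.

```python
import math

GAMMA = 0.0665  # psychrometric constant, kPa/degC (typical sea-level value)
LHV = 2.45      # latent heat of vaporization, MJ/kg

def slope_svp(t_c):
    """Slope of the saturation vapour pressure curve at t_c (kPa/degC)."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def eto_diurnal(rn_day, t_mean):
    """Daytime reference ET (mm) from daytime net radiation (MJ/m2) and
    mean air temperature (degC) alone: the radiation (equilibrium) term
    of Penman."""
    delta = slope_svp(t_mean)
    return (delta / (delta + GAMMA)) * rn_day / LHV
```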

  13. Estimating performance of HRSGs is simplified

    Energy Technology Data Exchange (ETDEWEB)

    Ganapathy, V. (ABCO Industries, Abilene, TX (United States))


    This article describes how to calculate heat recovery steam generator (HRSG) performance without the use of an elaborate computer program. In designing HRSGs, which are used in gas-turbine-based cogeneration plants, the following information must be calculated: HRSG steam production in the fired and unfired modes, oxygen content of the exhaust, and fuel consumption. With the simplified procedure described, engineers can obtain this information quickly without consulting HRSG suppliers or using a computer program.

  14. A simplified multisupport response spectrum method (United States)

    Ye, Jihong; Zhang, Zhiqiang; Liu, Xianming


    A simplified multisupport response spectrum method is presented. For a structure with a first natural period of less than 2 s, the structural response is the sum of two components: the pseudostatic response caused by the inconsistent motions of the structural supports, and the structural dynamic response to ground motion accelerations. This method is formally consistent with the classical response spectrum method, and the effects of multisupport excitation are considered for any modal response spectrum or modal superposition. If the seismic inputs at each support are the same, the support displacements caused by the pseudostatic response become rigid-body displacements. The response spectrum in the case of multisupport excitations then reduces to that for uniform excitations. In other words, this multisupport response spectrum method is a modification and extension of the existing response spectrum method under uniform excitation. Moreover, most of the coherency coefficients in this formulation are simplified by approximating the ground motion excitation as white noise. The results indicate that this simplification can reduce the calculation time while maintaining accuracy. Furthermore, the internal forces obtained by the multisupport response spectrum method are compared with those produced by the traditional response spectrum method in two case studies of existing long-span structures. Because the effects of inconsistent support displacements are not considered in the traditional response spectrum method, the values of internal forces near the supports are underestimated. These regions are important potential failure points and deserve special attention in the seismic design of reticulated structures.
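
A toy sketch of the two components, assuming a 2-DOF shear frame with hypothetical stiffnesses: the pseudostatic part follows from static condensation of the stiffness matrix, and the dynamic part combines modal spectral responses by SRSS. Identical support motions yield a pure rigid-body pseudostatic displacement, recovering the uniform-excitation case.

```python
import numpy as np

# Toy 2-DOF frame spanning two supports (hypothetical stiffness values).
k = 1000.0
K_ss = np.array([[2.0 * k, -k], [-k, 2.0 * k]])  # structure DOFs
K_sb = np.array([[-k, 0.0], [0.0, -k]])          # coupling to the supports

# Pseudostatic influence matrix: u_ps = R @ u_supports
R = -np.linalg.solve(K_ss, K_sb)

def pseudostatic(u_supports):
    """Pseudostatic displacements caused by imposed support motions."""
    return R @ np.asarray(u_supports, dtype=float)

def srss(modal_responses):
    """SRSS combination of modal spectral responses (dynamic component)."""
    r = np.asarray(modal_responses, dtype=float)
    return float(np.sqrt((r ** 2).sum()))
```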

  15. Simplified analysis of sub-wavelength triangular gratings by simplified modal method. (United States)

    Sridharan, Gayathri; Bhattacharya, Shanti


    A phase-equivalence of a triangular grating and a "corresponding" blazed structure is proposed. This equivalence is used to simplify the analysis of the grating, which otherwise would require the repetitive application of the simplified modal method to each lamellar grating that constitutes the triangular grating. The concept is used to arrive at an equation for the phase introduced by the triangular grating. The proposed model is verified by finite element simulations. A method of fabricating a triangular grating in quartz is presented. The proposed theory, along with optical testing, can be used as a non-destructive means by which to estimate the height of the triangular grating during the dry etching process.

  16. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J


    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  17. Estimation of cardiac reserve by peak power: validation and initial application of a simplified index


    Armstrong, G; Carlier, S.; Fukamachi, K; Thomas, J; Marwick, T


    OBJECTIVES—To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity.
DESIGN—Development of a simplified method of measurement and observational study.
SETTING—Tertiary referral centre for cardiothoracic disease.
SUBJECTS—For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. ...

  18. Estimating long-term evolution of fine sediment budget in the Iffezheim reservoir using a simplified method based on classification of boundary conditions (United States)

    Zhang, Qing; Hillebrand, Gudrun; Hoffmann, Thomas; Hinkelmann, Reinhard


    , and 2 longer ones, which include several short-term periods. The short-term periods span 1 to 3 months, whereas the long-term periods cover 2 and 5 years. The simulation results showed acceptable agreement with the measurements. It was also found that the long-term periods deviated less from the measurements than the short ones. This simplified method exhibited clear savings in computational time compared to the instationary simulations; in this case only 3 hours of computational time were needed for a 5-year simulation period using the reference computer mentioned above. Further research is needed with respect to the limits of this linear approach, i.e. with respect to the frequency with which the set of steady simulations has to be updated due to significant changes in morphology and, in turn, in hydraulics. Yet the preliminary results are promising, suggesting that the developed approach is well suited for long-term simulation of riverbed evolution. REFERENCES Olsen, N.R.B. 2014. A three-dimensional numerical model for simulation of sediment movements in water intakes with multiblock option. Versions 1 and 2. User's manual. Department of Hydraulic and Environmental Engineering, Norwegian University of Science and Technology, Trondheim, Norway. Wasser- und Schifffahrtsamt (WSA) Freiburg. 2011. Sachstandsbericht oberer Wehrkanal Staustufe Iffezheim. Technical report - Upper weir channel of the Iffezheim hydropower reservoir. Zhang, Q., Hillebrand, G., Moser, H. & Hinkelmann, R. 2015. Simulation of non-uniform sediment transport in a German reservoir with the SSIIM model and sensitivity analysis. Proceedings of the 36th IAHR World Congress, The Hague, The Netherlands.

  19. Simplified theory of plastic zones based on Zarka's method

    CERN Document Server

    Hübel, Hartwig


    The present book provides a new method to estimate elastic-plastic strains via a series of linear elastic analyses. For the life prediction of structures subjected to variable loads, frequently encountered in mechanical and civil engineering, the cyclically accumulated deformation and the elastic-plastic strain ranges are required. The Simplified Theory of Plastic Zones (STPZ) is a direct method which provides estimates of these and all other mechanical quantities in the state of elastic and plastic shakedown. The STPZ is described in detail, so that not only scientists but also engineers working in applied fields and advanced students can get an idea of the possibilities and limitations of the STPZ. Numerous illustrations and examples are provided to support the reader's understanding.

  20. A Simplified Method to Estimate Sc-CO2 Extraction of Bioactive Compounds from Different Matrices: Chili Pepper vs. Tomato By-Products

    Directory of Open Access Journals (Sweden)

    Francesca Venturi


    In the last few decades, the search for bioactive compounds or “target molecules” from natural sources or their by-products has become the most important application of the supercritical fluid extraction (SFE) process. In this context, the present research had two main objectives: (i) to verify the effectiveness of a two-step SFE process (namely, a preliminary Sc-CO2 extraction of carotenoids followed by the recovery of polyphenols by ethanol coupled with Sc-CO2) in order to obtain bioactive extracts from two widespread but different matrices (chili pepper and tomato by-products), and (ii) to test the validity of the mathematical model proposed to describe the kinetics of SFE of carotenoids from different matrices, knowledge of which is also required to define the role played in the extraction process by the characteristics of the sample matrix. On the basis of the results obtained, it was possible to introduce a simplified kinetic model that describes the time evolution of the extraction of bioactive compounds (mainly carotenoids and phenols) from different substrates. In particular, while both chili pepper and tomato were confirmed to be good sources of bioactive antioxidant compounds, the extraction process from chili pepper was faster than from tomato under identical operating conditions.
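
A common simplified SFE kinetic form (first-order approach to the exhaustive yield) can illustrate the kind of model meant here; it is a stand-in, not the model derived in the paper, and the parameter values are hypothetical.

```python
import math

def extraction_yield(t, y_inf, k):
    """First-order extraction kinetics: the recovered amount approaches
    the exhaustive yield y_inf with rate constant k. A faster matrix
    (e.g. chili pepper vs. tomato) corresponds to a larger k."""
    return y_inf * (1.0 - math.exp(-k * t))
```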

  1. Simplified method for calculating shear deflections of beams. (United States)

    I. Orosz


    When one designs with wood, shear deflections can become substantial compared to deflections due to moments, because the modulus of elasticity in bending differs from that in shear by a large amount. This report presents a simplified energy method to calculate shear deflections in bending members. This simplified approach should help designers decide whether or not...

  2. Gas/Aerosol partitioning: a simplified method for global modeling

    NARCIS (Netherlands)

    Metzger, S.M.


    The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures,

  3. Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method (United States)

    Sun, C. J.; Zhou, J. H.; Wu, W.


    During its lifetime, a ship may encounter collision or grounding and sustain permanent damage after these types of accidents. Crashworthiness assessment has relied on two main kinds of methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results show that the simplified plastic analysis is in good agreement with the finite-element simulation, which indicates that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.

  4. Simplified hourly method to calculate summer temperatures in dwellings

    DEFF Research Database (Denmark)

    Mortensen, Lone Hedegaard; Aggerholm, Søren


    The objective of this study was to develop a method for hourly calculation of the operating temperature in order to evaluate summer comfort in dwellings and help improve building design. A simplified method was developed on the basis of the simple hourly method of the standard ISO 13790:2008, but with further simplifications. The method is used for calculating room temperatures for all hours of a reference year. It is essential that the simplified method is able to predict the temperature in the room with the highest heat load. The heat load is influenced by the solar load, internal load, ventilation ... with an ordinary distribution of windows and a “worst” case where the window area facing south and west was increased by more than 60%. The simplified method used Danish weather data and only needs information on transmission losses, thermal mass, surface contact, internal load, ventilation scheme and solar load.

  5. Simplified Processing Method for Meter Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Kimberly M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Colotelo, Alison H. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Downs, Janelle L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ham, Kenneth D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Henderson, Jordan W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Montgomery, Sadie A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vernon, Christopher R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Parker, Steven A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)


    A simple, quick metered-data processing method that can be used for Army Metered Data Management System (MDMS) and Logistics Innovation Agency data, but may also be useful for other large data sets. It is intended for large data sets when the analyst has little information about the buildings.

  6. 77 FR 54482 - Allocation of Costs Under the Simplified Methods (United States)


    ... to allocate capitalizable section 263A costs to specific items in inventory. The legislative history... methods of allocating costs to items in their inventory that were available under prior law. See S. Rep.... The simplified methods differ from facts-and-circumstances methods in that, as applied to inventories...

  7. Simplified method of odontograms for individual identification. (United States)

    Villa Vigil, M A; Arenal, A A; Rodriguez Gonzalez, M A; Llompart, R C; Gonzalez Menendez, J M; Diez, F R


    A computerized method of codifying dental lesions and treatment is presented to enable faster identification of victims of catastrophes. Each tooth is assigned a bidigital value. The first digit refers to the root and designates whether it is found to be perfect, damaged and/or treated, or absent. The second digit refers to the crown and is assigned according to the number of surfaces showing lesions or treatment. For the identification of corpses, missing individuals whose record reveals values less than or equal to those of the subject in every tooth (or of each quadrant in cases of doubt) are selected for consideration from the data base. Individuals who have even one tooth with a value greater than that of the subject are eliminated.
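
The codification and elimination logic can be sketched as below. The exact digit encoding is an assumption based on the description above (the paper defines the precise values); the matching rule keeps a missing person's record only if no tooth is in a "better" state than the corresponding tooth of the unidentified subject, since dental state can progress over time but cannot revert.

```python
# Assumed encoding (illustrative): root digit 0 = perfect,
# 1 = damaged and/or treated, 2 = absent; crown digit = number of
# surfaces with lesions or treatment.

def matches(candidate, subject):
    """Keep a missing person's record as a candidate only if every tooth
    value is <= the corresponding value for the unidentified subject."""
    return all(c_root <= s_root and c_crown <= s_crown
               for (c_root, c_crown), (s_root, s_crown)
               in zip(candidate, subject))

subject = [(1, 2), (0, 0), (2, 0)]   # three teeth of the unidentified subject
record_a = [(0, 1), (0, 0), (2, 0)]  # consistent with the subject
record_b = [(0, 3), (0, 0), (2, 0)]  # one crown value exceeds the subject's
```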

  8. Equity of cadastral valuation and simplified methods

    Directory of Open Access Journals (Sweden)

    Gianni Guerrieri


    Among these studies is the recent paper by Rocco Curto, Elena Fregonara and Patrizia Semeraro (2014), "Come rendere più eque le rendite catastali in attesa della revisione degli estimi?" (How can land registry values be made fairer pending a review of valuations?), in which a rapid and simple methodology to vary the current real estate rent through corrective coefficients of location is proposed. In this way, the taxable basis of real estate fees is re-defined in order to reduce the current fiscal inequity caused by the obsolescence of the incomes recorded in the current cadastre; this is, however, a temporary correction to implement while waiting for the reform of the entire cadastral system. In particular, Curto et al. (2014) propose to multiply the current income by a coefficient obtained as the ratio between the average prices of a given census microzone and a reference index, that is, "the ratio between an index price which most accurately sums up property values in individual municipalities or aggregations of municipalities in the case of the smallest municipalities (determined on the basis of market observations constituting the entire statistical sample) and the corresponding price indices of the values of each microzone, defined on the basis of market observations (sub-samples)" (p. 62). In the remainder of the paper, the methodology underlying the hypothesis contained in the MEF's "Revision of real estate taxation proposal" (August 2013) is explained. Secondly, the methodological differences between the corrections of real estate incomes proposed in the cited MEF document and in Curto et al.'s (2014) article are compared. Subsequently, some empirical evidence is supplied relating to the two taxable-basis equity recovery methods. Lastly, further consideration is given to the effective and generalized implementation of the proposed methods.
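The proposed correction reduces to simple arithmetic; a minimal sketch with invented figures (the real coefficients would come from market-observation samples):

```python
# Hypothetical illustration of the location-coefficient correction:
# corrected income = current cadastral income x (microzone price index /
# municipal reference price index). All numbers below are invented.

municipal_index = 2000.0   # EUR/m2, whole-municipality reference index
microzone_index = 3000.0   # EUR/m2, average prices in one census microzone
current_income = 850.0     # EUR, obsolete cadastral income of a property

coefficient = microzone_index / municipal_index
corrected_income = current_income * coefficient
print(coefficient, corrected_income)  # -> 1.5 1275.0
```

A microzone whose prices sit above the municipal reference thus sees its taxable income scaled up, and vice versa.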

  9. A Simplified Diagnostic Method for Elastomer Bond Durability (United States)

    White, Paul


    A simplified method has been developed for determining bond durability under exposure to water or high-humidity conditions. It uses a small number of test specimens with relatively short times of water exposure at elevated temperature. The method is also gravimetric; the only equipment required is an oven, specimen jars, and a conventional laboratory balance.

  10. Estimation of cardiac reserve by peak power: validation and initial application of a simplified index (United States)

    Armstrong, G. P.; Carlier, S. G.; Fukamachi, K.; Thomas, J. D.; Marwick, T. H.


    OBJECTIVES: To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity. DESIGN: Development of a simplified method of measurement and observational study. SETTING: Tertiary referral centre for cardiothoracic disease. SUBJECTS: For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. To assess feasibility and clinical significance in humans, 40 subjects were studied (26 patients; 14 normal controls). METHODS: In the animal validation study, TPP was derived from ascending aortic pressure and flow probe, and from Doppler measurements of flow. SPP, calculated using the different flow measures, was compared with peak instantaneous power under different loading conditions. For the assessment in humans, SPP was measured at rest and during maximum exercise. Peak aortic flow was measured with transthoracic continuous wave Doppler, and systolic and diastolic blood pressures were derived from brachial sphygmomanometry. The difference between exercise and rest simplified peak power (Delta SPP) was compared with maximum oxygen uptake (VO(2)max), measured from expired gas analysis. RESULTS: SPP estimates using peak flow measures correlated well with true peak instantaneous power (r = 0.89 to 0.97), despite marked changes in systemic pressure and flow induced by manipulation of loading conditions. In the human study, VO(2)max correlated with Delta SPP (r = 0.78) better than Delta ejection fraction (r = 0.18) and Delta rate-pressure product (r = 0.59). CONCLUSIONS: The simple product of mean arterial pressure and peak aortic flow (simplified peak power, SPP) correlates with peak instantaneous power over a range of loading conditions in dogs. 
In humans, it can be estimated during exercise echocardiography, and correlates with maximum oxygen uptake better than ejection fraction.
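The index itself is simple arithmetic; a minimal sketch with illustrative numbers, where mean arterial pressure is approximated from brachial cuff pressures (that approximation is an assumption, not stated in the abstract):

```python
# Simplified peak power (SPP) = mean arterial pressure x peak aortic flow.
# All values are illustrative; MAP is approximated as DBP + (SBP - DBP)/3.

def mean_arterial_pressure(sbp_mmhg, dbp_mmhg):
    return dbp_mmhg + (sbp_mmhg - dbp_mmhg) / 3.0

def simplified_peak_power(sbp_mmhg, dbp_mmhg, peak_flow_l_s):
    map_pa = mean_arterial_pressure(sbp_mmhg, dbp_mmhg) * 133.322  # mmHg -> Pa
    flow_m3_s = peak_flow_l_s * 1e-3                               # L/s -> m3/s
    return map_pa * flow_m3_s                                      # watts

rest = simplified_peak_power(120, 80, peak_flow_l_s=0.4)
exercise = simplified_peak_power(180, 85, peak_flow_l_s=0.9)
delta_spp = exercise - rest   # the quantity compared against VO2max
print(f"rest {rest:.1f} W, exercise {exercise:.1f} W, delta {delta_spp:.1f} W")
```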

  11. Real-time estimation of battery internal temperature based on a simplified thermoelectric model (United States)

    Zhang, Cheng; Li, Kang; Deng, Jing


    Li-ion batteries are widely used in EVs, and battery thermal management is a key but challenging part of the battery management system. For EV batteries, only the battery surface temperature can be measured in real time. However, it is the battery internal temperature that directly affects battery performance, and a large difference may exist between surface and internal temperatures, especially in high-power-demand applications. In this paper, an online battery internal temperature estimation method is proposed based on a novel simplified thermoelectric model. The battery thermal behaviour is first described by a simplified thermal model, and the electrical behaviour by an electric model. These two models are then coupled to capture the interactions between the battery's thermal and electrical behaviours, offering a comprehensive description of battery behaviour that is useful for battery management. Finally, based on the developed model, the battery internal temperature is estimated using an extended Kalman filter. The experimental results confirm the efficacy of the proposed method, which can be used for online internal temperature estimation, a key indicator for better real-time battery thermal management.
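The paper's model is not reproduced here, but the estimation loop can be sketched with a generic two-node thermal model (core and surface nodes; all parameters invented). Because this toy model is linear, the extended Kalman filter reduces to the standard Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented two-node lumped thermal model: a core (internal) node heated by
# losses Q, coupled to a surface node that exchanges heat with ambient.
# Only the surface temperature is measured; the filter recovers the core.
Cc, Cs = 60.0, 30.0     # heat capacities (J/K)
Rc, Ru = 2.0, 4.0       # core->surface and surface->ambient resistances (K/W)
T_amb, dt = 25.0, 1.0   # ambient temperature (C), time step (s)

# Continuous dynamics x' = A x + B u, x = [T_core, T_surf], u = [Q, T_amb]
A = np.array([[-1/(Rc*Cc),  1/(Rc*Cc)],
              [ 1/(Rc*Cs), -1/(Rc*Cs) - 1/(Ru*Cs)]])
B = np.array([[1/Cc, 0.0],
              [0.0,  1/(Ru*Cs)]])
F = np.eye(2) + dt*A            # forward-Euler discretization
G = dt*B
H = np.array([[0.0, 1.0]])      # measurement: surface temperature only
Q_cov = np.diag([1e-3, 1e-3])   # process noise covariance
R_cov = np.array([[0.05]])      # measurement noise covariance

x_true = np.array([25.0, 25.0])
x_est = np.array([25.0, 25.0])
P = np.eye(2)

for k in range(600):
    u = np.array([8.0, T_amb])  # constant 8 W internal heat generation
    x_true = F @ x_true + G @ u + rng.multivariate_normal([0, 0], Q_cov)
    z = H @ x_true + rng.normal(0, np.sqrt(R_cov[0, 0]), 1)
    # Kalman predict
    x_est = F @ x_est + G @ u
    P = F @ P @ F.T + Q_cov
    # Kalman update with the surface measurement
    S = H @ P @ H.T + R_cov
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"true core {x_true[0]:.1f} C, estimated core {x_est[0]:.1f} C")
```

The internal temperature is reconstructed from surface measurements alone because the two states are coupled through the core-to-surface resistance.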

  12. Simplified large African carnivore density estimators from track indices

    Directory of Open Access Journals (Sweden)

    Christiaan W. Winterbach


    Background: The range, population size and trend of large carnivores are important parameters for assessing their status globally and planning conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept may not pass through zero, yet may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept for modelling large African carnivore densities against track indices. Methods: We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for ß in the linear model y = αx + ß, the Standard Error of Estimate, the Mean Squares Residual and the Akaike Information Criterion to evaluate the models. Results: The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for ß, and the null hypothesis that ß = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Squares Residual. The Akaike Information Criterion showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Discussion: Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26
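The model comparison can be sketched on synthetic data; the 3.26 slope is taken from the abstract, everything else (sample size, noise, the Gaussian AIC form) is invented:

```python
import numpy as np

# Compare simple linear regression with intercept (y = a*x + b) against
# regression through the origin (y = a*x) on synthetic track-index data
# generated with a true zero intercept.
rng = np.random.default_rng(1)
track_density = rng.uniform(0.5, 6.0, 20)                   # tracks/100 km
carnivore_density = 3.26 * track_density + rng.normal(0, 0.8, 20)

# With intercept: design matrix [x, 1]
X1 = np.column_stack([track_density, np.ones_like(track_density)])
coef1, res1, *_ = np.linalg.lstsq(X1, carnivore_density, rcond=None)

# Through the origin: design matrix [x]
X0 = track_density[:, None]
coef0, res0, *_ = np.linalg.lstsq(X0, carnivore_density, rcond=None)

n = len(track_density)
def aic(rss, k):  # Gaussian AIC used to rank the two fits
    return n * np.log(rss / n) + 2 * k

print("slope with intercept:", coef1[0], "intercept:", coef1[1])
print("slope through origin:", coef0[0])
print("AIC with intercept:", aic(res1[0], 2), "AIC origin:", aic(res0[0], 1))
```

When the true relationship passes through zero, the origin model recovers the slope with one fewer parameter, which is what the AIC comparison in the abstract rewards.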

  13. Simplifying cardiovascular risk estimation using resting heart rate.

    LENUS (Irish Health Repository)

    Cooney, Marie Therese


    Elevated resting heart rate (RHR) is a known, independent cardiovascular (CV) risk factor, but is not included in risk estimation systems, including Systematic COronary Risk Evaluation (SCORE). We aimed to derive risk estimation systems including RHR as an extra variable and assess the value of this addition.

  14. Simplified large African carnivore density estimators from track indices. (United States)

    Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J


    The range, population size and trend of large carnivores are important parameters for assessing their status globally and planning conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept may not pass through zero, yet may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept for modelling large African carnivore densities against track indices. We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for ß in the linear model y = αx + ß, the Standard Error of Estimate, the Mean Squares Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). A general model can be used to estimate the density of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km2 or higher. To improve the current models, we need independent data to validate them and data to test for a non-linear relationship between track indices and true density at low densities.

  15. Simplified Model for the Hybrid Method to Design Stabilising Piles Placed at the Toe of Slopes

    Directory of Open Access Journals (Sweden)

    Dib M.


    Stabilizing precarious slopes by installing piles has become a widespread technique for landslide prevention. The design of slope-stabilizing piles by the finite element method is more accurate than the conventional methods because of its ability to simulate complex configurations and to analyze the soil-pile interaction effect. However, engineers prefer to use simplified analytical techniques to design slope-stabilizing piles, due to the high computational resources required by the finite element method. Aiming to combine the accuracy of the finite element method with the simplicity of the analytical approaches, a hybrid methodology to design slope-stabilizing piles was proposed in 2012. It consists of two steps: (1) an analytical estimation of the resisting force needed to stabilize the precarious slope, and (2) a numerical analysis to define the adequate pile configuration that offers the required resisting force. The hybrid method is applicable only to the analysis and design of stabilizing piles placed in the middle of the slope; however, in certain cases, such as road construction, piles need to be placed at the toe of the slope. Therefore, in this paper a simplified model for the hybrid method is developed to analyze and design stabilizing piles placed at the toe of a precarious slope. The validation of the simplified model is presented through a comparative analysis with the fully coupled finite element model.

  16. Technical note: Use of a simplified equation for estimating glomerular filtration rate in beef cattle. (United States)

    Murayama, I; Miyano, A; Sasaki, Y; Hirata, T; Ichijo, T; Satoh, H; Sato, S; Furuhama, K


    This study was performed to clarify whether a formula (Holstein equation) based on a single blood sample and the isotonic, nonionic iodine contrast medium iodixanol, developed in Holstein dairy cows, can be applied to the estimation of glomerular filtration rate (GFR) in beef cattle. To verify the applicability of iodixanol in beef cattle in place of the standard tracer inulin, both agents were coadministered as a bolus intravenous injection to the same animals at doses of 10 mg of I/kg of BW and 30 mg/kg, respectively. Blood was collected 30, 60, 90, and 120 min after the injection, and the GFR was determined by conventional multisample strategies. The GFR values from iodixanol were consistent with those from inulin, and no effects of BW, age, or parity on GFR estimates were noted. However, the GFR in cattle weighing less than 300 kg reflected dynamic changes in renal function at young ages. Using clinically healthy cattle and those with renal failure, the GFR values estimated from the Holstein equation were in good agreement with those obtained by the multisample method using iodixanol (r=0.89, P=0.01). The results indicate that the simplified Holstein equation using iodixanol can be used for estimating the GFR of beef cattle with the same dose regimen as for Holstein dairy cows, and provides a practical and ethical alternative.

  17. Implementation of a Simplified State Estimator for Wind Turbine Monitoring on an Embedded System

    DEFF Research Database (Denmark)

    Rasmussen, Theis Bo; Yang, Guangya; Nielsen, Arne Hejde


    The transition towards a cyber-physical energy system (CPES) entails an increased dependency on valid data. Simultaneously, the increasing penetration of renewable generation leads to possible control actions at individual distributed energy resources (DERs). A state estimation covering the whole system, including individual DERs, is time consuming and numerically challenging. This paper presents the approach and results of implementing a simplified state estimator on an embedded system for improving DER monitoring. The implemented state estimator is based on a numerically robust orthogonal…
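The truncated abstract does not say which orthogonal method is used; assuming it refers to QR-factorization-based least squares (a common choice for numerically robust state estimation), the core step can be sketched on an invented linear measurement model:

```python
import numpy as np

# Weighted least-squares state estimation z = H x + e solved by orthogonal
# (QR) factorization, which avoids forming the potentially ill-conditioned
# normal equations H^T H x = H^T z. The system below is invented.
rng = np.random.default_rng(2)
x_true = np.array([1.0, 0.2, -0.4])          # unknown states
H = rng.normal(size=(8, 3))                  # measurement Jacobian
z = H @ x_true + rng.normal(0, 0.01, 8)      # noisy measurements

# Minimize ||W^(1/2) (z - H x)||: weight rows by inverse measurement std
W_sqrt = np.diag(np.full(8, 1 / 0.01))
Q, R = np.linalg.qr(W_sqrt @ H)              # reduced QR of weighted Jacobian
x_hat = np.linalg.solve(R, Q.T @ (W_sqrt @ z))

print(x_hat)
```

Working with R (condition number equal to the square root of that of HᵀH) is what makes the orthogonal formulation better suited to limited-precision embedded hardware.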

  18. Study on a pattern classification method of soil quality based on simplified learning sample dataset (United States)

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.


    Given the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and a disordered (nominal) multiclass logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and the c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database of the study area, Longchuan county in Guangdong province; a disordered logistic classifier model was then built, and the calculation and analysis steps of the intelligent soil quality grade classification were given. The results indicate that soil quality grade can be effectively learned and predicted from the extracted simplified dataset with this method, which changes the traditional approach to soil quality grade evaluation. © 2011 IEEE.
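The sample-simplification step can be sketched with a hard c-means (k-means) pass on synthetic data; the paper's exact clustering variant and its logistic classifier are not reproduced here, and all data are invented:

```python
import numpy as np

# Sketch of the dataset-simplification step: cluster the full database
# with a c-means-style (here: hard k-means) algorithm, then keep the
# sample nearest each centroid as the simplified learning dataset.
rng = np.random.default_rng(3)
full_dataset = np.vstack([rng.normal(c, 0.3, size=(100, 2))
                          for c in ((0, 0), (3, 0), (0, 3))])

def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # keep the old center if a cluster happens to be empty
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

centers, labels = kmeans(full_dataset, k=3)
# One representative sample per cluster -> simplified learning dataset
d = np.linalg.norm(full_dataset[:, None, :] - centers[None, :, :], axis=2)
simplified = full_dataset[d.argmin(axis=0)]
print(simplified.shape)  # -> (3, 2)
```

The classifier is then trained on `simplified` (or on a slightly larger per-cluster sample) instead of the full database, which is the point of the capacity calculation in the abstract.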

  19. Simplified method for calculation of equilibrium plasma composition (United States)

    Rydalevskaya, Maria A.


    In this work, a simplified method for the evaluation of the equilibrium composition of plasmas consisting of monoatomic species is proposed. Multicomponent gas systems resulting from thermal ionization of spatially uniform mixtures are assumed to be sufficiently rarefied to be treated as ideal gases even after multiple ionization steps. The method developed for the calculation of the equilibrium composition of these mixtures makes use of the fundamental principles of statistical physics. Equilibrium concentrations of the mixture components are determined by integration of the distribution functions over momentum space and summation over electronic energy levels; these functions correspond to the entropy maximum. To determine the unknown parameters, systems of equations corresponding to the normalization conditions are derived. It is shown that these systems may be reduced to a single algebraic equation if the equilibrium temperature is known, and a numerical method to solve this equation is proposed. Special attention is given to ionized mixtures generated from atoms of a single chemical species, and to situations where only first-order, or first- and second-order, ionization is possible.
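For a single ionization step, the reduction to one algebraic equation can be illustrated with a Saha-type mass-action sketch (an invented equilibrium constant, not the paper's derivation):

```python
# Sketch of reducing a single-ionization equilibrium to one algebraic
# equation. For A <-> A+ + e-, mass action gives n_i * n_e / n_a = K(T);
# with charge neutrality n_e = n_i and conservation n_a + n_i = n0, the
# ion fraction x = n_i / n0 satisfies x**2 / (1 - x) = K / n0.
def ion_fraction(K_over_n0, tol=1e-12):
    f = lambda x: x * x / (1.0 - x) - K_over_n0
    lo, hi = 0.0, 1.0 - 1e-15
    while hi - lo > tol:            # bisection: f is monotone on (0, 1)
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x = ion_fraction(0.1)   # K/n0 = 0.1 is an invented value
print(x)
```

Because the left-hand side is monotone on (0, 1), bisection (or any bracketing root finder) converges unconditionally, which is why reducing the normalization system to one equation simplifies the computation so much.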

  20. Simplified Methodology to Estimate the Maximum Liquid Helium (LHe) Cryostat Pressure from a Vacuum Jacket Failure (United States)

    Ungar, Eugene K.; Richards, W. Lance


    The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. 
However, since it is not an exact

  1. Gas/Aerosol partitioning: a simplified method for global modeling (United States)

    Metzger, S. M.


    The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol-associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures, partly composed of acids (e.g. H2SO4, HNO3), their salts (e.g. (NH4)2SO4 and NH4NO3, respectively), and water. Because these acids and salts are highly hygroscopic, the water associated with aerosols in humid environments often exceeds the total dry aerosol mass. Both the total dry aerosol mass and the aerosol-associated water are important for the role of atmospheric aerosols in climate change simulations. Still, multicomponent aerosols are not yet routinely calculated within global atmospheric chemistry or climate models, because these particles, especially volatile aerosol compounds, require a complex and computationally expensive thermodynamic treatment. For instance, the aerosol-associated water depends on the composition of the aerosol, which is determined by the gas/liquid/solid partitioning, in turn strongly dependent on temperature, relative humidity, and the presence of pre-existing aerosol particles. Based on thermodynamic relations, such a simplified method has been derived. It rests on the assumptions generally made in the modeling of multicomponent aerosols, but uses an alternative approach for the calculation of the aerosol activity and activity coefficients. This alternative approach relates activity coefficients to the ambient relative humidity, according to the vapor pressure reduction and the generalization of Raoult's law. This relationship, or simplification, is a consequence of the assumption that the aerosol composition and the aerosol-associated water are in thermodynamic equilibrium with the ambient relative humidity, which determines the solute activity and, hence, the activity coefficients of a multicomponent aerosol mixture.

  2. Probing dimensionality using a simplified 4-probe method. (United States)

    Kjeldby, Snorre B; Evenstad, Otto M; Cooil, Simon P; Wells, Justin W


    4-probe electrical measurements have been in existence for many decades. One of the most useful aspects of the 4-probe method is that it is not only possible to find the resistivity of a sample (independently of the contact resistances), but also to probe the dimensionality of the sample. In theory, this is straightforward to achieve by measuring the 4-probe resistance as a function of probe separation. In practice, it is challenging to move all four probes with sufficient precision over the necessary range. Here, we present an alternative approach. We demonstrate that the dimensionality of the conductive path within a sample can be directly probed using a modified 4-probe method in which an unconventional geometry is exploited: three of the probes are rigidly fixed, and the position of only one probe is changed. This allows 2D and 3D (and other) contributions to the resistivity to be readily disentangled. The required experimental instrumentation can be vastly simplified relative to traditional variable-spacing 4-probe instruments.
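The spacing dependence that encodes dimensionality can be illustrated with the textbook collinear, equal-spacing four-point-probe formulas (not the modified geometry of this paper):

```python
import math

# Collinear, equally spaced four-point probe: the measured resistance
# V/I scales as 1/s for a semi-infinite 3D bulk but is independent of
# the spacing s for a thin 2D sheet. Sweeping s therefore reveals the
# dimensionality of the conductive path.
def r_4pp_bulk(resistivity_ohm_m, spacing_m):
    return resistivity_ohm_m / (2.0 * math.pi * spacing_m)   # 3D bulk

def r_4pp_sheet(sheet_resistance_ohm_sq, spacing_m):
    return sheet_resistance_ohm_sq * math.log(2) / math.pi   # s drops out

for s in (1e-4, 1e-3, 1e-2):  # probe spacings in metres
    print(f"s={s:g} m  bulk: {r_4pp_bulk(1.0, s):8.1f} ohm"
          f"  sheet: {r_4pp_sheet(100.0, s):6.2f} ohm")
```

The unused `spacing_m` argument in the sheet formula is deliberate: the 2D result's independence of spacing is exactly the signature the measurement exploits.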

  3. Diagnostic method for induction motor using simplified motor simulator


    Doumae, Yukihiro; Konishi, Masami; Imai, Jun; Asada, Hideki; Kitamura, Akira


    In this paper, an identification method of motor parameters for the diagnosis of rotor bar defects in squirrel cage induction motors is proposed. It is difficult to distinguish the degree of deterioration with a conventional diagnostic method such as Fourier analysis. To overcome this difficulty, a motor simulator is used to identify the degree of deterioration of rotors in the squirrel cage induction motor. Using this method, the deterioration of rotor bars in the motor can be estimated quantitatively.

  4. Simplified method for numerical modeling of fiber lasers. (United States)

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P


    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.

  5. Methods for age estimation

    Directory of Open Access Journals (Sweden)

    D. Sümeyra Demirkıran


    The concept of age estimation plays an important role in both civil law and the regulation of criminal behavior. In forensic medicine, age estimation is practiced for individual requests as well as at the request of the court. This study aims to compile the methods of age estimation and to make recommendations for the solution of the problems encountered. In the radiological method, the epiphyseal lines of the bones and views of the teeth are used. In order to estimate age by comparing bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA) and the "Adli Tıpta Yaş Tayini" (ATYT; Age Estimation in Forensic Medicine) book are used. According to the forensic age estimations described in the ATYT book, bone age is found to be on average 2 years older than chronological age, especially in puberty. For age estimation from the teeth, the Demirjian method is used; over time, different methods have been developed by modifying it, but no fully accurate method has been found. Histopathological studies have been done on bone marrow cellularity and dermis cells, but no correlation was found between histopathological findings and chronological age. Current age estimation methods raise important ethical and legal issues, especially in the teenage period. It is therefore necessary to prepare atlases of bone age appropriate to our society by collecting the findings of studies in Turkey. Another recommendation is that courts pay particular attention to age-raising trials of teenage women, with special emphasis on birth and population records.

  6. Evaluation of Simplified Models for Estimating Public Dose from Spent Nuclear Fuel Shipments

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, Kevin J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Radulescu, Georgeta [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)


    This paper investigates the dose rate as a function of distance from a representative high-capacity spent nuclear fuel (SNF) rail-type transportation cask. It uses the SCALE suite of radiation transport modeling and simulation codes to determine neutron and gamma radiation dose rates. The SCALE-calculated dose rate is compared with the simplified analytical methods historically used for these calculations. The SCALE dose rate calculation presented in this paper employs a very detailed transportation cask model (e.g., pin-by-pin modeling of the fuel assembly) and a new hybrid computational transport method. Because it includes pin-level heterogeneity and models ample air and soil outside the cask to simulate scattering of gamma and neutron radiation, this detailed SCALE model is expected to yield more accurate results than previously used models, which made simpler assumptions (e.g., the fuel assembly treated as a point or line source, or a simple 1-D model of the environment outside the cask). The results in this paper are preliminary; as progress is made on developing and validating improved models, results may change as the models and estimates become more refined and better information leads to more accurate assumptions.

  7. Simplified Methods Applied to Nonlinear Motion of Spar Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Haslum, Herbjoern Alf


    Simplified methods for predicting the motion response of spar platforms are presented. The methods are based on first- and second-order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large-amplitude pitch motions coupled to extreme-amplitude heave motions may arise when spar platforms are exposed to long-period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher-order pitch/heave coupling excites resonant heave response, and this mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well-known Mathieu instability in pitch, which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur and how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low-probability phenomenon: extreme wave periods are needed to trigger it, about 20 seconds for a typical 200 m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft
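The parametric (Mathieu) mechanism can be demonstrated with a toy integration of x'' + (delta + eps*cos t)x = 0, which is unstable near delta = 1/4, i.e. when the forcing period is about twice the natural period. The parameters below are illustrative, not spar-platform data:

```python
import math

# Toy demonstration of Mathieu instability: integrate
#   x'' + (delta + eps * cos t) * x = 0
# with RK4. Inside the first instability tongue (around delta = 0.25 for
# unit forcing frequency) the response grows exponentially; outside it,
# the response stays bounded.
def max_amplitude(delta, eps, t_end=150.0, dt=0.005):
    def f(t, y):
        x, v = y
        return (v, -(delta + eps * math.cos(t)) * x)
    y, t, peak = (1.0, 0.0), 0.0, 1.0
    while t < t_end:
        k1 = f(t, y)
        k2 = f(t + dt/2, (y[0] + dt/2*k1[0], y[1] + dt/2*k1[1]))
        k3 = f(t + dt/2, (y[0] + dt/2*k2[0], y[1] + dt/2*k2[1]))
        k4 = f(t + dt, (y[0] + dt*k3[0], y[1] + dt*k3[1]))
        y = (y[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             y[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        t += dt
        peak = max(peak, abs(y[0]))
    return peak

unstable = max_amplitude(delta=0.25, eps=0.4)  # inside the first tongue
stable = max_amplitude(delta=0.60, eps=0.4)    # between tongues
print(f"unstable peak ~{unstable:.3g}, stable peak ~{stable:.3g}")
```

The exponential growth of the "unstable" case, despite identical forcing amplitude, is the signature of parametric resonance the abstract describes; increased damping in the real system raises the threshold for this growth to start.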

  8. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries (United States)

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing


    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405
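The identification loop can be sketched with a generic real-coded genetic algorithm minimizing a least-squares cost, as a stand-in for the paper's least square genetic algorithm; the fractional-order model itself is not reproduced, and a first-order RC step response with invented parameters is fitted instead:

```python
import numpy as np

# Generic real-coded GA minimizing a least-squares cost, demonstrated on
# a first-order RC battery step response (1 A current step). All
# parameters and noise levels are synthetic.
rng = np.random.default_rng(4)
t = np.linspace(0, 50, 200)
true = np.array([0.05, 0.03, 10.0])            # R0 (ohm), R1 (ohm), tau (s)
def response(p):
    r0, r1, tau = p
    return r0 + r1 * (1 - np.exp(-t / tau))    # voltage drop over the cell
data = response(true) + rng.normal(0, 2e-4, t.size)

def cost(p):
    return np.sum((response(p) - data) ** 2)   # least-squares objective

lo = np.array([0.0, 0.0, 1.0])                 # search-space bounds
hi = np.array([0.2, 0.2, 50.0])
pop = rng.uniform(lo, hi, size=(60, 3))
for gen in range(80):
    costs = np.array([cost(p) for p in pop])
    elite = pop[np.argsort(costs)[:10]]        # keep the 10 best unchanged
    # offspring: blend crossover between random elite parents + mutation
    pa = elite[rng.integers(0, 10, 50)]
    pb = elite[rng.integers(0, 10, 50)]
    w = rng.uniform(size=(50, 1))
    children = w * pa + (1 - w) * pb
    children += rng.normal(0, 0.02, children.shape) * (hi - lo)
    pop = np.clip(np.vstack([elite, children]), lo, hi)

best = pop[np.argmin([cost(p) for p in pop])]
print("identified R0, R1, tau:", best)
```

Elitism makes the best cost monotonically non-increasing, so the loop behaves like a derivative-free least-squares tracker, which is the role the GA plays in the paper's equivalent tracking system.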

  9. A simplified mass isotopomer approach to estimate gluconeogenesis rate in vivo using deuterium oxide. (United States)

    Junghans, Peter; Görs, Solvig; Lang, Iris S; Steinhoff, Julia; Hammon, Harald M; Metges, Cornelia C


    We compare a new simplified ²H enrichment mass isotopomer analysis (MIA) against the laborious hexamethylenetetramine (HMT) method to quantify the contribution of gluconeogenesis (GNG) to total glucose production (GP) in calves. Both methods are based on the ²H labeling of glucose after in vivo administration of deuterium oxide. The ²H enrichments of plasma glucose at different C-H positions were measured as aldonitrile pentaacetate (AAc) and methyloxime-trimethylsilyl (MoxTMS) derivatives or as HMT by gas chromatography/mass spectrometry (GC/MS). Two pre-ruminating fasted Holstein calves (51 kg body mass, BM; age 7 days) received two oral bolus doses of ²H₂O (10 g/kg BM, 70 atom% ²H) at 7:00 h and 11:00 h after overnight food withdrawal. Blood samples for fractional GNG determination were collected at -24 h and between 6 and 9 h after the first ²H₂O dose. The ratio of ²H enrichments C5/C2 represents the contribution of GNG to GP. The ²H enrichment at C2 was calculated from the ion fragments at m/z 328 (C1-C6) - m/z 187 (C3-C6) of glucose AAc. The ²H enrichment at C5 was approximated either by averaging the ²H enrichment at C5-C6 using the ion fragment of glucose MoxTMS at m/z 205 or by conversion of the C5 of glucose into HMT. The fractional GNG calculated by the C5-C6 average ²H enrichment method (41.4 ± 6.9%) was not different from that calculated by the HMT method (34.3 ± 11.4%) (mean ± SD, n = 6 replicates). In conclusion, GNG can be estimated with less laborious sample preparation by means of our new C5-C6 average ²H enrichment method using AAc and MoxTMS glucose derivatives. Copyright (c) 2010 John Wiley & Sons, Ltd.
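The enrichment bookkeeping reduces to a ratio; a minimal sketch with illustrative enrichment values (following the abstract, the C2 enrichment is taken as the difference between the two AAc fragment enrichments):

```python
# Sketch of the fractional-gluconeogenesis bookkeeping. Positional 2H
# enrichments come from GC/MS fragments: C2 from the difference between
# the m/z 328 (C1-C6) and m/z 187 (C3-C6) fragments of the aldonitrile
# pentaacetate derivative; C5 approximated as the C5-C6 average from the
# m/z 205 MoxTMS fragment. All numbers below are illustrative.
def fractional_gng(e_c5, e_c2):
    """GNG as % of total glucose production (ratio of 2H at C5 to C2)."""
    return 100.0 * e_c5 / e_c2

e_328 = 0.90          # enrichment of the fragment covering C1-C6
e_187 = 0.55          # enrichment of the fragment covering C3-C6
e_c2 = e_328 - e_187  # C2 enrichment, per the abstract's difference method
e_c5 = 0.145          # C5 approximated by the C5-C6 average

print(f"fractional GNG = {fractional_gng(e_c5, e_c2):.1f} %")
```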

  10. A Simplified Quantitative Method to Measure Brain Shifts in Patients with Middle Cerebral Artery Stroke. (United States)

    Paletta, Nina; Maali, Laith; Zahran, Abdurrehman; Sethuraman, Sankara; Figueroa, Ramon; Nichols, Fenwick T; Bruno, Askiel


    A standardized and validated method to measure brain shifts in malignant middle cerebral artery (MCA) stroke with decompressive hemicraniectomy (DHC) could facilitate clinical decision making, prognostication, and comparison of results between studies. We tested the reliability of simplified methods for measuring transcalvarial herniation, midline brain shift, and the contralateral cerebral ventricular atrium in malignant MCA stroke after DHC. Multiple raters measured brain shifts on post-DHC computed tomography (CT) scans with aligned and unaligned slice orientations in 25 patients. We compared the simplified measurements to previously reported, more meticulous measurements. The simplified measurements correlate well with the more meticulous measurements on both aligned and unaligned CTs (intraclass correlation coefficients 0.72-0.89). These simplified and expedient methods of measuring brain shifts in malignant MCA stroke after DHC correlate well with the more meticulous methods. Copyright © 2017 by the American Society of Neuroimaging.
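
    One common agreement statistic behind coefficients like these is the intraclass correlation; the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) from first principles. The exact ICC form the authors used is not stated here, so treat this as one plausible variant:

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    data: one list per subject, each containing one score per rater
    (raters in the same order for every subject)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    rater_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_subj - ss_rater
    msr = ss_subj / (n - 1)             # between-subjects mean square
    msc = ss_rater / (k - 1)            # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters, four subjects; rater 2 reads one unit higher throughout,
# so absolute agreement is penalized even though the rankings agree:
icc = icc2_1([[1, 2], [2, 3], [3, 4], [4, 5]])  # 10/13, about 0.77
```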

  11. Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods

    Directory of Open Access Journals (Sweden)

    Lai, Bo-Lun; Sheu, Rong-Jiun


    Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purposes was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. The MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model resulted in better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
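
    A point-source line-of-sight estimate of the kind mentioned above can be sketched as uncollided flux times exponential attenuation. The source strength, attenuation coefficient, and flux-to-dose factor below are placeholders, and scatter buildup is deliberately ignored, as in the simplest such models:

```python
import math

def dose_rate_los(source_rate, r_m, mu_cm, t_cm, k_dose=1.0):
    """Point-source line-of-sight estimate: uncollided flux at distance r_m [m]
    through a slab of thickness t_cm [cm] with attenuation coefficient
    mu_cm [1/cm], scaled by a placeholder flux-to-dose factor k_dose."""
    flux = source_rate / (4.0 * math.pi * (r_m * 100.0) ** 2)  # [1/cm^2/s]
    return k_dose * flux * math.exp(-mu_cm * t_cm)

# Hypothetical numbers: a 1e12 n/s source, 3 m away, behind 50 cm of shielding:
d = dose_rate_los(1e12, 3.0, 0.1, 50.0)
```

Each additional half-value layer (ln 2 / mu) of shield halves the estimate, and the unshielded term falls off with the inverse square of distance.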

  13. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes (United States)

    Jibson, Randall W.; Jibson, Matthew W.


    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
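
    The rigid-block algorithm is compact enough to sketch directly: integrate relative velocity whenever ground acceleration exceeds the critical (yield) acceleration, and accumulate displacement until the block stops. The pulse below is synthetic, not one of the report's strong-motion records:

```python
def newmark_displacement(acc, dt, a_crit):
    """Rigid-block Newmark sliding analysis (downslope sliding only).
    Whenever ground acceleration exceeds a_crit the block slips; its relative
    velocity is integrated until it returns to zero, and the accumulated
    relative displacement is the slip estimate.
    acc: acceleration history [m/s^2]; dt: time step [s]; a_crit: [m/s^2]."""
    v = 0.0  # relative (block minus ground) velocity
    d = 0.0  # cumulative slip
    for a in acc:
        if a > a_crit or v > 0.0:
            v = max(0.0, v + (a - a_crit) * dt)  # block decelerates at a_crit between pulses
            d += v * dt
    return d

# Synthetic rectangular acceleration pulse (3 m/s^2 from 0.5 s to 1.0 s):
dt = 0.01
pulse = [3.0 if 0.5 <= i * dt < 1.0 else 0.0 for i in range(300)]
slip = newmark_displacement(pulse, dt, a_crit=1.0)  # about 0.75 m for this pulse
```

If the critical acceleration is never exceeded, the predicted slip is zero, which is the method's built-in stability check.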

  14. Simplified method for measuring the response time of scram release electromagnet in a nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Patri, Sudheer; Mohana, M.; Kameswari, K.; Kumar, S. Suresh; Narmadha, S.; Vijayshree, R.; Meikandamurthy, C.; Venkatesan, A.; Palanisami, K.; Murthy, D. Thirugnana; Babu, B.; Prakash, V.; Rajan, K.K.


    Highlights: • An alternative method for estimating the electromagnet clutch release time. • A systematic approach to develop a computer-based measuring system. • Prototype tests on the measurement system. • Accuracy of the method is ±6% and repeatability error is within 2%. - Abstract: The delay time in electromagnet clutch release during a reactor trip (scram action) is an important safety parameter, having a bearing on plant safety during various design basis events. Generally, it is measured using the current decay characteristics of the electromagnet coil and its energising circuit. A simplified method of measuring the same in sodium-cooled fast reactors (SFRs) is proposed in this paper. The method utilises the position data of the control rod to estimate the delay time in electromagnet clutch release. A computer-based real-time measurement system for measuring the electromagnet clutch delay time is developed and qualified for retrofitting in the prototype fast breeder reactor. The stages involved in the development of the system are principle demonstration, experimental verification of hardware capabilities and prototype system testing. Tests on the prototype system have demonstrated satisfactory performance with the intended accuracy and repeatability.

  15. A simplified close range photogrammetry method for soil erosion assessment (United States)

    With the increased affordability of consumer grade cameras and the development of powerful image processing software, digital photogrammetry offers a competitive advantage as a tool for soil erosion estimation compared to other technologies. One bottleneck of digital photogrammetry is its dependency...

  16. Estimating inelastic heavy-particle-hydrogen collision data. I. Simplified model and application to potassium-hydrogen collisions (United States)

    Belyaev, Andrey K.; Yakovleva, Svetlana A.


    Aims: We derive a simplified model for estimating atomic data on inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular for the inelastic processes with high and moderate rate coefficients. It is known that these processes are important for non-LTE modeling of cool stellar atmospheres. Methods: Rate coefficients are evaluated using the derived method, which is a simplified version of a recently proposed approach based on the asymptotic method for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: The rate coefficients are found to be expressed via statistical probabilities and reduced rate coefficients. It turns out that the reduced rate coefficients for mutual neutralization and ion-pair formation processes depend on single electronic bound energies of an atom, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to potassium-hydrogen collisions. For the first time, rate coefficients are evaluated for inelastic processes in K+H and K++H- collisions for all transitions from ground states up to and including ionic states. Tables with calculated data are only available at the CDS via anonymous ftp.

  17. Development of simplified sampling methods for behavioural data in rabbit does

    Directory of Open Access Journals (Sweden)

    C. Alfonso-Carrillo


    The aim of this study was to compare the results of different simplified sampling methods for behavioural data against reference 24-h continuous records, in order to assess rabbit doe behaviours at different physiological stages (gestation and lactation) in animals housed in 2 types of cages (conventional and alternative). In total, we analysed 576 h of continuous video of 12 rabbit does at the end of lactation and the same females after weaning. The behavioural observations were studied using 3 independent categories of classification (location in the cage, posture and functional behaviours). Continuous behavioural recordings of 24 h were considered as the reference method to validate another 4 data collection sampling methods based on aggregated video recordings of different frequency and duration [regular short and long methods with 2.4 and 8 h of observation, respectively, and irregular (more frequent during the active period) short and long methods with 6 and 8 h of observation, respectively]. The current results showed that, independently of the housing system, the best method to reduce the total observation time required to assess rabbit does’ behaviour depends on the trait studied and the physiological stage of the does. In gestating does, irregular methods were not suitable for estimating behaviours of long duration such as lying, sitting, resting and grooming. However, in both physiological stages, regular methods were accurate for location behaviours, postures and functional behaviours of long duration. Instead, for the study of infrequent behaviours performed mainly during the dark period, where coefficients of variation were high, the irregular long method led to the lowest mean estimation errors.

  18. Simplified approaches to some nonoverlapping domain decomposition methods

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jinchao


    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method, and other simple technical tools include "local-global" and "global-local" techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method", and the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
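
    A minimal sketch of the additive Schwarz ("parallel subspace correction") preconditioner for a 1D model problem, assuming NumPy is available; the problem size, subdomain split, and overlap are arbitrary choices for the example, and no coarse space is included:

```python
import numpy as np

def poisson_matrix(n):
    """1D Laplacian (Dirichlet boundaries), n interior points."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def additive_schwarz(A, subdomains):
    """Additive Schwarz preconditioner M^{-1} = sum_i R_i^T A_i^{-1} R_i,
    where each R_i restricts to an (overlapping) index set and each
    A_i = R_i A R_i^T is the corresponding local Dirichlet problem."""
    n = A.shape[0]
    M_inv = np.zeros((n, n))
    for idx in subdomains:
        R = np.zeros((len(idx), n))
        R[np.arange(len(idx)), idx] = 1.0        # restriction operator
        A_loc = R @ A @ R.T                      # local Dirichlet matrix
        M_inv += R.T @ np.linalg.inv(A_loc) @ R  # prolongated local solve
    return M_inv

n = 20
A = poisson_matrix(n)
subs = [list(range(0, 12)), list(range(8, 20))]  # two subdomains, 4-point overlap
M_inv = additive_schwarz(A, subs)
kappa_A = np.linalg.cond(A)
kappa_MA = np.linalg.cond(M_inv @ A)  # preconditioning lowers the condition number
```

In practice the local inverses are of course applied as solves inside a Krylov iteration rather than formed explicitly; the dense form here only makes the subspace-correction structure visible.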

  19. Simplified calculation methods for all-vertical-piled wharf in offshore deep water (United States)

    Wang, Yuan-zhan; He, Lin-lin


    All-vertical-piled wharf is a kind of high-piled wharf, but it is extremely different from the traditional ones in some aspects, such as the structural property, bearing characteristics, failure mechanism, and static or dynamic calculation methods. In this paper, the finite element method (FEM) and theoretical analysis method are combined to analyze the structural property, bearing behavior and failure mode of the all-vertical-piled wharf in offshore deep water, and to establish simplified calculation methods determining the horizontal static ultimate bearing capacity and the dynamic response for the all-vertical-piled wharf. Firstly, the bearing capability and failure mechanism for all-vertical-piled wharf are studied by use of FEM, and a failure criterion is put forward for all-vertical-piled wharf based on the 'plastic hinge'. According to the failure criterion and the P-Y curve method, the simplified calculation method of the horizontal static ultimate bearing capacity for all-vertical-piled wharf is proposed, and it is verified that the simplified method is reasonable by comparison with the FEM. Secondly, the displacement dynamic magnification factor for the all-vertical-piled wharf under wave cyclic loads and ship impact loads is calculated by the FEM and by the theoretical formula based on the single degree of freedom (SDOF) system. The results obtained by the two methods are in good agreement with each other, and the simplified calculation method of the displacement dynamic magnification factor for all-vertical-piled wharf under dynamic loads is proposed. Then the simplified calculation method determining the dynamic response for the all-vertical-piled wharf is proposed in combination with the P-Y curve method. That is, the dynamic response of the structure can be obtained from the static calculation results of the P-Y curve method multiplied by the displacement dynamic magnification factor. The feasibility of the simplified dynamic response method is verified by
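
    The SDOF displacement dynamic magnification factor referred to above has the standard closed form for a damped oscillator under harmonic forcing; a small sketch (the damping ratio and frequency ratio values are illustrative, not taken from the paper):

```python
import math

def magnification_factor(beta, zeta):
    """Steady-state displacement dynamic magnification factor of a damped
    SDOF oscillator under harmonic forcing.
    beta: forcing frequency / natural frequency; zeta: damping ratio."""
    return 1.0 / math.sqrt((1.0 - beta ** 2) ** 2 + (2.0 * zeta * beta) ** 2)

# Dynamic displacement = static displacement x magnification factor, e.g.:
dmf_static = magnification_factor(0.0, 0.05)  # 1.0 in the quasi-static limit
dmf_res = magnification_factor(1.0, 0.05)     # 1/(2*zeta) = 10.0 at resonance
```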

  20. Agrotransformation of Phytophthora nicotianae: a simplified and optimized method

    Directory of Open Access Journals (Sweden)

    Ronaldo José Durigan Dalio

    Phytophthora nicotianae is a plant pathogen responsible for damaging crops and natural ecosystems worldwide. P. nicotianae is associated with citrus gummosis and citrus root rot, and the management of these diseases relies mainly on the certification of seedlings and the eradication of infected trees. However, little is known about the infection strategies of P. nicotianae interacting with citrus plants, which raises the need to examine its virulence at the molecular level. Here we show an optimized method to genetically manipulate P. nicotianae mycelium. We have transformed P. nicotianae with the expression cassette of the fluorescent protein DsRed. The optimized Agrobacterium-mediated transformation (AMT) method generated relatively high transformation efficiency. It also shows advantages over other methods: it is the simplest one, it does not require protoplasts or spores as targets, it is less expensive, and it does not require specific equipment. Transformation with DsRed did not impair the physiology, reproduction or virulence of the pathogen. The optimized AMT method presented here is useful for rapid, cost-effective and reliable transformation of P. nicotianae with any gene of interest.

  1. Simplified, Accurate Method for Antibiotic Assay of Clinical Specimens (United States)

    Bennett, John V.; Brodie, Jean L.; Benner, Ernest J.; Kirby, William M. M.


    Large glass plates are used for this modified agar-well diffusion assay method, allowing up to 81 replications on a single plate. With a specially designed agar punch, it is possible to prepare the small agar wells very quickly. The saving in serum resulting from fewer replications of standards with the large plates, and the small volume of the agar wells, makes it economically feasible to use pooled human serum for the standard antibiotic solutions. Methods are described for preparing the standard solutions, and for providing controls for the deterioration of standards and unknowns. Procedures for preparing and maintaining the commonly used assay organisms are presented. Serum specimens are tested directly rather than diluting them to a narrow range of antibiotic concentrations. This is possible because of a procedure for calculations that recognizes the curvilinear relationship between zone sizes and antibiotic concentrations. Adaptation of this method to a number of the commonly used antibiotics is described. With this method, it has been possible to test large numbers of clinical specimens in a minimal time, and with accuracy consistently better than 10%. PMID:4959982

  2. Seismic analysis of long tunnels: A review of simplified and unified methods

    Directory of Open Access Journals (Sweden)

    Haitao Yu


    Seismic analysis of long tunnels is important for safety evaluation of the tunnel structure during earthquakes. Simplified models of long tunnels are commonly adopted in seismic design by practitioners, in which the tunnel is usually assumed as a beam supported by the ground. These models can be conveniently used to obtain the overall response of the tunnel structure subjected to seismic loading. However, simplified methods are limited due to the assumptions that need to be made to reach the solution, e.g. shield tunnels are assembled with segments and bolts to form a lining ring and such structural details may not be included in the simplified model. In most cases, the design will require a numerical method that does not have the shortcomings of the analytical solutions, as it can consider the structural details, non-linear behavior, etc. Furthermore, long tunnels have significant length and pass through different strata. All of these would require large-scale seismic analysis of long tunnels with three-dimensional models, which is difficult due to the lack of available computing power. This paper introduces two types of methods for seismic analysis of long tunnels, namely simplified and unified methods. Several models, including the mass-spring-beam model, and the beam-spring model and its analytical solution are presented as examples of the simplified method. The unified method is based on a multiscale framework for long tunnels, with coarse and refined finite element meshes, or with the discrete element method and the finite difference method to compute the overall seismic response of the tunnel while including detailed dynamic response at positions of potential damage or of interest. A bridging scale term is introduced in the framework so that compatibility of dynamic behavior between the macro- and meso-scale subdomains is enforced. Examples are presented to demonstrate the applicability of the simplified and the unified methods.

  3. Simplified methods for the prolonged treatment of fish diseases (United States)

    Fish, F.F.


    The prevention or control of epidemics of fish diseases by applying a disinfecting solution in a uniform concentration directly to the water supply of a fish pond or trough for a definite period of time has been exceedingly slow in development. In so far as can be determined, the original idea should be credited to Marsh and Robinson (1910). In their work on the control of algae in fish ponds by the continuous application of dilute copper sulphate solution, administered to the inflowing water supply by means of a floating syphon, they suggested this method as a possibility in the treatment of fish diseases. Following their work, this commendable idea seems to have remained quite dormant and apparently forgotten until Hess (1930) revived it twenty or more years later. This worker found that a prolonged immersion in a dilute disinfecting bath was more efficacious in removing fluke parasites from goldfish than was the customary short "hand dip" method. Kingsbury and Embody (1932) later adapted the idea of a prolonged treatment to running water by the use of a float valve for maintaining a constant level in a reservoir, resulting in a constant flow to the pond or trough to be treated. Shortly thereafter, Fish (1933) modified the floating syphon of Marsh and Robinson, as it was a simpler apparatus than that of Kingsbury and Embody.

  4. Circuit Distortion Analysis Based on the Simplified Newton's Method

    Directory of Open Access Journals (Sweden)

    M. M. Gourary


    A new computational technique for distortion analysis of nonlinear circuits is presented. The new technique is applicable to the same class of circuits, namely, weakly nonlinear and time-varying circuits, as the periodic Volterra series. However, unlike the Volterra series, it does not require the computation of the second and third derivatives of device models. The new method is computationally efficient compared with a complete multitone nonlinear steady-state analysis such as harmonic balance. Moreover, the new technique naturally allows computing and characterizing the contributions of individual circuit components to the overall circuit distortion. This paper presents the theory of the new technique, a discussion of the numerical aspects, and numerical results.

  5. Development of Simplified Ultrasonic CT System for Imaging of Weld Metal and Its Comparison with TOFD Method (United States)

    Kim, Kyung-Cho; Fukuhara, Hiroaki; Yamawaki, Hisashi

    In this paper, a simplified ultrasonic CT system that uses information from three directions (90°, +45° and -45° relative to the inspection plane) is developed as a new measurement method to estimate structural changes in weld metal. The simplified ultrasonic CT system has two merits. Firstly, the measurement time is very short compared with general CT. Secondly, it can sensitively detect very small defects oriented vertically or obliquely to the inspection plane, because the obtained image is not a C-scan image but a CT image reconstructed from the three directions. These merits make the method very effective for evaluating material condition. To assess the performance of the simplified ultrasonic CT, CT images obtained from several specimens with simple defects were compared with D-scan images obtained by the TOFD (Time of Flight Diffraction) method. Simple defects can be seen more clearly with the proposed method. Experimental results on several kinds of specimens, including electron-beam-welded joints and fatigue cracks, showed that the obtained C-scan or CT images have better resolution than the D-scan images from the TOFD method and resemble the actual defect shapes.

  6. A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities. (United States)

    Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.

    This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…

  7. Estimating the Strength of Superrotation with a Simplified Shallow-Water Model (United States)

    Wang, H.; Wordsworth, R. D.


    Synchronously rotating close-in exoplanets, based on three-dimensional general circulation models, are usually expected to exhibit strong eastward equatorial jets (equatorial superrotation). The strength of equatorial superrotation greatly influences important observables, such as the day-night temperature difference and hottest region phase shift from the substellar point. Yet the strength of equatorial jets cannot be quantitatively predicted by current theories. We try to estimate the strength of superrotation with a simplified analytical model, which is based on a one-and-a-half-layer shallow water model. In our model, an active layer is governed by the shallow water equation, and a quiescent layer exchanges mass and momentum with the active layer. This shallow water model, originally proposed by Shell and Held (2004) to study superrotation, allows us to test different approximations that aid our estimation of the jet speed. In addition, by varying the interaction between the active layer and the immobile layer, we study how the lower atmosphere influences the dynamics and day-night gradient in the upper atmosphere and investigate the possibility of gathering information on the lower atmosphere by analyzing the observables of the upper atmosphere. We also compare our shallow-water model with an idealized three-dimensional general circulation model to assess the limitations of our model and theory.

  8. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback (United States)

    Koven, C. D.; Schuur, E. A. G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A. H.; Marchenko, S. S.; McGuire, A. D.; Natali, S. M.; Nicolsky, D. J.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M.


    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The simplified
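
    The pool-based decomposition trajectory at the heart of the approach can be sketched as independent first-order decays after thaw; the pool sizes and rate constants below are invented for illustration, not the horizon-specific parameters fitted to the incubation data:

```python
import math

def pool_carbon(c0, k, t):
    """Single-pool first-order decay after thaw: C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k * t)

def thawed_c_loss(pools, t_years):
    """Total C lost from thawed soil after t_years, summed over pools.
    pools: list of (initial stock [Pg C], rate constant [1/yr]) tuples."""
    return sum(c0 - pool_carbon(c0, k, t_years) for c0, k in pools)

# Hypothetical fast/slow/passive split of ~1000 Pg C of thawed carbon:
pools = [(50.0, 0.5), (400.0, 0.01), (550.0, 0.0001)]
loss_90yr = thawed_c_loss(pools, 90.0)  # roughly 290 Pg C for these inputs
```

On a century time scale the fast pool is exhausted, the slow pool dominates the loss, and the passive pool barely contributes, which is the qualitative behavior the three-pool fit is designed to capture.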

  10. Calculation of renal differential function following renal transplant: retrospective validation of a simplified method. (United States)

    Mansouri, Ali M; Vejdani, Kaveh; Bastani, Bahar; Nguyen, Nghi C


    The estimation of differential function (DF) in post-renal transplant patients (PRTPs) is challenging because of the different distances of the native kidneys (NKs) and the transplant kidney (TK) to the gamma camera, and because current commercial software allows evaluation of only 2 kidneys instead of 3. We retrospectively validated a simplified method (SM) to process renal scans and hypothesized that it is comparable with the reference method (RM). Twelve 99mTc-MAG3 renal scintigraphies of 10 PRTPs were performed on a dual-head gamma camera. The RM was a 2-step process, with the left and right NKs being compared with the TK separately. The SM was a 1-step process combining both NKs together. The DF estimates were based on geometric means in both methods. Statistical evaluation included linear correlation and Bland-Altman analysis. The RM and SM showed DF of 78% ± 25% versus 79% ± 27% for the TK and 22% ± 25% versus 21% ± 27% for the NKs (P = 0.3). There was excellent correlation between SM and RM measurements (r = 0.99, P < 0.001). The Bland-Altman plot demonstrated a mean difference of 1.2 ± 3.8 with 95% limits of agreement of -6.2 to +8.5 for the TK and -1.2 ± 3.8 with 95% limits of agreement of -8.5 to +6.2 for the NKs. Only 1 (8%) of 12 scans showed a difference slightly beyond the 95% limits, indicating good agreement between SM and RM. The SM offers a simple way to evaluate renal DF in PRTPs and shows results comparable with the RM. It may have great potential in clinical practice; however, larger studies are needed to verify and further extend the results of this study.
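
    The geometric-mean calculation underlying both methods is simple; a sketch of the 1-step simplified split with hypothetical count pairs (not patient data):

```python
import math

def geometric_mean(anterior, posterior):
    """Depth-corrected counts from conjugate (anterior/posterior) views."""
    return math.sqrt(anterior * posterior)

def differential_function(tk_counts, nk_counts):
    """1-step simplified split: transplant kidney (TK) versus both native
    kidneys (NKs) combined; each argument is an (anterior, posterior) pair
    of background-corrected counts."""
    gm_tk = geometric_mean(*tk_counts)
    gm_nk = geometric_mean(*nk_counts)
    total = gm_tk + gm_nk
    return 100.0 * gm_tk / total, 100.0 * gm_nk / total

# Hypothetical count pairs for the TK and the combined NKs:
df_tk, df_nk = differential_function((9000, 6400), (1600, 900))
```

The geometric mean of conjugate views compensates for the kidneys sitting at different depths, which is exactly the difficulty the abstract describes for transplant patients.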

  11. A method to create simplified versions of existing habitat suitability index (HSI) models (United States)

    Wakeley, James S.


    The habitat evaluation procedures (HEP), developed by the US Fish and Wildlife Service, are widely used in the United States to determine the impacts of major construction projects on fish and wildlife habitats. HEP relies heavily on habitat suitability index (HSI) models that use measurements of important habitat characteristics to rate habitat quality for a species on a scale of 0 (unsuitable) to 1.0 (optimal). This report describes a method to simplify existing HSI models to reduce the time and expense involved in sampling habitat variables. Simplified models for three species produced HSI values within 0.2 of those predicted by the original models 90% of the time. Simplified models are particularly useful for rapid habitat inventories and evaluations, wildlife management, and impact assessments in extensive areas or with limited time and personnel.

  12. A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro (United States)

    Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman


    Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.

  13. A Simplified Method for Stationary Heat Transfer of a Hollow Core Concrete Slab Used for TABS

    DEFF Research Database (Denmark)

    Yu, Tao; Heiselberg, Per Kvols; Lei, Bo


Thermally activated building systems (TABS) are an energy-efficient way to improve indoor thermal comfort. Owing to the complicated structure, heat transfer prediction for a hollow core concrete slab used for TABS is difficult. This paper proposes a simplified method using equivalent thermal r...

  14. 77 FR 73965 - Allocation of Costs Under the Simplified Methods; Hearing (United States)


    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 RIN 1545-BG07 Allocation of Costs Under the Simplified Methods... provide guidance on allocating costs to certain property produced by the taxpayer or acquired by the...

  15. Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing. (United States)

Kim, YoungHwan; Reigeluth, Charles M.

    The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…

  16. A simplified model to estimate thermal resistance between carbon nanotube and sample in scanning thermal microscopy (United States)

    Nazarenko, Maxim; Rosamond, Mark C.; Gallant, Andrew J.; Kolosov, Oleg V.; Dubrovskii, Vladimir G.; Zeze, Dagou A.


Scanning thermal microscopy (SThM) is an attractive technique for nanoscale thermal measurements. Multiwalled carbon nanotubes (MWCNTs) can be used to enhance an SThM probe in order to drastically increase spatial resolution while retaining the required thermal sensitivity. However, an accurate prediction of the thermal resistance at the interface between the MWCNT-enhanced probe tip and a sample under study is essential for the accurate interpretation of experimental measurements. Unfortunately, there is very little literature on Kapitza interfacial resistance involving carbon nanotubes in the SThM configuration. We propose a model for heat conductance through the interface between the MWCNT tip and the sample, which estimates the thermal resistance based on phonon and geometrical properties of the MWCNT and the sample, without neglecting the diamond-like carbon layer covering the MWCNT tip. The model considers acoustic phonons as the main heat carriers and accounts for their scattering at the interface based on a fundamental quantum mechanical approach. The predicted value of the thermal resistance is then compared with experimental data available in the literature. Theoretical predictions and experimental results are found to be of the same order of magnitude, suggesting a simplified yet realistic model for approximating the thermal resistance between a carbon nanotube and a sample in SThM, although low-temperature measurements are needed to achieve a better match between theory and experiment. Finally, several possible avenues are outlined to achieve more accurate predictions and to generalize the model.

  17. Simplified model for estimation of lightning induced transient transfer through distribution transformer

    Energy Technology Data Exchange (ETDEWEB)

    Manyahi, M.J. [University of Dar es Salaam (Tanzania). Faculty of Electrical and Computer Systems Engineering; Uppsala University (Sweden). The Angstrom Laboratory, Division for Electricity and Lightning Research; Thottappillil, R. [Uppsala University (Sweden). The Angstrom Laboratory, Division for Electricity and Lightning Research


In this work a simplified procedure for formulating a distribution transformer model for studying its response to lightning-caused transients is presented. Simplification is achieved by consolidating the various steps of a model formulation based on terminal measurements of driving-point and transfer short-circuit admittance parameters. The procedure begins with the determination of the nodal admittance matrix of the transformer by network analyser measurements at the transformer terminals. Thereafter, the elements of the nodal admittance matrix are simultaneously approximated by rational functions consisting of real as well as complex-conjugate poles and zeros, so that the admittance functions can be realised as RLCG networks. Finally, the equivalent terminal model of the transformer is created as a π-network with the above RLCG networks as its branches. The model can be used in electromagnetic transient or circuit simulation programs, in either the time or the frequency domain, to estimate the transfer of common-mode transients, such as those caused by lightning, across a distribution-class transformer. The validity of the model is verified by comparing the model predictions with experimentally measured outputs for different types of common-mode surge waveform as inputs, including a chopped waveform that simulates the operation of surge arresters. In addition, it has been verified that the admittance functions measured directly with the network analyser closely match the admittance functions derived from time-domain impulse measurements up to 3 MHz, higher than achieved in previous models, which improves the model's capability of simulating fast transients.
The model can be used in power quality studies, to estimate the transient voltages appearing at the low voltage customer installation due to the induced lightning surges on
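The pole-residue form in which such fitted admittance functions are commonly expressed can be evaluated as below. The poles and residues here are illustrative stable values, not measured transformer data, and the code shows only the evaluation step, not the fitting or RLCG synthesis.

```python
import math

def admittance(freq_hz, d, poles, residues):
    """Evaluate a pole-residue rational admittance model
    Y(s) = d + sum_i r_i / (s - p_i) at s = j*2*pi*f, the form
    typically realised as RLCG branch networks. Values illustrative."""
    s = 2j * math.pi * freq_hz
    return d + sum(r / (s - p) for p, r in zip(poles, residues))

# One real pole plus a stable complex-conjugate pair (illustrative)
poles = [-1e4, -2e5 + 1e6j, -2e5 - 1e6j]
residues = [5e2, 1e4 + 2e3j, 1e4 - 2e3j]
y_dc = admittance(0.0, 1e-4, poles, residues)   # DC admittance is real
```

Keeping residues of a conjugate pole pair conjugate to each other guarantees a real-valued response at DC, which is why fitting routines enforce that pairing.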

  18. Simplified method for the biological assessment of the quality of fresh and slightly brackish water

    Energy Technology Data Exchange (ETDEWEB)

    Dresscher, T.G.N.; van der Mark, H.


A simplified method for the biological assessment of the saprobic quality of surface water is described. The method has the advantage that it is unnecessary to determine accurately the species of the organisms present in the sample, as must be done when existing saprobic systems are used. The method cannot be used, however, when a single species is present in great abundance, a situation in which a very strong environmental disturbance must be assumed.

  19. Simplified performance estimation of ISM-band, OFDM-based WSNs according to the sensitivity/SINAD parameters

    Directory of Open Access Journals (Sweden)

    Pierre van Rhyn


A novel method is proposed to estimate committed information rate (CIR) variations in typical orthogonal frequency division multiplexing (OFDM) wireless local area networks (WLANs) that are applied in wireless sensor networks (WSNs) and operate within the industrial, scientific and medical (ISM) frequency bands. The method is based on the observation of a phenomenon whose significance has not previously been recognized or documented, termed here the service level differential zone (SLDZ). This method, which conforms to the ITU-T Y.1564 test methodology, provides the means to set a CIR reference for IEEE 802.11a/g/n OFDM systems in terms of committed throughput bandwidth between a test node and an access point (AP) at a specific range. An analytical approach is presented to determine the relationship between the maximum operating range (in metres) of a wireless sensor network for a specific committed throughput bandwidth and its link budget (in dB). The most significant contributions of this paper are the analytical tools to determine wireless network capabilities, variations and performance in a simplified manner, without requiring specialized measurement equipment. With these it becomes possible for industrial technicians and engineers (who are not necessarily information technology (IT) network experts) to field-analyze OFDM WLANs and qualify their performance in terms of Y.1564-specified service level agreement (SLA) requirements, as well as in terms of the widely accepted sensitivity/SINAD parameters.
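The paper's range/link-budget relationship is its own analytical result; as a hedged illustration of the general idea, the textbook free-space path loss model relates a link budget to a maximum range as follows. The 90 dB budget and 2.4 GHz carrier are illustrative numbers, not values from the paper.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

def max_range_m(link_budget_db, freq_hz):
    """Distance at which free-space path loss exactly consumes the budget."""
    return C / (4.0 * math.pi * freq_hz) * 10.0 ** (link_budget_db / 20.0)

r = max_range_m(90.0, 2.4e9)   # 90 dB budget at 2.4 GHz (illustrative)
```

Real ISM-band deployments see additional fading and interference, which is precisely the gap the SLDZ-based method aims to characterize empirically.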

  20. Update and Improve Subsection NH –– Alternative Simplified Creep-Fatigue Design Methods

    Energy Technology Data Exchange (ETDEWEB)

    Tai Asayama


This report describes the results of the investigation of Task 10 of the DOE/ASME Materials NGNP/Generation IV Project, based on a contract between ASME Standards Technology, LLC (ASME ST-LLC) and the Japan Atomic Energy Agency (JAEA). Task 10 is to update and improve Subsection NH -- Alternative Simplified Creep-Fatigue Design Methods. Five newly proposed, promising creep-fatigue evaluation methods were investigated: (1) the modified ductility exhaustion method, (2) the strain range separation method, (3) the approach for pressure vessel application, (4) the hybrid method of time fraction and ductility exhaustion, and (5) the simplified model test approach. The outlines of these methods are presented first, and their predictability is demonstrated using the creep-fatigue data collected in previous Tasks 3 and 5. All the methods (except the simplified model test approach, which is not yet ready for application) predicted the experimental results fairly accurately. On the other hand, the predicted creep-fatigue lives in long-term regions showed considerable differences among the methodologies; these differences stem from the concepts on which each method is based. All the new methods investigated in this report have advantages over the currently employed time fraction rule and offer technical insights that merit serious consideration in the improvement of creep-fatigue evaluation procedures. The main points of the modified ductility exhaustion method, the strain range separation method, the approach for pressure vessel application and the hybrid method can be reflected in the improvement of the current time fraction rule. The simplified model test approach would offer entirely new advantages, including robustness and simplicity, which are definitely attractive, but this approach is yet to be validated for implementation at this point. Therefore, this report recommends the following two steps as a course of improvement of NH based on newly proposed creep-fatigue evaluation
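The currently employed time fraction rule mentioned above sums fatigue and creep damage linearly and checks the pair against a bilinear interaction envelope. The sketch below illustrates that bookkeeping; the allowable cycles, rupture times, and the (0.3, 0.3) envelope intersection are illustrative placeholders, since the actual values are material- and code-specific.

```python
def creep_fatigue_damage(cycle_counts, allowable_cycles,
                         hold_times_h, rupture_times_h):
    """Time fraction rule: fatigue damage Df = sum(n/Nd),
    creep damage Dc = sum(t/Td)."""
    df = sum(n / nd for n, nd in zip(cycle_counts, allowable_cycles))
    dc = sum(t / td for t, td in zip(hold_times_h, rupture_times_h))
    return df, dc

def within_envelope(df, dc, intersection=(0.3, 0.3)):
    """Check (Df, Dc) against a bilinear envelope through (0,1),
    the intersection point, and (1,0). Intersection is illustrative."""
    xi, yi = intersection
    if df <= xi:
        limit = 1.0 + (yi - 1.0) / xi * df        # segment (0,1)-(xi,yi)
    else:
        limit = yi * (1.0 - df) / (1.0 - xi)      # segment (xi,yi)-(1,0)
    return dc <= limit

# 500 cycles against a 10,000-cycle allowable, 2,000 h hold against 100,000 h rupture
df, dc = creep_fatigue_damage([500], [10000], [2000.0], [100000.0])
```

The newly proposed methods replace the damage terms, not this accounting structure, which is why their insights can be folded back into the time fraction rule.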

  1. Optical Mass Displacement Tracking: A simplified field calibration method for the electro-mechanical seismometer. (United States)

    Burk, D. R.; Mackey, K. G.; Hartse, H. E.


We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state-of-the-art equipment; such networks therefore generally operate a modern, low-cost digitizer paired with an existing electro-mechanical seismometer. These systems are typically poorly calibrated. The station calibration is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. It is therefore necessary to calibrate the station channel as a complete system, taking into account all components from the instrument to the amplifier to the digitizer. Routine calibrations at the smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open-source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into the Seismic Analysis Code (SAC) poles-and-zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration.
This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in
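The per-frequency sensitivity comparison above follows from basic sinusoidal motion: a displacement amplitude X at frequency f corresponds to a velocity amplitude 2*pi*f*X, so dividing channel voltage by that quantity yields volts per unit velocity. The sweep values below are made-up numbers for illustration.

```python
import math

def velocity_sensitivity(v_amp, disp_amp_m, freq_hz):
    """Sensitivity in V/(m/s) at one frequency: for sinusoidal motion the
    velocity amplitude is 2*pi*f times the displacement amplitude."""
    return v_amp / (2.0 * math.pi * freq_hz * disp_amp_m)

# Sweep points: (frequency Hz, channel voltage V, laser displacement m); illustrative
points = [(0.5, 0.12, 2.0e-4), (1.0, 0.50, 4.0e-4), (5.0, 0.90, 1.5e-4)]
curve = [(f, velocity_sensitivity(v, x, f)) for f, v, x in points]
```

Fitting the standard forced-motion response to such a point set is what yields the poles-and-zeros representation mentioned in the abstract.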

  2. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao


Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on artificial experience is still applied at present, because it is difficult to determine in an integrated manner the relational data among the forming shape, the processing path, and the process parameters used to drive automated equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method is crucial to the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain, was presented. Compared with the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy while substantially reducing calculation time. The application of the simplified deformation simulation method was then further explored for the case of multiple rolling loading paths, and it was also utilized to calculate the local line rolling forming of a typical complex curvature plate of a ship. The research findings indicated that the simplified deformation simulation method is an effective tool for rapidly obtaining the relationships between the forming shape, the processing path, and the process parameters.

  3. 3D Bearing Capacity of Structured Cells Supported on Cohesive Soil: Simplified Analysis Method

    Directory of Open Access Journals (Sweden)

    Martínez-Galván Sergio Antonio


In this paper a simplified analysis method is proposed to compute the bearing capacity of structured cell foundations subjected to vertical loading and supported on soft cohesive soil. A structured cell comprises a top concrete slab structurally connected to concrete external walls that enclose the natural soil. Unlike a box foundation, it does not include a bottom slab, and hence the soil within the walls becomes an important component of the structured cell. The simplified method considers the three-dimensional geometry of the cell, the undrained shear strength of cohesive soils, and the structural continuity between the top concrete slab and the surrounding walls, along the walls themselves, and at the walls' structural joints. The method was developed from the results of numerical-parametric analyses, from which it was found that structured cells fail according to a punching-type mechanism.

  4. Simplified methods of topical fluoride administration: effects in individuals with hyposalivation. (United States)

    Gabre, Pia; Moberg Sköld, Ulla; Birkhed, Dowen


The aim was to compare fluoride (F) levels in individuals with normal salivary secretion and hyposalivation in connection with their use of F solutions and toothpaste. Seven individuals with normal salivation and nine with hyposalivation rinsed with 0.2% NaF solution for 1 minute. In addition, the individuals with hyposalivation performed the following: (i) 0.2% NaF rinsing for 20 seconds, (ii) rubbing the oral mucosa with a swab soaked in 0.2% NaF solution, and (iii) brushing with 5,000 ppm F (1.1% NaF) toothpaste. Subjects with hyposalivation reached approximately five times higher peak salivary F concentrations after 1 minute of rinsing with the F solution, as well as higher area under the curve (AUC) values. The simplified methods exhibited the same AUC values as 1 minute of rinsing, whereas brushing with 5,000 ppm F toothpaste resulted in higher AUC values than the simplified methods. F concentrations reached higher levels in individuals with hyposalivation than in those with normal salivation, and the simplified methods tested showed effects similar to those of the conventional methods.
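The AUC comparisons above reduce to a trapezoidal integration over the salivary clearance curve. The sampling times and concentrations below are made-up illustrative values, chosen only to show slower clearance under hyposalivation.

```python
def auc_trapezoid(times_min, concs):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times_min, concs),
                                             zip(times_min[1:], concs[1:])))

# Illustrative salivary F clearance after a 1-minute rinse (minutes, ppm F)
times = [0, 1, 5, 15, 30, 60]
hypo = [0, 250, 90, 30, 12, 5]     # hyposalivation: slower clearance
norm = [0, 50, 12, 4, 1.5, 0.5]    # normal salivation: rapid clearance
```

Lower salivary flow both raises the peak and slows clearance, which is why the hyposalivation curve accumulates a much larger AUC.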

  5. Demonstration of Simplified Field Test Methods for the Measurement of Diesel Particulate Matter (PM) from Military Diesel Engines (United States)


The engine had an EPA certification (JDX-NRCI-06-37) and a CARB certification (U-R-004-0269). [The remainder of the indexed excerpt consists of table and figure captions comparing diesel particulate matter (PM) measurements from the Simplified Field Test Method (SFTM) with those from the Federal Reference Method (FRM).]

  6. Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel Part II: Plate bending test and proposal of a simplified evaluation method

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Masanori, E-mail:; Takaya, Shigeru, E-mail:


Highlights: • A creep-fatigue evaluation method for weld joints of Mod.9Cr-1Mo steel is proposed. • A simplified evaluation method is also proposed for codification. • Both proposed evaluation methods were validated by the plate bending test. • For codification, the local stress and strain behavior was analyzed. - Abstract: In the present study, to develop an evaluation procedure and design rules for Mod.9Cr-1Mo steel weld joints, a method for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints was proposed based on finite element analysis (FEA) and a series of cyclic plate bending tests of longitudinally and horizontally seamed plates. The strain concentration and redistribution behaviors were evaluated, and the failure cycles were estimated by FEA, considering the test conditions and the metallurgical discontinuities in the weld joints. Inelastic FEA models consisting of the base metal, the heat-affected zone and the weld metal were employed to estimate the elastic follow-up behavior caused by the metallurgical discontinuities. The elastic follow-up factors, determined by comparing the elastic and inelastic FEA results, were found to be less than 1.5. Based on the elastic follow-up factors obtained via inelastic FEA, a simplified technique using elastic FEA was proposed for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints. The creep-fatigue lives obtained from the plate bending tests were compared with those estimated from the results of inelastic FEA and by the simplified evaluation method.

  7. A randomised trial on simplified and conventional methods for complete denture fabrication: masticatory performance and ability. (United States)

    Cunha, T R; Della Vecchia, M P; Regis, R R; Ribeiro, A B; Muglia, V A; Mestriner, W; de Souza, R F


To compare a simplified method with a conventional protocol for complete denture fabrication regarding masticatory performance and ability. The sample comprised edentulous patients requesting treatment with maxillary and mandibular complete dentures. Participants were randomly divided into two groups (n=21 each): Group S, which received dentures fabricated by a simplified method, and Group C, which received conventionally fabricated dentures. Three months after insertion, masticatory performance was evaluated by a colorimetric assay based on chewing two capsules as test food for twenty and forty cycles. Masticatory ability was assessed by a questionnaire with binary answers and a single question answered on a 0-10 scale. A third group (DN), formed by seventeen dentate volunteers, served as an external comparator. Groups were compared by statistical tests suitable for the data distribution (α=0.05). Thirty-nine participants were assessed at three months (twenty from Group C and nineteen from Group S). Groups C and S presented similar masticatory performance, corresponding to approximately 30% of that of Group DN. Results for masticatory ability were similar between S and C regardless of the assessment method, although an isolated questionnaire item showed more favourable results for Group S. The simplified method for complete denture fabrication is able to restore masticatory function to a level comparable to a conventional protocol, both physiologically and according to patients' perceptions. Although masticatory function is impaired by the loss of natural teeth and dentures can restore only a fraction of that function, patients can benefit from a simplified protocol for complete denture fabrication to the same extent as they would from conventional techniques.

  8. Estimating Daily Evapotranspiration From Remotely Sensed Instantaneous Observations With Simplified Derivations of a Theoretical Model (United States)

    Tang, Ronglin; Li, Zhao-Liang


Surface evapotranspiration (ET) is one of the key components of the global hydrological cycle and the Earth's energy budget. This paper derives a theoretical relationship between daily and instantaneous ET, expressed as a product of multiple fractions, through a mathematical derivation of the physics-based Penman-Monteith equation, and further develops five methods for converting remotely sensed instantaneous ET to daily values, one of which is equivalent to the conventional constant evaporative fraction (EF) method. The five methods are then evaluated and intercompared using long-term ground-based eddy covariance measurements of half-hourly latent heat flux (LE) and three groups of Moderate Resolution Imaging Spectroradiometer-based instantaneous LE data sets collected from April 2009 to late October 2011 at the Yucheng station. Overall, the constant decoupling factor (Ω) method, the constant surface resistance (Rc) method, and the constant ratio of surface resistance to aerodynamic resistance (Rc/Ra) method produced daily LE estimates in reasonably good agreement with the ground-based eddy covariance measurements, whereas the constant EF method and the constant Priestley-Taylor parameter (α) method underestimated daily LE with larger biases and root-mean-square errors. The former three methods have a more solid physical foundation and can effectively capture the effect of temporally variable meteorological factors on the diurnal pattern of surface ET. They provide good alternatives to the commonly applied methods for converting remotely sensed instantaneous ET to daily values.
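The constant-EF baseline among the five methods can be sketched in a few lines: the evaporative fraction observed at satellite overpass, EF = LE / (Rn - G), is assumed to hold all day and is applied to the daily available energy. The flux values below are illustrative, not Yucheng data.

```python
def daily_le_constant_ef(le_inst, rn_inst, g_inst, rn_daily, g_daily):
    """Constant-EF upscaling: assume the overpass-time evaporative fraction
    EF = LE / (Rn - G) holds for the whole day, then scale the daily
    available energy (Rn - G) by it."""
    ef = le_inst / (rn_inst - g_inst)
    return ef * (rn_daily - g_daily)

# Illustrative fluxes in W/m2: instantaneous at overpass, then daily means
le_day = daily_le_constant_ef(le_inst=300.0, rn_inst=500.0, g_inst=50.0,
                              rn_daily=150.0, g_daily=5.0)
```

The abstract's finding is precisely that this assumption misses diurnal meteorological variability, which the constant-Ω and constant-resistance variants handle better.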

  9. A Coupled Remote Sensing and Simplified Surface Energy Balance Approach to Estimate Actual Evapotranspiration from Irrigated Fields

    Directory of Open Access Journals (Sweden)

    Assefa M. Melesse


Accurate crop performance monitoring and production estimation are critical for timely assessment of the food balance of several countries in the world. Since 2001, the Famine Early Warning Systems Network (FEWS NET) has been monitoring crop performance and relative production using satellite-derived data and simulation models in Africa, Central America, and Afghanistan, where ground-based monitoring is limited because of a scarcity of weather stations. The commonly used crop monitoring models are based on a crop water-balance algorithm with inputs from satellite-derived rainfall estimates. These models are useful for monitoring rainfed agriculture, but they are ineffective for irrigated areas. This study focused on Afghanistan, where over 80 percent of agricultural production comes from irrigated lands. We developed and implemented a Simplified Surface Energy Balance (SSEB) model to monitor and assess the performance of irrigated agriculture in Afghanistan using a combination of 1-km thermal data and 250-m Normalized Difference Vegetation Index (NDVI) data, both from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. We estimated seasonal actual evapotranspiration (ETa) over a period of six years (2000-2005) for two major irrigated river basins in Afghanistan, the Kabul and the Helmand, by analyzing up to 19 cloud-free thermal and NDVI images from each year. These seasonal ETa estimates were used as relative indicators of year-to-year production magnitude differences. The temporal water-use pattern of the two irrigated basins was indicative of the cropping patterns specific to each region. Our results were comparable to field reports and to estimates based on watershed-wide crop water-balance model results. For example, both methods found that the 2003 seasonal ETa was the highest of all six years. The method also captured water management scenarios where a unique year-to-year variability was identified in addition to water-use differences between
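The core of a simplified surface energy balance of this kind is an ET fraction computed from a pixel's land surface temperature relative to hot (no-ET) and cold (maximum-ET) anchor temperatures. The sketch below follows that standard formulation; the temperatures, reference ET, and clamp range are illustrative assumptions, not the study's calibration.

```python
def sseb_eta(ts, t_hot, t_cold, et_ref):
    """Simplified surface energy balance sketch: the ET fraction is the
    pixel temperature's position between the hot (no ET) and cold
    (maximum ET) anchors, scaled by a reference ET (mm/day)."""
    etf = (t_hot - ts) / (t_hot - t_cold)
    etf = min(max(etf, 0.0), 1.05)   # clamp to a plausible range (assumption)
    return etf * et_ref

# Illustrative: surface temperature in kelvin, reference ET of 6 mm/day
eta = sseb_eta(ts=305.0, t_hot=320.0, t_cold=295.0, et_ref=6.0)
```

A cooler pixel sits closer to the cold anchor and therefore evaporates more, which is what makes thermal data sensitive to irrigation performance.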

  10. A simplified Excel® algorithm for estimating the least limiting water range of soils

    Directory of Open Access Journals (Sweden)

    Leão Tairone Paiva


The least limiting water range (LLWR) of soils has been employed as a methodological approach for the evaluation of soil physical quality in different agricultural systems, including forestry, grasslands and major crops. However, the absence of a simplified methodology for the quantification of the LLWR has hampered its popularization among researchers and soil managers. Taking this into account, this work proposes and describes a simplified algorithm developed in Excel® software for quantification of the LLWR, including the calculation of the critical bulk density, at which the LLWR becomes zero. Despite the simplicity of the procedures and of the numerical optimization techniques used, the nonlinear regression produced reliable results when compared with those found in the literature.
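The LLWR computation itself can be sketched with the conventional limits: the upper bound is the drier of field capacity and the water content at 10% air-filled porosity, and the lower bound is the wetter of the wilting point and the water content at which penetration resistance becomes limiting. The water contents and bulk densities below are illustrative, and the fitted retention and resistance curves of the actual algorithm are replaced by direct inputs.

```python
def llwr(theta_fc, theta_wp, theta_pr, bulk_density, particle_density=2.65):
    """Least limiting water range (m3/m3). Upper limit: min of field
    capacity and water content at 10% air-filled porosity; lower limit:
    max of wilting point and the water content at limiting penetration
    resistance (theta_pr). Inputs are illustrative assumptions."""
    porosity = 1.0 - bulk_density / particle_density
    theta_afp = porosity - 0.10        # 10% air-filled porosity criterion
    upper = min(theta_fc, theta_afp)
    lower = max(theta_wp, theta_pr)
    return max(upper - lower, 0.0)     # zero at the critical bulk density

r = llwr(theta_fc=0.32, theta_wp=0.15, theta_pr=0.18, bulk_density=1.4)
```

Raising the bulk density shrinks the porosity term until the range collapses to zero, which is exactly the critical bulk density the Excel algorithm solves for.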

  11. Evaluation of simplified dna extraction methods for EMM typing of group a streptococci

    Directory of Open Access Journals (Sweden)

    Jose JJM


Simplified methods of DNA extraction for amplification and sequencing for emm typing of group A streptococci (GAS) can save valuable time and cost in resource-constrained situations. To evaluate this, we compared two methods of DNA extraction directly from colonies with the standard CDC cell lysate method for emm typing of 50 GAS strains isolated from children with pharyngitis and impetigo. GAS colonies were transferred into two sets of PCR tubes: one set was preheated at 94 °C for two minutes in the thermal cycler and cooled, while the other set was frozen overnight at −20 °C and then thawed before adding the PCR mix. For the cell lysate method, cells were treated with mutanolysin and hyaluronidase before heating at 100 °C for 10 minutes and cooling immediately, as recommended in the CDC method. All 50 strains could be typed by sequencing the hypervariable region of the emm gene after amplification. The quality of the sequences and the emm types identified were also identical. Our study shows that the two simplified DNA extraction methods directly from colonies can conveniently be used for typing a large number of GAS strains in a relatively short time.

  12. A simplified approach to the PROMETHEE method for priority setting in management of mine action projects

    Directory of Open Access Journals (Sweden)

    Marko Mladineo


In the last 20 years, priority setting in mine action, i.e. in humanitarian demining, has become an increasingly important topic. Given that mine action projects require management and decision-making based on a multi-criteria approach, multi-criteria decision-making methods like PROMETHEE and AHP have been used worldwide for priority setting. However, from the perspective of mine action, where the stakeholders in the decision-making process for priority setting are project managers, local politicians, leaders of different humanitarian organizations, and the like, applying these methods can be difficult. Therefore, a specialized web-based decision support system (Web DSS) for priority setting, developed as part of the FP7 project TIRAMISU, has been extended with a module for developing custom priority-setting scenarios in line with an exceptionally easy, user-friendly approach. The idea behind this research is to simplify multi-criteria analysis based on the PROMETHEE method. Accordingly, this paper presents a simplified PROMETHEE method based on statistical analysis for automated suggestion of parameters such as preference function thresholds, interactive selection of criteria weights, and easy input of criteria evaluations. The result is a web-based DSS that can be applied worldwide for priority setting in mine action. Additionally, the management of mine action projects is supported by modules providing spatial data based on a geographic information system (GIS). In this paper, the benefits and limitations of the simplified PROMETHEE method are presented using a case study involving mine action projects, and certain proposals are subsequently given for further research.
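For readers unfamiliar with PROMETHEE, the standard PROMETHEE II ranking with a linear preference function can be sketched compactly as follows. The scores, weights, and preference thresholds below are illustrative placeholders, not the paper's statistically derived parameters.

```python
def promethee_ii(scores, weights, p_thresholds):
    """PROMETHEE II net outranking flows. scores[a][j] is the evaluation
    of alternative a on criterion j (higher is better); a linear
    preference function with threshold p is used per criterion."""
    n = len(scores)

    def pref(d, p):                    # linear preference function
        return 0.0 if d <= 0 else min(d / p, 1.0)

    phi = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pi_ab = sum(w * pref(scores[a][j] - scores[b][j], p)
                        for j, (w, p) in enumerate(zip(weights, p_thresholds)))
            phi[a] += pi_ab / (n - 1)  # a's leaving-flow contribution
            phi[b] -= pi_ab / (n - 1)  # b's entering-flow contribution
    return phi                          # rank alternatives by descending net flow

# Three hypothetical demining projects scored on (impact, cost-benefit, hazard)
phi = promethee_ii([[8, 6, 7], [5, 9, 6], [6, 5, 9]],
                   weights=[0.5, 0.3, 0.2], p_thresholds=[4, 4, 4])
```

The simplification the paper proposes targets exactly the inputs shown here, suggesting thresholds and weights automatically so non-expert stakeholders do not have to set them by hand.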

  13. River Discharge Estimation by Using Altimetry Data and Simplified Flood Routing Modeling

    Directory of Open Access Journals (Sweden)

    Tommaso Moramarco


A methodology is proposed to estimate the discharge along rivers, even poorly gauged ones, taking advantage of water level measurements derived from satellite altimetry. The procedure is based on the application of the Rating Curve Model (RCM), a simple method allowing for the estimation of the flow conditions in a river section using only the water levels recorded at that site and the discharges observed at another upstream section. Altimetry data from the European Remote-Sensing Satellite 2 (ERS-2) and the Environmental Satellite (ENVISAT) are used to provide the time series of water levels needed for the application of the RCM. In order to evaluate the usefulness of the approach, the results are compared with those obtained by applying an empirical formula that allows discharge estimation from remotely sensed hydraulic information. To test the proposed procedure, a 236-km reach of the Po River is investigated, for which five in situ stations and four satellite tracks are available. Results show that the RCM is able to represent the discharge appropriately, and its performance is better than that of the empirical formula, although the latter does not require upstream hydrometric data. Given its simple formal structure, the proposed approach can be conveniently utilized at ungauged sites where only a survey of the cross-section is needed.
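The RCM itself also uses upstream discharge, but the underlying idea of converting altimetric water levels to discharge can be illustrated with a plain power-law rating curve, Q = a(h − h0)^b, fitted in log space. This is a generic sketch under synthetic data, not the authors' model.

```python
import math

def fit_rating_curve(levels_m, discharges, h0=0.0):
    """Least-squares fit of log Q = log a + b*log(h - h0): a power-law
    rating curve with a known datum h0 (assumed here for simplicity)."""
    xs = [math.log(h - h0) for h in levels_m]
    ys = [math.log(q) for q in discharges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic level/discharge pairs generated from Q = 30 * h**1.6
levels = [1.0, 2.0, 3.0, 4.0]
flows = [30.0 * h ** 1.6 for h in levels]
a, b = fit_rating_curve(levels, flows)
```

Once fitted at a gauged section, such a curve turns each altimetric level into a discharge estimate; the RCM refines this by propagating observed upstream discharge downstream.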

  14. Development and validation of a simplified titration method for monitoring volatile fatty acids in anaerobic digestion. (United States)

    Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie


    The volatile fatty acids (VFAs) concentration is considered one of the most sensitive process performance indicators in the anaerobic digestion (AD) process. However, accurate determination of VFA concentrations in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment steps and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of interfering ion and solid subsystems in titrated samples on the accuracy of results is discussed. The total solids content of titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a strong linear correlation was established between the total solids content and the difference in VFA measurements between the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment on chicken manure anaerobic digestion at various organic loading rates. The good fit of the results obtained by this method to GC results strongly supports the potential application of this method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.
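
The correction idea can be sketched as a linear, total-solids-dependent bias subtracted from the titration reading. The slope and intercept below are hypothetical placeholders for the regression the authors fit against GC data, not their published values.

```python
# Sketch of a TS-based correction to a Nordmann titration VFA reading:
# the titration-vs-GC difference is assumed linear in total solids (TS).
# slope and intercept are hypothetical, not the paper's fitted values.

def corrected_vfa(vfa_titration, total_solids, slope=0.8, intercept=0.1):
    """Correct a titration VFA reading (g/L) for TS content (% fresh matter)."""
    bias = slope * total_solids + intercept
    return vfa_titration - bias

print(round(corrected_vfa(5.0, 2.0), 2))
```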

  15. A Simplified Model to Estimate the Concentration of Inorganic Ions and Heavy Metals in Rivers

    Directory of Open Access Journals (Sweden)

    Clemêncio Nhantumbo


    This paper presents a model that uses only pH, alkalinity, and temperature to estimate the concentrations of major ions in rivers (Na+, K+, Mg2+, Ca2+, HCO3−, SO42−, Cl−, and NO3−), together with the equilibrium concentrations of minor ions and heavy metals (Fe3+, Mn2+, Cd2+, Cu2+, Al3+, Pb2+, and Zn2+). Mining operations have been increasing, which has changed the pollution loads to receiving water systems, while most developing countries cannot afford water quality monitoring. A possible solution is to implement less resource-demanding monitoring programs, supported by mathematical models that minimize the required sampling and analysis while still being able to detect water quality changes, thereby allowing implementation of measures to protect the water resources. The present model was developed using existing theories for: (i) carbonate equilibrium; (ii) total alkalinity; (iii) statistics of major ions; (iv) solubility of minerals; and (v) conductivity of salts in water. The model includes two options for estimating the concentrations of major ions: (1) a generalized method, which employs standard values from a worldwide database; and (2) a customized method, which requires specific baseline data for the river of interest. The model was tested using data from four monitoring stations on Swedish rivers, with satisfactory results.
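
Step (i), carbonate equilibrium, can be sketched as follows: given pH and total alkalinity, bicarbonate and carbonate follow from the alkalinity balance. The equilibrium constants below are generic 25 °C freshwater values (pKw = 14, pK2 = 10.33); the paper's model additionally handles temperature dependence.

```python
# Carbonate speciation from pH and total alkalinity, using
# Alk = [HCO3-] + 2[CO3 2-] + [OH-] - [H+]  and  [CO3 2-] = K2*[HCO3-]/[H+].
# Constants are generic 25 degC values, not the paper's calibration.

def carbonate_speciation(ph, alkalinity_eq_per_l, pk2=10.33, pkw=14.0):
    h = 10.0 ** -ph                     # [H+], mol/L
    oh = 10.0 ** -pkw / h               # [OH-], mol/L
    k2 = 10.0 ** -pk2
    hco3 = (alkalinity_eq_per_l - oh + h) / (1.0 + 2.0 * k2 / h)
    co3 = k2 * hco3 / h
    return hco3, co3

hco3, co3 = carbonate_speciation(ph=7.5, alkalinity_eq_per_l=2.0e-3)
print(f"HCO3-: {hco3:.2e} mol/L, CO3 2-: {co3:.2e} mol/L")
```

At near-neutral pH almost all of the alkalinity is carried by bicarbonate, which is why pH and alkalinity alone constrain the carbonate system so well.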

  16. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 2. Modeling and parameter estimation (United States)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi


    Electrochemistry-based battery models can provide physically meaningful knowledge about the lithium-ion battery system, but with an extensive computational burden. To motivate the development of reduced-order battery models, three major contributions are made in this paper: (1) a transfer-function type of simplified electrochemical model is proposed to address the current-voltage relationship, using the Padé approximation method and modified boundary conditions for the electrolyte diffusion equations. The model performance is verified under pulse charge/discharge and dynamic stress test (DST) profiles, with a standard deviation of less than 0.021 V and a runtime 50 times faster. (2) The parametric relationship between the equivalent circuit model and the simplified electrochemical model is established, which enhances the comprehension of the two models with more in-depth physical significance and provides new methods for electrochemical model parameter estimation. (3) Four simplified electrochemical model parameters, namely the equivalent resistance Req, the effective diffusion coefficient in the electrolyte phase Deeff, the electrolyte phase volume fraction ε, and the open circuit voltage (OCV), are identified by the recursive least squares (RLS) algorithm with modified DST profiles at 45, 25 and 0 °C. The simulation results indicate that the proposed model coupled with the RLS algorithm achieves high accuracy in electrochemical parameter identification under dynamic scenarios.
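
The RLS identification in contribution (3) can be illustrated generically. The sketch below estimates a hypothetical two-parameter linear model y = θᵀx with a forgetting factor, not the paper's actual battery regressor.

```python
import numpy as np

# Generic recursive least squares (RLS) with forgetting factor, the estimator
# type the paper couples with the simplified model. Applied here to a
# hypothetical linear model, not the battery regressor.

def rls_step(theta, P, x, y, lam=0.99):
    """One RLS update; returns new parameter estimate and covariance."""
    x = x.reshape(-1, 1)
    K = P @ x / (lam + (x.T @ P @ x).item())   # gain vector
    err = y - (x.T @ theta).item()             # a-priori prediction error
    theta = theta + K * err
    P = (P - K @ x.T @ P) / lam
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([[2.0], [-1.0]])
theta, P = np.zeros((2, 1)), np.eye(2) * 1000.0
for _ in range(200):
    x = rng.standard_normal(2)
    y = (x @ true_theta).item() + 0.01 * rng.standard_normal()  # noisy measurement
    theta, P = rls_step(theta, P, x, y)
print(np.round(theta.ravel(), 2))  # converges close to the true parameters
```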

  17. A Simplified Method for predicting Ultimate Compressive Strength of Ship Panels

    DEFF Research Database (Denmark)

    Paik, Jeom Kee; Pedersen, Preben Terndrup


    A simplified method for predicting the ultimate compressive strength of ship panels with complex initial deflection shapes is described. The procedure combines elastic large-deflection theory with a rigid-plastic analysis based on the collapse mechanism, taking large-deformation effects into account. By taking only one component of the selected deflection function, the computation time for the elastic large-deflection analysis is drastically reduced. The validity of the procedure is checked by comparing the present solutions with finite-element results for actual ship panels...

  18. Dynamics of a fractional-order simplified unified system based on the Adomian decomposition method (United States)

    Xu, Yixin; Sun, Kehui; He, Shaobo; Zhang, Limin


    In this paper, the Adomian decomposition method (ADM) is applied to solve the fractional-order simplified unified system. The dynamics of the system are analyzed by means of the Lyapunov exponent spectrum, bifurcations, chaotic attractors, the power spectrum and the maximum Lyapunov exponent diagram. A period-doubling route to chaos is observed. Chaos is found to exist in the system with order as low as 1.371. In addition, chaotic behavior is studied for different fractional orders in each dimension, and the results show that the system remains chaotic when only the first or the third dimension is of fractional order.

  19. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle


    Modelling spatial variability, typically in terms of the semivariogram, is of great interest when the objective is to compute spatial predictions of parameters measured in space. Such parameters could be rainfall, temperature or concentrations of polluting agents in aquatic environments. In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation... The results show that prediction is insensitive to the choice of estimation method, but also that the uncertainties of predictions were reduced when applying maximum likelihood.
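
The starting point for all of the compared approaches is an empirical semivariogram to which a model is fitted. A minimal sketch of the classical (Matheron) estimator on a 1-D transect, illustrative only:

```python
import numpy as np

# Classical (Matheron) empirical semivariogram on a regularly spaced
# 1-D transect: gamma(h) = 1/(2 N(h)) * sum_i (z(x_i+h) - z(x_i))^2.
# A fitted model (spherical, exponential, ...) would then be estimated
# from these points, e.g. by least squares or maximum likelihood.

def empirical_semivariogram(z, max_lag):
    z = np.asarray(z, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        d = z[h:] - z[:-h]                  # all pairs separated by lag h
        gammas.append(0.5 * np.mean(d ** 2))
    return np.array(gammas)

z = [1.0, 2.0, 3.0, 4.0, 5.0]               # perfectly linear transect
g = empirical_semivariogram(z, 3)
print(g)                                     # semivariance grows with lag
```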

  20. Adjusting for treatment switching in randomised controlled trials - A simulation study and a simplified two-stage method. (United States)

    Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J


    Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs), whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of the true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
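
The counterfactual calculation shared by RPSFTM-type and two-stage methods can be sketched in one line: a switcher's observed post-switch survival is shrunk by an acceleration factor (AF) estimated for the experimental treatment, recovering the survival time that would have been observed without switching. The AF below is assumed for illustration, not estimated.

```python
# Accelerated-failure-time style counterfactual survival time for a control
# patient who switched onto the experimental treatment:
#   U = t_switch + (t_observed - t_switch) / AF
# The two-stage method estimates AF from post-progression data; here AF is
# simply assumed.

def counterfactual_time(t_switch, t_observed, af):
    """Survival time the switcher would have had without switching."""
    return t_switch + (t_observed - t_switch) / af

# Control patient switching at 12 months, dying at 30 months, assumed AF = 1.5:
print(counterfactual_time(12.0, 30.0, 1.5))
```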

  1. Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.


    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap (21 cities) is present between the TRY and WYEC data sets. The weather data for each city have been summarized in a number of ways to provide the differing levels of detail needed by alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
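
The degree-hour and bin summaries that these tables provide can be sketched from an hourly dry-bulb record. The 65 °F base temperature and the sample temperatures below are hypothetical.

```python
# Two of the simplified energy calculation summaries: heating degree hours
# (sum of deficits below a base temperature) and bin hours (hours per
# temperature bin). Base temperature and data are hypothetical.

def heating_degree_hours(hourly_temps_f, base=65.0):
    """Sum of (base - T) over hours colder than the base temperature."""
    return sum(base - t for t in hourly_temps_f if t < base)

def bin_hours(hourly_temps_f, bin_width=5.0):
    """Count hours falling in each temperature bin (lower edge -> hours)."""
    bins = {}
    for t in hourly_temps_f:
        lower = bin_width * (t // bin_width)
        bins[lower] = bins.get(lower, 0) + 1
    return bins

temps = [58.0, 62.0, 66.0, 71.0]             # four hypothetical hourly readings
print(heating_degree_hours(temps))           # (65-58) + (65-62) = 10.0
print(bin_hours(temps))
```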

  2. Weather data for simplified energy calculation methods. Volume III. Western United States: TRY data

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.


    The objective is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 24 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data for each city have been summarized in a number of ways to provide the differing levels of detail needed by alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.

  3. A Simplified and Reliable Damage Method for the Prediction of the Composites Pieces (United States)

    Viale, R.; Coquillard, M.; Seytre, C.


    Structural engineers are often faced with test results on composite structures that are considerably tougher than predicted. To reduce this frequent gap, a survey of extensive synthesis works on prediction methods and failure criteria was conducted. The inquiry dealt with the plane stress state only. All classical methods have strong and weak points with respect to practicality and reliability. The main conclusion is that, in the plane stress case, the best usual industrial methods give rather similar predictions, but in general they do not explain the often large discrepancies with respect to tests, mainly in cases of strong stress gradients or of bi-axial laminate loadings. It seems that only methods considering the complexity of composite damage (so-called physical methods, or Continuum Damage Mechanics, "CDM") bring a clear improvement over the usual methods. The main drawback of these methods is their relative intricacy, particularly under pressing industrial conditions. A method with an approximate but simplified representation of the CDM phenomenology is presented. Compared with tests and other methods, it brings a fair improvement in correlation with tests over the usual industrial methods, and it gives results very similar to those of the painstaking CDM methods and very close to the test results. Several examples are provided. In addition, the method is economical with respect to material characterization as well as modelling and computational effort.

  4. Simplified Analytical Methods to Analyze Lock Gates Submitted to Ship Collisions and Earthquakes

    Directory of Open Access Journals (Sweden)

    Buldgen Loic


    This paper presents two simplified analytical methods to analyze lock gates subjected to two different accidental loads. The case of an impact involving a vessel is investigated first. In this situation, the resistance of the struck gate is evaluated by assuming both a local and a global deforming mode. The super-element method is used in the first case, while an equivalent beam model is simultaneously introduced to capture the overall bending motion of the structure. The second accidental load considered in this paper is seismic action, for which an analytical method is presented to evaluate the total hydrodynamic pressure applied on a lock gate during an earthquake, due account being taken of the fluid-structure interaction. For each of these two actions, numerical validations are presented and the analytical results are compared to finite-element solutions.

  5. Scarless assembly of unphosphorylated DNA fragments with a simplified DATEL method. (United States)

    Ding, Wenwen; Weng, Huanjiao; Jin, Peng; Du, Guocheng; Chen, Jian; Kang, Zhen


    Efficient assembly of multiple DNA fragments is a pivotal technology in synthetic biology. A scarless and sequence-independent DNA assembly method (DATEL) using thermal exonucleases was developed recently. Here, we present a simplified DATEL (sDATEL) for efficient, low-cost assembly of unphosphorylated DNA fragments. The sDATEL method depends only on Taq DNA polymerase and Taq DNA ligase. After optimizing key parameters of the reaction system, such as pH and the concentrations of Mg2+ and NAD+, the assembly efficiency was increased 32-fold. To further improve the assembly capacity, the number of thermal cycles was optimized, resulting in the successful assembly of 4 unphosphorylated DNA fragments with an accuracy of 75%. sDATEL could be a desirable method for routine manual and automated assembly.

  6. Order statistics & inference: estimation methods

    CERN Document Server

    Balakrishnan, N


    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering...

  7. Simplified unified model for estimating the motion of magnetic nanoparticles within electrohydrodynamic field. (United States)

    Seo, Hyeon-Seok; Lee, Sangyoup; Lee, Jong-Chul


    In previous research, we studied the electrical breakdown characteristics of a transformer oil-based magnetic fluid, mainly through experimental measurements. The first study aimed at enhancing the dielectric breakdown voltage of transformer oil by adding magnetic nanoparticles, tested experimentally under the official testing conditions for dielectric liquids. The next study focused on explaining why the dielectric characteristics of the fluids changed, by optically visualizing particle motion in a microchannel using optical microscopy and by numerically calculating the dielectrophoretic force induced in the fluids, considering only the properties of the magnetic nanoparticles. In this study, we developed a simplified unified model for further calculating the motion of magnetic nanoparticles suspended in an electrohydrodynamic field using the COMSOL Multiphysics finite element simulation suite, and we investigated the effects of magnetic nanoparticle dielectrophoretic activity with the aim of enhancing the electrical breakdown characteristics of transformer oil.
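
The dielectrophoretic force on a small spherical particle is commonly estimated with the time-averaged expression involving the Clausius-Mossotti factor. The sketch below uses this textbook formula with hypothetical particle and fluid values, not the properties of the transformer-oil ferrofluid studied in the paper.

```python
import math

# Time-averaged DEP force on a sphere: F = 2*pi*r^3*eps0*eps_m*Re(K)*grad(|E|^2),
# with Clausius-Mossotti factor K = (eps_p - eps_m) / (eps_p + 2*eps_m).
# All numerical values below are hypothetical, for illustration only.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dep_force(radius_m, eps_p_rel, eps_m_rel, grad_e2):
    """DEP force magnitude (N); positive means motion toward high field."""
    k = (eps_p_rel - eps_m_rel) / (eps_p_rel + 2.0 * eps_m_rel)
    return 2.0 * math.pi * radius_m**3 * EPS0 * eps_m_rel * k * grad_e2

# 10 nm particle, eps_p = 80, oil-like medium eps_m = 2.2, grad|E|^2 = 1e15 V^2/m^3:
f = dep_force(radius_m=10e-9, eps_p_rel=80.0, eps_m_rel=2.2, grad_e2=1e15)
print(f"{f:.3e} N")
```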

  8. Accuracy analysis of simplified and rigorous numerical methods applied to binary nanopatterning gratings in non-paraxial domain

    Energy Technology Data Exchange (ETDEWEB)

    Francés, Jorge; Bleda, Sergio; Gallego, Sergi; Neipp, Cristian; Márquez, Andrés [Instituto Universitario de Física Aplicada a las Ciencias y las Tecnologías, Universidad de Alicante, Crtra. San Vicente del Raspeig S/N, Alicante E-03080 (Spain); Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante, Crtra. San Vicente del Raspeig S/N, Alicante E-03080 (Spain); Pascual, Inmaculada [Instituto Universitario de Física Aplicada a las Ciencias y las Tecnologías, Universidad de Alicante, Crtra. San Vicente del Raspeig S/N, Alicante E-03080 (Spain); Departamento de Óptica, Farmacología y Anatomía, Universidad de Alicante, Crtra. San Vicente del Raspeig S/N, Alicante E-03080 (Spain); Beléndez, Augusto [Instituto Universitario de Física Aplicada a las Ciencias y las Tecnologías, Universidad de Alicante, Crtra. San Vicente del Raspeig S/N, Alicante E-03080 (Spain); Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante, Crtra. San Vicente del Raspeig S/N, Alicante E-03080 (Spain)


    A set of simplified and rigorous electromagnetic vector theories is used to analyze the transmittance characteristics of diffraction phase gratings. The scalar diffraction theory and the effective medium theory are validated against the exact results obtained via rigorous coupled-wave theory and the finite-difference time-domain method. The surface profile parameters and the angle of incidence are demonstrated to be limiting factors in the accuracy of these theories. Therefore, the error of both simplified theories is also analyzed in the non-paraxial domain with the intention of establishing a specific range of validity for each.

  9. Variational estimation of process parameters in a simplified atmospheric general circulation model (United States)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef


    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated by automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and that accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  10. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation


    Razvodovsky, Y. E.


    Keywords: unrecorded alcohol; methods of estimation. In this paper we focus on methods for estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimate of this level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  11. Simplified Analytical Method for Optimized Initial Shape Analysis of Self-Anchored Suspension Bridges and Its Verification

    Directory of Open Access Journals (Sweden)

    Myung-Rag Jung


    A simplified analytical method providing accurate unstrained lengths of all structural elements is proposed to find the optimized initial state of self-anchored suspension bridges under dead loads. For this purpose, equilibrium equations for the main girder and the main cable system are derived and solved by evaluating the self-weights of cable members using unstrained cable lengths and by iteratively updating both the horizontal tension component and the vertical profile of the main cable. Furthermore, to demonstrate the validity of the simplified analytical method, the unstrained element length method (ULM) is applied to suspension bridge models based on the unstressed lengths of both cable and frame members calculated from the analytical method. Through numerical examples, it is demonstrated that the proposed analytical method can indeed provide an optimized initial solution: both the simplified method and the nonlinear FE procedure lead to practically identical initial configurations, with only localized small bending moment distributions.
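
For a straight linear-elastic element, the unstrained length that such an iteration updates follows directly from the stressed length and the axial force. A one-line sketch with hypothetical values:

```python
# Unstrained (stress-free) length of a linear-elastic element carrying axial
# force T with axial rigidity EA: L0 = L / (1 + T / (EA)). This is the basic
# quantity updated per member in unstrained-length formulations; the numbers
# below are hypothetical.

def unstrained_length(stressed_length, axial_force, ea):
    """Unstrained length L0 = L / (1 + T/EA); lengths in m, forces in N."""
    return stressed_length / (1.0 + axial_force / ea)

# A 100 m member under 5 MN tension with EA = 1 GN shortens by about 0.5 m:
print(round(unstrained_length(100.0, 5.0e6, 1.0e9), 4))
```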

  12. Preparing technical text for translation: A comparison between International English and methods for simplifying language

    Energy Technology Data Exchange (ETDEWEB)

    Buican, I.; Hriscu, V.; Amador, M.


    For the past four and a half years, the International Communication Committee at Los Alamos National Laboratory has been working to develop a set of guidelines for writing technical and scientific documents in International English, that is, English for those whose native language is not English. Originally designed for documents intended for presentation in English to an international audience of technical experts, the International English guidelines apply equally well to the preparation of English text for translation. This is the second workshop in a series devoted to the topic of translation. The authors focus on the advantages of using International English, rather than various methods of simplifying language, to prepare scientific and technical text for translation.

  13. A numerical simulation of wheel spray for simplified vehicle model based on discrete phase method

    Directory of Open Access Journals (Sweden)

    Xingjun Hu


    Road spray greatly affects vehicle body soiling and driving safety, and its study has attracted increasing attention. In this article, computational fluid dynamics software using the widely applied finite volume method was employed to numerically simulate the spray induced by a simplified wheel model and a modified square-back model proposed by the Motor Industry Research Association. The shear stress transport k-omega turbulence model, a discrete phase model, and an Eulerian wall-film model were selected. In the simulation, the phenomena of drop breakup and coalescence were considered, and the continuous and discrete phases were two-way coupled in momentum and turbulent motion. The relationship between the external flow structure of the vehicle and body soiling is also discussed.

  14. Three-dimensional simplified and unconditionally stable lattice Boltzmann method for incompressible isothermal and thermal flows (United States)

    Chen, Z.; Shu, C.; Tan, D.


    In this paper, a three-dimensional simplified and unconditionally stable lattice Boltzmann method (3D-USLBM) is proposed for simulating incompressible isothermal/thermal flows. This method is developed by reconstructing solutions to the macroscopic governing equations recovered from the lattice Boltzmann equation and resolved in a predictor-corrector scheme. The final formulations of 3D-USLBM involve only the equilibrium and the non-equilibrium distribution functions. Among them, the former is calculated from the macroscopic variables and the latter is evaluated from the difference between two equilibrium distribution functions at different locations and time levels. Thus, 3D-USLBM directly tracks the evolution of macroscopic variables, which yields a lower cost in virtual memory and facilitates the implementation of physical boundary conditions. A von Neumann stability analysis was performed on the present method to theoretically prove its unconditional stability. By imposing a regular Lagrange interpolation algorithm, this method can be flexibly extended to a non-uniform Cartesian mesh or body-fitted mesh with curved boundaries. Four numerical tests, namely plane Poiseuille flow, 3D lid-driven cavity flow, 3D natural convection in a cubic cavity, and flow in a concentric annulus, were conducted to verify the stability, accuracy, and flexibility of the presented method.
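
The equilibrium distribution functions on which such formulations are built are computed directly from the macroscopic variables. A compact illustration, written for the 2-D D2Q9 lattice for brevity (the paper works with 3-D lattices):

```python
import numpy as np

# Standard lattice Boltzmann equilibrium on the D2Q9 lattice:
# f_i^eq = w_i * rho * (1 + 3 e_i.u + 4.5 (e_i.u)^2 - 1.5 u.u).
# Its zeroth and first moments recover the density and momentum.

W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)              # lattice weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])        # lattice velocities

def f_eq(rho, u):
    """Equilibrium distributions for density rho and velocity u (lattice units)."""
    eu = E @ u
    return W * rho * (1.0 + 3.0 * eu + 4.5 * eu**2 - 1.5 * (u @ u))

feq = f_eq(1.0, np.array([0.1, 0.0]))
print(feq.sum())       # zeroth moment: recovers the density rho
print(E.T @ feq)       # first moment: recovers the momentum rho*u
```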

  15. Simplified LCA and matrix methods in identifying the environmental aspects of a product system. (United States)

    Hur, Tak; Lee, Jiyong; Ryu, Jiyeon; Kwon, Eunsun


    In order to effectively integrate environmental attributes into the product design and development processes, it is crucial to identify the significant environmental aspects related to a product system within a relatively short period of time. In this study, the usefulness of life cycle assessment (LCA) and a matrix method as tools for identifying the key environmental issues of a product system was examined. For this, a simplified LCA (SLCA) method that can be applied to Electrical and Electronic Equipment (EEE) was developed to efficiently identify the significant environmental aspects for eco-design, since a full-scale LCA study is usually very detailed, expensive and time-consuming. The environmentally responsible product assessment (ERPA) method, which is one of the matrix methods, was also analyzed. Then, the usefulness of each method in eco-design processes was evaluated and compared using case studies of cellular phone and vacuum cleaner systems. It was found that the SLCA and the ERPA methods provided different information but complemented each other to some extent. The SLCA method generated more information on the inherent environmental characteristics of a product system, so it may be useful for new design/eco-innovation when developing a completely new product or method in which environmental considerations play a major role from the beginning. On the other hand, the ERPA method gave more information on the potential for improving a product, so it could be effectively used in eco-redesign, which aims to alleviate the environmental impacts of an existing product or process.

  16. A Simplified Method for Evaluating Building Sustainability in the Early Design Phase for Architects

    Directory of Open Access Journals (Sweden)

    Jernej Markelj


    With society turning increasingly to sustainable development, sharper demands are being made concerning energy efficiency and other properties that reduce the negative effects of a building on the environment and people. This means that architects must have a suitably adapted solution already in the early design phase, as this phase has the greatest influence on the final result. Current tools and methods used for this are either focused only on individual topics or are too complex and not adapted for independent use by architects. The paper presents a simplified method for evaluating building sustainability (SMEBS) which addresses these needs. It is intended as a tool to aid architects in the early project planning phases, as it allows a quick evaluation of the extent to which the demands of sustainable building are fulfilled. The method was developed on the basis of a study of international building sustainability assessment methods (BSAM) and standards in this field. Experts in sustainable construction were invited to determine weights for the assessment parameters using the analytic hierarchy process (AHP). Their judgments reflect the specific characteristics of the local environment.
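
The AHP weighting step can be sketched with the common geometric-mean prioritization applied to a pairwise comparison matrix. The 3-criterion matrix below is hypothetical, not the experts' actual judgments.

```python
import math

# AHP weight extraction by the geometric-mean (logarithmic least squares)
# method: the weight of each criterion is the normalized geometric mean of
# its row in the pairwise comparison matrix. The matrix is hypothetical.

def ahp_weights(matrix):
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]   # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgments: criterion 1 is 3x as important as criterion 2
# and 5x as important as criterion 3; criterion 2 is 2x criterion 3.
pairwise = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w])   # weights sum to 1, ordered by importance
```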

  17. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    Energy Technology Data Exchange (ETDEWEB)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K. [Sandia National Labs., Albuquerque, NM (United States); Heames, T.J. [ITSC, Albuquerque, NM (United States)


    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate the transport and removal of radionuclides and the dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.

  18. Simplified Evaluation Method of Drive Characteristics for Computer-Aided Design of Switched Reluctance Motors (United States)

    Kano, Yoshiaki; Fubuki, Shingo; Kosaka, Takashi; Matsui, Nobuyuki

    Since Switched Reluctance Motors (SRMs) have a simple and rugged construction, they are suitable for low-cost variable speed drives in many industrial applications. However, it is rather difficult to design the motor and to predict the drive performance because of the high magnetic non-linearity of the motors. Although FEM is useful for SRM design, one of its disadvantages is a long computation time. This paper proposes a simplified and fast method for evaluating the drive characteristics of an SRM whose dimensions are given. The proposed method is composed of a modeling approach based on analytical expressions of the magnetizing curves and a simple non-linear magnetic analysis. First, comparative studies using a 12/8 SRM show that the calculated current waveform and stiffness characteristic of the proposed modeling approach are in good agreement with those of experiment. Secondly, it is shown that the proposed magnetic analysis provides accurate and extremely fast computation of the magnetizing curves for the given motor dimensions compared to 3D-FEM. From the standpoints of analytical accuracy and required computation time, the effectiveness of the proposed method is verified through comparisons with 3D-FEM using two SRMs with different specifications.

  19. Novel Simplified and Rapid Method for Screening and Isolation of Polyunsaturated Fatty Acids Producing Marine Bacteria

    Directory of Open Access Journals (Sweden)

    Ashwini Tilay


    Full Text Available Bacterial production of polyunsaturated fatty acids (PUFAs) is a potential biotechnological approach for the production of valuable nutraceuticals. A reliable method for screening a large number of strains within a short period of time is greatly needed. Here, we report a novel simplified method for screening and isolation of PUFA-producing bacteria by direct visualization using the H2O2-plate assay. The oxidative stability of PUFAs in growing bacteria towards added H2O2 is a characteristic that distinguishes, by direct visualization, PUFA producers (no zone of inhibition) from non-producers (zone of inhibition). The assay results were confirmed by injecting fatty acid methyl esters (FAMEs) produced by selected marine bacteria into Gas Chromatography-Mass Spectrometry (GC-MS). To date, this assay is the most effective, inexpensive, and specific method for identifying PUFA-producing bacteria, and it drastically reduces the number of samples, thus saving the time, effort, and cost of screening and isolating strains of bacterial PUFA producers.

  20. Simplified skeletal maturity scoring system: learning curve and methods to improve reliability. (United States)

    Verma, Kushagra; Sitoula, Prakash; Gabos, Peter; Loveland, Kerry; Sanders, James; Verma, Satyendra; Shah, Suken A


    Retrospective radiographical review by 5 independent observers. To validate the intra- and interobserver reliability of the simplified skeletal maturity scoring (SSMS) system in a large cohort, for each stage and for the overall cohort. The SSMS has been used to successfully predict curve progression in idiopathic scoliosis. A total of 275 patients with scoliosis (8-16 yr), each with 1 hand radiograph, were included from 2005 to 2011. Five participants independently scored images on 2 separate occasions using the SSMS (stage, 1-8). Observers (listed in order of increasing SSMS experience) included orthopedic surgery resident, clinical fellow (CF), research fellow, and senior faculty. Intraobserver agreement between the 2 sets of scores was estimated using the Pearson and Spearman rank correlation coefficients. Interobserver agreement was estimated with the unweighted Fleiss κ coefficient for the overall cohort and for junior observers (orthopedic surgery resident, CF, research fellow) versus senior faculty. Intrarater reliability for the five observers was 0.956, 0.967, 0.986, 0.991, and 0.998, respectively (Spearman). Intrarater agreement improved with greater familiarity with the SSMS. The inter-rater reliability for junior observers (κ = 0.65), senior faculty (κ = 0.652), and the overall group (κ = 0.66) indicated agreement between all observers but no improvement in inter-rater agreement with experience. However, 98% of disagreements occurred within only 1 stage. Stages 2, 3, and 4 accounted for most of the variability; stage 3 was the most commonly scored stage, corresponding to peak growth velocity. The SSMS has excellent intraobserver agreement with substantial interobserver agreement. Intraobserver agreement, but not interobserver agreement, improves with familiarity with the SSMS. Expectancy bias may contribute to a higher likelihood of assigning an SSMS 3. Discrepancies when classifying stages 2 to 4 may be resolved by improved

  1. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Eric; Gonder, Jeff; Jehlik, Forrest


    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.
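
    A common backbone for such simplified fuel-economy models is the road-load (coastdown) power equation. The sketch below shows the general form only; the coastdown coefficients, BSFC, and accessory load are made-up placeholders, not NREL's calibrated model.

```python
def fuel_rate_g_per_s(v_mps, accel_mps2, mass_kg, coastdown,
                      bsfc_g_per_kwh=250.0, accessory_kw=1.0):
    """Instantaneous fuel rate from a road-load power model.

    coastdown: (A [N], B [N*s/m], C [N*s^2/m^2]) dynamometer-style target
    coefficients. BSFC and accessory load are illustrative assumptions.
    """
    a, b, c = coastdown
    tractive_n = a + b * v_mps + c * v_mps ** 2 + mass_kg * accel_mps2
    engine_kw = max(tractive_n * v_mps / 1000.0, 0.0) + accessory_kw
    return engine_kw * bsfc_g_per_kwh / 3600.0

# Steady cruise for a hypothetical 1500 kg vehicle
cruise_25 = fuel_rate_g_per_s(25.0, 0.0, 1500.0, (120.0, 1.5, 0.35))
cruise_30 = fuel_rate_g_per_s(30.0, 0.0, 1500.0, (120.0, 1.5, 0.35))
```

    Integrating such a rate over second-by-second speed traces from on-road data is what lets a model of this kind be validated against instrumented-vehicle measurements.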

  2. Development of simplified methods and data bases for radiation shielding calculations for concrete

    Energy Technology Data Exchange (ETDEWEB)

    Bhuiyan, S.I.; Roussin, R.W.; Lucius, J.L.; Marable, J.H.; Bartine, D.A.


    Two simplified methods have been developed which allow rapid and accurate calculations of the attenuation of neutrons and gamma rays through concrete shields. One method, called the BEST method, uses sensitivity coefficients to predict changes in the transmitted dose from a fission source that are due to changes in the composition of the shield. The other method uses transmission factors based on adjoint calculations to predict the transmitted dose from an arbitrary source incident on a given shield. The BEST method, utilizing an exponential model that is shown to be a significant improvement over the traditional linear model, has been successfully applied to slab shields of standard concrete and rebar concrete. It has also been tested for a special concrete that has been used in many shielding experiments at the ORNL Tower Shielding Facility, as well as for a deep-penetration sodium problem. A comprehensive data base of concrete sensitivity coefficients generated as part of this study is available for use in the BEST model. For problems in which the changes are energy independent, application of the model and data base can be accomplished with a desk calculator. Larger-scale calculations required for problems that are energy dependent are facilitated by employing a simple computer code, which is included, together with the data base and other calculational aids, in a data package that can be obtained from the ORNL Radiation Shielding Information Center (request DLC-102/CONSENT). The transmission factors used by the second method are a byproduct of the sensitivity calculations and are mathematically equivalent to the surface adjoint function phi*, which gives the dose equivalent transmitted through a slab of thickness T due to one particle incident on the surface in the gth energy group and jth direction. 18 refs., 1 fig., 50 tabs.
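
    The contrast between the exponential and the traditional linear sensitivity model can be sketched as follows. The sensitivity coefficient and perturbation sizes below are generic illustrations, not values from the CONSENT data base.

```python
import math

def dose_linear(d0, sens, rel_changes):
    """First-order (linear) perturbation: D ~ D0 * (1 + sum S_i * dx_i/x_i)."""
    return d0 * (1.0 + sum(s * dx for s, dx in zip(sens, rel_changes)))

def dose_exponential(d0, sens, rel_changes):
    """Exponential model: D = D0 * exp(sum S_i * dx_i/x_i). Agrees with the
    linear model for small changes but stays physical for large ones."""
    return d0 * math.exp(sum(s * dx for s, dx in zip(sens, rel_changes)))

# Hypothetical sensitivity S = -2 (dose falls as the constituent increases)
small = (dose_linear(1.0, [-2.0], [0.01]), dose_exponential(1.0, [-2.0], [0.01]))
large = (dose_linear(1.0, [-2.0], [0.8]), dose_exponential(1.0, [-2.0], [0.8]))
```

    For the large perturbation the linear model predicts a negative (unphysical) dose while the exponential form remains positive, which illustrates why an exponential model can improve on the linear one for sizable composition changes.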

  3. Comparison of traditional and simplified methods for repairing CAD/CAM feldspathic ceramics (United States)

    Louca, Chris; Ferrari, Marco


    PURPOSE To evaluate the adhesion to CAD/CAM feldspathic blocks, by failure analysis and shear bond strength test (SBSt), of different restorative systems and different surface treatments, for the purpose of moderate chipping repair. MATERIALS AND METHODS A self-adhering flowable composite (Vertise Flow, Kerr) containing bi-functional phosphate monomers and a conventional flowable resin composite (Premise Flow, Kerr), applied with and without an adhesive system (Optibond Solo Plus, Kerr), were combined with three different surface treatments (hydrofluoric acid etching, sandblasting, and a combination of both) for repairing feldspathic ceramics. Two commercial systems for ceramic repair were tested as controls (Porcelain Repair Kit, Ultradent, and CoJet System, 3M). SBSt was performed and failure mode was evaluated using a digital microscope. A one-way ANOVA (Tukey post hoc test) was applied to the SBSt data and Fisher's exact test was applied to the failure analysis data. RESULTS The use of resin systems containing bi-functional phosphate monomers combined with hydrofluoric acid etching of the ceramic surface gave the highest bond strength values and the most favorable failure modes. CONCLUSION The simplified repair method based on a self-adhering flowable resin combined with hydrofluoric acid etching showed high bond strength values and a favorable failure mode, with a less time-consuming and less technique-sensitive procedure compared to the standard procedure. PMID:28874992

  4. Simplified cephalometric lines for the estimation of muscular lines of action. (United States)

    Ferrario, V F; Sforza, C; Miani, A; Colombo, A


    In the study of masticatory muscle performance, one of the biomechanical variables that can be estimated is the mechanical advantage of the masticatory muscles, namely, the ratio between the muscular moment arm and the bite force moment arm. In the present study, the position of the estimated line of action of the masseter muscle, drawn between gonion and orbitale (Go-Or) relative to dental (occlusal plane) and skeletal (Frankfort plane) references was analyzed in 431 pretreatment lateral cephalograms of orthodontic patients (195 males, 236 females) aged 6 to 50 years, and in the lateral tracings of the Bolton standards (6 to 18 years of age). The following measurements were evaluated: (1) skeletocutaneous class (soft tissue equivalent of Wits appraisal, linear distance in millimeters between the projections of points A' and B' on the occlusal plane); (2) angle between the Go-Or line and the perpendicular to the occlusal plane at the molar occlusal point; and (3) angle between the Go-Or line and the Frankfort plane. In the patients, the skeletocutaneous class ranged between -14.5 and 15.5 mm, without any sex- or age-related differences. The angle between the Go-Or line and the perpendicular to the occlusal plane was, on average, 39 degrees (range 15 to 53 degrees), and it decreased with advancing age; while the average angle between Go-Or and Frankfort plane was 42 degrees (range 30 to 54 degrees), and it increased in older patients. No effects of sex were found. The two angles were significantly correlated with each other, while no correlations were found with the sagittal jaw discrepancy. Similar results were obtained on the Bolton tracings. Overall, the present cephalometric analogue could be useful in biomechanical simulations.

  5. Simplified formulae for the estimation of offshore wind turbines clutter on marine radars. (United States)

    Grande, Olatz; Cañizo, Josune; Angulo, Itziar; Jenn, David; Danoon, Laith R; Guerra, David; de la Vega, David


    The potential impact that offshore wind farms may cause on nearby marine radars should be considered before the wind farm is installed. Strong radar echoes from the turbines may degrade radars' detection capability in the area around the wind farm. Although conventional computational methods provide accurate results of scattering by wind turbines, they are not directly implementable in software tools that can be used to conduct the impact studies. This paper proposes a simple model to assess the clutter that wind turbines may generate on marine radars. This method can be easily implemented in the system modeling software tools for the impact analysis of a wind farm in a real scenario.

  6. Coping with Nasty Surprises: Improving Risk Management in the Public Sector Using Simplified Bayesian Methods

    National Research Council Canada - National Science Library

    Matthews, Mark; Kompas, Tom


    .... Given the utility of these approaches to public policy, this article considers the case for refreshing the general practice of risk management in governance by using a simplified Bayesian approach...

  7. Simplified Equations to Estimate Flushline Diameter for Subsurface Drip Irrigation Systems (United States)

    A formulation of the Hazen-Williams equation is typically used to determine the diameter of the common flushline that is often used at the distal end of subsurface drip irrigation systems to aid in joint flushing of a group of driplines. Although this method is accurate, its usage is not intuitive a...
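
    For reference, the SI form of the Hazen-Williams head-loss relation that such a formulation builds on can be sketched as follows; the roughness coefficient C = 150 (smooth plastic pipe) and the flow and pipe dimensions are common illustrative assumptions, not values from this work.

```python
def hazen_williams_headloss_m(q_m3_s, d_m, length_m, c=150.0):
    """Head loss (m) along a pipe, Hazen-Williams equation in SI form:
    h_f = 10.67 * L * Q**1.852 / (C**1.852 * D**4.87)."""
    return 10.67 * length_m * q_m3_s ** 1.852 / (c ** 1.852 * d_m ** 4.87)

# Effect of flushline diameter on head loss during flushing (hypothetical case)
h_small = hazen_williams_headloss_m(0.005, 0.05, 100.0)  # 50 mm line
h_large = hazen_williams_headloss_m(0.005, 0.10, 100.0)  # 100 mm line
```

    The strong D^4.87 dependence is why selecting the flushline diameter matters: doubling the diameter cuts head loss by roughly a factor of 29 at the same flushing flow.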

  8. An improved method for preparing Agrobacterium cells that simplifies the Arabidopsis transformation protocol

    Directory of Open Access Journals (Sweden)

    Ülker Bekir


    Full Text Available Abstract Background The Agrobacterium vacuum infiltration (Bechtold et al 1993) and floral-dip (Clough and Bent 1998) methods are very efficient for generating transgenic Arabidopsis plants. These methods allow plant transformation without the need for tissue culture. Large volumes of bacterial cultures grown in liquid media are necessary for both of these transformation methods. This limits the number of transformations that can be done at a given time due to the need for expensive large shakers and limited space on them. Additionally, the bacterial colonies derived from solid media necessary for starting these liquid cultures often fail to grow in such large volumes. Therefore the optimum stage of plant material for transformation is often missed and new plant material needs to be grown. Results To avoid problems associated with large bacterial liquid cultures, we investigated whether bacteria grown on plates are also suitable for plant transformation. We demonstrate here that bacteria grown on plates can be used with similar efficiency for transforming plants even after one week of storage at 4°C. This makes it much easier to synchronize Agrobacterium and plants for transformation. DNA gel blot analysis was carried out on the T1 plants surviving the herbicide selection and demonstrated that the surviving plants are indeed transgenic. Conclusion The simplified method works as efficiently as the previously reported protocols and significantly reduces the workload, cost and time. Additionally, the protocol reduces the risk of large scale contaminations involving GMOs. Most importantly, many more independent transformations per day can be performed using this modified protocol.

  9. Proposal of a simplified method for the assessment of carrying capacity of woods in territorial forest planning

    Directory of Open Access Journals (Sweden)


    Full Text Available Methods for carrying capacity estimation are standardised for pastoral resources, but in particular environments it is important to assess the potential utilisation of forests as a forage source for grazing animals. Some experimental methods are time- and money-consuming, and they require an accurate evaluation of the production from woody species that can be utilised by animals. This work suggests a simplified methodology of vegetation sampling, to be used within forest planning, to evaluate the carrying capacity of woods. The proposed technique requires the determination of a maximum animal stocking rate and the assessment of herbaceous and woody vegetation and their palatability, in order to evaluate the eventual occurrence of negative factors to be taken into account as reductions of the maximum stocking rate previously identified. The research was carried out within the forest planning of the complex “Gallipoli-Cognato” (Basilicata). About 100 sample areas were surveyed to assess the woody and herbaceous layers present in the forest. Based on these findings, three classes of overall forage quality were identified, and a different, decreasing level of potential stocking rate was attributed to each, starting from the maximum, which for the studied area was set at 0.3 LU ha-1 year-1. The proposed methodology proved simple and fast, even if some improvements are necessary concerning the evaluation of the herbaceous vegetation and the assessment of the damage caused by grazing animals to soil and forest.

  10. Simplified rotor load models and fatigue damage estimates for offshore wind turbines. (United States)

    Muskulus, M


    The aim of rotor load models is to characterize and generate the thrust loads acting on an offshore wind turbine. Ideally, the rotor simulation can be replaced by time series from a model with only a few parameters and state variables. Such models are used extensively in control system design and, as a potentially new application area, structural optimization of support structures. Different rotor load models are here evaluated for a jacket support structure in terms of fatigue lifetimes of relevant structural variables. All models were found to be lacking in accuracy, with differences of more than 20% in fatigue load estimates. The most accurate models were the use of an effective thrust coefficient determined from a regression analysis of dynamic thrust loads, and a novel stochastic model in state-space form. The stochastic model explicitly models the quasi-periodic components obtained from rotational sampling of turbulent fluctuations. Its state variables follow a mean-reverting Ornstein-Uhlenbeck process. Although promising, more work is needed on how to determine the parameters of the stochastic model before accurate lifetime predictions can be obtained without comprehensive rotor simulations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
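
    The mean-reverting Ornstein-Uhlenbeck component of such a stochastic load model can be sampled exactly in discrete time. The sketch below shows the general scheme only; the parameter values are illustrative, not fitted rotor-load parameters.

```python
import math
import random

def ou_series(n, dt, theta, mu, sigma, x0=None, seed=0):
    """Exact-discretization sample path of an Ornstein-Uhlenbeck process
    dX = theta*(mu - X) dt + sigma dW, i.e. a mean-reverting fluctuation
    about a mean thrust level mu. All parameter values are illustrative."""
    rng = random.Random(seed)
    x = mu if x0 is None else x0
    out = [x]
    a = math.exp(-theta * dt)                      # one-step decay factor
    sd = sigma * math.sqrt((1.0 - a * a) / (2.0 * theta))  # exact step std dev
    for _ in range(n - 1):
        x = mu + (x - mu) * a + sd * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# Hypothetical thrust fluctuation: mean 5 (arbitrary units), fast reversion
path = ou_series(2000, 0.1, 2.0, 5.0, 0.1)
```

    The exact discretization avoids the step-size bias of a naive Euler scheme, which matters when such series feed long fatigue simulations.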

  11. Simplified Formulae for the Estimation of Offshore Wind Turbines Clutter on Marine Radars

    Directory of Open Access Journals (Sweden)

    Olatz Grande


    Full Text Available The potential impact that offshore wind farms may cause on nearby marine radars should be considered before the wind farm is installed. Strong radar echoes from the turbines may degrade radars’ detection capability in the area around the wind farm. Although conventional computational methods provide accurate results of scattering by wind turbines, they are not directly implementable in software tools that can be used to conduct the impact studies. This paper proposes a simple model to assess the clutter that wind turbines may generate on marine radars. This method can be easily implemented in the system modeling software tools for the impact analysis of a wind farm in a real scenario.

  12. Estimating the biological half-life for radionuclides in homoeothermic vertebrates: a simplified allometric approach

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, N.A. [Lancaster Environment Centre, NERC Centre for Ecology and Hydrology, Lancaster (United Kingdom); Vives i Batlle, J. [Belgian Nuclear Research Centre, Mol (Belgium)


    The application of allometric, or mass-dependent, relationships within radioecology has increased with the evolution of models to predict the exposure of organisms other than man. Allometry presents a method of addressing the lack of empirical data on radionuclide transfer and metabolism for the many radionuclide-species combinations which may need to be considered. However, sufficient data across a range of species with different masses are required to establish allometric relationships and this is not always available. Here, an alternative allometric approach to predict the biological half-life of radionuclides in homoeothermic vertebrates which does not require such data is derived. Biological half-life values are predicted for four radionuclides and compared to available data for a range of species. All predictions were within a factor of five of the observed values when the model was parameterised appropriate to the feeding strategy of each species. This is an encouraging level of agreement given that the allometric models are intended to provide broad approximations rather than exact values. However, reasons why some radionuclides deviate from what would be anticipated from Kleiber's law need to be determined to allow a more complete exploitation of the potential of allometric extrapolation within radioecological models. (orig.)
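
    The underlying idea is a Kleiber-type mass-dependent power law for the biological half-life, checked against observations with a factor-of-five criterion. A minimal sketch, with a placeholder radionuclide-specific coefficient (the paper's derived parameters depend on radionuclide and feeding strategy):

```python
def biological_half_life_d(mass_kg, a, b=0.25):
    """Allometric biological half-life T_1/2 = a * M**b (days).
    b ~ 0.25 is the Kleiber-type metabolic scaling exponent; 'a' is
    radionuclide-specific and the value used below is hypothetical."""
    return a * mass_kg ** b

def within_factor(predicted, observed, k=5.0):
    """Acceptance criterion used in the comparison: agreement within a factor k."""
    return max(predicted, observed) / min(predicted, observed) <= k

# Hypothetical 'a' for a caesium-like radionuclide in a 60 kg animal
t_pred = biological_half_life_d(60.0, a=20.0)
```

    The appeal of the approach is exactly this economy: one coefficient and a mass, rather than species-by-species transfer data.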

  13. Forecast skill score assessment of a relocatable ocean prediction system, using a simplified objective analysis method (United States)

    Onken, Reiner


    A relocatable ocean prediction system (ROPS) was applied to an observational data set which was collected in June 2014 in the waters to the west of Sardinia (western Mediterranean) in the framework of the REP14-MED experiment. The observational data, comprising more than 6000 temperature and salinity profiles from a fleet of underwater gliders and shipborne probes, were assimilated in the Regional Ocean Modeling System (ROMS), which is the heart of ROPS, and verified against independent observations from ScanFish tows by means of the forecast skill score as defined by Murphy (1993). A simplified objective analysis (OA) method was utilised for assimilation, taking account of only those profiles which were located within a predetermined time window W. As a result of a sensitivity study, the highest skill score was obtained for a correlation length scale C = 12.5 km, W = 24 h, and r = 1, where r is the ratio between the error of the observations and the background error, both for temperature and salinity. Additional ROPS runs showed that (i) the skill score of assimilation runs was mostly higher than the score of a control run without assimilation, (ii) the skill score increased with increasing forecast range, and (iii) the skill score for temperature was higher than the score for salinity in the majority of cases. Furthermore, it is demonstrated that the vast number of observations can be managed by the applied OA method without data reduction, enabling timely operational forecasts even on a commercially available personal computer or a laptop.
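
    The role of the correlation length C and the error ratio r in a simplified OA scheme can be illustrated in one dimension with Gaussian weights. This is a toy version of the general idea, not the REP14-MED implementation; the field values are hypothetical.

```python
import math

def objective_analysis(x_grid, obs_x, obs_val, background, C=12.5, r=1.0):
    """One-dimensional objective-analysis sketch: each analysis point gets
    a background-plus-increment value, with observation influence decaying
    as a Gaussian of distance (correlation length C, in km) and damped by
    the observation-to-background error ratio r."""
    analysed = []
    for xg in x_grid:
        w = [math.exp(-((xg - xo) ** 2) / (2.0 * C ** 2)) for xo in obs_x]
        incr = sum(wi * (vo - background) for wi, vo in zip(w, obs_val)) / (sum(w) + r)
        analysed.append(background + incr)
    return analysed

# One 20 °C observation against a 15 °C background, analysed at two points
near = objective_analysis([0.0], [0.0], [20.0], 15.0)    # at the observation
far = objective_analysis([100.0], [0.0], [20.0], 15.0)   # 100 km away
```

    With r = 1 the analysis at the observation point lands halfway between background and observation, while points far beyond C revert to the background, which is the behaviour the sensitivity study tunes.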

  14. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy. (United States)

    Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T


    We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.

  15. Simplified method for the computation of parameters of power-law rate equations from time-series. (United States)

    Díaz-Sierra, R; Fairén, V


    Modeling biological processes from time-series data is a resourceful procedure which has received much attention in the literature. For models established in the context of non-linear differential equations, parameter-dependent phenomenological tentative response functions are tested by comparing would-be solutions of those models to the experimental time-series. Those values of the parameters for which a tested solution is a best fit are then retained. This is done with the help of an appropriate optimization algorithm which simplifies the searching procedure within the range of variability of the parameters that are to be estimated. The procedure works well in problems with a small number of adjustable parameters and/or with narrow searching ranges. However, it may become problematic for models with a large number of parameters, inasmuch as convergence to the best fit is not necessarily ensured. In this case, a reduction in size of the parameter estimation problem must be undertaken. We address this issue by proposing a systematic procedure that does so in problems in which the system's response to a sufficiently small pulse perturbation of the steady state can be obtained. The response is then assumed to be a solution of the linearized equations, the Jacobian of which can be retrieved by a simple multilinear regression. The calculated n^2 Jacobian entries provide as many relationships among problem parameters, thus substantially cutting the size of the starting problem. After this preliminary treatment is applied, only (κ - n^2) of the initial κ adjustable parameters are left for evaluation by means of a non-linear optimization procedure. The benefits of the present variant are both in economy of computation and in accuracy in determining the parameter values. The performance of the method is established under different circumstances.
    It is illustrated in the context of power-law rates, although this does not preclude its applicability
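
    The key step, retrieving the Jacobian of the linearized system by multilinear regression on a pulse-relaxation time series, can be sketched as follows. The synthetic data and use of numpy are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def jacobian_from_relaxation(t, X):
    """Estimate J in the linearized system x' = J x from a relaxation
    time series X of shape (len(t), n): finite-difference the derivatives,
    then solve one multilinear regression per state variable."""
    dXdt = np.gradient(X, t, axis=0)                    # numerical derivatives
    J_t, *_ = np.linalg.lstsq(X, dXdt, rcond=None)      # X @ J.T ~ dX/dt
    return J_t.T

# Synthetic pulse response of a known linear system with J = diag(-1, -2)
t = np.linspace(0.0, 1.0, 1001)
X = np.column_stack([np.exp(-t), 0.5 * np.exp(-2.0 * t)])
J_est = jacobian_from_relaxation(t, X)
```

    For an n-state system this yields the n^2 Jacobian entries from a single perturbation experiment, which is exactly what cuts the non-linear search down to the remaining parameters.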

  16. Detailed disc assembly temperature prediction: comparison between CFD and simplified engineering methods

    CSIR Research Space (South Africa)

    Snedden, Glen C


    Full Text Available Previous simulations of a turbojet disc cavity with full Navier-Stokes CFD and simplified geometry and boundary conditions have been improved to reduce the level of approximation. A new grid was built using a multi-block approach. The case...

  17. A Simplified Method for Calculating Propeller Thrust Decrease for a Ship Sailing on a Given Shipping Lane

    Directory of Open Access Journals (Sweden)

    Zelazny Katarzyna


    Full Text Available During ship sailing on rough water, relative ship motions can be observed which make the propeller emerge from the water and, as a consequence, decrease its thrust. The article presents a simplified method for calculating the thrust decrease and the time of propeller emergence from the water for a ship on regular and irregular waves. The method can be used for predicting the operating speed of the ship on a given shipping lane.

  18. Applications of Generalized Method of Moments Estimation


    Wooldridge, Jeffrey M.


    I describe how the method of moments approach to estimation, including the more recent generalized method of moments (GMM) theory, can be applied to problems using cross section, time series, and panel data. Method of moments estimators can be attractive because in many circumstances they are robust to failures of auxiliary distributional assumptions that are not needed to identify key parameters. I conclude that while sophisticated GMM estimators are indispensable for complicated estimation ...
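
    As a minimal illustration of the method-of-moments idea (without GMM's full weighting machinery): equate population moments to sample moments and solve for the parameters. For a gamma distribution with shape k and scale θ, the mean is kθ and the variance kθ², so both parameters follow from the first two sample moments. The sample below is hypothetical.

```python
def mom_gamma(sample):
    """Method-of-moments estimates (k, theta) for a gamma distribution:
    solve mean = k*theta and var = k*theta**2 for the two parameters."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    theta = var / mean      # scale
    return mean / theta, theta  # shape, scale

k_hat, theta_hat = mom_gamma([1.0, 3.0])
```

    No distributional likelihood is needed beyond the moment conditions themselves, which is the robustness property the abstract refers to.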

  19. Empirical methods in the evaluation of estimators (United States)

    Gerald S. Walton; C.J. DeMars; C.J. DeMars


    The authors discuss the problem of selecting estimators of density and survival by making use of data on a forest-defoliating larva, the spruce budworm. Various estimators are compared. The results show that, among the estimators considered, ratio-type estimators are superior in terms of bias and variance. The methods used in making comparisons, particularly simulation...

  20. Applications of probabilistic methods in geotechnical engineering. Part 1. Simplified reliability analyses for geotechnical problems

    Energy Technology Data Exchange (ETDEWEB)

    Kavazanjian, E. Jr.; Chameau, J.L.; Clough, G.W.; Hadj-Hamou, T.


    In this report, several approximate methods for determining the first and second moments of the probability density function of a random design variable have been presented. Once these moments are known, the probability of failure or reliability of the system can be calculated by assuming a distribution shape and using standard probability tables. This technique can be used on a variety of geotechnical problems to transform existing deterministic analyses into probabilistic design analyses. Examples of the application of the methodology developed herein are presented in the areas of shear strength, bearing capacity, and slope stability. The examples are used to illustrate how a reliability type of design analysis might be performed, the influence of the choice of distribution on calculated results, and the contrasts between conventional factor of safety analyses and probabilistic analyses. The point estimate method presented in this report is a simple and powerful tool that can be used in conjunction with any numerical analysis. By identifying the N random variables in a design analysis and assigning to them one-sigma bounds to reflect subjective uncertainty or parameter variability, the mean and variances of the design parameters can be estimated from the 2^N point estimates. The primary obstacle to successful implementation of the techniques presented in this report is the lack of information on appropriate distribution shapes for various geotechnical design parameters.
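
    The 2^N point-estimate scheme described above (Rosenblueth's method, in its uncorrelated symmetric form) is compact enough to sketch directly. The response function and one-sigma bounds below are hypothetical illustrations.

```python
from itertools import product

def point_estimate(g, means, sigmas):
    """Rosenblueth-style 2**N point-estimate method (uncorrelated,
    symmetric case): evaluate g at every mean +/- sigma corner with
    equal weights and return the estimated mean and variance of g."""
    n = len(means)
    w = 1.0 / 2 ** n
    vals = [g(*[m + s * e for m, s, e in zip(means, sigmas, signs)])
            for signs in product((-1.0, 1.0), repeat=n)]
    mean = w * sum(vals)
    var = w * sum(v * v for v in vals) - mean ** 2
    return mean, var

# Hypothetical two-variable design response
mean_g, var_g = point_estimate(lambda x, y: x + y, [1.0, 2.0], [0.5, 0.5])
```

    Because g is treated as a black box, the same scheme wraps around any deterministic analysis (bearing capacity, slope stability, etc.) at the cost of 2^N evaluations.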

  1. A simplified method for determination of radioactive iron in whole-blood samples

    DEFF Research Database (Denmark)

    Bukhave, Klaus; Sørensen, Anne Dorthe; Hansen, M.


    For studies on iron absorption in man, radioisotopes represent an easy and simple tool. However, measurement of the orbital-electron-emitting radioiron, Fe-55, in blood is difficult and insufficiently described in the literature. The present study describes a relatively simple method for simultaneous determination of Fe-55 and Fe-59 in blood, using a dry-ashing procedure and recrystallization of the remaining iron. The detection limit of the method permits measurements of 0.1 Bq/ml blood, thus allowing detection of less than 1% absorption from a 40 kBq dose, which is ethically acceptable. The procedure thus provides a sensitive method for studying the intestinal absorption of Fe-55 and Fe-59 in man and at the same time allows estimation of the amount of radioiron located in the vascular compartment.

  2. A simplified method for generation of pseudo natural colours from colour infrared aerial photos

    DEFF Research Database (Denmark)

    Knudsen, Thomas; Olsen, Brian Pilemann

    In spite of their high potential for automated discrimination between vegetation and man-made objects, colour-infrared (CIR) aerial photos have not been in widespread use for traditional photogrammetric mapping. This is probably due to their awkward colour representation, which invalidates the visual analytical experience of the stereo analysts doing the actual registration of the topographical data. In this paper, we present a method for generating pseudo natural colour (PNC) representations from CIR photos. This enables the combination of automated vegetation discrimination with traditional manual registration. ... In the second step, the blue colour component is estimated using tailored models for each domain. Green and red colour components are taken directly from the CIR photo. The visual impression of the results from the 2-step method is only slightly inferior to the original 7-step method. The implementation, however...

  3. Simplified Method for Preliminary EIA of WE Installations based on Newtechnology Classification

    DEFF Research Database (Denmark)

    Margheritini, Lucia


    The Environmental Impact Assessment (EIA) is an environmental management instrument implemented worldwide. Full-scale WECs are expected to be subject to EIA. The consent application process can be very demanding for Wave Energy Converter (WEC) developers. The process is possibly aggravated...... depending on few strategic parameters to simplify and speed up the scoping procedure and to provide an easier understanding of the technologies to the authorities and bodies involved in the EIA of WECs....

  4. Simplified Method of Optimal Sizing of a Renewable Energy Hybrid System for Schools

    Directory of Open Access Journals (Sweden)

    Jiyeon Kim


    Full Text Available Schools are a suitable type of public building for renewable energy systems. Renewable energy hybrid systems (REHSs) have recently been introduced in schools following a new national regulation that mandates renewable energy utilization. An REHS combines common renewable-energy sources such as geothermal heat pumps, solar collectors for water heating, and photovoltaic systems with conventional energy systems (i.e., boilers and air-source heat pumps). Optimal design of an REHS by adequate sizing is not a trivial task because it usually requires intensive work including detailed simulation and demand/supply analysis. This type of simulation-based approach to optimization is difficult to implement in practice. To address this, the present paper proposes simplified sizing equations for the renewable-energy systems of an REHS. A conventional optimization process is used to calculate the optimal combinations of an REHS for cases with different numbers of classrooms and budgets. On the basis of the results, simplified sizing equations that use only the number of classrooms as input are proposed by regression analysis. A verification test was carried out against the initial conventional optimization process. The results show that the simplified sizing equations predict sizing results similar to those of the initial process, with capital costs matching within a 2% error.
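
    The regression step described above can be sketched in a few lines. The classroom counts and optimal PV capacities below are invented for illustration, not taken from the paper:

```python
# Hypothetical illustration of the regression step: fit a simplified
# sizing equation capacity ~ a * n_classrooms + b from optimization
# results. All numbers below are made up for the sketch.
def fit_linear(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# n_classrooms -> optimal PV capacity (kW) from a conventional
# optimization run (hypothetical values)
classrooms = [10, 20, 30, 40]
pv_kw      = [22.0, 41.0, 63.0, 80.0]

a, b = fit_linear(classrooms, pv_kw)

def pv_size(n_classrooms):
    # simplified sizing equation: input is the number of classrooms only
    return a * n_classrooms + b
```

    In the paper, analogous equations would be fitted for each renewable source and verified against the full optimization; the sketch shows only the regression mechanics.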

  5. A simplified method for quantitative assessment of the relative health and safety risk of environmental management activities

    Energy Technology Data Exchange (ETDEWEB)

    Eide, S.A.; Smith, T.H.; Peatross, R.G.; Stepan, I.E.


    This report presents a simplified method to assess the health and safety risk of Environmental Management activities of the US Department of Energy (DOE). The method applies to all types of Environmental Management activities including waste management, environmental restoration, and decontamination and decommissioning. The method is particularly useful for planning or tradeoff studies involving multiple conceptual options because it combines rapid evaluation with a quantitative approach. The method is also potentially applicable to risk assessments of activities other than DOE Environmental Management activities if rapid quantitative results are desired.

  6. A simplified approach to estimating the distribution of occasionally-consumed dietary components, applied to alcohol intake

    Directory of Open Access Journals (Sweden)

    Julia Chernova


    Full Text Available Abstract Background Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming, and the precision of MC estimates depends on the size of the simulated data, which can hinder reproducibility of results. Methods We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out to be non-significant at the 5 % significance level (p-value 0.062) but was estimated as significant in the correctly specified model (p-value 0.016). 
Conclusions The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
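
    The core idea can be sketched with a hypothetical two-part model (daily consumption probability from a logistic link, positive intakes lognormal, one shared normal random effect); this is not the paper's exact specification, only an illustration of numerical integration versus MC for the marginal intake distribution:

```python
import math, random

# Hypothetical two-part model: a person with random effect u consumes
# on a given day with probability p(u) = logistic(a + u); positive
# intakes are lognormal with log-mean m + u. The population CDF of
# daily intake is an integral over u ~ N(0, tau^2), evaluated here by
# deterministic numerical integration and, for comparison, by MC.
a, m, sigma, tau = -0.5, 1.0, 0.6, 0.4   # made-up model parameters

def logistic(x): return 1.0 / (1.0 + math.exp(-x))
def norm_cdf(x): return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def cdf_given_u(t, u):
    p = logistic(a + u)                  # probability of any intake
    return (1 - p) + p * norm_cdf((math.log(t) - (m + u)) / sigma)

def cdf_numint(t, n=2001, width=6.0):
    # trapezoid rule over u in [-width*tau, width*tau]
    us = [-width * tau + 2 * width * tau * i / (n - 1) for i in range(n)]
    phi = [math.exp(-0.5 * (u / tau) ** 2) / (tau * math.sqrt(2 * math.pi))
           for u in us]
    f = [cdf_given_u(t, u) * w for u, w in zip(us, phi)]
    h = us[1] - us[0]
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

def cdf_mc(t, ndraw=100000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(ndraw):
        u = rng.gauss(0, tau)
        if rng.random() >= logistic(a + u):
            hits += 1                    # a zero-intake day is <= t
        elif rng.lognormvariate(m + u, sigma) <= t:
            hits += 1
    return hits / ndraw
```

    The quadrature result is deterministic and reproducible, while the MC estimate carries sampling noise that shrinks only as the number of draws grows, which mirrors the reproducibility argument made in the abstract.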

  7. Simplified performance estimation of ISM-band, OFDM-based WSNs according to the sensitivity/SINAD parameters


    van Rhyn, Pierre; Gerhard P. Hancke


    A novel method is proposed to estimate committed information rate (CIR) variations in typical orthogonal frequency division multiplexing (OFDM) wireless local area networks (WLANs) that are applied in wireless sensor networks (WSNs) and operate within the industrial, scientific and medical (ISM) frequency bands. The method is based on the observation of a phenomenon whose significance has not previously been recognized or documented, here termed the service level differential zone (SL...

  8. Simplified methods of determining treatment retention in Malawi: ART cohort reports vs. pharmacy stock cards. (United States)

    Chan, A K; Singogo, E; Changamire, R; Ratsma, Y E C; Tassie, J-M; Harries, A D


    Rapid scale-up of antiretroviral therapy (ART) has challenged the health system in Malawi to monitor large numbers of patients effectively. The objective was to compare two methods of determining retention on treatment: quarterly ART clinic data aggregation vs. pharmacy stock cards. Between October 2010 and March 2011, data on ART outcomes were extracted from monitoring tools at five facilities, and pharmacy data on ART consumption were extracted. The workload for each method was observed and timed. We used intraclass correlation and Bland-Altman plots to compare the agreement of the two methods in determining treatment retention. There is wide variability between ART clinic cohort data and pharmacy data in determining treatment retention, due to divergence in data at sites with large numbers of patients. However, there is a non-significant trend towards agreement between the two methods (intraclass correlation coefficient > 0.9; P > 0.05). Pharmacy stock card monitoring is more time-efficient than quarterly ART data aggregation (81 min vs. 573 min). In low-resource settings, pharmacy records could be used to improve drug forecasting and estimate ART retention in a more time-efficient manner than quarterly data aggregation; however, a necessary precondition would be capacity building around pharmacy data management, particularly for large cohorts.
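
    The Bland-Altman part of such a comparison can be sketched as follows; the paired retention counts are invented for illustration, not taken from the study:

```python
import statistics as st

# Bland-Altman agreement sketch: systematic bias and 95% limits of
# agreement between two measurement methods. Paired retention counts
# below are hypothetical.
cohort   = [410, 980, 1525, 260, 700]   # patients retained per cohort report
pharmacy = [402, 1015, 1480, 265, 690]  # patients estimated from stock cards

diffs = [c - p for c, p in zip(cohort, pharmacy)]
bias = st.mean(diffs)                    # mean difference (systematic bias)
sd = st.stdev(diffs)                     # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
```

    Agreement is judged by whether the differences scatter tightly around a near-zero bias; divergence at high-volume sites would show up as points far outside the limits.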

  9. A quick method to estimate low voltage problem (United States)

    He, Yuqing; He, Hongbin; Liu, Cong; Jiang, Zhuohan; Liu, Bo


    To address the difficulty of predicting low-voltage problems in distribution networks, a method for estimating low voltage is proposed from two aspects: network simplification and load simplification. On the basis of the different construction of backbone and branch lines, a simplified backbone-branch network model is proposed, and the problem of a large number of input parameters is solved through parameter estimation. On the basis of the division of the trunk, a branch load model is constructed to realize a rapid distribution of the load. Finally, using voltage-drop theory, a simple and practical quick check for low-voltage loss is formed, making it easy for grassroots staff to use.
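
    The voltage-drop check at the heart of such a method can be illustrated with the standard approximation dU = (P*R + Q*X)/U applied segment by segment; the feeder parameters, loads, and 0.93 pu threshold below are hypothetical, not the paper's values:

```python
# Illustrative quick check (not the paper's exact formulas): walk down a
# radial feeder, subtracting the approximate voltage drop of each
# segment, and flag a low-voltage problem if the end voltage falls
# below a chosen per-unit threshold.
def feeder_end_voltage(u0, segments):
    """segments: list of (P_kw, Q_kvar, R_ohm, X_ohm); u0 in volts."""
    u = u0
    for p_kw, q_kvar, r, x in segments:
        du = (p_kw * 1e3 * r + q_kvar * 1e3 * x) / u  # approx. drop (V)
        u -= du
    return u

u_nom = 400.0                                # nominal LV feeder voltage (V)
segments = [(50, 20, 0.08, 0.05),            # hypothetical segment data
            (30, 10, 0.12, 0.07)]
u_end = feeder_end_voltage(u_nom, segments)
low_voltage = u_end < 0.93 * u_nom           # quick low-voltage flag
```

    The backbone-branch simplification in the abstract amounts to running such a check on a reduced set of equivalent segments rather than the full network.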

  10. Methods of Estimating Strategic Intentions (United States)


    a considerable impact on subsequent escalation and responses. The previous research performed by MATHTECH dealt with an assessment of methods for...the experts has been termed "bootstrapping" (Dawes, 1971; Goldberg, 1970). Models of the expert judges outperform the judges themselves because...1971; Goldberg, 1970). Bootstrapping will improve judgments slightly under almost any realistic task conditions and it can be applied blindly, in

  11. Characteristic Analysis and DSP Realization of Fractional-Order Simplified Lorenz System Based on Adomian Decomposition Method (United States)

    Wang, Huihai; Sun, Kehui; He, Shaobo


    By adopting the Adomian decomposition method, the fractional-order simplified Lorenz system is solved and implemented on a digital signal processor (DSP). The Lyapunov exponent (LE) spectrum of the system is calculated based on QR factorization, and it accords well with the corresponding bifurcation diagrams. We analyze the influence of the parameter and the fractional derivative order on the system characteristics by means of color-coded maximum LE (LEmax) and chaos diagrams. It is found that the smaller the order is, the larger the LEmax is. The iteration step size also affects the lowest order at which chaos exists. Further, we implement the fractional-order simplified Lorenz system on a DSP platform. The phase portraits generated on the DSP are consistent with the results obtained by computer simulations. This lays a good foundation for applications of fractional-order chaotic systems.

  12. Estimating inelastic heavy-particle - hydrogen collision data. II. Simplified model for ionic collisions and application to barium-hydrogen ionic collisions (United States)

    Belyaev, Andrey K.; Yakovleva, Svetlana A.


    Aims: A simplified model is derived for estimating rate coefficients for inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular, the rate coefficients with high and moderate values. Such processes are important for non-local thermodynamic equilibrium modeling of cool stellar atmospheres. Methods: The derived method is based on the asymptotic approach for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: It is found that the rate coefficients are expressed via statistical probabilities and reduced rate coefficients. It is shown that the reduced rate coefficients for neutralization and ion-pair formation processes depend on single electronic bound energies of an atomic particle, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to barium-hydrogen ionic collisions. For the first time, rate coefficients are evaluated for inelastic processes in Ba+ + H and Ba2+ + H- collisions for all transitions between the states from the ground and up to and including the ionic state. Tables with calculated data are only available at the CDS via anonymous ftp to ( or via
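
    The Landau-Zener ingredient of the method can be illustrated with the standard single-passage transition probability P = exp(-2*pi*H12^2 / (hbar*v*|dF|)); the coupling, velocity, and slope difference below are hypothetical values in atomic units (hbar = 1), not data from the paper:

```python
import math

# Minimal Landau-Zener sketch (atomic units, hbar = 1): probability of
# a nonadiabatic transition in a single passage of an avoided crossing.
# H12 is the off-diagonal coupling, v the radial collision velocity,
# dF the difference of the diabatic potential slopes at the crossing.
# All numeric values here are hypothetical.
def landau_zener_p(h12, v, dF):
    return math.exp(-2.0 * math.pi * h12 ** 2 / (v * abs(dF)))

p = landau_zener_p(h12=1e-3, v=1e-3, dF=0.05)
# probability of ending on the other adiabatic state after the
# in-and-out double passage through the crossing
p_total = 2.0 * p * (1.0 - p)
```

    In a full rate-coefficient calculation this probability would be averaged over a Maxwellian velocity distribution and combined with the statistical probabilities mentioned in the abstract.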

  13. An assessment of roadway capacity estimation methods

    NARCIS (Netherlands)

    Minderhoud, M.M.; Botma, H.; Bovy, P.H.L.


    This report is an attempt to describe existing capacity estimation methods with their characteristic data demands and assumptions. After studying the methods, one should have a better idea about the capacity estimation problem which can be encountered in traffic engineering. Moreover, decisions to

  14. A comparative study between a simplified Kalman filter and Sliding Window Averaging for single trial dynamical estimation of event-related potentials

    DEFF Research Database (Denmark)

    Vedel-Larsen, Esben; Fuglø, Jacob; Channir, Fouad


    , are variable and depend on cognitive function. This study compares the performance of a simplified Kalman filter with Sliding Window Averaging in tracking dynamical changes in single trial P300. The comparison is performed on simulated P300 data with added background noise consisting of both simulated and real...... background EEG in various input signal-to-noise ratios. While both methods can be applied to track dynamical changes, the simplified Kalman filter has an advantage over Sliding Window Averaging, most notably a better noise suppression when both are optimized for faster changing latency and amplitude...
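
    The two trackers being compared can be sketched on a toy problem (not the paper's EEG model): a slowly drifting single-trial amplitude observed in noise, tracked by (a) a sliding-window average and (b) a scalar random-walk Kalman filter. All signal and noise parameters are invented:

```python
import random

# Toy single-trial tracking comparison with hypothetical parameters.
rng = random.Random(0)
true = [5.0 + 0.03 * t for t in range(200)]       # drifting amplitude
obs = [amp + rng.gauss(0, 1.0) for amp in true]   # noisy trial estimates

# (a) sliding-window average over the last 20 trials
win = 20
swa = []
for i in range(len(obs)):
    w = obs[max(0, i - win + 1):i + 1]
    swa.append(sum(w) / len(w))

# (b) scalar Kalman filter: state x_t = x_{t-1} + w_t, obs y_t = x_t + v_t
q, r = 0.01, 1.0          # assumed process / measurement noise variances
x, p = obs[0], 1.0
kf = []
for y in obs:
    p += q                # predict: variance grows by process noise
    k = p / (p + r)       # Kalman gain
    x += k * (y - x)      # update toward the new observation
    p *= (1 - k)
    kf.append(x)

mse_swa = sum((a - b) ** 2 for a, b in zip(true, swa)) / len(true)
mse_kf  = sum((a - b) ** 2 for a, b in zip(true, kf)) / len(true)
```

    Which tracker wins depends on the window length and on q and r; the abstract's finding is that a suitably tuned Kalman filter suppresses noise better when latency and amplitude change quickly.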

  15. A Simplified Parameter Design Method for Transformation Optics-Based Metamaterial Innovative Cloak (United States)

    Li, Ting-Hua; Huang, Ming; Yang, Jing-Jing; Lu, Jin; Cao, Hui-Lu


    Transformation optics-based innovative cloak, which combines the virtues of both internal and external cloaks to enable arbitrary multiple objects to be hidden while retaining vision and movement, was first proposed by Huang et al. [Appl. Phys. Lett. 101, 151901 (2012)]. But it is rather difficult to implement in practice, as the required material parameters vary with radius and even have singular values. To accelerate its practical realization while keeping good invisibility performance, a simplified innovative cloak with only a spatially varying axial parameter is developed by choosing an appropriate transformation function. The advantage of such a cloak is that both radial and azimuthal parameters are constants, and all three components are nonsingular and finite. Full-wave simulation confirms the perfect cloaking effect of the cloak. Besides, the influences of metamaterial loss and parameter deviation on the performance of the cloak are also investigated. This work provides a simple and feasible solution to push the metamaterial-assisted innovative cloak closer to practice.

  16. Age Estimation Methods in Forensic Odontology

    Directory of Open Access Journals (Sweden)

    Phuwadon Duangto


    Full Text Available Forensically, age estimation is a crucial step for biological identification. Currently, there are many methods with variable accuracy to predict the age of dead or living persons, such as physical examination, radiographs of the left hand, and dental assessment. Age estimation using radiographic tooth development has been found to be an accurate method because it is mainly genetically influenced and less affected by nutritional and environmental factors. The Demirjian et al. method has long been the most commonly used for dental age estimation using radiological techniques in many populations. This method, based on tooth developmental changes, is an easy-to-apply method since the different stages of tooth development are clearly defined. The aim of this article is to elaborate on age estimation using tooth development, with a focus on the Demirjian et al. method.

  17. Simplified Boardgames


    Kowalski, Jakub; Sutowicz, Jakub; Szykuła, Marek


    We formalize the Simplified Boardgames language, which describes a subclass of arbitrary board games. The language structure is based on regular expressions, which makes the rules easily machine-processable while keeping them concise and fairly human-readable.

  18. The Flight Optimization System Weights Estimation Method (United States)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.


    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube-with-wing aircraft, and a substantial amount of effort went into its development. This report serves as comprehensive documentation of the FLOPS weight estimation method. The development process is presented along with the weight estimation method itself.

  19. A Simplified Multiband Sampling and Detection Method Based on MWC Structure for Mm Wave Communications in 5G Wireless Networks

    Directory of Open Access Journals (Sweden)

    Min Jia


    Full Text Available The millimeter wave (mm wave) communications have been proposed as an important part of the 5G mobile communication networks, and they will bring more difficulties to signal processing, especially signal sampling, and put more pressure on hardware devices. In this paper, we present a simplified sampling and detection method based on the MWC structure, using the idea of blind source separation, for mm wave communications; it can avoid the challenges of signal sampling brought by high frequencies and wide bandwidths in mm wave systems. This proposed method takes full advantage of beneficial spectrum aliasing to achieve signal sampling at a sub-Nyquist rate. Compared with the traditional MWC system, it requires an exact number of sampling channels that is far lower than that of MWC. In the reconstruction stage, the proposed method reduces the computational complexity by exploiting simple linear operations instead of CS recovery algorithms and provides more stable signal recovery performance. Moreover, the MWC structure has the ability to apply to the different bands used in mm wave communications by mixed processing, which is similar to spread spectrum technology.

  20. A reliable and simplified sj/beta-TREC ratio quantification method for human thymic output measurement. (United States)

    Ferrando-Martínez, Sara; Franco, Jaime M; Ruiz-Mateos, Ezequiel; Hernández, Ana; Ordoñez, Antonio; Gutierrez, Encarnación; Leal, Manuel


    Current techniques to peripherally assess thymic function are the signal-joint T-cell receptor excision circle (sj-TREC) level measurement and the naive T cell and CD31+ TREC-rich subset determination. However, all of them are indirect approaches and none can be considered a direct recent thymic emigrant (RTE) marker. To overcome their limitations, Dion et al. (2004) described the sj/beta-TREC ratio, which allows peripheral quantification of the double negative to double positive intrathymic proliferation step. Nevertheless, the protocol described is expensive and sample- and time-consuming, thus limiting its usefulness. In this study, we describe a simplified protocol that reduces the number of PCR reactions needed from 33 to 9 while maintaining the sensitivity and reproducibility of the original technique. In addition, we corroborated the effectiveness of our technique as an accurate thymic output-related marker by correlating the peripheral sj/beta-TREC ratio with a direct measurement of thymic function, the percentage of double positive thymocytes (r=0.601, p<0.001). Copyright 2009 Elsevier B.V. All rights reserved.

  1. Estimation of railroad capacity using parametric methods. (United States)


    This paper reviews different methodologies used for railroad capacity estimation and presents a user-friendly method to measure capacity. The objective of this paper is to use multivariate regression analysis to develop a continuous relation of the d...

  2. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock


    Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records, as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter, in order to estimate the number of patients retained on ART. Information on the time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART when comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  3. Simplified methods for estimation of doses, received from a ground contamination and from flying through a cloud of radioactive particles, at long distances from nuclear weapon explosions; Mallar foer dosuppskattning fraan markbelaeggning och vid flygning genom radioaktiva partikelmoln paa stora avstaand fraan kaernvapenexplosioner

    Energy Technology Data Exchange (ETDEWEB)

    Thaning, L.; Tovedal, H.


    By using a simple Lagrangian model for atmospheric dispersion, the activity deposited on the ground from nuclear weapon explosions of different yields has been calculated at different distances (expressed in hours after the detonation). The external dose for the first month to a person without any shelter is estimated and presented in simple figures. Both situations with dry deposition only and with wet deposition included are considered. The external dose from the cloud and the internal dose from inhaled activity received when flying through the radioactive cloud are estimated as functions of flight level and distance from the detonation and shown in a number of tables. In order to determine where the cloud and contamination will occur, some kind of trajectory calculation has to be done. In this work only ground bursts have been considered.

  4. Oral hygiene assessment by school teachers and peer leaders using simplified method. (United States)

    Haleem, Abdul; Siddiqui, Muhammad Irfanullah; Khan, Ayyaz Ali


    A significant proportion of children in developing countries have plaque-induced gingivitis. A public health strategy may involve teachers and peer leaders to motivate and train school children in regular and thorough removal of dental plaque. The monitoring and evaluation of such a strategy may require teachers and peer leaders to assess the oral hygiene status of children at periodic intervals. To validate the simplified dental examination performed by teachers and peer leaders to detect dental plaque and calculus. This longitudinal study involved 632 adolescents studying in sixteen schools of Karachi, Pakistan. Eight schools each were randomly allocated to the peer-led and teacher-led strategies of examination. One section of class six was selected at random in each school to be included in the study. In each selected section of class six, the trained teacher-in-charge or a peer leader undertook dental examinations at baseline, 6-month and 18-month intervals, and their findings were compared with those of a dentist. The outcome measures included the Kappa values for examiner agreement as well as the sensitivity, specificity, and positive and negative predictive values. All teachers and peer leaders showed a substantial degree of agreement (Kappa ≥ 0.8) with the dentist in detecting plaque and calculus at all three examinations. The values of validity measures for teachers' and peer leaders' examinations were in the range of 87-90%. The examinations performed by teachers and peer leaders were reasonably valid for detecting plaque and calculus. However, booster training sessions are needed to maintain their performance as dental examiners.

  5. AgarTrap: a simplified Agrobacterium-mediated transformation method for sporelings of the liverwort Marchantia polymorpha L. (United States)

    Tsuboyama, Shoko; Kodama, Yutaka


    The liverwort Marchantia polymorpha L. is being developed as an emerging model plant, and several transformation techniques were recently reported. Examples are biolistic- and Agrobacterium-mediated transformation methods. Here, we report a simplified method for Agrobacterium-mediated transformation of sporelings, and it is termed Agar-utilized Transformation with Pouring Solutions (AgarTrap). The procedure of the AgarTrap was carried out by simply exchanging appropriate solutions in a Petri dish, and completed within a week, successfully yielding sufficient numbers of independent transformants for molecular analysis (e.g. characterization of gene/protein function) in a single experiment. The AgarTrap method will promote future molecular biological study in M. polymorpha.

  6. Effects of different simplified milk recording methods on genetic evaluation with test-day animal model


    Portolano, B.; Maizon, D. O.; Riggio, V.; Tolone, M.; Cacioppo, D.


    The aims of the present study were to compare estimated breeding values (EBV) for milk yield using different testing schemes with a test-day animal model and to evaluate the effect of different testing schemes on the ranking of top sheep. Alternative recording schemes that use less information than that currently obtained with a monthly test-day schedule were employed to estimate breeding values. A random regression animal mixed model that used a spline function of days in milk was fitted. EB...

  7. Method of estimation of turbulence characteristic scales

    CERN Document Server

    Kulikov, V A; Koryabin, A V; Shmalhausen, V I


    Here we propose an optical method that uses phase data of a laser beam, obtained from a Shack-Hartmann sensor, to estimate both the inner and outer scales of turbulence. The method is based on sequential analysis of normalized correlation functions of Zernike coefficients. It allows excluding the value of the refractive index structure constant from the analysis and reduces the solution of a two-parameter problem to the sequential solution of two single-parameter problems. The method has been applied to analyze measurements of a laser beam that propagated through a water cell with induced turbulence, and yielded estimates for the outer and inner scales.

  8. Effects of different simplified milk recording methods on genetic evaluation with test-day animal model

    NARCIS (Netherlands)

    Portolano, B.; Maizon, D.O.; Riggio, V.; Tolone, M.; Cacioppo, D.


    The aims of the present study were to compare estimated breeding values (EBV) for milk yield using different testing schemes with a test-day animal model and to evaluate the effect of different testing schemes on the ranking of top sheep. Alternative recording schemes that use less information than

  9. Methods for estimating loads transported by rivers

    Directory of Open Access Journals (Sweden)

    T. S. Smart


    Full Text Available Ten methods for estimating the loads of constituents in a river were tested using data from the River Don in North-East Scotland. By treating loads derived from flow and concentration data collected every 2 days as a truth to be predicted, the ten methods were assessed for use when concentration data are collected fortnightly or monthly by sub-sampling from the original data. Estimates of coefficients of variation, bias and mean squared errors of the methods were compared; no method consistently outperformed all others and different methods were appropriate for different constituents. The widely used interpolation methods can be improved upon substantially by modelling the relationship of concentration with flow or seasonality but only if these relationships are strong enough.
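
    Two of the commonly compared estimator families can be sketched on synthetic data (the flow record, sampling interval, and concentration-flow relation below are invented, not River Don data): a simple interpolation-type estimator and a flow-weighted estimator that models the relationship of concentration with flow implicitly by scaling sampled C*Q products to the full flow record.

```python
# Sketch of two common river-load estimators from sparse concentration
# samples. Q is daily mean flow (m3/s); concentrations C (mg/l, i.e.
# g/m3) are "sampled" every 14th day. All data are synthetic.
daily_flow = [8 + 4 * ((d * 37) % 11) / 10 for d in range(365)]
sample_days = range(0, 365, 14)
conc = {d: 2.0 + 0.1 * daily_flow[d] for d in sample_days}  # C rises with Q

mean_q = sum(daily_flow) / len(daily_flow)
secs_per_day = 86400

# Method A: interpolation-type -- mean sampled concentration x mean flow
load_interp = (sum(conc.values()) / len(conc)) * mean_q \
              * secs_per_day * 365 / 1e6                    # tonnes/year

# Method B: flow-weighted -- flow-weighted mean concentration x mean flow
cq = sum(conc[d] * daily_flow[d] for d in sample_days)
q_sampled = sum(daily_flow[d] for d in sample_days)
load_fw = (cq / q_sampled) * mean_q * secs_per_day * 365 / 1e6  # tonnes/year
```

    Because concentration here increases with flow, the flow-weighted estimator gives a higher load than plain interpolation, illustrating the abstract's point that exploiting a concentration-flow relationship changes the estimate only when that relationship is strong.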

  10. A simplified method to measure choroidal thickness using adaptive compensation in enhanced depth imaging optical coherence tomography.

    Directory of Open Access Journals (Sweden)

    Preeti Gupta

    Full Text Available PURPOSE: To evaluate a simplified method to measure choroidal thickness (CT) using commercially available enhanced depth imaging (EDI) spectral domain optical coherence tomography (SD-OCT). METHODS: We measured CT in 31 subjects without ocular diseases using Spectralis EDI SD-OCT. The choroid-scleral interface of the acquired images was first enhanced using a post-processing compensation algorithm. The enhanced images were then analysed using Photoshop. Two graders independently graded the images to assess inter-grader reliability. One grader re-graded the images after 2 weeks to determine intra-grader reliability. Statistical analysis was performed using intra-class correlation coefficient (ICC) and Bland-Altman plot analyses. RESULTS: Using adaptive compensation, both the intra-grader reliability (ICC: 0.95 to 0.97) and inter-grader reliability (ICC: 0.93 to 0.97) were excellent for all five locations of CT. However, with the conventional technique of manual CT measurement using the built-in callipers provided with the Heidelberg explorer software, the intra-grader (ICC: 0.87 to 0.94) and inter-grader reliability (ICC: 0.90 to 0.93) for all the measured locations were lower. Using adaptive compensation, the mean differences (95% limits of agreement) for intra- and inter-grader sub-foveal CT measurements were -1.3 (-3.33 to 30.8) µm and -1.2 (-36.6 to 34.2) µm, respectively. CONCLUSIONS: The measurement of CT obtained from EDI SD-OCT using our simplified method was highly reliable and efficient. Our method is an easy and practical approach to improve the quality of choroidal images and the precision of CT measurement.

  11. Effects of different simplified milk recording methods on genetic evaluation with Test-Day animal model

    Directory of Open Access Journals (Sweden)

    D. Cacioppo


    Full Text Available The aims of the present study were to compare estimated breeding values (EBV for milk yield using different testing schemes with a test-day animal model and to evaluate the effect of different testing schemes on the ranking of top sheep. Alternative recording schemes that use less information than that currently obtained with a monthly test-day schedule were employed to estimate breeding values. A random regression animal mixed model that used a spline function of days in milk was fitted. EBVs obtained with alternative recording schemes showed different degrees of Spearman correlation with EBVs obtained using the monthly recording scheme. These correlations ranged from 0.77 to 0.92. A reduction in accuracy and intensity of selection could be anticipated if these alternative schemes are used; more research in this area is needed to reduce the costs of test-day recording.

  12. Novel method for quantitative estimation of biofilms

    DEFF Research Database (Denmark)

    Syal, Kirtimaan


    were quantified by the proposed protocol. For ease of reference, this method has been described as the Syal method for biofilm quantification. This new method was found to be useful for the estimation of early-phase biofilm and of the aerobic biofilm layer formed at the liquid-air interphase. The biofilms...... formed by all three tested bacteria, B. subtilis, E. coli and M. smegmatis, were precisely quantified....

  13. A Simplified and Inexpensive Method for Measuring Dissolved Oxygen in Water. (United States)

    Austin, John


    A modified Winkler method for determining dissolved oxygen in water is described. The method does not require the use of a burette or starch indicator, is simple and inexpensive, and can be used in the field or laboratory. The reagents/apparatus needed and specific procedures are included. (JN)

  14. Method for estimating road salt contamination of Norwegian lakes (United States)

    Kitterød, Nils-Otto; Wike Kronvall, Kjersti; Turtumøygaard, Stein; Haaland, Ståle


    Consumption of road salt in Norway, used to improve winter road conditions, has tripled during the last two decades, and there is a need to quantify limits for optimal use of road salt to avoid further environmental harm. The purpose of this study was to implement a methodology to estimate the chloride concentration in any given water body in Norway. This goal is feasible if the complexity of solute transport in the landscape is simplified; the idea was to keep computations as simple as possible in order to increase the spatial resolution of the input functions. The first simplification was to treat all roads exposed to regular salt application as steady-state sources of sodium chloride. This is valid if new road salt is applied before previous contamination is removed through precipitation; the main reason for this assumption is the significant retention capacity of vegetation, organic matter and soil. The second simplification was that the groundwater table is close to the surface. This assumption is valid for a major part of Norway, which means that topography is sufficient to delineate the catchment area at any location in the landscape. Given these two assumptions, we applied spatial functions of mass load (mass NaCl per time unit) and conditional estimates of normal water balance (volume of water per time unit) to calculate the steady-state chloride concentration along the lake perimeter. The spatial resolution of the mass load and the estimated concentration along the lake perimeter was 25 m x 25 m, while the water balance had 1 km x 1 km resolution. The method was validated for a limited number of Norwegian lakes, and estimation results were compared to observations. Initial results indicate significant overlap between measurements and estimates, but only for lakes where road salt is the major contributor to chloride contamination. For lakes in catchments with high subsurface transmissivity, the groundwater table is not necessarily following the
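Under the two assumptions above, the steady-state estimate reduces to a mass-balance ratio: chloride load divided by water flux. A minimal sketch (catchment numbers are hypothetical, not from the study):

```python
def chloride_concentration(salt_load_kg_per_year, runoff_m3_per_year,
                           cl_mass_fraction=35.45 / 58.44):
    """Steady-state chloride concentration (mg/L) in the receiving water.

    salt_load_kg_per_year : NaCl applied within the catchment (kg/yr),
        treated as a steady-state source per the study's first assumption.
    runoff_m3_per_year    : catchment water balance (m3/yr).
    cl_mass_fraction      : chloride share of NaCl by mass (~0.607).
    """
    cl_load_mg = salt_load_kg_per_year * cl_mass_fraction * 1e6  # kg -> mg
    runoff_l = runoff_m3_per_year * 1000.0                       # m3 -> L
    return cl_load_mg / runoff_l

# Hypothetical catchment: 20 t NaCl/yr applied, 4e6 m3/yr runoff
c = chloride_concentration(20_000, 4_000_000)
```

The spatial method in the study evaluates this ratio cell by cell along the lake perimeter; the sketch shows only the per-catchment arithmetic.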

  15. A simplified method to detect epididymal sperm aneuploidy (ESA) in mice using three-chromosome fish

    Energy Technology Data Exchange (ETDEWEB)

    Lowe, X.; O'Hogan, S.; Wyrobek, A. [Lawrence Livermore National Lab., CA (United States)]


    We developed a new method (ESA) to detect aneuploidy and polyploidy in epididymal sperm of mice using three-chromosome FISH. In comparison to a previous method (TSA, testicular spermatid aneuploidy), which required late-step spermatids, the ESA method utilizes epididymal sperm, which are easier to collect than testicular cells. The ESA method also provides a homogeneous population of cells, which significantly speeds up the scoring procedure. A total of 6 mice were investigated by the ESA method and the results compared with those obtained by the TSA method: 2 mice each of Robertsonian Rb(8.14) heterozygotes, Rb(8.14) homozygotes and B6C3F1. About 10,000 sperm were scored per mouse. For the ESA method, epididymides were cut into small pieces and filtered. Sperm were prepared for hybridization by sonication and a modification of the DTT/LIS method previously described. Sperm aneuploidy was detected by multi-color FISH using three DNA probes specific for mouse chromosomes X, Y and 8. The sex ratio of X8 (49.7%) and Y8 (49.6%) did not differ from the expected 1:1. The efficiency of ESA was very high; only ~0.3% of the cells showed no hybridization domain. Hyperhaploidy frequencies for chromosomes X, Y and 8 compared well between the ESA and TSA methods for Rb(8.14) heterozygous (p=0.79) and B6C3F1 mice (p>0.05). The data obtained from Rb(8.14) homozygotes were similar to those from B6C3F1, as predicted (p=0.3). This highly efficient ESA assay is therefore recommended for future studies of the mechanism of induction of aneuploidy in male germ cells. It also lays a solid foundation for automated scoring.

  16. Adsorptive performance of granular activated carbon in aquaculture and aquaria: a simplified method

    DEFF Research Database (Denmark)

    Taylor, Daniel; Kuhn, David D.; Smith, Stephen


    method for characterizing activated carbon quality and filter performance utilizing readily available and relatively safe indicator compounds to test adsorptive capabilities between different sources of granular activated carbon. Methylene blue and a commercial mix of humic and tannic substances were...


    Directory of Open Access Journals (Sweden)



    Full Text Available There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  18. Angstrom equation parameter estimation by unrestricted method

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Zekai [Istanbul Technical Univ., Hydraulics Div., Istanbul (Turkey)]


    Global solar irradiation is directly related to the sunshine duration record through a linear model first proposed by Angstrom in 1924. Generally, the parameter estimation for this model is achieved by using the regression technique based on the least squares method. This technique imposes procedural restrictions on the parameter estimates, leading to under-estimation for clear-sky conditions and over-estimation for overcast-sky conditions. These restrictions include linearity, normality, means of conditional distributions, homoscedasticity, autocorrelation and lack of measurement error. In this paper, an alternative unrestricted method (UM) is proposed which preserves the means and variances of the global irradiation and the sunshine duration data. In the restrictive regression approach (Angstrom equation) the cross-correlation coefficient represents only linear relationships. By not considering this coefficient in the UM, some nonlinearities in the solar irradiation-sunshine duration relationship are taken into account. Especially when the scatter diagram of solar irradiation versus sunshine duration does not show any distinguishable pattern such as a straight line or a curve, the use of the UM is recommended for parameter estimation. The UM is contrasted with the Angstrom regression method for 27 solar irradiation stations in Turkey. (Author)
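The contrast between the two estimators can be made concrete: for the Angstrom model H/H0 = a + b·(S/S0), least squares gives slope b = r·sd(H)/sd(S), while a variance-preserving unrestricted estimate drops the correlation r so that b = sd(H)/sd(S). A sketch under hypothetical monthly data (population moments used throughout; this is an illustration of the idea, not the paper's exact algorithm):

```python
def angstrom_fit(h_ratio, s_ratio):
    """Restricted (least-squares) and unrestricted (variance-preserving)
    parameter estimates (a, b) for H/H0 = a + b * S/S0."""
    n = len(h_ratio)
    mh = sum(h_ratio) / n
    ms = sum(s_ratio) / n
    cov = sum((h - mh) * (s - ms) for h, s in zip(h_ratio, s_ratio)) / n
    var_h = sum((h - mh) ** 2 for h in h_ratio) / n
    var_s = sum((s - ms) ** 2 for s in s_ratio) / n
    b_reg = cov / var_s                    # least-squares slope: r * sd_h/sd_s
    a_reg = mh - b_reg * ms
    b_um = (var_h / var_s) ** 0.5          # UM slope: sd_h/sd_s, r dropped
    a_um = mh - b_um * ms                  # both fits preserve the means
    return (a_reg, b_reg), (a_um, b_um)

# Hypothetical monthly S/S0 and H/H0 values for one station
s = [0.30, 0.45, 0.55, 0.65, 0.75, 0.60]
h = [0.32, 0.42, 0.50, 0.58, 0.66, 0.55]
(a_reg, b_reg), (a_um, b_um) = angstrom_fit(h, s)
```

Since |r| ≤ 1, the UM slope is always at least as steep as the regression slope, which is how it avoids under-estimation at the clear-sky end of the scatter.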

  19. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)


    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
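Two of the three semivariogram models named above (spherical and exponential; the truncated fractal model is omitted here) have simple closed forms. A sketch with hypothetical sill and range parameters:

```python
import math

def spherical(h, sill, rng):
    """Spherical semivariogram (zero nugget): reaches the sill exactly at rng."""
    if h >= rng:
        return sill
    x = h / rng
    return sill * (1.5 * x - 0.5 * x ** 3)

def exponential(h, sill, rng):
    """Exponential semivariogram: approaches the sill asymptotically
    (practical range is about 3 * rng)."""
    return sill * (1.0 - math.exp(-h / rng))

# Hypothetical vertical semivariogram: sill 1.0, range 10 m, lag 5 m
gamma_sph = spherical(5.0, 1.0, 10.0)
gamma_exp = exponential(5.0, 1.0, 10.0)
```

In the charts described above, the model choice changes how the areal-to-vertical variance ratio maps to an interwell autocorrelation range, so the same data can imply different lateral continuity under different models.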

  20. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand


    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... complex prior representation achieve improved sparsity representations at low signal-to-noise ratio as opposed to state-of-the-art sparse estimators. This result is of particular importance for the applicability of the algorithms in the field of channel estimation. We then derive various iterative...

  1. Simplifying NFP: preliminary report of a pilot study of the 'collar' method in Brazil. (United States)

    Faundes, A; Lamprecht, V; Osis, M J; Lopes, B C


    Natural methods of fertility regulation are acceptable in most cultures. Many couples worldwide do not wish to use contraceptives or do not have access to them but wish to limit their family size or lengthen the time between births. Barriers to expanding use of natural family planning (NFP) methods include a lack of providers who can teach NFP and a lack of time to teach and follow couples during the initial months of NFP use. If simple yet effective methods of NFP are available, then NFP could be introduced to a wider audience. Recently, calendar rules have been revised that use a set interval to identify fertile days. These new rules provide better coverage of fertile days and require less abstinence than the rules traditionally used with the calendar method. One of these new rules is being field tested in a pilot study in Brazil. Couples are asked to abstain from day 9-19 (inclusive) of the menstrual cycle, using a beaded necklace (the 'collar') as a mnemonic device. Focus groups with the teacher-monitors and in-depth interviews with female and male users were carried out to evaluate the acceptability of the 'collar' method. A preliminary analysis of these focus groups and interviews from the first site is presented.

  2. A simplified guide ruler from numeric table method in doing rotational osteotomy. (United States)

    Liaw, Chen-Kun; Yang, Rong-Sen; Hou, Sheng-Mou; Wu, Tai-Yin; Fuh, Chiou-Shann


    Cobeljić et al. recently reported a numeric table method to provide precise rotational osteotomy, a well-established orthopaedic procedure. The numeric table runs to four pages, which is rather inconvenient during an osteotomy operation. We therefore developed our own method by summarizing the data of the four-page table into a small ruler, which is easy to carry and use in the operating room. An electronic version of this ruler is also available. We also built a computer model to verify the method of Cobeljić et al.; its error ranges from -37% to 16% (mean ± SD = -6% ± 9%). We verified our ruler by calculating the absolute difference between our method and that of Cobeljić et al.; the difference is less than 0.1 mm. Our ruler is convenient for practical use in the rotational osteotomy procedure, with equal precision. Further clinical verification is needed to establish its real significance.

  3. Simplified Method for the Characterization of Rectangular Straw Bales (RSB) Thermal Conductivity (United States)

    Conti, Leonardo; Goli, Giacomo; Monti, Massimo; Pellegrini, Paolo; Rossi, Giuseppe; Barbari, Matteo


    This research aims to design and implement tools and methods for assessing the thermal properties of full-size Rectangular Straw Bales (RSB) of various nature and origin, because their thermal behaviour is one of the key topics in the market development of sustainable building materials. As a first approach, a method based on a hot box in agreement with the ASTM C1363-11 standard was adopted. This method proved difficult for the accurate measurement of energy flows. Instead, a method based on a constant energy input was developed. With this approach the thermal conductivity of a rectangular straw bale (RSB λ) can be determined by knowing the thermal conductivity of the materials used to build the chamber and the internal and external temperatures of the samples and of the chamber. A metering chamber was built and placed inside a climate chamber maintained at constant temperature. A known quantity of energy was introduced inside the metering chamber. A series of thermopiles detects the temperatures of the internal and external surfaces of the metering chamber and of the specimens, allowing the thermal conductivity of RSB in its natural shape to be calculated. Different cereal samples were tested. The values were found to be consistent with those published in the scientific literature.
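The constant-energy-input approach rests on steady-state 1-D conduction: λ = Q·d / (A·ΔT), with the known conduction through the metering-chamber walls subtracted from the electrical input to get the heat crossing the bale. A minimal sketch (all numbers hypothetical; the real apparatus resolves several surfaces via thermopiles):

```python
def thermal_conductivity(power_w, thickness_m, area_m2, delta_t_k):
    """Steady-state 1-D conduction: lambda = Q * d / (A * dT), in W/(m K)."""
    return power_w * thickness_m / (area_m2 * delta_t_k)

def sample_conductivity(total_power_w, chamber_loss_w, bale_thickness_m,
                        bale_area_m2, delta_t_bale_k):
    """Conductivity of the straw-bale specimen once the known loss through the
    metering-chamber walls is subtracted from the constant energy input."""
    q_bale = total_power_w - chamber_loss_w
    return thermal_conductivity(q_bale, bale_thickness_m, bale_area_m2,
                                delta_t_bale_k)

# Hypothetical: 5.0 W input, 1.8 W lost through chamber walls,
# 0.45 m thick bale, 1.2 m2 face, 20 K across the bale
lam = sample_conductivity(5.0, 1.8, 0.45, 1.2, 20.0)
```

The resulting value (0.06 W/(m K) with these numbers) is in the range typically reported for straw-bale insulation, which is why the constant-input method can dispense with direct heat-flux metering.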

  4. 20 CFR 404.241 - 1977 simplified old-start method. (United States)


    ... Section 404.241 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Old-Start Method of Computing Primary..., we compute your average monthly wage, as described in paragraph (d) of this section. (3) Next, we...

  5. A Simplified Method for Upscaling Composite Materials with High Contrast of the Conductivity

    KAUST Repository

    Ewing, R.


    A large class of industrial composite materials, such as metal foams, fibrous glass materials, mineral wools, and the like, are widely used in insulation and advanced heat exchangers. These materials are characterized by a substantial difference between the thermal properties of the highly conductive materials (glass or metal) and the insulator (air) as well as low volume fractions and complex network-like structures of the highly conductive components. In this paper we address the important issue for the engineering practice of developing fast, reliable, and accurate methods for computing the macroscopic (upscaled) thermal conductivities of such materials. We assume that the materials have constant macroscopic thermal conductivity tensors, which can be obtained by upscaling techniques based on the postprocessing of a number of linearly independent solutions of the steady-state heat equation on representative elementary volumes (REVs). We propose, theoretically justify, and computationally study a numerical method for computing the effective conductivities of materials for which the ratio δ of low and high conductivities satisfies δ ≪ 1. We show that in this case one needs to solve the heat equation in the region occupied by the highly conductive media only. Further, we prove that under certain conditions on the microscale geometry the proposed method gives an approximation that is O(δ)-close to the upscaled conductivity. Finally, we illustrate the accuracy and the limitations of the method on a number of numerical examples. © 2009 Society for Industrial and Applied Mathematics.

  6. A simplified method for selecting a carbon-fiber electrode in pulse voltammetry. (United States)

    Liao, B Y; Lio, H P; Wang, C Y; Young, M S; Ho, M T; Lin, M T


    A method for selecting a usable carbon-fiber electrode using the equivalent resistance and capacitance is presented. This method uses an instrument with a PC-based look-up table for measuring the electrical characteristics of a carbon-fiber electrode in pulse voltammetry. Using this instrument, the equivalent resistance and capacitance of the carbon-fiber electrode in saturated sodium chloride solution can be obtained. This instrument includes a decade resistance box, a peak current detection and hold circuit, a half peak comparator and a decay duration counter. A look-up table is established by using RC circuits to emulate the electrochemical reaction of the carbon-fiber electrode in pulse voltammetry. The equivalent resistance is obtained from the decade resistance box according to Kirchhoff's law. Then the equivalent capacitance is determined from the decay duration counter reading and equivalent resistance with the look-up table via a PC interpolation program. After obtaining the equivalent resistance and capacitance of an electrode, the values are compared with the usable thresholds. This method provides an effective quality evaluation index of carbon-fiber electrode for the user in order to reduce electrode-induced experimental failure. The method is also available for other kinds of carbon-fiber electrodes as long as their look-up table and desired thresholds are established.
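For an RC model of the electrode, the decay-duration reading maps directly to capacitance: the current falls to half its peak after t = R·C·ln 2, so C = t / (R·ln 2). A minimal sketch (threshold values and numbers are hypothetical, not those of the instrument described above):

```python
import math

def capacitance_from_decay(half_decay_s, resistance_ohm):
    """Equivalent capacitance of an RC model from the time the exponential
    decay current takes to fall to half its peak: t_half = R * C * ln(2)."""
    return half_decay_s / (resistance_ohm * math.log(2))

# Hypothetical electrode: current halves 6.9 ms after the peak, R = 100 kOhm
c_eq = capacitance_from_decay(6.9e-3, 100e3)

def electrode_usable(r_ohm, c_farad, r_max=150e3, c_min=50e-9):
    """Compare equivalent R and C against hypothetical usability thresholds,
    mirroring the accept/reject decision the paper's look-up table supports."""
    return r_ohm <= r_max and c_farad >= c_min
```

The instrument in the paper uses an empirically built look-up table with interpolation rather than this closed form, which accommodates non-ideal electrode behaviour; the RC expression shows the underlying relationship.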

  7. Estimation method for serial dilution experiments. (United States)

    Ben-David, Avishai; Davidson, Charles E


    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into colonies. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without the need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area, both of which contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 of data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate-size to colony-size ratios between 6.25 and 200. Published by Elsevier B.V.
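Once a countable plate is chosen, the back-calculation itself is elementary: multiply the colony count by the total dilution and divide by the plated volume. A sketch of that step only (the paper's contribution, selecting the optimal plate, is not reproduced here; numbers are hypothetical):

```python
def estimate_concentration(colony_count, dilution_ratio, plate_number,
                           plated_volume_ml):
    """Estimate CFU/mL in the undiluted sample from a single countable plate.

    plate_number is the position in the serial dilution (1 = first dilution),
    so the total dilution is dilution_ratio ** plate_number.
    """
    total_dilution = dilution_ratio ** plate_number
    return colony_count * total_dilution / plated_volume_ml

# Hypothetical: 42 colonies on the 5th 10-fold dilution plate, 0.1 mL plated
cfu_per_ml = estimate_concentration(42, 10, 5, 0.1)
```

The ±0.1 log10 accuracy quoted above is relative to this kind of single-plate estimate, i.e. the chosen plate's count is within about 26% of the true concentration.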

  8. Building energy optimization in the early design stages: A simplified method

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer; Nielsen, Toke Rammer


    This paper presents the application of multi-objective genetic algorithms for holistic building design that considers multiple criteria: building energy use, capital cost, daylight distribution and thermal indoor environment. The optimization focus is related to building envelope parameters...... evaluations, a Radiance implementation for daylight simulations and a scripted algorithm for capital cost evaluations. The application of the method is developed around an integrated dynamic model which allows visual design feedback from all evaluations to be an integrated part of the design tool experience....... It is concluded that quasi-steady-state methods implemented as part of integrated dynamic models are fast and flexible enough to support building energy-, indoor environment- and cost-optimization in the early design stages....

  9. Simplified Method to Produce Human Bioactive Leukemia Inhibitory Factor in Escherichia coli

    Directory of Open Access Journals (Sweden)

    Houman Kahroba


    Full Text Available Background Human leukemia inhibitory factor (hLIF) is a polyfunctional cytokine with numerous regulatory effects on different cells. The main application of hLIF is maintaining the pluripotency of embryonic stem cells. hLIF has also shown promise for improving the implantation rate of fertilized eggs and for multiple sclerosis (MS) treatment. The low yield of hLIF from eukaryotic cells, and the difficulties of producing human proteins in prokaryotic hosts, motivated us to develop a simple way to obtain a high amount of this widely used clinical and research factor. Objectives In this study we aimed to purify recombinant human leukemia inhibitory factor in a single, simple step. Materials and Methods This is an experimental study. Gene expression: the human LIF gene was codon-optimized for expression in Escherichia coli and a His-tag was attached to make the protein extractable. After construction of the vector and its transformation into E. coli, isopropyl β-D-1-thiogalactopyranoside (IPTG) was used for induction. Single-step immobilized metal affinity chromatography (IMAC) was used for purification, confirmed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and western blotting. Bioactivity of the hLIF was tested by MTT assay with TF-1 cells and by CISH gene stimulation in monocytes and TF-1 cells using real-time PCR. Induction with 0.4 mM IPTG at 25°C for 3 hours gave the best result for soluble expression. Statistical analysis in SPSS indicated P < 0.05, which is significant for our work. Results Cloning, expression, and extraction of bioactive rhLIF were successfully achieved, according to the MTT assay and real-time PCR after treatment of TF-1 and monocyte cell lines. Conclusions We developed an effective single-step purification method to produce bioactive recombinant hLIF in E. coli. For the first time, we used CISH gene stimulation as a bioactivity test to qualify recombinant hLIF for application.

  10. Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes

    KAUST Repository

    Piatek, Marek J.


    Background:Initiation of transcription is essential for most of the cellular responses to environmental conditions and for cell and tissue specificity. This process is regulated through numerous proteins, their ligands and mutual interactions, as well as interactions with DNA. The key such regulatory proteins are transcription factors (TFs) and transcription co-factors (TcoFs). TcoFs are important since they modulate the transcription initiation process through interaction with TFs. In eukaryotes, transcription requires that TFs form different protein complexes with various nuclear proteins. To better understand transcription regulation, it is important to know the functional class of proteins interacting with TFs during transcription initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only sequence composition of the interacting proteins, the functional class of human TF binding partners to be (i) TF, (ii) TcoF, or (iii) other nuclear protein. This allows for complementing the annotation of the currently known pool of nuclear proteins. 
Since only the knowledge of protein sequences is required in addition to protein interaction data, the method should be easily applicable to many species. Results: Based on experimentally validated interactions of human TFs with different TFs, TcoFs and other nuclear proteins, our two classification systems (implemented as a web-based application) achieve high accuracy in distinguishing TFs and TcoFs from other nuclear proteins, and TFs from TcoFs, respectively. Conclusion: As demonstrated, given the fact that two proteins are capable of forming direct physical interactions, and using only information about their sequence composition, we have developed a completely new method for predicting the functional class of TF-interacting protein partners.


    Directory of Open Access Journals (Sweden)

    Nicole SELLESKI


    Full Text Available Background Celiac disease is an autoimmune enteropathy triggered by the ingestion of gluten in genetically susceptible individuals. Genetic susceptibility is associated with two sets of alleles, DQA1*05 - DQB1*02 and DQA1*03 - DQB1*03:02, which code for class II MHC DQ2 and DQ8 molecules, respectively. Approximately 90%-95% of celiac patients are HLA-DQ2 positive, and half of the remaining patients are HLA-DQ8 positive. In fact, during a celiac disease diagnostic workup, the absence of these specific DQA and DQB alleles has a near-perfect negative predictive value. Objective Improve the detection of celiac disease predisposing alleles by combining the simplicity and sensitivity of real-time PCR (qPCR) and melting curve analysis with the specificity of sequence-specific primers (SSP). Methods Amplifications with sequence-specific primers for DQA1*05 (DQ2), DQB1*02 (DQ2), and DQA1*03 (DQ8) were performed by the real-time PCR method to determine the presence of each allele in independent reactions. Primers for human growth hormone were used as an internal control. A parallel PCR-SSP protocol was used as a reference method to validate our results. Results Both techniques yielded equal results. From a total of 329 samples, the presence of HLA predisposing alleles was determined in 187 (56.8%). One hundred fourteen samples (61%) were positive for a single allele, 68 (36.3%) for two alleles, and only 5 (2.7%) for three alleles. Conclusion Results obtained by the qPCR technique were highly reliable, with no discordant results when compared with those obtained using PCR-SSP.
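The interpretation step after the three independent reactions is pure boolean logic: DQ2 requires both DQA1*05 and DQB1*02, while DQA1*03 flags DQ8. A sketch of that decision (a simplifying assumption, since the three-reaction panel described above does not directly test DQB1*03:02, and the function name is ours):

```python
def hla_dq_call(dqa1_05, dqb1_02, dqa1_03):
    """Interpret presence/absence of the three targeted alleles.

    DQ2 requires both DQA1*05 and DQB1*02; DQ8 is flagged here by DQA1*03
    alone (assumed to accompany DQB1*03:02, which this panel does not test).
    """
    dq2 = dqa1_05 and dqb1_02
    dq8 = dqa1_03
    # A negative workup requires the absence of every predisposing allele,
    # which is what gives the near-perfect negative predictive value.
    any_allele = dqa1_05 or dqb1_02 or dqa1_03
    return {"DQ2": dq2, "DQ8": dq8, "any_predisposing_allele": any_allele}

call = hla_dq_call(dqa1_05=True, dqb1_02=True, dqa1_03=False)
```

With this logic a sample positive for only one of the two DQ2 alleles is not called DQ2, but it still counts as carrying a predisposing allele, matching the per-allele tallies reported in the Results.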

  12. A simplified method for estimation of groundwater contamination surrounding accelerators and high power targets

    CERN Document Server

    Prolingheuer, N; Heuel-Fabianek, B; Moormann, R; Nabbi, R; Schlögl, B; Vanderborght, J

    The 11th International Conference on Radiation Shielding (ICRS-11) and the 15th Topical Meeting of the Radiation Protection and Shielding Division (RPSD-2008) of the American Nuclear Society, April 13-18, 2008, Georgia, USA

  13. Simplified method to predict mutual interactions of human transcription factors based on their primary structure

    KAUST Repository

    Schmeier, Sebastian


    Background: Physical interactions between transcription factors (TFs) are necessary for forming regulatory protein complexes and thus play a crucial role in gene regulation. Currently, knowledge about the mechanisms of these TF interactions is incomplete and the number of known TF interactions is limited. Computational prediction of such interactions can help identify potential new TF interactions as well as contribute to a better understanding of the complex machinery involved in gene regulation. Methodology: We propose here such a method for the prediction of TF interactions. The method uses only the primary sequence information of the interacting TFs, resulting in a much greater simplicity of the prediction algorithm. Through an advanced feature selection process, we determined a subset of 97 model features that constitute the optimized model among the subsets we considered. The model, based on quadratic discriminant analysis, achieves a prediction accuracy of 85.39% on a blind set of interactions. This result is achieved despite selecting for the negative data set only TFs of the same type of protein, i.e. TFs that function in the same cellular compartment (nucleus) and in the same type of molecular process (transcription initiation). Such a selection poses significant challenges for developing models with high specificity, but at the same time better reflects real-world problems. Conclusions: The performance of our predictor compares well with those of much more complex approaches for predicting TF and general protein-protein interactions, particularly when taking the reduced complexity of model utilisation into account. © 2011 Schmeier et al.

  14. A simplified FTIR chemometric method for simultaneous determination of four oxidation parameters of frying canola oil. (United States)

    Talpur, M Younis; Hassan, S Sara; Sherazi, S T H; Mahesar, S A; Kara, Huseyin; Kandhro, Aftab A; Sirajuddin


    Transmission Fourier transform infrared (FTIR) spectroscopy using a 100 μm KCl cell was applied for the determination of total polar compounds (TPC), carbonyl value (CV), conjugated dienes (CD) and conjugated trienes (CT) in canola oil (CLO) during potato-chip frying at 180 °C. Calibration models were developed for TPC, CV, CD and CT using the partial least squares (PLS) chemometric technique. Excellent regression coefficients (R²) of 0.999, 0.992, 0.998 and 0.999, and root mean square errors of prediction of 0.809, 0.690, 1.26 and 0.735, were found for TPC, CV, CD and CT, respectively. The developed calibration models were applied to samples of canola oil drawn during the potato-chip frying process. A linear relationship was obtained between CD and TPC with a good correlation coefficient (R² = 0.9816). The results of the study clearly indicate that the transmission FTIR-PLS method can be used for quick and precise evaluation of oxidative changes during the frying process without using any organic solvent. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. ezRAD: a simplified method for genomic genotyping in non-model organisms

    Directory of Open Access Journals (Sweden)

    Robert J. Toonen


    Full Text Available Here, we introduce ezRAD, a novel strategy for restriction site–associated DNA (RAD) sequencing that requires little technical expertise or investment in laboratory equipment, and demonstrate its utility for ten non-model organisms across a wide taxonomic range. ezRAD differs from other RAD methods primarily through its use of standard Illumina TruSeq library preparation kits, which makes it possible for any laboratory to send samples out to a commercial genomic core facility for library preparation and next-generation sequencing with virtually no additional investment beyond the cost of the service itself. This simplification opens RADseq to any lab with the ability to extract DNA and perform a restriction digest. ezRAD also differs from others in its flexibility to use any restriction enzyme (or combination of enzymes) that cuts frequently enough to generate fragments of the desired size range, without requiring the purchase of separate adapters for each enzyme or a sonication step, which can further decrease the cost involved in choosing optimal enzymes for particular species and research questions. We apply this method across a wide taxonomic diversity of non-model organisms to demonstrate the utility and flexibility of our approach. The simplicity of ezRAD makes it particularly useful for the discovery of single nucleotide polymorphisms and targeted amplicon sequencing in natural populations of non-model organisms that have been historically understudied because of lack of genomic information.

  16. An optimized and simplified method for analysing urea and ammonia in freshwater aquaculture systems

    DEFF Research Database (Denmark)

    Larsen, Bodil Katrine; Dalsgaard, Anne Johanne Tang; Pedersen, Per Bovbjerg


    This study presents a simple urease method for analysis of ammonia and urea in freshwater aquaculture systems. Urea is hydrolysed into ammonia using urease, followed by analysis of the released ammonia using the salicylate-hypochlorite method. The hydrolysis of urea is performed at room temperature and without addition of a buffer. A number of tests were performed on water samples obtained from a commercial rainbow trout farm to determine the optimal urease concentration and time for complete hydrolysis. One mL of water sample was spiked with 1.3 mL urea at three different concentrations: 50 µg L⁻¹, 100 µg L⁻¹ and 200 µg L⁻¹ urea-N. In addition, five concentrations of urease were tested, ranging from 0.1 U mL⁻¹ to 4 U mL⁻¹. Samples were hydrolysed for various time periods ranging from 5 to 120 min. A urease concentration of 0.4 U mL⁻¹ and a hydrolysis period of 120 min gave the best results, with 99...
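The urea-by-difference arithmetic behind such urease assays can be sketched as follows. The function and the example readings are illustrative assumptions, not the published protocol: urea-N is taken as the ammonia-N measured after complete urease hydrolysis minus the free ammonia-N measured without urease.

```python
def urea_nitrogen(total_ammonia_n_after_urease, free_ammonia_n):
    """Urea-N by difference, both inputs in ug N per litre.
    Assumes complete hydrolysis (e.g. 0.4 U/mL urease for 120 min)."""
    if total_ammonia_n_after_urease < free_ammonia_n:
        raise ValueError("hydrolysed reading below the untreated reading")
    return total_ammonia_n_after_urease - free_ammonia_n

# hypothetical readings, ug N per litre
urea_n = urea_nitrogen(310.0, 160.0)
```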

  17. A simplified two-dimensional boundary element method with arbitrary uniform mean flow

    Directory of Open Access Journals (Sweden)

    Bassem Barhoumi


    Full Text Available To reduce computational costs, an improved form of the frequency-domain boundary element method (BEM) is proposed for two-dimensional radiation and propagation acoustic problems in a subsonic uniform flow with arbitrary orientation. The boundary integral equation (BIE) representation solves the two-dimensional convected Helmholtz equation (CHE), and its fundamental solution must satisfy a new Sommerfeld radiation condition (SRC) in the physical space. In order to facilitate conventional formulations, the variables of the advanced form are expressed only in terms of the acoustic pressure and its normal and tangential derivatives, and their multiplication operators are based on the convected Green's kernel and its modified derivative. The proposed approach significantly reduces the CPU times of classical computational codes for modeling acoustic domains with an arbitrary mean flow. It is validated by comparison with the analytical solutions for the sound radiation problems of monopole, dipole and quadrupole sources in the presence of a subsonic uniform flow with arbitrary orientation. Keywords: Two-dimensional convected Helmholtz equation, Two-dimensional convected Green's function, Two-dimensional convected boundary element method, Arbitrary uniform mean flow, Two-dimensional acoustic sources

  18. Simplifying the defatting of full-thickness skin using “razor strop” method

    Directory of Open Access Journals (Sweden)

    Masaki Fujioka


    Full Text Available Defatting is usually required when patients undergo full-thickness skin grafting. However, this procedure is often burdensome, because it requires both stable fixation and appropriate tensioning of the skin piece. Furthermore, a skin piece may accidentally drop on the floor during the defatting process. To solve these problems, we present a unique method of full-thickness skin harvesting, which is easy and convenient, and does not require any special devices. The distal end of the elliptical skin is not cut off but remains attached to the body like a pedicled flap. The surgeon pulls the end of the skin and puts his finger on the skin surface during the defatting procedure. This looks similar to a barber sharpening a razor on a strop belt. Our “razor strop” defatting method facilitates full-thickness skin harvesting without any need to purchase specialized equipment. We believe that this technique is an effective option for defatting in full-thickness skin grafting.

  19. 29 CFR 2520.104-48 - Alternative method of compliance for model simplified employee pensions-IRS Form 5305-SEP. (United States)


    ... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...

  20. A simplified method for obtaining high-purity perchlorate from groundwater for isotope analyses.

    Energy Technology Data Exchange (ETDEWEB)

    vonKiparski, G; Hillegonds, D


    Investigations into the occurrence and origin of perchlorate (ClO₄⁻) found in groundwater across North America were sparse until recent years, and there is mounting evidence that natural formation mechanisms are important. New opportunities for identifying groundwater perchlorate and its origin have arisen with the utilization of improved detection methods and sampling techniques. Additionally, application of the forensic potential of isotopic measurements has begun to elucidate sources, potential formation mechanisms and natural attenuation processes. The procedures developed appear amenable to high-precision stable isotopic analyses, as well as lower-precision AMS analyses of ³⁶Cl. Immediate work lies in analyzing perchlorate isotope standards and developing full analytical accuracy and uncertainty expectations. Field samples have also been collected, and will be analyzed when final QA/QC samples are deemed acceptable.

  1. Simplified Method for Modeling the Impact of Arbitrary Partial Shading Conditions on PV Array Performance: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    MacAlpine, Sara; Deline, Chris


    It is often difficult to model the effects of partial shading conditions on PV array performance, as shade losses are nonlinear and depend heavily on a system's particular configuration. This work describes and implements a simple method for modeling shade loss: a database of shade impact results (loss percentages), generated using a validated, detailed simulation tool and encompassing a wide variety of shading scenarios. The database is intended to predict shading losses in crystalline silicon PV arrays and is accessed using basic inputs generally available in any PV simulation tool. Performance predictions using the database are within 1-2% of measured data for several partially shaded PV systems, and within 1% of those predicted by the full, detailed simulation tool on an annual basis. The shade loss database shows potential to considerably improve performance prediction for partially shaded PV systems.
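The database approach described above amounts to a lookup keyed by basic system inputs. The sketch below shows the idea with a one-dimensional hypothetical table (shaded fraction of the array mapped to loss percentage) and linear interpolation; the table values are made-up placeholders, not entries from the NREL database.

```python
import bisect

# Hypothetical shade-impact table: fraction of array shaded -> loss (%).
# The numbers are illustrative placeholders only.
_SHADE_FRACTION = [0.0, 0.1, 0.25, 0.5, 0.75, 1.0]
_LOSS_PERCENT = [0.0, 8.0, 22.0, 45.0, 70.0, 100.0]

def shade_loss_percent(shaded_fraction):
    """Linearly interpolate the loss percentage from the lookup table."""
    if not 0.0 <= shaded_fraction <= 1.0:
        raise ValueError("shaded_fraction must be in [0, 1]")
    i = bisect.bisect_left(_SHADE_FRACTION, shaded_fraction)
    if _SHADE_FRACTION[i] == shaded_fraction:
        return _LOSS_PERCENT[i]
    x0, x1 = _SHADE_FRACTION[i - 1], _SHADE_FRACTION[i]
    y0, y1 = _LOSS_PERCENT[i - 1], _LOSS_PERCENT[i]
    return y0 + (y1 - y0) * (shaded_fraction - x0) / (x1 - x0)
```

A production database would be keyed on several inputs (array configuration, shade geometry, irradiance), but the lookup-plus-interpolation pattern is the same.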

  2. A simplified proximal isovelocity surface area method for mitral valve area calculation in mitral stenosis: not requiring angle correction and calculator

    Directory of Open Access Journals (Sweden)

    Omer Yiginer


    Full Text Available Aim: To simplify the proximal isovelocity surface area (PISA) method for mitral valve area (MVA) calculation so that it requires neither a calculator nor angle correction, and to compare values estimated using this novel method with the values obtained by the conventional PISA, planimetry and pressure half-time (PHT) methods. Methods: We evaluated patients with a wide range of mitral stenosis (MS) severity. The MVA was measured by the methods of PHT (MVAPHT), planimetry (MVApl), conventional PISA (MVAC-PISA) and the novel method of simple PISA (MVAS-PISA). Simple PISA was applied by dividing the peak mitral inflow velocity by four, setting the aliasing velocity to this value and measuring the radius; the square of the radius gives MVAS-PISA. Results: Twenty patients were enrolled in the study. Peak and mean pressure gradients were 20 ± 6 mmHg and 10 ± 4 mmHg, respectively. The average values of MVApl, MVAPHT, MVAC-PISA and MVAS-PISA were 1.54 ± 0.41, 1.65 ± 0.40, 1.58 ± 0.42 and 1.57 ± 0.44 cm², respectively. MVAS-PISA had a strong correlation with MVAC-PISA, MVApl and MVAPHT. Furthermore, there was no significant difference between simple PISA and the other methods. The agreement between the planimetry and simple PISA methods for detecting severe mitral stenosis (MVA < 1.5 cm²), determined by ROC analysis, was very good, with a sensitivity of 100% and a specificity of 92%, respectively. Conclusion: Simple PISA is a user-friendly method that takes little time and gives simple and accurate results. If the diagnostic power of the technique is confirmed by more comprehensive studies, it could supersede the conventional PISA method.
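The arithmetic behind the simple PISA recipe can be checked with a short script. It assumes the conventional hemispheric PISA formula MVA = 2πr²·(Va/Vpeak)·(α/180) with funnel angle α; the observation that the constants cancel exactly at α = 360/π ≈ 114.6° is ours, not a claim from the abstract, and the velocities and radius below are hypothetical.

```python
import math

def mva_conventional_pisa(radius_cm, aliasing_v, peak_v, funnel_angle_deg):
    """Conventional PISA mitral valve area (cm^2) with angle correction."""
    return (2 * math.pi * radius_cm ** 2
            * (aliasing_v / peak_v) * (funnel_angle_deg / 180.0))

def mva_simple_pisa(radius_cm):
    """Simple PISA: with the aliasing velocity set to Vpeak/4,
    the MVA is approximated by the square of the radius."""
    return radius_cm ** 2

# With Va = Vpeak/4, the conventional formula reduces to
# 2*pi*(1/4)*(alpha/180)*r^2, which equals r^2 when alpha = 360/pi (~114.6 deg).
peak_v = 200.0           # cm/s, hypothetical peak mitral inflow velocity
va = peak_v / 4          # aliasing velocity per the simple PISA recipe
alpha = 360.0 / math.pi  # funnel angle at which the constants cancel
r = 1.25                 # cm, hypothetical PISA radius
conventional = mva_conventional_pisa(r, va, peak_v, alpha)
simple = mva_simple_pisa(r)
```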

  3. Reinforcing the role of the conventional C-arm - a novel method for simplified distal interlocking

    Directory of Open Access Journals (Sweden)

    Windolf Markus


    Full Text Available Abstract Background: The common practice for insertion of distal locking screws of intramedullary nails is a freehand technique under fluoroscopic control. The process is technically demanding, time-consuming and subject to considerable radiation exposure of the patient and the surgical personnel. A new concept is introduced that utilizes information from within conventional radiographic images to help the surgeon accurately place the interlocking bolt into the interlocking hole. The newly developed technique was compared to the conventional freehand technique in an operating room (OR) like setting on human cadaveric lower legs, in terms of operating time and radiation exposure. Methods: The proposed concept (guided freehand), generally based on the freehand gold standard, additionally guides the surgeon by means of visible landmarks projected into the C-arm image. A computer program plans the correct drilling trajectory by processing the lens-shaped hole projections of the interlocking holes from a single image. Holes can be drilled by visually aligning the drill to the planned trajectory. Besides a conventional C-arm, no additional tracking or navigation equipment is required. Ten fresh-frozen human below-knee specimens were instrumented with an Expert Tibial Nail (Synthes GmbH, Switzerland). The implants were distally locked by performing both the newly proposed technique and the conventional freehand technique on each specimen. An orthopedic resident surgeon inserted four distal screws per procedure. Operating time, number of images and radiation time were recorded and statistically compared between interlocking techniques using non-parametric tests. Results: A 58% reduction in the number of images taken per screw was found for the guided freehand technique (7.4 ± 3.4, mean ± SD) compared to the freehand technique (17.6 ± 10.3) (p = 0.001). Operating time per screw (from first shot to screw tightened) was on average reduced by 22% with guided freehand (p = 0


    NARCIS (Netherlands)


    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
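The classical Hill estimator that the proposed bootstrap builds on can be sketched as below. This is the textbook estimator of the extreme-value (tail) index from the k largest order statistics, not the authors' Monte Carlo variant; the Pareto sanity check is our own illustration.

```python
import math
import random

def hill_estimator(sample, k):
    """Classical Hill estimator of the extreme-value index gamma,
    based on the k largest order statistics of the sample."""
    xs = sorted(sample)
    n = len(xs)
    if not 0 < k < n:
        raise ValueError("need 0 < k < n")
    x_k = xs[n - k - 1]  # (k+1)-th largest value
    return sum(math.log(xs[n - 1 - i] / x_k) for i in range(k)) / k

# Sanity check on exact Pareto data: X = U**(-1/alpha) has gamma = 1/alpha.
random.seed(42)
alpha = 2.0
data = [random.random() ** (-1.0 / alpha) for _ in range(20000)]
gamma_hat = hill_estimator(data, k=1000)  # true gamma = 0.5
```

For exact Pareto tails the log-spacings are exponential, so the estimate concentrates around the true index as k grows; the bootstrap version in the record addresses the choice of k and dependence in the sequence.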

  5. A Policy Alternative Analysis and Simplified Scoring Method to Assess Policy Options for Marine Conservation Areas (United States)

    Sharuga, S. M.; Reams, M.


    Traditional approaches to marine conservation and management are increasingly found to be inadequate, and consequently more complex ecosystem-based approaches to protecting marine ecosystems are growing in popularity. Ecosystem-based approaches, however, can be particularly challenging at a local level, where resources and knowledge of specific marine conservation components may be limited. Marine conservation areas are known by a variety of names globally but can be divided into four general types: Marine Protected Areas (MPAs), Marine Reserves, Fishery Reserves, and Ecological Reserves (i.e. "no take zones"). Each type of conservation area involves specific objectives, program elements and likely socioeconomic consequences. As an aid to community stakeholders and decision makers considering establishment of a marine conservation area, a simple method to compare and score the objectives and attributes of these four approaches is presented. A range of evaluation criteria are considered, including conservation of biodiversity and habitat, effective fishery management, overall cost-effectiveness, fairness to current users, enhancement of recreational activities, fairness to taxpayers, and conservation of genetic diversity. Environmental and socioeconomic costs and benefits of each type of conservation area are also considered. When exploring options for managing the marine environment, particular resource conservation needs must be evaluated individually on a case-by-case basis, and the type of conservation area established must be tailored accordingly. However, MPAs are often more successful than other conservation areas because they offer a compromise between the needs of society and the environment, and therefore represent a viable option for ecosystem-based management.

  6. In vitro culture of functionally active buffalo hepatocytes isolated by using a simplified manual perfusion method.

    Directory of Open Access Journals (Sweden)

    Santanu Panda

    Full Text Available In farm animals, there is no suitable cell line available to understand liver-specific functions. This has limited our understanding of liver function and metabolism in farm animals. Culturing and maintenance of functionally active hepatocytes is difficult, since they survive no more than a few days. Establishing primary cultures of hepatocytes can help in studying cellular metabolism, drug toxicity, and hepatocyte-specific gene function and regulation. Here we provide a simple in vitro method for the isolation and short-term culture of functionally active buffalo hepatocytes. Buffalo hepatocytes were isolated from caudate lobes using manual enzymatic perfusion and mechanical disruption of liver tissue. Hepatocyte yield was (5.3 ± 0.66) × 10⁷ cells per gram of liver tissue, with a viability of 82.3 ± 3.5%. Freshly isolated hepatocytes were spherical with well-contrasted borders. After 24 hours of seeding onto a fibroblast feeder layer and different extracellular matrices, such as dry collagen, Matrigel and sandwich collagen coated plates, hepatocytes formed a confluent monolayer with frequent clusters. Cultured hepatocytes exhibited typical cuboidal and polygonal shapes with restored cellular polarity. Cells expressed hepatocyte-specific marker genes and proteins such as albumin, hepatocyte nuclear factor 4α, glucose-6-phosphatase, tyrosine aminotransferase, cytochromes, cytokeratin and α1-antitrypsin. Hepatocytes could be immunostained with anti-cytokeratin, anti-albumin and anti-α1-antitrypsin antibodies. Abundant lipid droplets were detected in the cytosol of hepatocytes using Oil Red stain. In vitro cultured hepatocytes could be grown for five days and maintained for up to nine days on a buffalo skin fibroblast feeder layer. Cultured hepatocytes were viable for functional studies. We developed a convenient and cost-effective technique for hepatocyte isolation and short-term culture that exhibited the morphological and functional characteristics of active hepatocytes.

  7. Analyzing the nonlinear vibrational wave differential equation for the simplified model of Tower Cranes by Algebraic Method (United States)

    Akbari, M. R.; Ganji, D. D.; Ahmadi, A. R.; Kachapi, Sayyid H. Hashemi


    In the current paper, a simplified model of tower cranes is presented in order to investigate and analyze the nonlinear differential equation governing the presented system in three different cases by the Algebraic Method (AGM). Comparisons between AGM and numerical solutions indicate that this approach is efficient and straightforward, so it can be applied to other nonlinear equations. Notably, this way of solving differential equations has some valuable advantages, and solutions can be obtained in this manner for various sets of complicated differential equations that, so far, have not had acceptable solutions by other methods. The simplicity of the solution procedure in the Algebraic Method, and its applicability to a wide variety of differential equations not only in vibrations but also in fields of study such as fluid mechanics and chemical engineering, make AGM a powerful and useful tool for researchers solving complicated nonlinear differential equations.

  8. A Simplified Method for the Collection and Analysis of a Cosmogenic Radioactive Age Tracer: Sodium-22 (United States)

    Deinhart, A. L.; Thaw, M.; Bibby, R. K.; Egnatuk, C. M.; Torretto, P.; Visser, A.; Esser, B.; Wooddy, T.


    The residence time distributions of surface waters give important information on watershed functioning and subsurface storage. Sodium-22 (half-life 2.6 years) can provide residence time information in addition to traditional age tracers (stable isotopes, tritium). A new and straightforward analytical method was created to determine sodium-22 activities in various water sources. In the field, approximately 400 L of water are passed through a column containing 1.8 kg of cation exchange resin at 4-5 L/min. Columns are returned to the laboratory, where they are processed by passing 4 L of 3M HCl through the resin columns to elute all cations. The 4 L of acid is loaded into a Marinelli beaker for gamma-ray spectroscopy analysis. The samples are counted for 22Na (1274 keV) using a coaxial germanium detector with a 0.5 keV resolution and less than 3×10⁻³ counts per second. Samples are counted for seven days in order to acquire sufficient counts above background. The counting efficiency (11.6×10⁻³ cps/Bq) was obtained by counting a known activity of a 22Na standard in the same geometry and matrix (3M HCl in a 4 L Marinelli beaker) as the samples. The net peak area of the samples in a 2.5 keV window is compared to the net peak area of the standard to calculate the 22Na activity. Two entire-season snow samples were collected from the Southern Sierra Critical Zone Observatory in the winters of 2015 and 2016. Approximately 300 L of snow melt was collected and processed, and the sodium-22 activities were determined to be 141±35 mBq/L and 153±29 mBq/L. Sodium-22 activities in streams, in conjunction with sulfur-35 (half-life 87 days) and tritium (half-life 12.3 years), reveal the presence or absence of water with intermediate residence times (0-10 years).
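The activity arithmetic in the record can be sketched as below. The counting efficiency (11.6×10⁻³ cps/Bq) and seven-day count come from the abstract; the net peak area and the 300 L processed volume used in the demo are hypothetical round numbers, not reported results.

```python
def na22_activity_mbq_per_l(net_peak_counts, live_time_s,
                            efficiency_cps_per_bq, processed_volume_l):
    """Specific 22Na activity (mBq/L) from the net 1274 keV peak area.
    activity (Bq) = net counts / (live time * counting efficiency);
    dividing by the processed water volume gives Bq/L; x1000 -> mBq/L."""
    activity_bq = net_peak_counts / (live_time_s * efficiency_cps_per_bq)
    return 1000.0 * activity_bq / processed_volume_l

live_time = 7 * 24 * 3600  # seven-day count, in seconds
# hypothetical net peak area of 25000 counts over a 300 L sample
activity = na22_activity_mbq_per_l(25000, live_time, 11.6e-3, 300.0)
```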

  9. Update and Improve Subsection NH - Simplified Elastic and Inelastic Design Analysis Methods

    Energy Technology Data Exchange (ETDEWEB)

    Jeries J. Abou-Hanna; Douglas L. Marriott; Timothy E. McGreevy


    The objective of this subtask is to develop a template for the 'Ideal' high temperature design Code, in which individual topics can be identified and worked on separately in order to provide the detail necessary to comprise a comprehensive Code. Like all ideals, this one may not be attainable as a practical matter. The purpose is to set a goal for what it is believed the 'Ideal' design Code should address, recognizing that some elements are not mutually exclusive and that the same objectives can be achieved in different ways. Most, if not all, existing Codes may therefore be found to be lacking in some respects, but this does not necessarily mean that they are not comprehensive. While this subtask does attempt to list the elements which, individually or in combination, are considered essential in such a Code, the authors do not presume to recommend how these elements should be implemented or even that they should all be implemented. The scope of this subtask is limited to compiling the list of elements thought to be necessary, or at minimum useful, in such an 'Ideal' Code; suggestions are provided as to their relationship to one another. Except for brief descriptions, where these are needed for clarification, neither this report nor Task 9 as a whole attempts to address details of the contents of all these elements. Some, namely primary load limits (elastic, limit load, reference stress) and ratcheting (elastic, e-p, reference stress), are dealt with specifically in other subtasks of Task 9. All others are merely listed; the expectation is that they will either be the focus of attention of other active DOE-ASME GenIV Materials Tasks, e.g. creep-fatigue, or be considered in future DOE-ASME GenIV Materials Tasks. Since the focus of this Task is specifically approximate methods, the authors have deemed it necessary to include some discussion on what is meant by 'approximate'. However, the topic will be addressed in one or

  10. Simplified and rapid method for extraction of ergosterol from natural samples and detection with quantitative and semi-quantitative methods using thin-layer chromatography


    Larsen, Thomas; Ravn, Helle; Axelsen, Jørgen


    A new and simplified method for extraction of ergosterol (ergosta-5,7,22-trien-3β-ol) from fungi in soil and litter was developed using pre-soaking extraction and paraffin oil for recovery. Recoveries of ergosterol were in the range of 94-100%, depending on the solvent-to-oil ratio. Extraction efficiencies equal to heat-assisted extraction treatments were obtained with pre-soaked extraction. Ergosterol was detected with thin-layer chromatography (TLC) using fluorodensitometry with a quan...

  11. Simplified method to assess soil organic matter in landscape and carbon sequestration studies (United States)

    Pallasser, Robert; Minasny, Budiman; McBratney, Alex; de Bruyn, Hank


    ... relative DTGA responses were quantified in a similar way to the approaches discussed by Plante et al. (2009) and references therein. TGA-MS analyses were conducted using a TA SDT Q600 - Thermostar quadrupole, so as to provide a distinctive set of major ions or markers for the two organic matter types which can be indicative of the parent material. Furthermore, since a mass change event from an inorganic component (e.g. dehydroxylation) can contribute to an SOM-related response, correlation with MS data needed to be carried out. This method of analysis can be used to reliably fingerprint SOM and should be an enormously useful addition when assessing depositional or agricultural soil environments. Quantifying the relative amounts of SOM can be achieved by coupling with elemental analysis, with the added scope of being able to separate (Kasozi et al., 2009) the contribution from inorganic carbon (carbonates), a common soil constituent. Important applications for the DTGA technique include environmental pedological studies, such as evaluating SOM after severe bushfire events, or agricultural monitoring, particularly during carbon sequestration and changed land management practices. References: Kasozi G.N., Nkedi-Kizza P., Harris W.G., 2009. Varied carbon content of organic matter in histosols, spodosols and carbonatic soils. Soil Science Society of America Journal 73, 1313-1318. Laird D.A., Chappell M.A., Martens D.A., Wershaw R.L., Thompson M., 2008. Distinguishing black carbon from biogenic humic substances in soil clay fractions. Geoderma 143, 115-122. Lopez-Capel E., Abbott G.D., Thomas K.M., Manning D.A.C., 2006. Coupling of thermal analysis with quadrupole mass spectrometry and isotope ratio mass spectrometry for simultaneous determination of evolved gases and their carbon isotopic composition. Journal of Analytical and Applied Pyrolysis 75, 82-89. Plante A.F., Fernandez J.M., Leifeld J., 2009. Application of thermal analysis techniques in soil science. Geoderma 153

  12. Development of a simplified method for intelligent glazed façade design under different control strategies and verified by building simulation tool BSim

    DEFF Research Database (Denmark)

    Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per


    The research aims to develop a simplified calculation method for an intelligent glazed facade under different control conditions (night shutter, solar shading and natural ventilation) to simulate the energy performance and indoor environment of an office room installed with the intelligent facade. ... The method takes the angle dependence of the solar characteristics into account, including the simplified hourly building model developed according to EN 13790, to evaluate the influence of the controlled façade on both the indoor environment (indoor air temperature, solar transmittance through the façade...

  13. FastCloning: a highly simplified, purification-free, sequence- and ligation-independent PCR cloning method

    Directory of Open Access Journals (Sweden)

    Lu Jia


    Full Text Available Abstract Background: Although a variety of methods and expensive kits are available, molecular cloning can be a time-consuming and frustrating process. Results: Here we report a highly simplified, reliable and efficient PCR-based cloning technique to insert any DNA fragment into a plasmid vector, or into a gene (cDNA) in a vector, at any desired position. With this method, the vector and insert are PCR amplified separately, with only 18 cycles, using a high-fidelity DNA polymerase. The ends of the amplified insert overlap by ~16 bases with the ends of the amplified vector. After DpnI digestion of the mixture of amplified vector and insert, to eliminate the DNA templates used in the PCR reactions, the mixture is directly transformed into competent E. coli cells to obtain the desired clones. This technique has many advantages over other cloning methods. First, it does not need gel purification of the PCR product or linearized vector. Second, there is no need for any cloning kit or specialized enzyme for cloning. Furthermore, the reduced number of PCR cycles also decreases the chance of random mutations. In addition, this method is highly effective and reproducible. Finally, since this cloning method is sequence independent, we demonstrated that it can be used for chimera construction, insertion, and multiple mutations spanning a stretch of DNA up to 120 bp. Conclusion: Our FastCloning technique provides a very simple, effective, reliable and versatile tool for molecular cloning, chimera construction, insertion of any DNA sequences of interest, and multiple mutations in a short stretch of a cDNA.
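The ~16-base overlap logic can be illustrated in silico. The sketch below generates insert primers whose 5' tails match the ends of the amplified vector; the function names, overlap and annealing lengths, and the toy sequences are illustrative assumptions, not the published primer-design rules.

```python
def revcomp(seq):
    """Reverse complement of a DNA string (A/C/G/T only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def fastcloning_insert_primers(vector_up, vector_down, insert,
                               overlap=16, anneal=20):
    """Design insert primers whose 5' tails overlap the amplified
    vector ends by `overlap` bases (sketch of the overlap idea)."""
    fwd = vector_up[-overlap:] + insert[:anneal]
    rev = revcomp(insert[-anneal:] + vector_down[:overlap])
    return fwd, rev

# toy sequences flanking a hypothetical insertion site
vector_up = "GATTACAGATTACAGATTACAGATTACA"
vector_down = "CCGGAATTCCGGAATTCCGG"
insert = "ATGGCTAGCAAAGGAGAAGAACTT"
fwd_primer, rev_primer = fastcloning_insert_primers(vector_up, vector_down, insert)
```

After separate amplification, the matching ends let the vector and insert anneal in the transformed cells without ligase, which is the essence of the overlap design.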

  14. Neutron transport in hexagonal reactor cores modeled by trigonal-geometry diffusion and simplified P₃ nodal methods

    Energy Technology Data Exchange (ETDEWEB)

    Duerigen, Susan


    The superior advantage of a nodal method for reactor cores with hexagonal fuel assemblies, discretized as cells consisting of equilateral triangles, is its mesh refinement capability. In this thesis, a diffusion and a simplified P₃ (or SP₃) neutron transport nodal method are developed based on trigonal geometry. Both models are implemented in the reactor dynamics code DYN3D. As yet, no other well-established nodal core analysis code comprises an SP₃ transport theory model based on trigonal meshes. The development of two methods based on different neutron transport approximations, but using an identical underlying spatial trigonal discretization, allows a profound comparative analysis of both methods with regard to their mathematical derivations, nodal expansion approaches, solution procedures, and physical performance. The developed nodal approaches can be regarded as a hybrid NEM/AFEN form. They are based on the transverse-integration procedure, which renders them computationally efficient, and they use a combination of polynomial and exponential functions to represent the neutron flux moments of the SP₃ and diffusion equations, which guarantees high accuracy. The SP₃ equations are derived in within-group form, thus being of diffusion type. On this basis, the conventional diffusion solver structure can be retained also for the solution of the SP₃ transport problem. The verification analysis provides proof of the methodological reliability of both trigonal DYN3D models. By means of diverse hexagonal academic benchmarks and realistic detailed-geometry full-transport-theory problems, the superiority of the SP₃ transport model over the diffusion model is demonstrated in cases with pronounced anisotropy effects, which is highly relevant, e.g., to the modeling of fuel assemblies comprising absorber material.

  15. Estimating the L5S1 flexion/extension moment in symmetrical lifting using a simplified ambulatory measurement system. (United States)

    Koopman, Axel S; Kingma, Idsart; Faber, Gert S; Bornmann, Jonas; van Dieën, Jaap H


    Mechanical loading of the spine has been shown to be an important risk factor for the development of low-back pain. Inertial motion capture (IMC) systems might allow measuring lumbar moments in realistic working conditions, and thus support evaluation of measures to reduce mechanical loading. As the number of sensors limits applicability, the objective of this study was to investigate the effect of the number of sensors on estimates of L5S1 moments. Hand forces, ground reaction forces (GRF) and full-body kinematics were measured using a gold standard (GS) laboratory setup. In the ambulatory setup, hand forces were estimated based on the force-plate-measured GRF and body kinematics measured using (subsets of) an IMC system. Using top-down inverse dynamics, L5S1 flexion/extension moments were calculated. RMS errors (Nm) were lowest (16.6) with the full set of 17 sensors and increased to 20.5, 22 and 30.6 for 8, 6 and 4 sensors, respectively. Absolute errors in peak moments were 17.7, 16.4, 16.9 and 49.3 Nm for IMC setups with 17, 8, 6 and 4 sensors, respectively. When horizontal GRF were neglected for 6 sensors, RMS errors and peak moment errors decreased from 22 to 17.3 Nm and from 16.9 to 13 Nm, respectively. In conclusion, while reasonable moment estimates can be obtained with 6 sensors, omitting the forearm sensors led to unacceptable errors. Furthermore, vertical GRF information is sufficient to estimate L5S1 moments in lifting. Copyright © 2017. Published by Elsevier Ltd.

  16. A simplified model for the estimation of life-cycle greenhouse gas emissions of enhanced geothermal systems


    Lacirignola, Martino; Hage Meany, Bechara; Padey, Pierryves; Blanc, Isabelle


    International audience; Background: The development of 'enhanced geothermal systems' (EGS), designed to extract energy from deep low-enthalpy reservoirs, is opening new scenarios of growth for the whole geothermal sector. A relevant tool to estimate the environmental performances of such emerging renewable energy (RE) technology is Life Cycle Assessment (LCA). However, the application of this cradle-to-grave approach is complex and time-consuming. Moreover, LCA results available for EGS case ...

  17. A simplified model for estimating population-scale energy impacts of building envelope air-tightening and mechanical ventilation retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Logue, Jennifer M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Turner, William J. N. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Trinity College Dublin, Dublin (Ireland); Walker, Iain S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Singer, Brett C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)


    Changing the air exchange rate of a home (the sum of the infiltration and mechanical ventilation airflow rates) affects the annual thermal conditioning energy. Large-scale changes to air exchange rates of the housing stock can significantly alter the residential sector's energy consumption. However, the complexity of existing residential energy models is a barrier to the accurate quantification of the impact of policy changes on a state or national level. The Incremental Ventilation Energy (IVE) model developed in this study combines the output of simple air exchange models with a limited set of housing characteristics to estimate the associated change in energy demand of homes. The IVE model was designed specifically to enable modellers to use existing databases of housing characteristics to determine the impact of ventilation policy change on a population scale. The IVE model estimates of energy change when applied to US homes with limited parameterisation are shown to be comparable to the estimates of a well-validated, complex residential energy model.
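The incremental-ventilation idea can be illustrated with first-order building-physics arithmetic: a change in air exchange rate changes the infiltration heat load roughly as ΔQ = ρ·cp·ΔACH·V·(degree-hours). This sketch and its constants are generic textbook physics, not the IVE model itself, and the demo inputs are hypothetical.

```python
def incremental_ventilation_energy_kwh(delta_ach, house_volume_m3,
                                       degree_hours_kkh):
    """First-order change in annual thermal load (kWh) from a change
    in air exchange rate. Uses rho*cp of air ~ 1.2 kJ/(m^3 K)."""
    rho_cp = 1.2  # kJ per m^3 per K, volumetric heat capacity of air
    # ACH (1/h) * volume (m^3) = airflow (m^3/h); kKh * 1000 = K*h
    kj = delta_ach * house_volume_m3 * rho_cp * degree_hours_kkh * 1000.0
    return kj / 3600.0  # kJ -> kWh

# hypothetical: tightening cuts 0.2 ACH in a 400 m^3 home, 60 kKh climate
saving_kwh = incremental_ventilation_energy_kwh(0.2, 400.0, 60.0)
```

The IVE model layers housing-stock characteristics and HVAC efficiency on top of this kind of airflow-to-energy conversion; the sketch shows only the core proportionality.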

  18. A dual estimate method for aeromagnetic compensation (United States)

    Ma, Ming; Zhou, Zhijian; Cheng, Defu


    Scalar aeromagnetic surveys have played a vital role in prospecting. However, before the surveys' aeromagnetic data can be analysed, the aircraft's magnetic interference must be removed. The extensively adopted linear model for aeromagnetic compensation is computationally efficient but faces an underfitting problem; the neural model proposed by Williams, on the other hand, is more powerful at fitting but suffers from overfitting. This paper begins with an analysis of these two models and then proposes a dual estimate method that combines them to improve accuracy. The method is based on an unscented Kalman filter, but a gradient descent method is implemented over the iterations so that the parameters of the linear model are adjustable during flight. The noise caused by the neural model's overfitting is suppressed by introducing observation noise.

  19. A simplified method to determine the potential growth in Orthodontics patients

    Directory of Open Access Journals (Sweden)

    Gladia Toledo Mayarí


    A cross-sectional technological innovation study was conducted to present a simplified method for determining the growth potential of patients requiring orthodontic treatment, in a sample of 150 patients aged 8 to 16 who were admitted to the Orthodontics Clinic of the Faculty of Stomatology in Havana between 2004 and 2006. Each patient underwent a radiograph of the left hand, and for the first time in Cuba three methods of assessing growth potential were studied in the same sample: the TW2 method, the Grave and Brown method, and determination of the maturation stages of the middle phalanx of the third finger. The correlation and concordance among the methods were then calculated. High coefficients of correlation (females rho = 0.888, males rho = 0.921) and of concordance (females Kappa = 1.000, males Kappa = 0.964) were found. It was concluded that the growth potential of orthodontic patients can be assessed from a radiograph of the middle phalanx of the third finger of the left hand, which constitutes a useful simplified evaluation method.

  20. A simplified method using commercial milk powder for hand-rearing of the Caesarean-derived infant rabbits (author's transl). (United States)

    Murata, Y; Tada, M; Sugimoto, S; Sato, M; Katsumata, Y


    An efficient and simplified method for hand-rearing Caesarean-derived infant rabbits under gnotobiotic conditions was devised. The Caesarean-derived infant Dutch or Japanese-White rabbits and their hybrids (F1; Japanese-White female x Dutch male) were reared in sterilized vinyl isolators by hand-feeding with two kinds of milk diets, A and B, consisting mainly of a commercial milk powder for dogs and cats (Esbilac) supplemented with several minor components (Table 2) and administered intragastrically once a day through a Nelaton's catheter. Bacteriological examinations revealed that feces and urine were sterile for the first three days. On the third day, Escherichia coli, Staphylococcus epidermidis, Streptococcus faecalis, Bacillus subtilis, Enterococcus, and Bacteroides sp. were given with the milk diet, and the infant rabbits were reared until 10 to 12 weeks of age. The weaning rate at 5 weeks of age and the raising rate at 3 months of age were approximately 78% and 77%, respectively (Table 3), indicating that once-a-day feeding with a milk diet composed mainly of Esbilac is suitable for hand-rearing infant rabbits. There was, however, no significant difference between milk diets A and B in the weaning rate.

  1. Simplified Qualitative Discrete Numerical Model to Determine Cracking Pattern in Brittle Materials by Means of Finite Element Method

    Directory of Open Access Journals (Sweden)

    J. Ochoa-Avendaño


    This paper presents the formulation, implementation, and validation of a simplified qualitative model to determine the crack path in solids, considering static loads, infinitesimal strain, and plane stress conditions. The model is based on the finite element method with a special meshing technique in which nonlinear link elements are included between the faces of the linear triangular elements; the stiffness loss of some link elements represents the crack opening. Three experimental bending-beam tests are simulated, and the cracking pattern calculated with the proposed numerical model is similar to the experimental results. The advantages of the proposed model over discrete crack approaches with interface elements include its implementation simplicity, numerical stability, and very low computational cost. Simulations with greater values of the initial stiffness of the link elements do not affect the discontinuity path or the stability of the numerical solution. The exploded-mesh procedure presented in this model avoids complex nonlinear analyses and regenerative or adaptive meshes.

  2. Demographic estimation methods for plants with dormancy

    Directory of Open Access Journals (Sweden)

    Kéry, M.


    Demographic studies in plants appear simple because, unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life-cycle, so most plant demographers survey their study plots once a year, often during or shortly after flowering. Life-states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as 0VFVVF000. A zero means that an individual was not seen in a year, and a letter denotes its state for years when it was seen aboveground; V and F here stand for vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as 0VF00F000 may then occur, where the meaning of zeroes becomes ambiguous: a zero can mean either a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual, and these methods have never been compared with each other. In our talk and in Kéry et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture-recapture (CR) models can be used to estimate probabilities of survival and state transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kéry et al., submitted) and Cypripedium reginae (Kéry & Gregg, submitted), in West Virginia, U.S.A. For Cleistes, our data comprised one population with a


    Directory of Open Access Journals (Sweden)

    G.V. Shevchenko


    A simplified method was developed for estimating the tsunami risk for a coast for possible events with recurrence periods of 50 and 100 years. The method is based on readily available seismic data and the calculation of magnitudes of events with specified return periods. A classical Gumbel statistical method was used to estimate the magnitudes of low-probability events. The tsunami numerical modeling study used the average earthquake coordinates in the highly seismic Kuril-Kamchatka area. Verification and testing of the method were carried out using events from the North, Middle and South Kuril Islands, the areas of Russia's Far East most at risk from tsunamis. The study also used the regional Kuril-Kamchatka catalogue of earthquakes from 1900 to 2008, which includes earthquakes of magnitude M = 6 and above. The results indicate that the proposed methodology provides reasonable estimates of tsunami risk.
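
    The Gumbel step described in this record can be sketched as follows. The magnitudes below are invented for illustration (the study used the actual 1900-2008 Kuril-Kamchatka catalogue with M >= 6), and the method-of-moments fit is one common choice among several:

```python
import numpy as np

# Hypothetical annual-maximum magnitudes (illustrative only).
annual_max = np.array([6.1, 6.8, 7.2, 6.4, 7.9, 6.6, 7.0, 6.3, 7.5, 6.9])

# Fit a Gumbel distribution by the method of moments.
std = annual_max.std(ddof=1)
beta = std * np.sqrt(6) / np.pi          # scale parameter
mu = annual_max.mean() - 0.5772 * beta   # location (Euler-Mascheroni constant)

def return_level(T):
    """Magnitude expected to be exceeded about once every T years."""
    p = 1.0 - 1.0 / T                    # non-exceedance probability
    return mu - beta * np.log(-np.log(p))

for T in (50, 100):
    print(f"{T}-year magnitude: {return_level(T):.2f}")
```

The 50- and 100-year magnitudes obtained this way would then feed the tsunami numerical model at the chosen source coordinates.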

  4. A Simplified 4-Site Economical Intradermal Post-Exposure Rabies Vaccine Regimen: A Randomised Controlled Comparison with Standard Methods (United States)

    Warrell, Mary J.; Riddell, Anna; Yu, Ly-Mee; Phipps, Judith; Diggle, Linda; Bourhy, Hervé; Deeks, Jonathan J.; Fooks, Anthony R.; Audry, Laurent; Brookes, Sharon M.; Meslin, François-Xavier; Moxon, Richard; Pollard, Andrew J.; Warrell, David A.


    Background The need for economical rabies post-exposure prophylaxis (PEP) is increasing in developing countries. Implementation of the two currently approved economical intradermal (ID) vaccine regimens is restricted due to confusion over different vaccines, regimens and dosages, lack of confidence in intradermal technique, and pharmaceutical regulations. We therefore compared a simplified 4-site economical PEP regimen with standard methods. Methods Two hundred and fifty-four volunteers were randomly allocated to a single blind controlled trial. Each received purified vero cell rabies vaccine by one of four PEP regimens: the currently accepted 2-site ID; the 8-site regimen using 0.05 ml per ID site; a new 4-site ID regimen (on day 0, approximately 0.1 ml at 4 ID sites, using the whole 0.5 ml ampoule of vaccine; on day 7, 0.1 ml ID at 2 sites and at one site on days 28 and 90); or the standard 5-dose intramuscular regimen. All ID regimens required the same total amount of vaccine, 60% less than the intramuscular method. Neutralising antibody responses were measured five times over a year in 229 people, for whom complete data were available. Findings All ID regimens showed similar immunogenicity. The intramuscular regimen gave the lowest geometric mean antibody titres. Using the rapid fluorescent focus inhibition test, some sera had unexpectedly high antibody levels that were not attributable to previous vaccination. The results were confirmed using the fluorescent antibody virus neutralisation method. Conclusions This 4-site PEP regimen proved as immunogenic as current regimens, and has the advantages of requiring fewer clinic visits, being more practicable, and having a wider margin of safety, especially in inexperienced hands, than the 2-site regimen. It is more convenient than the 8-site method, and can be used economically with vaccines formulated in 1.0 or 0.5 ml ampoules. The 4-site regimen now meets all requirements of immunogenicity for PEP and can be

  5. A Simplified Approach to Estimate the Urban Expressway Capacity after Traffic Accidents Using a Micro-Simulation Model

    Directory of Open Access Journals (Sweden)

    Hong Chen


    Based on a decomposition of the evolution process of urban expressway capacity after traffic accidents and an analysis of the influencing factors, an approach for estimating the capacity is proposed. Firstly, the approach introduces the ID3 decision tree algorithm, solves the accident delay time for different accident types by information gain value, and determines the congestion dissipation time by traffic flow wave theory. Secondly, taking the accident delay time as the observation cycle, the maximum number of vehicles through the accident road section per unit time is taken as its capacity. Finally, the attenuation of capacity for different accident types was simulated with the VISSIM software. The simulation results suggest that the capacity attenuation for a vehicle breakdown is smallest, at 30.074%; next are vehicle fire, rear-end collision, and roll-over, at 38.389%, 40.204%, and 43.130%, respectively; and the capacity attenuation for a vehicle collision is the largest, at 50.037%. Moreover, further analysis shows that the accident delay time is proportional to the congestion dissipation time, the time difference, and the ratio between them, but inversely related to the residual capacity of the urban expressway.

  6. Two-Dimensional DOA Estimation for Uniform Rectangular Array Using Reduced-Dimension Propagator Method

    Directory of Open Access Journals (Sweden)

    Ming Zhou


    A novel algorithm is proposed for two-dimensional direction of arrival (2D-DOA) estimation with a uniform rectangular array using the reduced-dimension propagator method (RD-PM). The proposed algorithm requires no eigenvalue decomposition of the covariance matrix of the received data and simplifies the two-dimensional global search in two-dimensional PM (2D-PM) to a one-dimensional local search. The complexity of the proposed algorithm is much lower than that of 2D-PM. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and conventional PM algorithms, and very close to that of 2D-PM. The angle estimation error and Cramér-Rao bound (CRB) are derived in this paper. Furthermore, the proposed algorithm achieves automatically paired 2D-DOA estimates. The simulation results verify the effectiveness of the algorithm.

  7. The impact and cost-effectiveness of nonavalent HPV vaccination in the United States: Estimates from a simplified transmission model


    Chesson, Harrell W.; Markowitz, Lauri E.; Hariri, Susan; Ekwueme, Donatus U.; Saraiya, Mona


    Introduction: The objective of this study was to assess the incremental costs and benefits of the 9-valent HPV vaccine (9vHPV) compared with the quadrivalent HPV vaccine (4vHPV). Like 4vHPV, 9vHPV protects against HPV types 6, 11, 16, and 18; 9vHPV also protects against 5 additional HPV types: 31, 33, 45, 52, and 58. Methods: We adapted a previously published model of the impact and cost-effectiveness of 4vHPV to include the 5 additional HPV types in 9vHPV. The vaccine strategies we examined w...

  8. A simplified 4-site economical intradermal post-exposure rabies vaccine regimen: a randomised controlled comparison with standard methods.

    Directory of Open Access Journals (Sweden)

    Mary J Warrell


    The need for economical rabies post-exposure prophylaxis (PEP) is increasing in developing countries. Implementation of the two currently approved economical intradermal (ID) vaccine regimens is restricted due to confusion over different vaccines, regimens and dosages, lack of confidence in intradermal technique, and pharmaceutical regulations. We therefore compared a simplified 4-site economical PEP regimen with standard methods. Two hundred and fifty-four volunteers were randomly allocated to a single blind controlled trial. Each received purified vero cell rabies vaccine by one of four PEP regimens: the currently accepted 2-site ID; the 8-site regimen using 0.05 ml per ID site; a new 4-site ID regimen (on day 0, approximately 0.1 ml at 4 ID sites, using the whole 0.5 ml ampoule of vaccine; on day 7, 0.1 ml ID at 2 sites; and at one site on days 28 and 90); or the standard 5-dose intramuscular regimen. All ID regimens required the same total amount of vaccine, 60% less than the intramuscular method. Neutralising antibody responses were measured five times over a year in 229 people, for whom complete data were available. All ID regimens showed similar immunogenicity. The intramuscular regimen gave the lowest geometric mean antibody titres. Using the rapid fluorescent focus inhibition test, some sera had unexpectedly high antibody levels that were not attributable to previous vaccination. The results were confirmed using the fluorescent antibody virus neutralisation method. This 4-site PEP regimen proved as immunogenic as current regimens, and has the advantages of requiring fewer clinic visits, being more practicable, and having a wider margin of safety, especially in inexperienced hands, than the 2-site regimen. It is more convenient than the 8-site method, and can be used economically with vaccines formulated in 1.0 or 0.5 ml ampoules. The 4-site regimen now meets all requirements of immunogenicity for PEP and can be introduced without further

  9. 3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy

    KAUST Repository

    Zhang, Yibo


    High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3′-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of the acquired data while also preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm². The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Being low cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings.

  10. Estimation of drug receptor occupancy when non-displaceable binding differs between brain regions – extending the simplified reference tissue model. (United States)

    Kågedal, Matts; Varnäs, Katarina; Hooker, Andrew C; Karlsson, Mats O


    The simplified reference tissue model (SRTM) is used for estimation of receptor occupancy assuming that the non-displaceable binding in the reference region is identical to the brain regions of interest. The aim of this work was to extend the SRTM to also account for inter-regional differences in non-displaceable concentrations, and to investigate if this model allowed estimation of receptor occupancy using white matter as reference. It was also investigated if an apparent higher affinity in caudate compared with other brain regions, could be better explained by a difference in the extent of non-displaceable binding. The analysis was based on a PET study in six healthy volunteers using the 5-HT1B receptor radioligand [(11)C]-AZ10419369. The radioligand was given intravenously as a tracer dose alone and following different oral doses of the 5-HT1B receptor antagonist AZD3783. Non-linear mixed effects models were developed where differences between regions in non-specific concentrations were accounted for. The properties of the models were also evaluated by means of simulation studies. The estimate (95% CI) of Ki(PL) was 10.2 ng ml(-1) (5.4, 15) and 10.4 ng ml(-1) (8.1, 13.6) based on the extended SRTM with white matter as reference and based on the SRTM using cerebellum as reference, respectively. The estimate (95% CI) of Ki(PL) for caudate relative to other brain regions was 55% (48, 62%). The extended SRTM allows consideration of white matter as reference region when no suitable grey matter region exists. AZD3783 affinity appears to be higher in the caudate compared with other brain regions. © 2014 The British Pharmacological Society.

  11. Development of a simplified and dynamic method for double glazing façade with night insulation and validated by full-scale façade element

    DEFF Research Database (Denmark)

    Liu, Mingzhe; Wittchen, Kim Bjarne; Heiselberg, Per


    The study aims to develop a simplified calculation method to simulate the performance of a double glazing façade with night insulation. This paper describes the method to calculate the thermal properties (U-value) and comfort performance (internal surface temperature of glazing) of the double… with night insulation is calculated and compared with that of the façade without the night insulation. Based on standards EN 410 and EN 673, the method takes the thermal mass of glazing and the infiltration between the insulation layer and glazing into account. Furthermore it is capable of implementing whole… glazing façade with the night insulation. The calculation result of the internal glazing surface temperature has been validated with experimental data collected in a full-scale façade element test facility at Aalborg University (DK). With the help of the simplified method, the dynamic U-value of the façade…

  12. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L


    We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain, computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
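
    The control-variate idea itself is standard and can be illustrated on a toy one-dimensional expectation (not the porous-media transport problem of the report): subtract a correlated quantity with known mean, scaled by the optimal coefficient, to reduce the variance of the Monte Carlo estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a performance quantity of interest (PQI):
# estimate E[exp(U)] for U ~ Uniform(0,1); the exact value is e - 1.
# Control variate: g(U) = U, whose mean 0.5 is known exactly.
n = 10_000
u = rng.uniform(size=n)
f = np.exp(u)  # the "expensive" simulation output
g = u          # cheap correlated quantity with known mean

# Optimal coefficient c* = Cov(f, g) / Var(g)
c = np.cov(f, g)[0, 1] / g.var(ddof=1)

plain = f.mean()                      # plain Monte Carlo estimate
cv = f.mean() - c * (g.mean() - 0.5)  # control-variate estimate

print(plain, cv)  # cv is typically far closer to e - 1
```

Because exp(U) and U are very strongly correlated, the variance reduction here is dramatic; the same mechanism underlies the coarse-mesh/fine-mesh pairing described in the abstract.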

  13. An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC (United States)

    Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng


    This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC) links, called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimensions unchanged. Accurate and fast convergence of the AC/DC system can be realized by the adaptive weighting function method, which also provides technical support for simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.

  14. Some methods of estimating uncertainty in accident reconstruction


    Batista, Milan


    In the paper four methods for estimating uncertainty in accident reconstruction are discussed: total differential method, extreme values method, Gauss statistical method, and Monte Carlo simulation method. The methods are described and the program solutions are given.
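
    Of the four methods listed, the Monte Carlo approach is the easiest to sketch. The example below propagates input uncertainty through the classic skid-mark speed formula v = sqrt(2·mu·g·d); the formula choice and the input distributions are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo uncertainty propagation for speed from skid-mark length.
g = 9.81       # gravitational acceleration [m/s^2]
n = 100_000    # number of Monte Carlo samples

mu = rng.normal(0.7, 0.05, n)  # uncertain friction coefficient
d = rng.normal(35.0, 1.0, n)   # uncertain skid distance [m]

v = np.sqrt(2 * mu * g * d)    # sampled pre-braking speeds [m/s]

print(f"v = {v.mean():.1f} +/- {v.std():.1f} m/s")
```

The total-differential and extreme-values methods mentioned in the abstract would instead linearize this formula or evaluate it at the input bounds; Monte Carlo needs no linearization and directly yields the full output distribution.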

  15. Visual Tilt Estimation for Planar-Motion Methods in Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    David Fleer


    Visual methods have many applications in mobile robotics problems, such as localization, navigation, and mapping. Some methods require that the robot moves in a plane without tilting. This planar-motion assumption simplifies the problem and can lead to improved results. However, tilting the robot violates this assumption and may cause planar-motion methods to fail; such a tilt should therefore be corrected. In this work, we estimate a robot's tilt relative to a ground plane from individual panoramic images. This estimate is based on the vanishing point of vertical elements, which commonly occur in indoor environments. We test the quality of two methods on images from several environments: an image-space method exploits several approximations to detect the vanishing point in a panoramic fisheye image, while the vector-consensus method uses a calibrated camera model to solve the tilt-estimation problem in 3D space. In addition, we measure the time required on desktop and embedded systems. We previously studied visual pose estimation for a domestic robot, including the effect of tilts, and use these earlier results to establish meaningful standards for the estimation error and time. Overall, we find the methods to be accurate and fast enough for real-time use on embedded systems. However, the tilt-estimation error increases markedly in environments containing relatively few vertical edges.

  16. A single-blood-sample method using inulin for estimating feline glomerular filtration rate. (United States)

    Katayama, M; Saito, J; Katayama, R; Yamagishi, N; Murayama, I; Miyano, A; Furuhama, K


    Application of a multisample method using inulin to estimate glomerular filtration rate (GFR) in cats is cumbersome. To establish a simplified procedure to estimate GFR in cats, a single-blood-sample method using inulin was compared with a conventional 3-sample method in a retrospective study of nine cats: 6 clinically healthy cats and 3 cats with spontaneous chronic kidney disease. Inulin was administered as an intravenous bolus at 50 mg/kg, and blood was collected 60, 90, and 120 minutes later for the 3-sample method. Serum inulin concentrations were determined colorimetrically by an autoanalyzer method. The GFR in the single-blood-sample method was calculated from the dose injected, serum concentration, sampling time, and a volume of distribution estimated from the data of the 3-sample method. An excellent correlation was observed (r = 0.99, P = .0001) between GFR values estimated by the single-blood-sample and 3-sample methods. The single-blood-sample method using inulin provides a practicable and ethical alternative for estimating glomerular filtration rate in cats. Copyright © 2012 by the American College of Veterinary Internal Medicine.

  17. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević


    The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on information about the camera view at the moment the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field of view. Since accurate measurement of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEMs) of the area of interest. Once an adequate number of points are matched, Levenberg-Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., the position and orientation of the camera.
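
    The final optimization step, finding the camera pose that best reprojects the matched points, can be sketched with a simple pinhole model and SciPy's Levenberg-Marquardt solver. All numbers and the camera model below are hypothetical assumptions for illustration (the paper's method also estimates field of view, which is held fixed here):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

F, CX, CY = 800.0, 640.0, 360.0  # assumed fixed intrinsics [pixels]

def project(params, pts3d):
    """Project world points through a pinhole camera.
    params = (cx, cy, cz, rx, ry, rz): camera position + Euler angles [rad]."""
    c, angles = params[:3], params[3:]
    r = Rotation.from_euler("xyz", angles).as_matrix()
    pc = (pts3d - c) @ r.T  # world frame -> camera frame
    return np.column_stack((F * pc[:, 0] / pc[:, 2] + CX,
                            F * pc[:, 1] / pc[:, 2] + CY))

def residuals(params, pts3d, pts2d):
    # Reprojection error in pixels, flattened for the solver.
    return (project(params, pts3d) - pts2d).ravel()

# Synthetic ground truth: slightly offset, slightly rotated camera,
# with 3D feature points (e.g. from orthophotos + DEM) in front of it.
rng = np.random.default_rng(1)
true_pose = np.array([1.0, 2.0, 0.5, 0.10, -0.05, 0.20])
pts3d = np.column_stack((rng.uniform(-5, 5, 12),
                         rng.uniform(-5, 5, 12),
                         rng.uniform(10, 20, 12)))
pts2d = project(true_pose, pts3d)

# Levenberg-Marquardt refinement from a crude initial guess.
fit = least_squares(residuals, x0=np.zeros(6),
                    args=(pts3d, pts2d), method="lm")
print(fit.x.round(3))
```

On this noiseless synthetic data the solver recovers the true pose; with real feature matches the residual reprojection error indicates georeferencing quality.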

  18. A simplified geometric stiffness in stability analysis of thin-walled structures by the finite element method

    Directory of Open Access Journals (Sweden)

    Ivo Senjanović


    Vibration analysis of a thin-walled structure can be performed with a consistent mass matrix determined by the shape functions of all degrees of freedom (d.o.f.) used for construction of the conventional stiffness matrix, or with a lumped mass matrix. In a similar way, the stability of a structure can be analysed with a consistent geometric stiffness matrix or a geometric stiffness matrix with lumped buckling load, related only to the rotational d.o.f. Recently, a simplified mass matrix has been constructed employing shape functions of in-plane displacements for plate deflection. In this paper the same approach is used for construction of a simplified geometric stiffness matrix. A beam element and triangular and rectangular plate elements are considered. Application of the new geometric stiffness is illustrated in the cases of a simply supported beam and a square plate. The same problems are solved with consistent and lumped geometric stiffness matrices, and the obtained results are compared with the analytical solution. Also, a combination of simplified and lumped geometric stiffness matrices is analysed in order to increase the accuracy of stability analysis.

  19. Application of the simplified J-estimation scheme Aramis to mismatching welds in CCP

    Energy Technology Data Exchange (ETDEWEB)

    Eripret, C.; Franco, C.; Gilles, P.


    J-based criteria give reasonable predictions of the failure behaviour of ductile cracked metallic structures, even if the material characterization may be sensitive to the size of the specimens. In cracked welds, however, this phenomenon, due to stress triaxiality effects, can be enhanced. Furthermore, the application of conventional methods of toughness measurement (ESIS or ASTM standards) has evidenced a strong influence of the portion of weld metal in the specimen. Several authors have shown the inadequacy of simplified J-estimation methods developed for homogeneous materials. These heterogeneity effects are mainly related to the mismatch ratio (ratio of weld metal yield strength to base metal yield strength) as well as to the geometrical parameter h/(W-a) (weld width over ligament size). In order to make decisive progress in this field, the Atomic Energy Commission (CEA), the PWR manufacturer FRAMATOME, and the French utility (EDF) have launched a large research program on the behaviour of cracked piping welds. As part of this program, a new J-estimation scheme, called ARAMIS, has been developed to account for the influence of both materials, i.e. base metal and weld metal, on the structural resistance of cracked welds. It has been shown that, when the mismatch is high and the ligament size is small compared to the weld width, a classical J-based method using the softer material's properties is very conservative. In contrast, the ARAMIS method provides a good estimate of J, because it predicts well the shift of the cracked weld limit load due to the presence of the weld. The influence of geometrical parameters such as crack size, weld width, or specimen length is properly accounted for. (authors). 23 refs., 8 figs., 1 tab., 1 appendix.

  20. Can matrix solid phase dispersion (MSPD) be more simplified? Application of solventless MSPD sample preparation method for GC-MS and GC-FID analysis of plant essential oil components. (United States)

    Wianowska, Dorota; Dawidowicz, Andrzej L


    This paper proposes and demonstrates the analytical capabilities of a new variant of matrix solid phase dispersion (MSPD) with a solventless blending step for the chromatographic analysis of plant volatiles. The results prove that the use of a solvent is redundant, as the sorption capacity of the octadecyl brush is sufficient for quantitative retention of volatiles from 9 plants differing in their essential oil composition. The extraction efficiency of the proposed simplified MSPD method is equivalent to that of the commonly applied MSPD variant with an organic dispersing liquid and to pressurized liquid extraction, a much more complex, technically advanced and highly efficient technique of plant extraction. The equivalence of these methods is confirmed by analysis of variance. The proposed solventless MSPD method is precise, accurate, and reproducible. The recovery of essential oil components estimated by the MSPD method exceeds 98%, which is satisfactory for analytical purposes. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport (United States)

    Mack, Robert J.


    During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature with the shocks causing these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. As in other weight-estimation methods, weights were determined at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.

  2. Statistical methods of estimating mining costs (United States)

    Long, K.R.


    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
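
The abstract above mentions re-estimating "Taylor's Rule" relating operating rate to ore tonnage. As a hedged illustration only: the classic form (Taylor, 1977) states that expected mine life in years is roughly 0.2 × T^0.25 for an ore body of T tonnes; the coefficients re-estimated in the USGS study are not given in the record, so the classic constants are used here.

```python
# Hedged sketch of the classic Taylor's Rule: life (years) ~ 0.2 * T**0.25,
# where T is total ore tonnage. The implied operating rate follows by
# dividing tonnage by life. Constants are the traditional ones, not the
# re-estimated USGS values mentioned in the record.

def taylor_life_years(tonnage: float) -> float:
    """Expected mine life in years for an ore body of `tonnage` tonnes."""
    return 0.2 * tonnage ** 0.25

def taylor_daily_rate(tonnage: float, operating_days: int = 350) -> float:
    """Implied ore production rate in tonnes per operating day."""
    return tonnage / (taylor_life_years(tonnage) * operating_days)

# A hypothetical 10-million-tonne deposit:
life = taylor_life_years(10_000_000)   # ≈ 11.2 years
rate = taylor_daily_rate(10_000_000)   # ≈ 2540 tonnes/day
```

The power-law form makes the rule scale-free: a 16-fold increase in tonnage only doubles the expected mine life.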

  3. A new rapid method for rockfall energies and distances estimation (United States)

    Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric


    Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope; they are generally based on a probabilistic rockfall modelling approach that allows the uncertainties associated with the rockfall phenomenon to be taken into account. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios generated by systematically varying the input parameters that are likely to affect the block trajectory, its energy and its distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed finding empirical laws that relate impact energies

  4. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien


    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
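
The record describes estimating traffic signal cycles from probe-vehicle transition times and scoring candidate timings. A hedged toy sketch of that idea (not the KAUST system itself, whose scoring function is not given here): vehicles start moving shortly after the light turns green, so their start times fall near a fixed offset modulo the cycle length, and a candidate cycle can be scored by how tightly the start times cluster modulo that cycle.

```python
import numpy as np

# Toy illustration: score a candidate cycle length by the mean resultant
# length of start times mapped onto a circle of that circumference.
# All timing numbers below are hypothetical.
def cycle_score(starts: np.ndarray, cycle: float) -> float:
    phase = 2 * np.pi * (starts % cycle) / cycle
    return np.abs(np.exp(1j * phase).mean())   # 1.0 = perfect clustering

true_cycle, offset = 90.0, 12.0                # hypothetical signal timing (s)
rng = np.random.default_rng(3)
starts = offset + true_cycle * np.arange(40) + rng.normal(0, 1.0, 40)

# Search range chosen to exclude sub-harmonics (divisors) of the cycle,
# which cluster equally well.
candidates = np.arange(50.0, 150.0, 0.5)
scores = [cycle_score(starts, c) for c in candidates]
best = candidates[int(np.argmax(scores))]      # ≈ 90.0
```

Note the ambiguity this sketch sidesteps: any divisor of the true cycle also produces tight clustering, which is one reason real systems combine several estimates as the abstract describes.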

  5. A Novel Residual Frequency Estimation Method for GNSS Receivers. (United States)

    Nguyen, Tu Thi-Thanh; La, Vinh The; Ta, Tung Hai


    In Global Navigation Satellite System (GNSS) receivers, residual frequency estimation methods are traditionally applied in the synchronization block to reduce the transient time from acquisition to tracking, or they are used within the frequency estimator to improve its accuracy in open-loop architectures. Current estimation methods have several disadvantages, including sensitivity to noise and a wide search space. This paper proposes a new residual frequency estimation method based on differential processing. Although the complexity of the proposed method is higher than that of traditional methods, it can lead to more accurate estimates without increasing the size of the search space.
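
As a hedged illustration of the differential-processing idea (a minimal textbook version, not the paper's exact estimator): the residual frequency of a complex correlator-output sequence can be read from the phase of successive-sample products, which avoids a grid search over candidate frequencies.

```python
import numpy as np

# Minimal differential frequency estimator: the lag-1 products
# x[n] * conj(x[n-1]) all share the phase 2*pi*f*Ts, so averaging them
# and taking the angle recovers f without any frequency search
# (valid while |f| < 1/(2*Ts)).
def differential_freq_estimate(x: np.ndarray, ts: float) -> float:
    acc = np.sum(x[1:] * np.conj(x[:-1]))    # average differential phasor
    return np.angle(acc) / (2 * np.pi * ts)  # Hz

ts = 1e-3                                    # hypothetical 1 ms spacing
f_res = 37.5                                 # hypothetical residual freq (Hz)
n = np.arange(100)
x = np.exp(2j * np.pi * f_res * n * ts)      # noise-free correlator outputs
f_hat = differential_freq_estimate(x, ts)    # ≈ 37.5
```

Averaging the phasors before taking the angle is what gives the estimator its noise robustness relative to differencing individual phases.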

  6. A Simplified Method of Assessing the Progress of a Cataract Based on Deterioration in Color Discrimination by Scattering Light

    Directory of Open Access Journals (Sweden)

    Masashi Iwamoto


    Cataract is one of the most typical age-related eye diseases. As a cataract progresses, the crystalline lens becomes hazier and visual ability deteriorates. Since the main cause of visual impairment is light scattered by the hazy lens, both brighter environments and hazier lenses worsen visual ability. In this study, error scores on the 50 Hue test were measured for pseudo-cataract subjects (young observers wearing foggy filters of known haze factor) as the intensity of the scattering light was increased. The rate of increase in the error score was larger for higher haze factors. As an easy method for assessing the progress of a cataract, we propose estimating the haze of the elderly crystalline lens from the obtained error scores of the 50 Hue test as a function of the intensity of the scattering light.

  7. Simplified method of selective spleen scintigraphy with Tc-99m-labeled erythrocytes: clinical applications. Concise communication

    Energy Technology Data Exchange (ETDEWEB)

    Armas, R.R.; Thakur, M.L.; Gottschalk, A.


    We report our initial clinical experience with a simplified spleen-imaging technique that requires no red cell washing or special kits. Thirty minutes after an injection of "cold" pyrophosphate containing 0.5 mg of stannous chloride or fluoride, a blood sample is drawn, 2 mCi of pertechnetate (Tc-99m) are added, the sample is incubated at 49 to 50 °C for 35 min and then reinjected into the patient. We have studied 13 patients with this technique and have found it useful in clarifying various questionable splenic abnormalities found on the sulfur colloid scan, as well as in detecting splenosis.

  8. Empirical Methods for Estimating Workload Capacity (United States)


    ...on exercising a model of the service system (THE GENERAL SERVICE SYSTEM MODEL) by subjecting it to a stream of input traffic and estimating or... join different radio nets as they move around or attempt to rejoin them. The antijamming radio fails less frequently and is harder to jam than the old one. However, it is much more time consuming to enter a net with the new radio than with the old one. Finally, if an old radio tries to contact a


    Directory of Open Access Journals (Sweden)

    azam zaka


    In this paper, we present some methods for estimating the parameters of the two-parameter power function distribution: the least squares method (LSM), the relative least squares method (RELS), and the ridge regression method (RR). The sampling behaviour of the estimates is examined by Monte Carlo simulation. To identify the best estimator among them, we use the total deviation (TD) and the mean squared error (MSE) as performance indices. We determine the best estimation method using different values of the parameters and different sample sizes.
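
As a hedged sketch of the least-squares idea on simulated data (the paper's exact regression forms and the RELS/RR variants are not reproduced here): for the power function distribution F(x) = (x/b)^p on 0 < x < b, taking logs of the empirical CDF gives a straight line whose slope and intercept yield the parameter estimates.

```python
import numpy as np

# Least-squares fit of the two-parameter power function distribution
# F(x) = (x/b)**p:  log F = p*log(x) - p*log(b), so a linear fit of
# log(empirical CDF) against log(x) recovers p from the slope and
# b from the intercept. Parameter values below are illustrative.
rng = np.random.default_rng(0)
p_true, b_true = 2.5, 4.0
u = rng.uniform(size=2000)
sample = b_true * u ** (1.0 / p_true)               # inverse-CDF sampling

x = np.sort(sample)
f_emp = (np.arange(1, len(x) + 1) - 0.5) / len(x)   # empirical CDF
slope, intercept = np.polyfit(np.log(x), np.log(f_emp), 1)
p_hat = slope                                       # ≈ p_true
b_hat = np.exp(-intercept / slope)                  # ≈ b_true
```

A Monte Carlo study such as the paper's would repeat this over many samples and compare the spread of (p_hat, b_hat) across estimation methods.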

  10. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt


    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently proposed to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods...

  11. Methods for Calculating a Simplified Hydrologic Source Term for Frenchman Flat Sensitivity Studies of Radionuclide Transport Away from Underground Nuclear Tests

    Energy Technology Data Exchange (ETDEWEB)

    Tompson, A; Zavarin, M; Bruton, C J; Pawloski, G A


    The purpose of this report is to provide an approach for the development of a simplified unclassified hydrologic source term (HST) for the ten underground nuclear tests conducted in the Frenchman Flat Corrective Action Unit (CAU) at the Nevada Test Site (NTS). It is being prepared in an analytic form for incorporation into a GOLDSIM (Golder Associates, 2000) model of radionuclide release and migration in the Frenchman Flat CAU. This model will be used to explore, in an approximate and probabilistic fashion, sensitivities of the 1,000-year radionuclide contaminant boundary (FFACO, 1996; 2000) to hydrologic and other related parameters. The total inventory (or quantity) of radionuclides associated with each individual test, regardless of its form and distribution, is referred to as the radiologic source term (RST) of that test. The subsequent release of these radionuclides over time into groundwater is referred to as the hydrologic source term (HST) of that test (Tompson, et al., 2002). The basic elements of the simplified hydrologic source term model include: (1) Estimation of the volumes of geologic material physically affected by the tests. (2) Identification, quantification, and distribution of the radionuclides of importance. (3) Development of simplified release and retardation models for these radionuclides in groundwater. The simplifications used in the current HST model are based upon more fundamental analyses that are too complicated for use in a GOLDSIM sensitivity study. These analyses are based upon complex, three-dimensional flow and reactive transport simulations summarized in the original CAMBRIC hydrologic source term model (Tompson et al., 1999), unclassified improvements of this model discussed in Pawloski et al. (2000), as well as more recent studies that are part of an ongoing model of the HST at the CHESHIRE test in Pahute Mesa (Pawloski et al., 2001).

  12. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari


    One of the major issues investors face in capital markets is deciding on an appropriate stock exchange for investing and selecting an optimal portfolio. This process is carried out through risk and expected-return assessment. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. However, the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper, introducing conditional value at risk (CVaR) as a risk measure in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange during the winter of 1392, covering the period from April of 1388 to June of 1393 (Iranian calendar). The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
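
A hedged sketch of the empirical (nonparametric) CVaR measure the record relies on; the portfolio-weight optimization itself is not reproduced. At level alpha, CVaR is the mean loss over the worst alpha fraction of outcomes.

```python
import numpy as np

# Empirical CVaR (expected shortfall): take the (1 - alpha) quantile of
# losses as the value at risk, then average the losses at or beyond it.
# The return series below is illustrative, not data from the study.
def empirical_cvar(returns: np.ndarray, alpha: float = 0.05) -> float:
    losses = -np.asarray(returns)
    var = np.quantile(losses, 1 - alpha)          # value at risk
    return losses[losses >= var].mean()           # mean of the worst tail

returns = np.array([0.02, -0.01, 0.03, -0.05, 0.01, -0.02, 0.04, -0.08,
                    0.00, 0.02])
cvar_20 = empirical_cvar(returns, alpha=0.2)      # 0.065
```

Because this estimator needs no distributional assumption, it remains meaningful for the markedly non-normal return series the abstract describes.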


    Directory of Open Access Journals (Sweden)

    Ashot Davtian


    Two methods for the estimation of the number of spherical particles per unit volume, NV, are discussed: the (physical) disector (Sterio, 1984) and Saltykov's estimator (Saltykov, 1950; Fullman, 1953). A modification of Saltykov's estimator is proposed which reduces the variance. Formulae for bias and variance are given for both the disector and the improved Saltykov estimator for the case of randomly positioned particles. They enable the comparison of the two estimators with respect to their precision in terms of mean squared error.

  14. Comparison of various methods for estimating wave incident angles ...

    African Journals Online (AJOL)

    Five different methods were examined for their suitability in estimating the inshore wave incident angles on a nearshore zone with a complex topography. Visual observation provided preliminary estimates. Two frequency independent methods and one frequency dependent method based on current meter measurements ...

  15. Advancing Methods for Estimating Cropland Area (United States)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.


    Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expanding use of crop oils in industrial products, and the growing emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service's Cropland Data Layer (CDL) soy map shows a strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1-to-1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype, which has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.

  16. Traveling Wave Resonance and Simplified Analysis Method for Long-Span Symmetrical Cable-Stayed Bridges under Seismic Traveling Wave Excitation

    Directory of Open Access Journals (Sweden)

    Zhong-ye Tian


    The seismic responses of a long-span cable-stayed bridge under uniform excitation and traveling wave excitation in the longitudinal direction are, respectively, computed. The numerical results show that the bridge’s peak seismic responses vary significantly as the apparent wave velocity decreases. Therefore, the traveling wave effect must be considered in the seismic design of long-span bridges. The bridge’s peak seismic responses do not vary monotonously with the apparent wave velocity due to the traveling wave resonance. A new traveling wave excitation method that can simplify the multisupport excitation process into a two-support excitation process is developed.

  17. Thermodynamic properties of organic compounds estimation methods, principles and practice

    CERN Document Server

    Janz, George J


    Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the

  18. Optical method of atomic ordering estimation

    Energy Technology Data Exchange (ETDEWEB)

    Prutskij, T. [Instituto de Ciencias, BUAP, Privada 17 Norte, No 3417, col. San Miguel Huyeotlipan, Puebla, Pue. (Mexico); Attolini, G. [IMEM/CNR, Parco Area delle Scienze 37/A - 43010, Parma (Italy); Lantratov, V.; Kalyuzhnyy, N. [Ioffe Physico-Technical Institute, 26 Polytekhnicheskaya, St Petersburg 194021, Russian Federation (Russian Federation)


    It is well known that within metal-organic vapor-phase epitaxy (MOVPE) grown semiconductor III-V ternary alloys atomically ordered regions are spontaneously formed during the epitaxial growth. This ordering leads to bandgap reduction and to valence bands splitting, and therefore to anisotropy of the photoluminescence (PL) emission polarization. The same phenomenon occurs within quaternary semiconductor alloys. While the ordering in ternary alloys is widely studied, for quaternaries there have been only a few detailed experimental studies of it, probably because of the absence of appropriate methods of its detection. Here we propose an optical method to reveal atomic ordering within quaternary alloys by measuring the PL emission polarization.

  19. Simplified Transient Hot-Wire Method for Effective Thermal Conductivity Measurement in Geo Materials: Microstructure and Saturation Effect

    Directory of Open Access Journals (Sweden)

    B. Merckx


    The thermal conductivity measurement by a simplified transient hot-wire technique is applied to geomaterials in order to show the relationships that can exist between effective thermal conductivity, texture, and moisture of the materials. After validation of the "one hot-wire" technique in water, toluene, and glass-bead assemblages, the investigations were performed (1) in glass-bead assemblages of different diameters in dried, water-saturated, and acetone-saturated states, in order to observe the role of grain size and saturation on the effective thermal conductivity, (2) in a compacted earth brick at different moisture states, and (3) in a lime-hemp concrete during the 110 days following its manufacture. The lime-hemp concrete allows measurements during the setting, desiccation and carbonation steps. The recorded ΔT versus ln(t) diagrams allow the calculation of one effective thermal conductivity in the continuous and homogeneous fluids and of two effective thermal conductivities in the heterogeneous solids. The first, measured at short acquisition times (<1 s), mainly depends on the contact between the wire and the grains, and thus on the microtexture and hydration state of the material. The second, measured at longer acquisition times, characterizes the mean effective thermal conductivity of the material.
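
A hedged sketch of the evaluation behind such ΔT-versus-ln(t) diagrams, assuming the standard ideal line-source model (the record's own correction factors, if any, are not reproduced): the wire temperature rise grows as ΔT ≈ (q / 4πk) · ln t, so the effective conductivity k is read from the slope of ΔT against ln t.

```python
import numpy as np

# Transient hot-wire evaluation (ideal line-source approximation):
# DeltaT(t) ~ (q / (4*pi*k)) * ln(t) + C  =>  k = q / (4*pi*slope).
# q, k_true and the time window below are illustrative values.
def conductivity_from_slope(q: float, slope: float) -> float:
    return q / (4 * np.pi * slope)

q = 1.2                                   # heat input per unit length, W/m
t = np.linspace(1.0, 10.0, 50)            # acquisition times, s
k_true = 0.6                              # W/(m*K), e.g. water-like
delta_t = q / (4 * np.pi * k_true) * np.log(t) + 0.3   # synthetic record

slope, _ = np.polyfit(np.log(t), delta_t, 1)
k_hat = conductivity_from_slope(q, slope)  # recovers ≈ 0.6
```

Fitting two separate slopes over the short-time and long-time windows is what yields the two effective conductivities the abstract distinguishes.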

  20. Control and estimation methods over communication networks

    CERN Document Server

    Mahmoud, Magdi S


    This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features: an overall assessment of recent and current fault-tolerant control algorithms; treatment of several issues arising at the junction of control and communications; key concepts followed by their proofs and efficient computational methods for their implementation; and simulation examples (including TrueTime simulations) to...

  1. Bayesian methods to estimate urban growth potential (United States)

    Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.


    Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.

  2. Novel Method for 5G Systems NLOS Channels Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Vladeta Milenkovic


    For the development of new 5G systems to operate in mm-wave bands, there is a need for accurate radio propagation modelling in these bands. In this paper a novel approach to NLOS channel parameter estimation is presented. Estimation is performed on the basis of the LCR (level crossing rate) performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of the ML and moment-method estimation approaches.

  3. Simplified sample preparation method for protein identification by matrix-assisted laser desorption/ionization mass spectrometry: in-gel digestion on the probe surface

    DEFF Research Database (Denmark)

    Stensballe, A; Jensen, Ole Nørregaard


    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) is used as the first protein screening method in many laboratories because of its inherent simplicity, mass accuracy, sensitivity and relatively high sample throughput. We present a simplified sample preparation method for MALDI-MS that enables in-gel digestion on the probe surface, achieving a rate of protein identification similar to that obtained by the traditional protocols for in-gel digestion and MALDI peptide mass mapping of human proteins, i.e. approximately 60%. The overall performance of the novel on-probe digestion method is comparable with that of the standard in-gel sample preparation protocol while being less labour intensive and more cost-effective due to minimal consumption of reagents, enzymes and consumables. Preliminary data obtained on a MALDI quadrupole-TOF tandem mass spectrometer demonstrated the utility of the on-probe digestion protocol for peptide mass mapping and peptide...


    Directory of Open Access Journals (Sweden)

    Y.À. Korobeynikov


    The paper presents the author's methods of estimating the regional professional mobile radio market potential, which belongs to high-tech B2B markets. These methods take into consideration such market peculiarities as the great range and complexity of products, technological constraints, and the infrastructure development required for the operation of the technological systems. The paper gives an estimate of the professional mobile radio potential in Perm region. This estimate is already used by one of the systems integrators for its strategy development.

  5. General method of boundary correction in kernel regression estimation

    African Journals Online (AJOL)

    Kernel estimators of both density and regression functions are not consistent near the finite end points of their supports. In other words, boundary effects seriously affect the performance of these estimators. In this paper, we combine the transformation and the reflection methods in order to introduce a new general method of ...
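
As a hedged illustration of the boundary effect and of the reflection device mentioned above (only the reflection half is sketched; the record's combined transformation-plus-reflection method is not reproduced): a plain Gaussian kernel density estimate on [0, ∞) loses roughly half its mass at the boundary, and reflecting the data about 0 restores it.

```python
import numpy as np

# Plain Gaussian KDE evaluated on a grid.
def kde(x_grid, data, h):
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

# Reflection-corrected KDE for a support bounded below at 0:
# add the estimate built from the mirrored sample -data.
def kde_reflected(x_grid, data, h):
    return kde(x_grid, data, h) + kde(x_grid, -data, h)

rng = np.random.default_rng(1)
data = rng.exponential(1.0, size=5000)    # true density at 0 is 1.0
grid = np.array([0.0])
h = 0.2

plain = kde(grid, data, h)[0]             # ≈ 0.43: halved at the boundary
fixed = kde_reflected(grid, data, h)[0]   # ≈ 0.86: boundary mass restored
```

At the boundary point itself the reflected estimate is exactly twice the plain one; the remaining gap to the true value 1.0 is ordinary smoothing bias from h, which reflection alone does not remove (hence the transformation step in the record's method).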

  6. Carbon footprint: current methods of estimation. (United States)

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker


    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing grievous global warming and its associated consequences. Following the rule that only the measurable is manageable, measurement of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculation are still evolving, and it is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in the definitions and calculations of carbon footprints among studies. There are disagreements over the selection of gases and the order of emissions to be covered in footprint calculations. Greenhouse gas accounting standards are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Since carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications, its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.

  7. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup


    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  8. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen


    A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require statistical knowledge of the channel in advance and avoids the inversion of a large matrix by using the fast Fourier transform (FFT). Therefore, the computational complexity can be reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and the conventional LMMSE estimator have been derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
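
A hedged sketch of the *conventional* LMMSE estimator that the record's fast method approximates (the FFT acceleration itself is not reproduced; the 64-subcarrier, 4-tap setup is illustrative): the least-squares pilot estimate is smoothed with the channel's frequency-domain correlation matrix.

```python
import numpy as np

# Conventional LMMSE channel estimation for OFDM:
#   h_lmmse = R_hh @ inv(R_hh + sigma2 * I) @ h_ls
# where R_hh is the subcarrier correlation of the channel. For a channel
# with L unit-power taps, R_hh = F_L @ F_L^H with F_L the first L columns
# of the DFT matrix.
rng = np.random.default_rng(2)
n, n_taps, sigma2 = 64, 4, 0.1
taps = (rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)) / np.sqrt(2)
h = np.fft.fft(taps, n)                        # true frequency response

noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
h_ls = h + noise                               # least-squares pilot estimate

F = np.fft.fft(np.eye(n))[:, :n_taps]          # first L columns of DFT matrix
R = F @ F.conj().T                             # E[h h^H] for unit-power taps
h_lmmse = R @ np.linalg.solve(R + sigma2 * np.eye(n), h_ls)

mse_ls = np.mean(np.abs(h_ls - h) ** 2)
mse_lmmse = np.mean(np.abs(h_lmmse - h) ** 2)  # far below mse_ls
```

The `solve` call is the large-matrix inversion the record's method avoids: because R is built from DFT columns, the same smoothing can be carried out with FFTs instead.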

  9. Joint 2-D DOA and Noncircularity Phase Estimation Method

    Directory of Open Access Journals (Sweden)

    Wang Ling


    Full Text Available Classical joint estimation methods require a large amount of computation and multidimensional search. To avoid these shortcomings, a novel joint two-dimensional (2-D) direction-of-arrival (DOA) and noncircularity phase estimation method based on three orthogonal linear arrays is proposed. The 3-D parameter estimation problem can be transformed into three parallel 2-D estimation problems according to the structure of the three orthogonal linear arrays. Furthermore, each 2-D estimation problem can be reduced to a 1-D problem by simultaneously using the rotational invariance property of the signal subspace and the orthogonality of the noise subspace in every subarray. Ultimately, the algorithm jointly estimates and pairs the parameters with a single eigendecomposition of an extended covariance matrix. The proposed algorithm is applicable in low-SNR and small-snapshot scenarios and can estimate 2(M − 1) signals. Simulation results verify that the proposed algorithm is effective.

  10. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.


    In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method reduces the problem to a system of algebraic equations that is linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave equation and the fifth-order KdV equation are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
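    The core trick can be sketched for a simpler case than the paper treats: a constant coefficient $c$ in the wave equation, estimated from measurements of $u$ alone.

```latex
% Estimate c in u_{tt} = c\,u_{xx} from measurements of u.
% Choose a modulating function \phi(x,t) that vanishes, together with its
% first derivatives, on the boundary of [0,X] \times [0,T].  Multiplying the
% PDE by \phi and integrating by parts twice in each variable moves all
% derivatives off the (noisy) measurement u and onto the known \phi:
\int_0^T\!\!\int_0^X u_{tt}\,\phi\,dx\,dt
   = \int_0^T\!\!\int_0^X u\,\phi_{tt}\,dx\,dt,
\qquad
\int_0^T\!\!\int_0^X u_{xx}\,\phi\,dx\,dt
   = \int_0^T\!\!\int_0^X u\,\phi_{xx}\,dx\,dt,
% so the PDE becomes an algebraic equation linear in c:
\int_0^T\!\!\int_0^X u\,\phi_{tt}\,dx\,dt
   \;=\; c \int_0^T\!\!\int_0^X u\,\phi_{xx}\,dx\,dt .
% A family \phi_1,\dots,\phi_N of modulating functions yields an
% overdetermined linear system, solvable by least squares.
```

    The same integration-by-parts device extends to space–time-dependent unknowns, which is the case the paper actually addresses.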

  11. Fuel Consumption Estimation System and Method with Lower Cost

    Directory of Open Access Journals (Sweden)

    Chi-Lun Lo


    Full Text Available This study proposes a lower-cost fuel consumption estimation system and method. On-board units report vehicle speed, and user devices send fuel information to a data analysis server. The server then uses the proposed method to estimate fuel consumption from driver behaviour, without fuel sensors, for cost savings. The estimation method is based on a genetic algorithm, which generates gene sequences and applies crossover and mutation to obtain a well-adapted gene sequence. This gene sequence encodes the fuel consumption associated with each pattern of driver behaviour. Practical experimental results indicated that the accuracy of the proposed fuel consumption estimation method was about 95.87%.

  12. Joint Pitch and DOA Estimation Using the ESPRIT method

    DEFF Research Database (Denmark)

    Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom


    In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and the ESPRIT method, based on subspace techniques exploiting the invariance property in the time domain, is first used to estimate the pitch frequencies of the multiple harmonic signals. Using the estimated pitch frequencies, DOA estimates based on the ESPRIT method are then obtained from the shift-invariance structure in the spatial domain. Compared to existing state-of-the-art algorithms, the proposed ESPRIT-based method requires no 2-D search and is therefore computationally more efficient, while performing similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed …
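    As a minimal illustration of the spatial-domain step, the sketch below applies ESPRIT to a plain narrowband uniform linear array; the paper's spatio-temporal multi-pitch model is not reproduced, and the array size, source angles, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 8, 200                 # sensors, snapshots
d = 0.5                       # element spacing in wavelengths
doas = np.deg2rad([-20.0, 35.0])

# Narrowband ULA data model: X = A S + noise
A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(doas)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# ESPRIT: shift invariance between the first and last M-1 sensors
R = X @ X.conj().T / N
_, V = np.linalg.eigh(R)
Es = V[:, -2:]                                   # signal subspace (2 sources)
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
est = np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi * d)))
print("estimated DOAs (deg):", np.sort(est))
```

    The eigenvalues of `Phi` carry the phase shifts between adjacent sensors, so no grid search over angle is needed — the property the paper exploits in both the temporal and spatial domains.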

  13. Phase-Inductance-Based Position Estimation Method for Interior Permanent Magnet Synchronous Motors

    Directory of Open Access Journals (Sweden)

    Xin Qiu


    Full Text Available This paper presents a phase-inductance-based position estimation method for interior permanent magnet synchronous motors (IPMSMs). From the characteristics of the phase inductance of IPMSMs, the corresponding relationship between rotor position and phase inductance is obtained. To eliminate the effect of the zero-sequence component of phase inductance and to reduce the rotor position estimation error, the phase inductance difference is employed. Through iterative computation of inductance vectors, the position plane is further subdivided, and the rotor position is extracted by comparing the amplitudes of the inductance vectors. To reduce the consumption of computing resources and increase practicability, a simplified implementation is also investigated. In this method, the rotor position information is obtained easily, with a few basic math operations and logical comparisons of phase inductances, without any coordinate transformation or trigonometric function calculation. Based on this position estimation method, a field-oriented control (FOC) strategy is established, and its detailed implementation is also provided. A series of experimental results from a prototype demonstrates the correctness and feasibility of the proposed method.

  14. Improved dose-calculation accuracy in proton treatment planning using a simplified Monte Carlo method verified with three-dimensional measurements in an anthropomorphic phantom (United States)

    Hotta, Kenji; Kohno, Ryosuke; Takada, Yoshihisa; Hara, Yousuke; Tansho, Ryohei; Himukai, Takeshi; Kameoka, Satoru; Matsuura, Taeko; Nishio, Teiji; Ogino, Takashi


    Treatment planning for proton tumor therapy requires a fast and accurate dose-calculation method. We have implemented a simplified Monte Carlo (SMC) method in the treatment planning system of the National Cancer Center Hospital East for the double-scattering beam delivery scheme. The SMC method takes into account the scattering effect in materials more accurately than the pencil beam algorithm by tracking individual proton paths. We confirmed that the SMC method reproduced measured dose distributions in a heterogeneous slab phantom better than the pencil beam method. When applied to a complex anthropomorphic phantom, the SMC method reproduced the measured dose distribution well, satisfying an accuracy tolerance of 3 mm and 3% in the gamma index analysis. The SMC method required approximately 30 min to complete the calculation over a target volume of 500 cc, much less than the time required for the full Monte Carlo calculation. The SMC method is a candidate for a practical calculation technique with sufficient accuracy for clinical application.

  15. Procedimiento para el cálculo de los parámetros de un modelo térmico simplificado del motor asincrónico Parameter estimation procedure for an asynchronous motor simplified thermal model

    Directory of Open Access Journals (Sweden)

    Julio R Gómez Sarduy


    Full Text Available In this paper, a method is presented for estimating the conductances and capacitances of a simplified thermal model of the asynchronous motor, using a low-invasiveness technique. The procedure predicts the stator temperature rise of the asynchronous motor both in dynamic regimes and under thermal steady-state conditions. It is based on parameter estimation against a reference model, with a genetic algorithm (GA) as the optimizer. The thermal model parameters are thereby obtained with a simpler test than required by other, more complex experimental methods or by analytical calculations based on design data. The proposed procedure can be carried out under typical industrial conditions, which makes it attractive for the thermal analysis of these machines. The method is validated on a case study reported in the literature and applied to a real industrial case, achieving good accuracy.

  16. Application of the Simplified Dow Chemical Company Relative Ranking Hazard Assessment Method for Air Combat Command Bases (United States)


    … and Operability Studies (HAZOP). Both of these methods use systematic ways of considering the consequences of unexpected events. In the What-If Method … problems and all probable consequences, assigning a probability of hazard to each consequence based on the probability of occurrence (Davis 1987:50). HAZOP …

  17. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro


    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models is available to ecologists; however, it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden …
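    The first approach — partitioning the state-space into a finite grid and filtering with the resulting hidden-state machinery — can be shown in miniature. A plain Gaussian random-walk state is assumed here instead of the theta-logistic model, and the grid size and noise levels are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Simulate a 1-D random-walk state with noisy observations
T, q, r = 100, 0.1, 0.5            # steps, process sd, observation sd
x = np.cumsum(rng.normal(0, q, T))
y = x + rng.normal(0, r, T)

# Grid filter: discretize the state-space into K cells
grid = np.linspace(y.min() - 2, y.max() + 2, 400)
P = norm.pdf(grid[None, :], loc=grid[:, None], scale=q)   # transition kernel
P /= P.sum(axis=1, keepdims=True)                         # rows sum to one

belief = np.full(grid.size, 1.0 / grid.size)              # flat prior
est = np.empty(T)
for t in range(T):
    belief = belief @ P                                   # predict
    belief *= norm.pdf(y[t], loc=grid, scale=r)           # update with y[t]
    belief /= belief.sum()
    est[t] = grid @ belief                                # posterior mean

print("RMSE filter :", np.sqrt(np.mean((est - x) ** 2)))
print("RMSE raw obs:", np.sqrt(np.mean((y - x) ** 2)))
```

    For a nonlinear model like the theta-logistic, only the transition kernel changes; the predict/update recursion is identical, which is what makes the discretization approach attractive despite its grid cost.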

  18. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia


    From the 4th quarter of 2014 onwards, the Portuguese Labour Force Survey started geo-referencing its sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The survey selects, according to pre-established sampling criteria, a certain number of dwellings across the nation and records the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate national unemployment figures. Recently, there has been increased interest in estimating these figures for smaller areas. Direct estimation methods, owing to the reduced sample sizes in small areas, tend to produce fairly large sampling variation; therefore model-based methods, which tend to …

  19. A simple method for estimating thermal response of building ...

    African Journals Online (AJOL)

    A simple method for estimating thermal response of building materials in tropical climate. ... It is concluded from the model's estimates that interior temperatures for thermal comfort can be realized through the appropriate application of passive systems. Global Journal of Pure and Applied Sciences Volume , No 1 January ...

  20. Stability estimates for hp spectral element methods for general ...

    Indian Academy of Sciences (India)

    We establish basic stability estimates for a non-conforming h-p spectral element method which allows for simultaneous mesh refinement and variable polynomial degree. The spectral element functions are …

  1. Methods for design flood estimation in South Africa

    African Journals Online (AJOL)


    Jul 4, 2012 ... The estimation of design floods is necessary for the design of hydraulic structures and to quantify the risk of failure of the structures. Most of the methods used for design flood estimation in South Africa were developed in the late 1960s and early 1970s and are in need of updating with more than 40 years of ...

  2. Validity of common ultrasound methods of fetal weight estimation in ...

    African Journals Online (AJOL)

    Abstract. Background: Accuracy of some ultrasound equations used in our locality for fetal weight estimation is doubtful. Objective: To assess the accuracy of common ultrasound equations used for fetal weight estimation. Subjects and Methods: A longitudinal study was conducted on selected Nigerian obstetric population at ...

  3. Methods for design flood estimation in South Africa | Smithers ...

    African Journals Online (AJOL)

    The estimation of design floods is necessary for the design of hydraulic structures and to quantify the risk of failure of the structures. Most of the methods used for design flood estimation in South Africa were developed in the late 1960s and early 1970s and are in need of updating with more than 40 years of additional data ...

  4. On estimation methods and test for proportional hazards ...

    African Journals Online (AJOL)

    This work compared three estimation methods to handle tied survival time data under the semiparametric Cox proportional hazards model framework (the Exact, Breslow and Efron partial likelihood) and also two parametric proportional hazards models (the Exponential and Weibull) which utilized full likelihood estimation ...

  5. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey


    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  6. Validity of common ultrasound methods of fetal weight estimation in ...

    African Journals Online (AJOL)

    Background: Accuracy of some ultrasound equations used in our locality for fetal weight estimation is doubtful. Objective: To assess the accuracy of common ultrasound equations used for fetal weight estimation. Subjects and Methods: A longitudinal study was conducted on selected Nigerian obstetric population at Central ...

  7. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars

    Directory of Open Access Journals (Sweden)

    Yi-Xiong Zhang


    Full Text Available Traditional monopulse angle estimation is mainly based on phase-comparison and amplitude-comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, yet angle estimation for wideband signals has received little attention in previous work. As noise in wideband radars has a larger bandwidth than in narrowband radars, the challenge lies in accumulating energy from the high-resolution range profile (HRRP) of the monopulse channels. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from the different scatterers of a target, we propose a cross-correlation operation, which achieves good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the angle estimation problem is converted into estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate performance similar to that of the traditional amplitude-comparison method, showing that the proposed method is a viable alternative. With the proposed method, future radars may need only wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen anti-jamming capability. More importantly, the angle estimate does not become ambiguous at arbitrary angles, which significantly extends the usable angle range in wideband radars.

  8. Surface renewal method for estimating sensible heat flux | Mengistu ...

    African Journals Online (AJOL)

    For short canopies, latent energy flux may be estimated using a shortened surface energy balance from measurements of sensible and soil heat flux and the net irradiance at the surface. The surface renewal (SR) method for estimating sensible heat, latent energy, and other scalar fluxes has the advantage over other ...

  9. Comparison of estimated creatinine clearance among five formulae (Cockroft–Gault, Jelliffe, Sanaka, simplified 4-variable MDRD and DAF) and the 24hours-urine-collection creatinine clearance (United States)

    Diamandopoulos, A; Goudas, P; Arvanitis, A


    Background: GFR estimation is of major importance in everyday clinical practice. Usually it is done using one of the many eGFR equations available. In this study we compared in our population four widespread eGFR equations and our own empirical eGFR, with creatinine clearance calculated through a timed urine collection. Patients and methods: We collected laboratory data of 907 patients from our clinic and outpatient department through a ten-year period and statistically compared the eGFR results between them and with the timed urine collection creatinine clearances. Results: All eGFR equations gave acceptable approximations to the timed urine collection creatinine clearances. However, in different subpopulations some equations did better than others, without any clear advantage of any equation overall. Surprisingly, our empirical equation named DAF also gave acceptable approximations regardless of age, weight and sex of the patient. Conclusions: In our population our empirical eGFR method (DAF) gave satisfactory results regarding the monitoring of renal function, compared with four other eGFR methods. We suggest that it could provide a very fast and easy to use means of eGFR calculation. PMID:20596264
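    Of the five formulae compared, the Cockcroft–Gault equation is the simplest to compute by hand. A minimal implementation of its standard published form follows (the study's own empirical DAF equation is not given in the abstract and is not reproduced here).

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance (mL/min), Cockcroft-Gault (1976)."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl   # 15% reduction for women

# 60-year-old, 72 kg, serum creatinine 1.0 mg/dL
print(round(cockcroft_gault(60, 72, 1.0, False), 1))   # 80.0 mL/min
```

    Note the units: serum creatinine in mg/dL; multiply by 88.4 to convert from µmol/L before use.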

  10. A bootstrap method for estimating uncertainty of water quality trends (United States)

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura


    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS-estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.
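    The WBT itself lives in the EGRETci R-package; as a generic illustration of bootstrap uncertainty for a water-quality trend slope (a plain residual bootstrap on a linear trend, not WRTDS), consider:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 20-year annual concentration record with a weak downward trend
years = np.arange(2000, 2020)
conc = 5.0 - 0.04 * (years - 2000) + rng.normal(0, 0.3, years.size)

def slope(y, c):
    return np.polyfit(y, c, 1)[0]

obs = slope(years, conc)

# Residual bootstrap: refit the slope on fitted values + resampled residuals
fit = np.polyval(np.polyfit(years, conc, 1), years)
resid = conc - fit
boot = np.array([slope(years, fit + rng.choice(resid, resid.size, replace=True))
                 for _ in range(2000)])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope = {obs:.3f} per year, 95% CI = ({lo:.3f}, {hi:.3f})")
```

    The WBT replaces the linear fit with the full WRTDS surface and a block resampling scheme suited to serially correlated water-quality data, but the logic — resample, refit, read the spread — is the same.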

  11. Methods of multicriterion estimations in system total quality management

    Directory of Open Access Journals (Sweden)

    Nikolay V. Diligenskiy


    Full Text Available In this article, the method of multicriterion comparative efficiency estimation (Data Envelopment Analysis) and the possibility of its application in a total quality management system are considered.

  12. Estimation of national security in air space use management


    Directory of Open Access Journals (Sweden)

    A. M. Baranovskiy


    Full Text Available The article presents a method for estimating national security in air space use management, considering the safety, economy, and regularity of air traffic with respect to defensive capacity.

  13. Comparison of estimation methods for fitting Weibull distribution

    African Journals Online (AJOL)


    … Quercus robur L.) stands in northwest Spain with the beta distribution. Investigación Agraria: Sistemas y Recursos Forestales 17(3): 271–281. Comparison of estimation methods for fitting Weibull distribution to the natural …
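    Two of the estimators typically compared for fitting a Weibull distribution to diameter data — maximum likelihood and the method of moments — can be sketched as follows. The sample size, true parameters, and forestry framing are illustrative assumptions.

```python
import numpy as np
from math import gamma
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(3)
shape_true, scale_true = 2.5, 18.0        # e.g. tree diameters (cm)
d = stats.weibull_min.rvs(shape_true, scale=scale_true, size=500,
                          random_state=rng)

# Maximum likelihood (location fixed at 0)
c_mle, _, scale_mle = stats.weibull_min.fit(d, floc=0)

# Method of moments: solve for the shape from the coefficient of variation,
# since CV(c) = sqrt(Gamma(1+2/c)/Gamma(1+1/c)^2 - 1) depends on c alone
cv = d.std() / d.mean()
f = lambda c: np.sqrt(gamma(1 + 2 / c) / gamma(1 + 1 / c) ** 2 - 1) - cv
c_mom = brentq(f, 0.2, 20)
scale_mom = d.mean() / gamma(1 + 1 / c_mom)

print(f"MLE: shape={c_mle:.2f} scale={scale_mle:.2f}")
print(f"MoM: shape={c_mom:.2f} scale={scale_mom:.2f}")
```

    Both estimators are consistent; MLE is usually more efficient at moderate sample sizes, which is the kind of difference such comparison studies quantify.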

  14. Simplified methods for assessment of renal function as the ratio of glomerular filtration rate to extracellular fluid volume

    DEFF Research Database (Denmark)

    Jødal, Lars; Brøchner-Mortensen, Jens


    … into contact with the ‘regulator’ (i.e. the kidneys). Aim: The aim of the present study was as follows: to analyse two published calculation methods for determining ECV and GFR/ECV; to develop a new simple and accurate formula for determining ECV; and to compare and evaluate these methods. Materials and methods: GFR was determined as 51Cr-EDTA clearance. The study comprised 128 individuals (35 women, 66 men and 27 children) with a full 51Cr-EDTA plasma concentration curve, determined from injection until 4–5 h p.i. Reference values for GFR and ECV were calculated from the full curve. One-pool approximations Cl1 and V1 were calculated using only the final-slope curve. Four calculation methods were compared: simple one-pool values; GFR/ECV according to Peters and colleagues; ECV according to Brøchner-Mortensen (BM); and ECV according to a new method (JBM): y = 2x − 1, where x = Cl1/Cl and y = V1/ECV.

  15. Estimation of Open Boundary Conditions for an Internal Tidal Model with Adjoint Method: A Comparative Study on Optimization Methods

    Directory of Open Access Journals (Sweden)

    Haibo Chen


    Full Text Available Based on an internal tidal model, the practical performance of the limited-memory BFGS (L-BFGS) method and two gradient descent (GD) methods (the normal one, with Wolfe's line search, and a simplified one) is investigated computationally through a series of idealized experiments in which the open boundary conditions (OBCs) are inverted by assimilating interior observations with the adjoint method. When the observations closer to the unknown boundary are included in the assimilation, the L-BFGS method performs best. Compared with the simplified GD method, the normal one does use fewer iterations to reach a satisfactory solution, but its advantage over the simplified one is much smaller than expected. When only observations further from the unknown boundary are assimilated, the simplified GD method instead performs best, whereas the performance of the other two methods is unsatisfactory. The advanced L-BFGS algorithm and Wolfe's line search still need improvement when applied to such practical cases. The simplified GD method, which is controllable and easy to implement, should be seriously considered as a choice, especially when the classical advanced optimization techniques fail or perform poorly.
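    The contrast between L-BFGS and a fixed-step "simplified" gradient descent is easy to reproduce on a standard test function (the Rosenbrock function here, not the tidal model; the step size and iteration count are illustrative).

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])

# Quasi-Newton: limited-memory BFGS with the analytic gradient
res = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")

# "Simplified" gradient descent: fixed small step, no line search
x = x0.copy()
for _ in range(50000):
    x -= 5e-4 * rosen_der(x)

print("L-BFGS-B:", res.x, " function evals:", res.nfev)
print("fixed-step GD after 50000 steps:", x)
```

    L-BFGS reaches the minimum in a handful of iterations by approximating curvature, while fixed-step GD creeps along the curved valley — yet, as the entry above notes, the simple method's robustness and controllability can matter more than iteration counts in ill-conditioned inverse problems.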

  16. Recent Developments in the Methods of Estimating Shooting Distance


    Zeichner, Arie; Glattstein, Baruch


    A review of developments during the past 10 years in the methods of estimating shooting distance is provided. This review discusses the examination of clothing targets, cadavers, and exhibits that cannot be processed in the laboratory. The methods include visual/microscopic examinations, color tests, and instrumental analysis of the gunshot residue deposits around the bullet entrance holes. The review does not cover shooting distance estimation from shotguns that fired pellet loads.

  17. Recent Developments in the Methods of Estimating Shooting Distance

    Directory of Open Access Journals (Sweden)

    Arie Zeichner


    Full Text Available A review of developments during the past 10 years in the methods of estimating shooting distance is provided. This review discusses the examination of clothing targets, cadavers, and exhibits that cannot be processed in the laboratory. The methods include visual/microscopic examinations, color tests, and instrumental analysis of the gunshot residue deposits around the bullet entrance holes. The review does not cover shooting distance estimation from shotguns that fired pellet loads.

  18. A simplified method for rapid quantification of intracellular nucleoside triphosphates by one-dimensional thin-layer chromatography

    DEFF Research Database (Denmark)

    Jendresen, Christian Bille; Kilstrup, Mogens; Martinussen, Jan


    … -pyrophosphate (PRPP), and inorganic pyrophosphate (PPi) in cell extracts. The method uses one-dimensional thin-layer chromatography (TLC) and radiolabeled biological samples. Nucleotides are resolved at the level of ionic charge in an optimized acidic ammonium formate and chloride solvent, permitting …

  19. Development of a new simplified load testing method of a pile. Kui no shinsaika shikenho no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Aoki, H.; Maruyama, O. (Japan Railway Construction Public Corp., Tokyo (Japan)); Fujioka, T. (Chiyoda Corp., Tokyo (Japan)); Kato, H. (Daido Concrete Co. Ltd., Tokyo (Japan))


    A vertical loading test is preferable for verifying the bearing capacity of a pile. However, the conventional standard loading test is not easy to use because of the great expense of reaction piles and a loading device. Accordingly, a new load testing method was developed. The method applies a load with a jack mounted in advance near the pile tip, and measures the load on the jack, its vertical displacement, and the strain and displacement in the pile, including at the pile head, at each depth. From these, the relationship between the circumferential friction force at each part of the pile and the displacement, and the relationship between the jack load and the downward displacement, are derived; these are converted, using a load transmission analysis, into the relationship between the load on the pile head under pile-head loading and the amount of settlement. The method requires no reaction piles or loading girders, can measure the circumferential friction force and the tip bearing capacity separately, and is safe, quick to execute, and inexpensive. The jacks include types that are recovered after a test and types that are not. The new load testing method has been used on 18 different kinds of piles, in addition to a verification test in sandy alluvial-fan areas between Takasaki and Karuizawa on the planned Hokuriku Shinkansen line. 11 figs.

  20. A Channelization-Based DOA Estimation Method for Wideband Signals. (United States)

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping


    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method isolates signals in different bandwidths and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Moreover, the parallel processing architecture makes the method easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance. The results verify the effectiveness of the proposed method.

  1. A novel method for estimating distributions of body mass index. (United States)

    Ng, Marie; Liu, Patrick; Thomson, Blake; Murray, Christopher J L


    Understanding trends in the distribution of body mass index (BMI) is a critical aspect of monitoring the global overweight and obesity epidemic. Conventional population health metrics often only focus on estimating and reporting the mean BMI and the prevalence of overweight and obesity, which do not fully characterize the distribution of BMI. In this study, we propose a novel method which allows for the estimation of the entire distribution. The proposed method utilizes the optimization algorithm, L-BFGS-B, to derive the distribution of BMI from three commonly available population health statistics: mean BMI, prevalence of overweight, and prevalence of obesity. We conducted a series of simulations to examine the properties, accuracy, and robustness of the method. We then illustrated the practical application of the method by applying it to the 2011-2012 US National Health and Nutrition Examination Survey (NHANES). Our method performed satisfactorily across various simulation scenarios yielding empirical (estimated) distributions which aligned closely with the true distributions. Application of the method to the NHANES data also showed a high level of consistency between the empirical and true distributions. In situations where there were considerable outliers, the method was less satisfactory at capturing the extreme values. Nevertheless, it remained accurate at estimating the central tendency and quintiles. The proposed method offers a tool that can efficiently estimate the entire distribution of BMI. The ability to track the distributions of BMI will improve our capacity to capture changes in the severity of overweight and obesity and enable us to better monitor the epidemic.
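    The idea of recovering a full distribution from three summary statistics with L-BFGS-B (the optimizer named above) can be sketched as follows. A lognormal family and the target values are illustrative assumptions; the paper's actual distribution family and NHANES data are not reproduced.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Target summary statistics (illustrative, not NHANES):
# mean BMI, P(BMI >= 25), P(BMI >= 30)
mean_bmi, p_over, p_obese = 27.0, 0.60, 0.30

def loss(theta):
    mu, sigma = theta
    d = stats.lognorm(s=sigma, scale=np.exp(mu))
    return ((d.mean() - mean_bmi) ** 2
            + (d.sf(25) - p_over) ** 2
            + (d.sf(30) - p_obese) ** 2)

res = minimize(loss, x0=[np.log(27.0), 0.2], method="L-BFGS-B",
               bounds=[(np.log(10), np.log(50)), (0.01, 1.0)])

mu, sigma = res.x
d = stats.lognorm(s=sigma, scale=np.exp(mu))
print(f"mean={d.mean():.2f}  P(>=25)={d.sf(25):.3f}  P(>=30)={d.sf(30):.3f}")
```

    Once the parameters are recovered, any feature of the distribution — tail prevalences, quantiles, severity measures — can be read off `d`, which is the practical payoff the entry describes.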

  2. A Channelization-Based DOA Estimation Method for Wideband Signals (United States)

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping


    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  4. Examination of Newton's Method Used for Indirect Frequency Offset Estimation

    Directory of Open Access Journals (Sweden)

    J. Dzubera


    Full Text Available This paper deals with indirect carrier frequency offset estimation and elimination. The main goal is to modify a conventional method in order to develop a different approach, and then to compare the performance of the modified method with that of the conventional one. The conventional approach is represented here by the gradient optimization method called steepest descent. It is the basis for the modification, which utilizes Newton's method for indirect carrier offset estimation. Both algorithms are implemented as phase-locked loops in a model of a communication system. The simulations are carried out in Matlab.
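The contrast between the two update rules can be sketched on a toy scalar cost with a single offset parameter. The cost function, step size, and iteration count below are illustrative assumptions, not the paper's PLL model:

```python
import numpy as np

phi0 = 0.3                                  # true offset (rad), to be estimated
J   = lambda p: 1.0 - np.cos(p - phi0)      # toy cost, minimised at p = phi0
dJ  = lambda p: np.sin(p - phi0)            # gradient of the cost
d2J = lambda p: np.cos(p - phi0)            # curvature of the cost

p_sd = 0.0                                  # steepest-descent iterate
p_nt = 0.0                                  # Newton iterate
for _ in range(50):
    p_sd -= 0.1 * dJ(p_sd)                  # fixed step size 0.1
    p_nt -= dJ(p_nt) / d2J(p_nt)            # curvature-scaled Newton step
```

On this cost, Newton's curvature-scaled step converges in a handful of iterations, while the fixed-step gradient descent shrinks the error only geometrically, which is the trade-off the paper examines.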


  5. A relative camera pose estimation method using optimization on the manifold

    Directory of Open Access Journals (Sweden)

    C. Cheng


    Full Text Available To solve the problem of relative camera pose estimation, a method using optimization on the manifold is proposed. First, the general state estimation model using optimization is derived, from the maximum-a-posteriori (MAP) model to the nonlinear least squares (NLS) model. Then the camera pose estimation model is applied to the general state estimation model, with the parameterization of the rigid body transformation represented by the Lie group/algebra. The Jacobian of the point-pose model with respect to the Lie group/algebra is derived in detail, and thus the optimization model of the rigid body transformation is established. Experimental results show that, compared with the original algorithms, the approaches with optimization obtain higher accuracy in both rotation and translation, while avoiding the singularity of the Euler-angle parameterization of rotation. Thus the proposed method can estimate the relative camera pose with high accuracy and robustness.

  6. A simplified high-throughput method for pyrethroid knock-down resistance (kdr) detection in Anopheles gambiae

    Directory of Open Access Journals (Sweden)

    Walker Edward D


    Full Text Available Abstract Background A single base pair mutation in the sodium channel confers knock-down resistance to pyrethroids in many insect species. Its occurrence in Anopheles mosquitoes may have important implications for malaria vector control, especially considering the current trend for large-scale pyrethroid-treated bednet programmes. Screening Anopheles gambiae populations for the kdr mutation has become one of the mainstays of programmes that monitor the development of insecticide resistance. The screening is commonly performed using a multiplex Polymerase Chain Reaction (PCR) which, since it is reliant on a single nucleotide polymorphism, can be unreliable. Here we present a reliable and potentially high-throughput method for screening An. gambiae for the kdr mutation. Methods A Hot Ligation Oligonucleotide Assay (HOLA) was developed to detect both the East and West African kdr alleles in the homozygous and heterozygous states, and was optimized for use in low-tech developing-world laboratories. Results from the HOLA were compared to results from the multiplex PCR for field and laboratory mosquito specimens to provide verification of the robustness and sensitivity of the technique. Results and Discussion The HOLA assay, developed for detection of the kdr mutation, gives a bright blue colouration for a positive result whilst negative reactions remain colourless. The results are apparent within a few minutes of adding the final substrate and can be scored by eye. Heterozygotes are scored when a sample gives a positive reaction to both the susceptible probe and the kdr probe. The technique uses only basic laboratory equipment and skills and can be carried out by anyone familiar with the Enzyme-Linked Immunosorbent Assay (ELISA) technique. A comparison to the multiplex PCR method showed that the HOLA assay was more reliable, and scoring of the plates was less ambiguous. 
Conclusion The method is capable of detecting both the East and West African kdr alleles

  7. A simple and traceless solid phase method simplifies the assembly of large peptides and the access to challenging proteins. (United States)

    Ollivier, N; Desmet, R; Drobecq, H; Blanpain, A; Boll, E; Leclercq, B; Mougel, A; Vicogne, J; Melnyk, O


    Chemical protein synthesis gives access to well-defined native or modified proteins that are useful for studying protein structure and function. The majority of proteins synthesized up to now have been produced using native chemical ligation (NCL) in solution. Although there are significant advantages to assembling large peptides or proteins by solid phase ligation, reports of such approaches are rare. We report a novel solid phase method for protein synthesis which relies on the chemistry of the acetoacetyl group and ketoxime ligation for the attachment of the peptide to the solid support, and on a tandem transoximation/rearrangement process for the detachment of the target protein. Importantly, we show that the combination of solid phase and solution ligation techniques facilitates the production of a challenging and biologically active protein made of 180 amino acids. We show also that the solid phase method enables the purification of complex peptide segments through a chemoselective solid phase capture/release approach.

  8. Islanding detection scheme based on adaptive identifier signal estimation method. (United States)

    Bakhshi, M; Noroozian, R; Gharehpetian, G B


    This paper proposes a novel, passive anti-islanding method for both inverter- and synchronous-machine-based distributed generation (DG) units. Unfortunately, when the active/reactive power mismatches are near zero, the majority of passive anti-islanding methods cannot detect the islanding situation correctly. This study introduces a new islanding detection method based on exponentially damped signal estimation. The proposed method uses an adaptive identifier to estimate the frequency deviation of the point of common coupling (PCC) link as a target signal, and can detect the islanding condition with near-zero active power imbalance. The main advantage of the adaptive identifier method over other signal estimation methods is its small sampling window. The adaptive-identifier-based islanding detection method introduces a new detection index, termed the decision signal, obtained by estimating the oscillation frequency of the PCC frequency, and can detect islanding conditions properly. In islanding conditions, the oscillation frequency of the PCC frequency approaches zero, so threshold setting for the decision signal is not a tedious job. Non-islanding transient events, which can cause a significant deviation in the PCC frequency, are considered in the simulations. These events include different types of faults, load changes, capacitor bank switching, and motor starting. Further, for islanding events, the capability of the proposed islanding detection method is verified with near-zero active power mismatches. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  9. A simplified ATP method for the rapid control of cell viability in a freeze-dried BCG vaccine. (United States)

    Ugarova, Natalia N; Lomakina, Galina Yu; Modestova, Yulia; Chernikov, Sergey V; Vinokurova, Natalia V; Оtrashevskaya, Elena V; Gorbachev, Vyacheslav Y


    We propose a simple and cost-effective ATP method for controlling the specific activity of a freeze-dried BCG vaccine. A freeze-dried BCG vaccine is reconstituted with 1 ml saline and incubated for 15 min at room temperature and then for 1 h at 37°C. The vaccine is then treated with apyrase to remove extracellular ATP. After that, the cells are lysed with DMSO and the ATP content in the lysate is measured by the bioluminescence method. To implement the method, we developed a kit that requires no time-consuming preparation before the analysis. We demonstrated a linear relationship between the experimental values of the specific activity (10^6 CFU/mg) and the intracellular ATP content (ATP, pmol/mg) for different batches of the studied BCG vaccines; the proportionality coefficient was K = 0.36 ± 0.02. We proposed a formula for calculating the specific activity from the measured content of intracellular ATP (ATP, pmol/mg). The comparison of the measured and calculated values of the specific activity (10^6 CFU/mg) shows that these values are similar; their differences fall within the allowable range of deviations for the specific activity values of the BCG vaccine. Copyright © 2016 Elsevier B.V. All rights reserved.
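The proposed conversion amounts to a one-line calculation; a sketch using the proportionality coefficient K reported in the abstract (the example ATP value is illustrative):

```python
K = 0.36  # proportionality coefficient reported in the abstract (±0.02)

def specific_activity(atp_pmol_per_mg):
    """Specific activity (in units of 10^6 CFU/mg) from intracellular ATP (pmol/mg)."""
    return K * atp_pmol_per_mg

activity = specific_activity(10.0)   # a lysate containing 10 pmol ATP per mg
```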

  10. Comparison of estimated creatinine clearance among five formulae (Cockcroft-Gault, Jelliffe, Sanaka, simplified 4-variable MDRD and DAF) and the 24-hour urine collection creatinine clearance. (United States)

    Diamandopoulos, A; Goudas, P; Arvanitis, A


    GFR estimation is of major importance in everyday clinical practice. Usually it is done using one of the many eGFR equations available. In this study we compared in our population four widespread eGFR equations and our own empirical eGFR, with creatinine clearance calculated through a timed urine collection. We collected laboratory data of 907 patients from our clinic and outpatient department through a ten-year period and statistically compared the eGFR results between them and with the timed urine collection creatinine clearances. All eGFR equations gave acceptable approximations to the timed urine collection creatinine clearances. However, in different subpopulations some equations did better than others, without any clear advantage of any equation overall. Surprisingly, our empirical equation named DAF also gave acceptable approximations regardless of age, weight and sex of the patient. In our population our empirical eGFR method (DAF) gave satisfactory results regarding the monitoring of renal function, compared with four other eGFR methods. We suggest that it could provide a very fast and easy to use means of eGFR calculation.
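Of the formulae compared, Cockcroft-Gault is the most widely known and can serve as a sketch of what an eGFR equation looks like in code (the DAF equation itself is not given in the abstract and so is not reproduced here):

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (ml/min) via the Cockcroft-Gault formula."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return 0.85 * crcl if female else crcl

crcl_m = cockcroft_gault(40, 72.0, 1.0)               # 40-year-old 72 kg male
crcl_f = cockcroft_gault(40, 72.0, 1.0, female=True)  # female correction factor
```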

  11. Multiparameter estimation along quantum trajectories with sequential Monte Carlo methods (United States)

    Ralph, Jason F.; Maskell, Simon; Jacobs, Kurt


    This paper proposes an efficient method for the simultaneous estimation of the state of a quantum system and the classical parameters that govern its evolution. This hybrid approach benefits from efficient numerical methods for the integration of stochastic master equations for the quantum system, and efficient parameter estimation methods from classical signal processing. The classical techniques use sequential Monte Carlo (SMC) methods, which aim to optimize the selection of points within the parameter space, conditioned by the measurement data obtained. We illustrate these methods using a specific example, an SMC sampler applied to a nonlinear system, the Duffing oscillator, where the evolution of the quantum state of the oscillator and three Hamiltonian parameters are estimated simultaneously.
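The SMC idea of concentrating sample points where the data support them can be illustrated on a much simpler toy problem: estimating a single constant parameter from noisy measurements. The system below is a stand-in for illustration, not the paper's Duffing oscillator or stochastic master equation:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 1.5
sigma = 0.3
obs = theta_true + sigma * rng.standard_normal(200)   # simulated measurement record

n = 2000
particles = rng.uniform(0.0, 3.0, n)    # sample points covering the parameter space
weights = np.full(n, 1.0 / n)

for y in obs:
    # Reweight each point by the likelihood of the new measurement
    weights *= np.exp(-0.5 * ((y - particles) / sigma) ** 2)
    weights /= weights.sum()
    # Resample (with a little jitter) when the effective sample size collapses,
    # concentrating points in the well-supported region of parameter space
    ess = 1.0 / np.sum(weights ** 2)
    if ess < n / 2:
        idx = rng.choice(n, n, p=weights)
        particles = particles[idx] + 0.01 * rng.standard_normal(n)
        weights = np.full(n, 1.0 / n)

theta_hat = float(np.sum(weights * particles))
```

The reweight/resample loop is the core SMC mechanism the abstract refers to; in the paper it runs alongside the integration of the conditioned quantum state rather than on a static parameter.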

  12. A new class of methods for functional connectivity estimation (United States)

    Lin, Wutu

    Measuring functional connectivity from neural recordings is important in understanding processing in cortical networks. Covariance-based methods are the current gold standard for functional connectivity estimation. However, the link between the pair-wise correlations and the physiological connections inside the neural network is unclear; therefore, the power of inferring a physiological basis from functional connectivity estimation is limited. To build a stronger tie and better understand the relationship between functional connectivity and the physiological neural network, we need (1) a realistic model to simulate different types of neural recordings with known ground truth for benchmarking; (2) a new functional connectivity method that produces estimates closely reflecting the physiological basis. In this thesis, I (1) tune a spiking neural network model to match human sleep EEG data, (2) introduce a new class of methods for estimating connectivity from different kinds of neural signals and provide a theoretical proof of its superiority, and (3) apply it to simulated fMRI data as an application.
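The covariance-based baseline referred to above is, at its simplest, a pairwise correlation matrix over the recorded signals. A toy sketch on synthetic data, with two channels driven by a shared source and one independent channel:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
common = rng.standard_normal(T)              # shared drive for channels a and b
a = common + 0.5 * rng.standard_normal(T)
b = common + 0.5 * rng.standard_normal(T)
c = rng.standard_normal(T)                   # independent channel

fc = np.corrcoef(np.vstack([a, b, c]))       # pairwise "functional connectivity"
```

The matrix correctly flags the shared drive between a and b, but, as the thesis argues, such correlations need not correspond to direct physiological connections.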

  13. Simplified Mashing Efficiency. Novel Method for Optimization of Food Industry Wort Production with the Use of Adjuncts

    Directory of Open Access Journals (Sweden)

    Szwed Łukasz P.


    Full Text Available Malt extracts and malt concentrates have a broad range of applications in the food industry. Those products are obtained by methods similar to those used for brewing worts, so a possible reduction of cost can be achieved by the application of malt substitutes, as in the brewing industry. As malt concentrates for the food industry do not have to fulfil the strict norms for beer production, it is possible to produce much cheaper products. It was shown that, by means of mathematical optimization, it is possible to determine the optimal share of unmalted material for cheap yet effective production of wort.
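The share optimization described can be posed as a small linear programme: minimise blend cost subject to an extract-yield floor. A sketch with entirely hypothetical cost and yield figures, using `scipy.optimize.linprog`:

```python
from scipy.optimize import linprog

# Hypothetical figures: cost (per kg) and extract yield of malt vs. unmalted adjunct
cost = [0.50, 0.20]
extract = [0.80, 0.65]

# Minimise blend cost subject to: shares sum to 1, blend extract yield >= 0.75
res = linprog(c=cost,
              A_ub=[[-extract[0], -extract[1]]], b_ub=[-0.75],
              A_eq=[[1.0, 1.0]], b_eq=[1.0],
              bounds=[(0.0, 1.0), (0.0, 1.0)])
malt_share, adjunct_share = res.x
```

With these invented numbers the optimum replaces one third of the malt with the cheaper adjunct while still meeting the extract target.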

  14. A novel, simplified ex vivo method for measuring water exchange performance of heat and moisture exchangers for tracheostomy application. (United States)

    van den Boer, Cindy; Muller, Sara H; Vincent, Andrew D; Züchner, Klaus; van den Brekel, Michiel W M; Hilgers, Frans J M


    Breathing through a tracheostomy results in insufficient warming and humidification of inspired air. This loss of air-conditioning can be partially compensated for with the application of a heat and moisture exchanger (HME) over the tracheostomy. In vitro (International Organization for Standardization [ISO] standard 9360-2:2001) and in vivo measurements of the effects of an HME are complex and technically challenging. The aim of this study was to develop a simple method to measure the ex vivo HME performance comparable with previous in vitro and in vivo results. HMEs were weighed at the end of inspiration and at the end of expiration at different breathing volumes. Four HMEs (Atos Medical, Hörby, Sweden) with known in vivo humidity and in vitro water loss values were tested. The associations between weight change, volume, and absolute humidity were determined using both linear and non-linear mixed effects models. The rating of the 4 HMEs by weighing correlated with previous intra-tracheal measurements (R² = 0.98) and with the ISO standard (R² = 0.77). Assessment of the weight change between end of inhalation and end of exhalation is a valid and simple method of measuring the water exchange performance of an HME.
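The quantity being assessed reduces to a simple weight-difference calculation per breath; a sketch in which the function name and the example numbers are illustrative, not values from the study:

```python
def water_exchange_mg_per_l(w_end_exhale_mg, w_end_inhale_mg, tidal_volume_l):
    """Water retained by the HME over one breath, per litre of breathing volume."""
    return (w_end_exhale_mg - w_end_inhale_mg) / tidal_volume_l

# e.g. an HME that is 15 mg heavier at end-exhalation, over a 0.5 l breath
gain = water_exchange_mg_per_l(1015.0, 1000.0, 0.5)
```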

  15. Simplified RP-HPLC method for multi-residue analysis of abamectin, emamectin benzoate and ivermectin in rice. (United States)

    Xie, Xianchuan; Gong, Shu; Wang, Xiaorong; Wu, Yinxing; Zhao, Li


    A rapid, reliable and sensitive reverse-phase high-performance liquid chromatography method with fluorescence detection (RP-FLD-HPLC) was developed and validated for the simultaneous analysis of abamectin (ABA), emamectin (EMA) benzoate and ivermectin (IVM) residues in rice. After extraction with acetonitrile/water (2:1) with sonication, the avermectin (AVM) residues were directly derivatised with N-methylimidazole (N-NMIM) and trifluoroacetic anhydride (TFAA) and then analysed by RP-FLD-HPLC. A good linear relationship (r² > 0.99) was obtained for the three AVMs ranging from 0.01 to 5 µg ml⁻¹, i.e. 0.01-5.0 µg g⁻¹ in the rice matrix. The limit of detection (LOD) and the limit of quantification (LOQ) were between 0.001 and 0.002 µg g⁻¹ and between 0.004 and 0.006 µg g⁻¹, respectively. Recoveries were from 81.9% to 105.4% and precision was less than 12.4%. The proposed method was successfully applied to routine analysis of AVM residues in rice.
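Linearity and detection limits of this sort are commonly derived from a least-squares calibration line, with the ICH-style conventions LOD ≈ 3.3·σ/slope and LOQ ≈ 10·σ/slope. A sketch on simulated calibration data (the concentrations, responses, and resulting limits are invented, not the paper's):

```python
import numpy as np

# Simulated calibration series (concentrations in µg/ml, detector response in a.u.)
conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
resp = 120.0 * conc + np.array([0.2, -0.3, 0.1, 0.4, -0.2, 0.3, -0.1])

slope, intercept = np.polyfit(conc, resp, 1)
resid = resp - (slope * conc + intercept)
sd = resid.std(ddof=2)                   # residual standard deviation
r2 = np.corrcoef(conc, resp)[0, 1] ** 2  # linearity check

lod = 3.3 * sd / slope                   # limit of detection
loq = 10.0 * sd / slope                  # limit of quantification
```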

  16. Comparison of simplified parametric methods for visual interpretation of 11C-Pittsburgh compound-B PET images. (United States)

    Zwan, Marissa D; Ossenkoppele, Rik; Tolboom, Nelleke; Beunders, Alexandra J M; Kloet, Reina W; Adriaanse, Sofie M; Boellaard, Ronald; Windhorst, Albert D; Raijmakers, Pieter; Adams, Human; Lammertsma, Adriaan A; Scheltens, Philip; van der Flier, Wiesje M; van Berckel, Bart N M


    This study compared several parametric imaging methods to determine the optimal approach for visual assessment of parametric Pittsburgh compound-B (¹¹C-PIB) PET images to detect cortical amyloid deposition in different memory clinic patient groups. Dynamic ¹¹C-PIB scanning of 120 memory clinic patients was performed. Parametric nondisplaceable binding potential (BPND) images were compared with standardized uptake value (SUV) and SUV ratio images. Images were visually assessed by 3 independent readers, and both interreader and intermethod agreement were determined. Both 90-min (Fleiss κ = 0.88) and 60-min (Fleiss κ = 0.89) BPND images showed excellent interreader agreement, whereas agreement was good to moderate for SUV ratio images (Fleiss κ = 0.68) and SUV images (Fleiss κ = 0.59). Intermethod agreement varied substantially between readers, although BPND images consistently showed the best performance. The use of BPND images provided the highest interreader and intermethod agreement and is therefore the method of choice for optimal visual interpretation of ¹¹C-PIB PET scans. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  17. Simplified method for quality evaluation of the mammographic system

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Claudio Domingues de; Coutinho, Celia Maria Campos [Instituto de Radioprotecao e Dosimetria (IRD), Rio de Janeiro, RJ (Brazil); Oliveira, Larissa Conceicao Gomes [Fundacao Tecnico Eduacional Souza Marques, Rio de Janeiro, RJ (Brazil)


    In order to produce mammograms at the lowest doses consistent with high diagnostic sensitivity and specificity, careful consideration must be given to the selection of equipment, patient positioning and imaging techniques, besides the establishment of an effective quality control programme. Accordingly, this work proposes a comprehensive method to evaluate the consistency of mammographic system quality by analysing results from the automatic exposure control (AEC), the image contrast index, optical densities and image quality from a mammographic phantom. The results show that, of a total of 12 installed units analysed and classified as good equipment, 8% overexposed the patients relative to the level necessary to register adequate optical densities, while 58% of phantom images presented index values below the acceptable level. Moreover, 33% of the equipment showed failures in the AEC and 50% of the images presented poor quality. Finally, within this sample, only one installation carried out quality control procedures. The conclusion shows the need for simple and objective methods that should be part of the routine in mammographic installations. (author)

  18. Simplifying Massive Contour Maps

    DEFF Research Database (Denmark)

    Arge, Lars; Deleuran, Lasse Kosetski; Mølhave, Thomas


    We present a simple, efficient and practical algorithm for constructing and subsequently simplifying contour maps from massive high-resolution DEMs, under some practically realistic assumptions on the DEM and contours.

  19. A comparative study between a simplified Kalman filter and Sliding Window Averaging for single trial dynamical estimation of event-related potentials

    DEFF Research Database (Denmark)

    Vedel-Larsen, Esben; Fuglø, Jacob; Channir, Fouad


    The classical approach for extracting event-related potentials (ERPs) from the brain is ensemble averaging. For long-latency ERPs this is not optimal, partly due to the time delay in obtaining a response and partly because the latency and amplitude of ERP components, like the P300, are variable and depend on cognitive function. This study compares the performance of a simplified Kalman filter with Sliding Window Averaging in tracking dynamical changes in single-trial P300. The comparison is performed on simulated P300 data with added background noise consisting of both simulated and real … in the P300 component and in a considerably higher robustness towards suboptimal settings. The latter is of great importance in a clinical setting where the optimal setting cannot be determined.

  20. Advances in production and simplified methods for recovery and quantification of exopolysaccharides for applications in food and health. (United States)

    Leroy, Frédéric; De Vuyst, Luc


    The capacity of strains to produce exopolysaccharides (EPS) is widespread among species of lactic acid bacteria and bifidobacteria, although the physiological role of these molecules is not yet clearly understood. When EPS are produced during food fermentation, they confer technological benefits on the fermented end products, such as improved texture and stability. In addition, some of these EPS may have beneficial effects on consumer health. These uses of EPS necessitate optimal and sufficient production of these molecules, both in situ and ex situ, not only to improve their yields but also to obtain a particular functionality. The present study reviews the commonly used methods of production, isolation, and quantification that have been used in recent studies dealing with EPS-producing lactic acid bacteria and bifidobacteria. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  1. Simplified methods for spatial sampling: application to first-phase data of the Italian National Forest Inventory (INFC) in Sicily

    Directory of Open Access Journals (Sweden)

    Cullotta S


    Full Text Available Methodological approaches able to integrate data from sample plots with cartographic processes are widely applied. Based on mathematical-statistical techniques, spatial analysis allows the exploration and spatialization of geographic data. Starting from the point information on land use types obtained from the dataset of the first phase of the ongoing new Italian NFI (INFC), a spatialization of land cover classes was carried out using the Inverse Distance Weighting (IDW) method. In order to validate the obtained results, an overlay with other vectorial land use data was carried out. In particular, the overlay compared data at different scales, evaluating differences in terms of the degree of correspondence between the interpolated and reference land cover.

  2. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano


    Full Text Available The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed the prevalence of the training-based methods for the images and the range of noise levels considered.
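For reference, the classical MAD baseline from the wavelet denoising literature is computed from the finest-scale detail coefficients, with 0.6745 as the standard Gaussian consistency factor. A sketch using a Haar detail computed in plain NumPy; the synthetic test image is an assumption of this example, and the paper's proposed alternatives are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_true = 10.0
clean = np.tile(np.linspace(0.0, 255.0, 256), (256, 1))   # smooth synthetic image
noisy = clean + sigma_true * rng.standard_normal(clean.shape)

# Finest-scale Haar detail coefficients along rows: (x[2k] - x[2k+1]) / sqrt(2)
detail = (noisy[:, ::2] - noisy[:, 1::2]) / np.sqrt(2.0)

# MAD estimate: median absolute coefficient over the Gaussian consistency factor
sigma_hat = np.median(np.abs(detail)) / 0.6745
```

Because smooth image content contributes little to the finest-scale details, the median-based estimate tracks the noise level rather than the image itself; the paper's point is that this assumption breaks down for certain image types.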

  3. A Group Contribution Method for Estimating Cetane and Octane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group


    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting non-food biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available, and the only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is the estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in the types of functional groups covered and their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
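The basic group-contribution idea, before any neural-network refinement, is a sum of per-group increments over a molecule's functional groups. A sketch with entirely hypothetical coefficients (real methods regress these increments against large measured datasets; none of the numbers below come from the paper):

```python
# Hypothetical per-group increments and intercept; NOT fitted values from the paper
CONTRIB = {"CH3": 4.0, "CH2": 7.5, "OH": -3.0}
BASE = 12.0

def estimate_property(group_counts):
    """First-order group-contribution estimate: intercept plus summed increments."""
    return BASE + sum(CONTRIB[g] * n for g, n in group_counts.items())

# e.g. a straight-chain molecule decomposed into 2 CH3 and 4 CH2 groups
value = estimate_property({"CH3": 2, "CH2": 4})
```

The paper's contribution is to replace this linear sum with an artificial neural network over the group counts, which captures interactions between groups that a simple additive model misses.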

  4. Estimation of arsenic in nail using silver diethyldithiocarbamate method

    Directory of Open Access Journals (Sweden)

    Habiba Akhter Bhuiyan


    Full Text Available The spectrophotometric method of arsenic estimation in nails has four steps: (a) washing of the nails, (b) digestion of the nails, (c) arsine generation, and finally (d) reading the absorbance using a spectrophotometer. Although the method is one of the cheapest, widely used and effective, it is time-consuming and laborious, and needs caution while using four acids.

  5. Evaluation and reliability of bone histological age estimation methods

    African Journals Online (AJOL)

    The review of those methods showed that the nature of research on estimating age at death has shifted from gross morphological analysis to histological analysis, and is further moving towards the use of digital image processing tools to achieve high accuracy. Histological methods based on the analysis of bone ...

  6. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali


    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and of the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design; in fact, the coherence bandwidth can be determined from the FCF. Exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
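The frequency averaging baseline can be written in a few lines: correlate the transfer function with a shifted copy of itself and average over frequency. A sketch on a synthetic two-path channel; the path gains, delays, and band are invented for the example, and the paper's kernel modification is not reproduced:

```python
import numpy as np

# Synthetic two-path transfer function H(f) = sum_l a_l * exp(-j*2*pi*f*tau_l)
f = np.arange(0.0, 20e6, 10e3)               # 20 MHz band sampled every 10 kHz
gains, delays = [1.0, 0.6], [0.1e-6, 0.4e-6]
H = sum(a * np.exp(-2j * np.pi * f * tau) for a, tau in zip(gains, delays))

def fcf(H, shift):
    """Frequency-averaged correlation of H with a copy shifted by `shift` bins."""
    if shift == 0:
        return np.mean(np.abs(H) ** 2)
    return np.mean(H[:-shift] * np.conj(H[shift:]))

r0 = fcf(H, 0)                               # power: sum of squared path gains
r = np.array([fcf(H, k) for k in range(1, 100)]) / r0   # normalized FCF samples
```

At zero shift the cross-terms average out over the band and the estimate reduces to the sum of squared path gains; at nonzero shifts the residual cross-terms are exactly the error the paper's kernel is designed to suppress.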

  7. Error Self-calibration Method in Spatial Spectrum Estimation

    Directory of Open Access Journals (Sweden)

    Huang Chuan-lu


    Full Text Available An important bottleneck in spatial spectrum estimation theory is array errors. A new error self-calibration method is presented on the basis of active error calibration and online self-calibration theories. An algorithm combining Direction Of Arrival (DOA) estimation and array complex-gain error calibration is developed, using an auxiliary source without accurate spatial location information and numerical optimization; the error calibration parameters can then be used in the subsequent spatial spectrum estimation. The method has higher accuracy than the active correction algorithm and is an offline calculation procedure. It has the advantage of a small computational load, similar to the online self-calibration method, while avoiding the accurate spatial location information required by classical active calibration methods.

  8. Evaluation of non cyanide methods for hemoglobin estimation

    Directory of Open Access Journals (Sweden)

    Vinaya B Shah


    Full Text Available Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its principal advantage is the ready availability of a stable and internationally accepted reference standard calibrator. However, its use creates a problem, as the disposal of large volumes of cyanide-containing reagent constitutes a potential toxic hazard. Aims and Objectives: As alternatives to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: the alkaline hematin detergent (AHD-575) method, using Triton X-100 as lyser, and the alkaline-borax method, using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting-diode (LED) based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared across the methods. Statistical analysis used Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range of 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at 530 nm and 580 nm. Correlation between the methods was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of the determinations, and cost less than those of the HiCN method. Conclusions: Non-cyanide methods of Hb estimation offer the possibility of safe, high-quality Hb estimation and should prove useful for routine laboratory use.
    Non-cyanide methods are also easily incorporated into hemoglobinometers.
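The colorimetric comparison described above rests on Beer-Lambert linearity: absorbance read against a calibrated standard converts directly to hemoglobin concentration. A minimal sketch of that conversion, assuming an HiCN-style calculation with a 60 mg/dl standard and a 1:251 blood dilution (both values illustrative defaults, not taken from the record):

```python
def hb_g_per_dl(a_test, a_std, std_conc_mg_dl=60.0, dilution_factor=251):
    """Hemoglobin (g/dl) from sample and standard absorbance readings.

    Beer-Lambert proportionality: concentration scales linearly with
    absorbance, so the sample reading is referenced to a standard of
    known concentration and corrected for the blood dilution.
    """
    return (a_test / a_std) * std_conc_mg_dl * dilution_factor / 1000.0

# A sample reading equal to the standard's corresponds to ~15.06 g/dl here.
print(hb_g_per_dl(0.42, 0.42))
```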

  9. Finite Difference Energy Method for nonlinear numerical analysis of reinforced concrete slab using simplified isotropic damage model

    Directory of Open Access Journals (Sweden)

    M. V. A. Lima

    Full Text Available This work presents a model to predict the flexural behavior of reinforced concrete slabs, combining the Mazars damage model, which simulates the loss of stiffness of the concrete during cracking, with the Classical Theory of Laminates, which governs the bending of the structural element. A variational formulation based on the principle of virtual work was developed for the model and then treated numerically with the Finite Difference Energy Method, resulting in a program developed in Fortran. To validate the proposed model, several cases of slabs in flexure from the literature were simulated with the program. Evaluation of the results demonstrated the capability of the model, in view of its good prediction of the behavior of slabs in flexure, tracing the equilibrium path up to rupture of the structural element. Beyond its satisfactory prediction of the observed behavior, positive aspects of the model include its relative simplicity and the reduced number of experimental parameters required for modeling.

  10. A simplified method for the cultivation of extreme anaerobic Archaea based on the use of sodium sulfite as reducing agent. (United States)

    Rothe, O; Thomm, M


    The extreme sensitivity of many Archaea to oxygen is a major obstacle for their cultivation in the laboratory and the development of archaeal genetic exchange systems. The technique of Balch and Wolfe (1976) is suitable for the cultivation of anaerobic Archaea but involves time-consuming procedures such as the use of air locks and glove boxes. We describe here a procedure for the cultivation of anaerobic Archaea that is more convenient and faster and allows the preparation of liquid media without the use of an anaerobic chamber. When the reducing agent sodium sulfide (Na2S) was replaced by sodium sulfite (Na2SO3), anaerobic media could be prepared without protection from oxygen outside an anaerobic chamber. Exchange of the headspace of serum bottles by appropriate gases was sufficient to maintain anaerobic conditions in the culture media. Organisms that were unable to utilize sulfite as a source for cellular sulfur were supplemented with hydrogen sulfide. H2S was simply added to the headspace of serum bottles by a syringe. The use of H2S as a source for sulfur minimized the precipitation of cations by sulfide. Representatives of 12 genera of anaerobic Archaea studied here were able to grow in media prepared by this procedure. For the extremely oxygen-sensitive organism Methanococcus thermolithotrophicus, we show that plates could be prepared outside an anaerobic chamber when sulfite was used as reducing agent. The application of this method may facilitate the cultivation and handling of extreme anaerobic Archaea considerably.

  11. High-Performance Black Multicrystalline Silicon Solar Cells by a Highly Simplified Metal-Catalyzed Chemical Etching Method

    KAUST Repository

    Ying, Zhiqin


    A wet-chemical surface texturing technique, comprising a two-step metal-catalyzed chemical etching (MCCE) and an extra alkaline treatment, has been proven an efficient way to fabricate high-efficiency black multicrystalline (mc) silicon solar cells, but its complicated process limits production capacity and cost reduction. Here, we demonstrate that with careful control of the composition of the etching solution, low-aspect-ratio bowl-like nanostructures with atomically smooth surfaces can be achieved directly by an improved one-step MCCE, with no post-treatment such as an alkaline solution. The doublet surface texture obtained by implementing this nanobowl structure upon the industrialized acidic-textured surface showed concurrent improvement in optical and electrical properties, realizing 18.23% efficient mc-Si solar cells (156 mm × 156 mm), appreciably higher than the 17.7% of the solely acidic-textured cells in the same batch. The one-step MCCE method demonstrated in this study may provide a cost-effective way to manufacture high-performance mc-Si solar cells for the present photovoltaic industry. © 2016 IEEE.

  12. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland


    The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem; hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e., a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts: Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others. These papers constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculation of derivatives are also discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  13. Spectral method for estimating the quality of the medium from one source and two stations (United States)

    Kedrov, O. K.; Kedrov, E. O.


    A method is suggested for estimating the quality factor (coefficient Q) of the medium from the longitudinal waves of earthquakes in the regional zone (QP). The method is based on analyzing the spectra of the Pn waves that are generated by one source and recorded at two stations; it does not require calculating the spectrum of the source and thus substantially simplifies the solution of the problem. The method is applicable if the propagation conditions of the signals between the source and each station are identical, i.e., the source-to-station paths lie in structurally and compositionally similar geological environments. Using the estimates of the QP factor and the coefficient of seismic attenuation t*, calculated by the suggested method, we assessed the classification of the source-station paths as pertaining to stable (S) or tectonic (T) regions for a number of earthquakes in Iran and Southern California and for the underground nuclear explosion of September 25, 2009 in North Korea. It is shown that the source-station path classification derived in the present study on the basis of the QP and t* parameters generally agrees with the estimates derived in (Kedrov et al., 2010) from the attenuation coefficient bΔ of the spectral parameters for discriminating explosions and earthquakes. In the Appendix, it is demonstrated with several earthquakes in Southern California that prompt estimates of the QP parameter can also be calculated in the time domain from the amplitudes of the Pn waves at a frequency of 3 Hz, which are provided in the Bulletin of the International Data Center in Vienna.
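The two-station idea reduces to a straight-line fit: with a common source spectrum, the log ratio of the Pn amplitude spectra at the two stations decays linearly with frequency at a rate set by Q and the differential travel time. A sketch under that assumption (geometric spreading only shifts the intercept, so it drops out of the slope; all values are illustrative):

```python
import numpy as np

def estimate_q(freqs, spec_near, spec_far, t_near, t_far):
    """Estimate path Q from the spectral ratio of one event at two stations.

    ln[A_far(f)/A_near(f)] = const - pi * f * (t_far - t_near) / Q,
    so Q follows from the slope of the log spectral ratio vs. frequency.
    """
    y = np.log(spec_far / spec_near)
    slope, _ = np.polyfit(freqs, y, 1)  # slope = -pi * dt / Q
    return -np.pi * (t_far - t_near) / slope
```

In practice the spectra would be smoothed and restricted to the band where both stations have adequate signal-to-noise ratio.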

  14. Precision of two methods for estimating age from burbot otoliths (United States)

    Edwards, W.H.; Stapanian, M.A.; Stoneman, A.T.


    Lower reproductive success and older age structure are associated with many burbot (Lota lota L.) populations that are declining or of conservation concern. Therefore, reliable methods for estimating the age of burbot are critical for effective assessment and management. In Lake Erie, burbot populations have declined in recent years due to the combined effects of an aging population (x̄ = 10 years in 2007) and extremely low recruitment since 2002. We examined otoliths from burbot (N = 91) collected in Lake Erie in 2007 and compared the estimates of burbot age by two agers, each using two established methods (cracked-and-burned and thin-section) of estimating ages from burbot otoliths. One ager was experienced at estimating age from otoliths; the other was a novice. Agreement (precision) between the two agers was higher for the thin-section method, particularly at ages 6–11 years, based on linear regression analyses and 95% confidence intervals. As expected, precision between the two methods was higher for the more experienced ager. Both agers reported that the thin sections offered clearer views of the annuli, particularly near the margins on otoliths from burbot ages ≥8. Slides for the thin sections required some costly equipment and more than 2 days to prepare. In contrast, preparing the cracked-and-burned samples was comparatively inexpensive and quick. We suggest use of the thin-section method for estimating the age structure of older burbot populations.
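Between-reader precision of the kind reported here is commonly summarized by percent agreement and a mean coefficient of variation over paired age estimates. A small sketch of both measures (the sample ages below are made up; the record itself used regression-based comparisons):

```python
import numpy as np

def percent_agreement(ages_a, ages_b, tolerance=0):
    """Share of fish for which the two agers agree within the tolerance."""
    diff = np.abs(np.asarray(ages_a) - np.asarray(ages_b))
    return float(np.mean(diff <= tolerance))

def mean_cv(ages_a, ages_b):
    """Mean coefficient of variation (%) between paired age estimates,
    a standard precision measure in age-and-growth studies."""
    pairs = np.column_stack([ages_a, ages_b]).astype(float)
    cv = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1)
    return float(np.mean(cv) * 100.0)
```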

  15. An approach to parameter estimation for breast tumor by finite element method (United States)

    Xu, A.-qing; Yang, Hong-qin; Ye, Zhen; Su, Yi-ming; Xie, Shu-sen


    The temperature of the human body at the surface of the skin depends on metabolic activity, blood flow, and the temperature of the surroundings. Any abnormality in the tissue, such as the presence of a tumor, alters the normal temperature on the skin surface due to the increased metabolic activity of the tumor. Therefore, abnormal skin temperature profiles are an indication of diseases such as tumors or cancer. This study presents an approach to detecting female breast tumors and estimating their related parameters by combining the finite element method with infrared thermography of the surface temperature profile. A simplified 2D breast model embedding a tumor, based on the female breast's anatomical structure and physiological characteristics, was first established, and the finite element method was then used to solve the heat diffusion equation for the surface temperature profiles of the breast. A genetic optimization algorithm was used to estimate tumor parameters such as depth, size and blood perfusion by minimizing a fitness function that compares the temperature profiles simulated by the finite element method with the experimental data obtained by infrared thermography. This preliminary study shows it is possible to determine the depth and the heat generation rate of a breast tumor by using infrared thermography and the optimization analysis, which may play an important role in female breast healthcare and disease evaluation or early detection. To develop the proposed methodology for clinical use, a more anatomically accurate 3D breast geometry should be considered in further investigations.
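The estimation loop pairs a forward thermal model with a global optimizer that minimizes the misfit between simulated and measured surface temperatures. A toy sketch of that loop, with a point-heat-source expression standing in for the finite element solution and SciPy's differential evolution standing in for the genetic algorithm (the conductivity, geometry, and parameter values are all assumed for illustration):

```python
import numpy as np
from scipy.optimize import differential_evolution

K = 0.5  # assumed tissue thermal conductivity, W/(m*K)

def surface_dT(x, depth, q):
    """Toy forward model: temperature rise on the skin surface produced by a
    point heat source of strength q (W) buried at the given depth (m)."""
    return q / (2 * np.pi * K * np.sqrt(depth**2 + x**2))

x = np.linspace(-0.05, 0.05, 41)              # sensor positions on the skin (m)
true_depth, true_q = 0.02, 0.3
measured = surface_dT(x, true_depth, true_q)  # stand-in for IR thermography data

def fitness(params):
    depth, q = params
    return np.sum((surface_dT(x, depth, q) - measured) ** 2)

result = differential_evolution(fitness, bounds=[(0.005, 0.05), (0.05, 1.0)],
                                seed=0, tol=1e-12)
depth_est, q_est = result.x
```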

  16. Geostatistic in Reservoir Characterization: from estimation to simulation methods


    Mata Lima, H.


    This article reviews the different geostatistical methods available to estimate and simulate petrophysical properties (porosity and permeability) of a reservoir. Different geostatistical techniques that allow the combination of hard and soft data are taken into account, and the main reasons for using geostatistical simulation rather than estimation are discussed. Uncertainty in reservoir characterization due to the variogram assumption, which is a strict mathematical equa...

  17. Information-theoretic methods for estimating complicated probability distributions

    CERN Document Server

    Zong, Zhi


    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for many fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  18. Plant-available soil water capacity: estimation methods and implications

    Directory of Open Access Journals (Sweden)

    Bruno Montoani Silva


    Full Text Available The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in planning land use. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated by the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than that from the other methods, and that even at the inflection point the estimates differed according to the model used. The estimate of the permanent wilting point found by the WP4-T psychrometer was significantly lower. We conclude that the estimation of the available water capacity is markedly influenced by the estimation method, which has to be taken into consideration because of the practical importance of this parameter.
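The inflection-point estimate of field capacity follows directly from the van Genuchten retention curve: with m = 1 − 1/n, the inflection of θ versus ln h sits at (αh)^n = 1/m. A sketch of that calculation together with the available-water difference (the parameter values in the test are illustrative, not the Cerrado Oxisol's):

```python
def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve with m = 1 - 1/n; h in units matching alpha."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

def inflection_point(theta_r, theta_s, alpha, n):
    """Matric potential and water content at the inflection of theta vs. ln(h),
    one of the field-capacity criteria compared in the study."""
    m = 1.0 - 1.0 / n
    h_i = (1.0 / alpha) * m ** (-1.0 / n)            # (alpha*h)^n = 1/m
    theta_i = theta_r + (theta_s - theta_r) * (1.0 + 1.0 / m) ** (-m)
    return h_i, theta_i

def available_water(theta_fc, theta_wp):
    """Plant-available water capacity: field capacity minus wilting point."""
    return theta_fc - theta_wp
```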

  19. The duplicate method of uncertainty estimation: are eight targets enough? (United States)

    Lyn, Jennifer A; Ramsey, Michael H; Coad, D Stephen; Damant, Andrew P; Wood, Roger; Boon, Katy A


    This paper presents methods for calculating confidence intervals for estimates of sampling uncertainty (s(samp)) and analytical uncertainty (s(anal)) using the chi-squared distribution. These uncertainty estimates are derived from application of the duplicate method, which recommends a minimum of eight duplicate samples. The methods are applied to two case studies--moisture in butter and nitrate in lettuce. Use of the recommended minimum of eight duplicate samples is justified for both case studies as the confidence intervals calculated using greater than eight duplicates did not show any appreciable reduction in width. It is considered that eight duplicates provide estimates of uncertainty that are both acceptably accurate and cost effective.
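The confidence intervals in question come from the chi-squared distribution for a variance estimate: with ν degrees of freedom, νs²/σ² follows χ²(ν), which inverts to an interval for the true standard deviation. A sketch (treating eight duplicate pairs as ν = 8 is illustrative; the exact degrees of freedom depend on the ANOVA design used in the duplicate method):

```python
from scipy.stats import chi2

def sd_confidence_interval(s, dof, conf=0.95):
    """Confidence interval for a standard deviation estimate s with dof
    degrees of freedom, via the chi-squared distribution (as used for
    the sampling and analytical uncertainty estimates s_samp, s_anal)."""
    alpha = 1.0 - conf
    lower = s * (dof / chi2.ppf(1.0 - alpha / 2.0, dof)) ** 0.5
    upper = s * (dof / chi2.ppf(alpha / 2.0, dof)) ** 0.5
    return lower, upper
```

Running this for increasing ν shows the paper's point empirically: the interval narrows quickly at first and only marginally beyond roughly eight duplicates.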

  20. An Estimation Method for number of carrier frequency

    Directory of Open Access Journals (Sweden)

    Xiong Peng


    Full Text Available This paper proposes a method that utilizes AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, the pulse signal forms of radar are complex and changeable, among which the single pulse with multiple carrier frequencies is the most typical, such as the frequency shift keying (FSK) signal, the FSK with linear frequency modulation (FSK-LFM) hybrid modulation signal and the FSK with binary phase shift keying (FSK-BPSK) hybrid modulation signal. For this kind of multi-carrier single pulse, the paper adopts a method that fits an AR model to the complex signal and then computes its power spectrum with the Burg algorithm. Experimental results show that the estimation method can still determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.

  1. A Simplified and Systematic Method to Isolate, Culture, and Characterize Multiple Types of Human Dental Stem Cells from a Single Tooth. (United States)

    Bakkar, Mohammed; Liu, Younan; Fang, Dongdong; Stegen, Camille; Su, Xinyun; Ramamoorthi, Murali; Lin, Li-Chieh; Kawasaki, Takako; Makhoul, Nicholas; Pham, Huan; Sumita, Yoshinori; Tran, Simon D


    This chapter describes a simplified method that allows the systematic isolation of multiple types of dental stem cells such as dental pulp stem cells (DPSC), periodontal ligament stem cells (PDLSC), and stem cells of the apical papilla (SCAP) from a single tooth. Of specific interest is the modified laboratory approach to harvest/retrieve the dental pulp tissue by minimizing trauma to DPSC by continuous irrigation, reduction of frictional heat from the bur rotation, and reduction of the bur contact time with the dentin. Also, the use of a chisel and a mallet will maximize the number of live DPSC for culture. Steps demonstrating the potential for multiple cell differentiation lineages of each type of dental stem cell into either osteocytes, adipocytes, or chondrocytes are described. Flow cytometry, with a detailed strategy for cell gating and analysis, is described to verify characteristic markers of human mesenchymal multipotent stromal cells (MSC) from DPSC, PDLSC, or SCAP for subsequent experiments in cell therapy and in tissue engineering. Overall, this method can be adapted to any laboratory with a general setup for cell culture experiments.

  2. Modified cross-validation as a method for estimating parameter (United States)

    Shi, Chye Rou; Adnan, Robiah


    Best subsets regression is an effective approach for identifying models that attain the objective with as few predictors as is prudent. Subset models may estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors. The question of how to pick the subset size λ depends on the bias-variance trade-off. There are various methods to pick the subset size λ; a common rule is to pick the smallest model that minimizes an estimate of the expected prediction error. Since data sets are often small, the repeated K-fold cross-validation method is the most widely used method to estimate prediction error and select the model. The data are reshuffled and re-stratified before each round. However, the "one-standard-error" rule of repeated K-fold cross-validation always picks the most parsimonious model. The objective of this research is to modify the existing cross-validation method to avoid overfitting and underfitting, so a modified cross-validation method is proposed. This paper compares the existing cross-validation and the modified cross-validation. Our results indicate that the modified cross-validation method is better at submodel selection and evaluation than the other methods.
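The "one-standard-error" rule the paper modifies can be stated compactly: take the simplest model whose mean cross-validated error is within one standard error of the best mean error. A sketch of the rule as usually implemented (the fold errors in the test are fabricated for illustration):

```python
import numpy as np

def one_standard_error_rule(cv_errors):
    """cv_errors: array of shape (n_models, n_folds), with models ordered
    from smallest (fewest predictors) to largest. Returns the index of the
    smallest model whose mean CV error is within one standard error of the
    minimum mean CV error."""
    mean = cv_errors.mean(axis=1)
    se = cv_errors.std(axis=1, ddof=1) / np.sqrt(cv_errors.shape[1])
    best = mean.argmin()
    threshold = mean[best] + se[best]
    return int(np.argmax(mean <= threshold))   # first model under threshold
```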

  3. Evaluation of estimation methods and base data uncertainties for critical loads of acid deposition in Japan

    NARCIS (Netherlands)

    Shindo, J.; Bregt, A.K.; Hakamata, T.


    A simplified steady-state mass balance model for estimating critical loads was applied to a test area in Japan to evaluate its applicability. Three criteria for acidification limits were used. Mean values and spatial distribution patterns of the critical load values calculated by these criteria differed.

  4. Comparing four methods to estimate usual intake distributions. (United States)

    Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P


    The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. 
Furthermore, whereas the ISU, NCI and SPADE Methods produced

  5. Separation methods for estimating octanol-water partition coefficients. (United States)

    Poole, Salwa K; Poole, Colin F


    Separation methods for the indirect estimation of the octanol-water partition coefficient (logP) are reviewed with an emphasis on high throughput methods with a wide application range. The solvation parameter model is used to identify suitable separation systems for estimating logP in an efficient manner that negates the need for empirical trial and error experiments. With a few exceptions, systems based on reversed-phase chromatography employing chemically bonded phases are shown to be unsuitable for estimating logP for compounds of diverse structure. This is because the fundamental properties responsible for chromatographic retention tend to be different to those responsible for partition between octanol and water, especially the contribution from hydrogen bonding interactions. On the other hand, retention in several micellar and microemulsion electrokinetic chromatography systems is shown to be highly correlated with the octanol-water partition coefficient. These systems are suitable for the rapid, high throughput determination of logP for neutral, weakly acidic, and weakly basic compounds. For compounds with a permanent charge, electrophoretic migration and electrostatic interactions with the stationary phase results in inaccurate estimation of partition coefficients. The experimental determination of solute descriptors offers an alternative approach for estimating logP, and other biopartitioning properties. A distinct advantage of this approach is that once the solute descriptors are known, solute properties can be estimated for any distribution or transport system for which a solvation parameter model has been established.
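The solvation parameter model referred to above is a linear free-energy relationship, logP = c + eE + sS + aA + bB + vV, fitted by multiple linear regression to solutes with known descriptors. A sketch with fabricated descriptor values (the coefficients are rounded figures close to published octanol-water values; real descriptor compilations would be used in practice):

```python
import numpy as np

# Hypothetical solute descriptor matrix; columns are E, S, A, B, V.
X = np.array([
    [0.80, 0.90, 0.26, 0.33, 0.92],
    [0.61, 0.52, 0.00, 0.48, 1.02],
    [1.34, 1.01, 0.00, 0.26, 1.41],
    [0.29, 0.42, 0.37, 0.48, 0.77],
    [0.72, 0.65, 0.00, 0.45, 1.24],
    [0.94, 1.20, 0.60, 0.59, 0.87],
    [0.40, 0.25, 0.17, 0.20, 1.58],
    [1.10, 0.78, 0.05, 0.71, 1.10],
])
true_coefs = np.array([0.09, 0.56, -1.05, 0.03, -3.46, 3.81])  # c, e, s, a, b, v

design = np.column_stack([np.ones(len(X)), X])   # intercept column for c
logp = design @ true_coefs                       # synthetic "measured" logP

# Least-squares fit recovers the system coefficients from the training set.
coefs, *_ = np.linalg.lstsq(design, logp, rcond=None)
```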

  6. A new method for designing floor slabs on grade due to the difficulty of applying simplified design methods, amongst them being the Portland Cement Association (PCA) and Wire Reinforcement Institute (WRI) methods

    Directory of Open Access Journals (Sweden)

    Hugo Ernesto Camero Sanabrial


    Full Text Available This article presents a methodology for designing slabs on grade for industrial floors where there is an eccentricity between the slab centroid and the centre of gravity of the loads on the loaded axle of forklift trucks travelling over the floor. An example was used for analysing how the Portland Cement Association (PCA) and Wire Reinforcement Institute (WRI) methods are inadequate for designing floors subjected to this condition. The new proposal for designing slabs on grade for industrial floors has been called the Camero method. An example of an industrial floor designed to sustain an infinite number of load applications (a 50-year life) was compared across the Camero method and the PCA and WRI simplified methods. Industrial floors are capable of sustaining an infinite number of load applications (a 50-year life) if designed with the Camero method; on the other hand, if designed using the PCA and WRI methods they will only last one year (in this example the number of axle load applications in a 1-year period was equal to the number of allowable repetitions), as they will not be able to sustain an infinite number of load applications. It was concluded that designing plain (unreinforced) concrete slabs on grade according to the PCA and WRI methods leads to slab fatigue, even though their design assumed that extreme fibre stress would not exceed fifty percent (50%) of the static modulus of rupture of the concrete and that the slabs would sustain an infinite number of load repetitions (an infinite amount of forklift truck traffic).
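The fatigue argument in the record can be made concrete with the PCA fatigue relations as they are commonly quoted for plain concrete: below a stress ratio of 0.45 the allowable number of load repetitions is treated as unlimited, and above it allowable repetitions fall off steeply. A sketch (coefficients as commonly published; verify against the current design guide before use):

```python
import math

def allowable_load_repetitions(stress_ratio):
    """Allowable load repetitions vs. stress ratio SR = flexural stress /
    modulus of rupture, per the commonly quoted PCA fatigue criterion."""
    if stress_ratio <= 0.45:
        return math.inf                                    # unlimited life
    if stress_ratio < 0.55:
        return (4.2577 / (stress_ratio - 0.4325)) ** 3.268
    return 10.0 ** (11.737 - 12.077 * stress_ratio)
```

This is why keeping extreme fibre stress at or below about half the modulus of rupture is associated with an effectively infinite number of repetitions.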

  7. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Full Text Available Source number estimation methods for single-channel signals are investigated, and improvements for each method are suggested in this work. Firstly, the single-channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR, but it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. In order to resolve these problems and contradictions, this work makes substantial improvements to both methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is applied to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is improved considerably.
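The MDL criterion used here operates on the eigenvalues of the data covariance matrix: for each candidate source count k it compares the geometric and arithmetic means of the presumed noise eigenvalues and adds a complexity penalty. A compact sketch of the classic Wax-Kailath form (the delay-embedding of the single-channel data into multi-channel form is assumed to have happened upstream):

```python
import numpy as np

def mdl_source_number(eigvals, n_snapshots):
    """MDL source-count estimate from covariance eigenvalues (descending).

    For each k, the log geometric/arithmetic mean ratio of the (p - k)
    smallest eigenvalues measures how 'flat' the noise subspace is; the
    second term penalizes model complexity. The minimizer is the estimate.
    """
    p = len(eigvals)
    scores = []
    for k in range(p):
        tail = eigvals[k:]
        geo = np.exp(np.mean(np.log(tail)))
        arith = np.mean(tail)
        scores.append(-n_snapshots * (p - k) * np.log(geo / arith)
                      + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(scores))
```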

  8. Paradigms and commonalities in atmospheric source term estimation methods (United States)

    Bieringer, Paul E.; Young, George S.; Rodriguez, Luna M.; Annunzio, Andrew J.; Vandenberghe, Francois; Haupt, Sue Ellen


    Modeling the downwind hazard area resulting from the unknown release of an atmospheric contaminant requires estimating the characteristics of a localized source from concentration or dosage observations and using this information to model the subsequent transport and dispersion of the contaminant. This source term estimation (STE) problem is mathematically challenging because airborne material concentration observations and wind data are typically sparse and the turbulent wind field is chaotic. Methods for addressing this problem fall into three general categories: forward modeling, inverse modeling, and nonlinear optimization. Because numerous methods have been developed on various foundations, they often have a disparate nomenclature. This situation poses challenges to those facing a new source term estimation problem, particularly when selecting the best method for the problem at hand. There is, however, much commonality between many of these methods, especially within each category. Here we seek to address the difficulties encountered when selecting an STE method by providing a synthesis of the various methods that highlights commonalities, potential opportunities for component exchange, and lessons learned that can be applied across methods.

  9. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya


    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
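The core of the sliding-window correction can be sketched briefly: each position is re-labelled with the class that best matches its neighbourhood under the proximity matrix, and with an identity proximity matrix this reduces to a windowed majority vote. A minimal version (the window size and matrices are illustrative; a learned proximity matrix, as in the second case study, would replace the identity):

```python
import numpy as np

def correct_labels(labels, proximity, window=3):
    """Sliding-window correction of nominal class labels.

    proximity[i, j] scores how compatible class i is with class j
    (symmetric, large on the diagonal). Each position is re-labelled
    with the class whose summed proximity to the labels inside the
    window is largest; no ordering of the classes is assumed.
    """
    half = window // 2
    out = labels.copy()
    for i in range(len(labels)):
        ctx = labels[max(0, i - half): i + half + 1]   # window, clipped at ends
        scores = proximity[:, ctx].sum(axis=1)
        out[i] = int(np.argmax(scores))
    return out
```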

  10. FWS Interest Simplified (United States)

    US Fish and Wildlife Service, Department of the Interior — These boundaries are simplified from the U.S. Fish and Wildlife Service Real Estate Interest data layer containing polygons representing tracts of land (parcels) in...

  11. Method for estimating spin-spin interactions from magnetization curves (United States)

    Tamura, Ryo; Hukushima, Koji


    We develop a method to estimate the spin-spin interactions in the Hamiltonian from the observed magnetization curve by machine learning based on Bayesian inference. In our method, plausible spin-spin interactions are determined by maximizing the posterior distribution, which is the conditional probability of the spin-spin interactions in the Hamiltonian for a given magnetization curve with observation noise. The conditional probability is obtained with the Markov chain Monte Carlo simulations combined with an exchange Monte Carlo method. The efficiency of our method is tested using synthetic magnetization curve data, and the results show that spin-spin interactions are estimated with a high accuracy. In particular, the relevant terms of the spin-spin interactions are successfully selected from the redundant interaction candidates by the l1 regularization in the prior distribution.
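The role of the l1 prior can be illustrated with a linearized toy: if each column of a design matrix holds the magnetization response attributed to one candidate coupling, l1-regularized least squares drives the irrelevant couplings to exactly zero. A NumPy-only proximal-gradient (ISTA) sketch; this is a stand-in for the sparsity mechanism only, not a reproduction of the paper's Bayesian exchange Monte Carlo machinery:

```python
import numpy as np

def lasso_ista(X, y, lam, steps=5000):
    """l1-regularized least squares via ISTA (proximal gradient descent):
    minimize 0.5 * ||y - X w||^2 + lam * ||w||_1.
    The soft-threshold step zeroes out couplings the data do not support."""
    w = np.zeros(X.shape[1])
    lr = 1.0 / np.linalg.norm(X, 2) ** 2   # step size from spectral norm of X
    for _ in range(steps):
        grad = X.T @ (X @ w - y)
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
    return w
```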

  12. A Quick Method for Estimating Vehicle Characteristics Appropriate for Continuous Thrust Round Trip Missions Within the Solar System (United States)

    Emrich, Bill


    A simple method of estimating vehicle parameters appropriate for interplanetary travel can provide a useful tool for evaluating the suitability of particular propulsion systems for various space missions. Although detailed mission analyses for interplanetary travel can be quite complex, it is possible to derive fairly simple correlations which provide reasonable trip time estimates to the planets. In the present work, it is assumed that a constant-thrust propulsion system propels a spacecraft on a round trip mission having equidistant outbound and inbound legs, in which the spacecraft accelerates during the first portion of each leg of the journey and decelerates during the last portion. Comparisons are made with numerical calculations from low-thrust trajectory codes to estimate the range of applicability of the simplified correlations.
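The accelerate-then-decelerate profile described above has a closed form: starting and ending each leg at rest, a leg of length L at constant acceleration a takes t = 2√(L/a), so the round trip takes 4√(L/a). A sketch of that estimate (straight-line legs, with planetary motion and gravity losses ignored; the distance and acceleration in the test are illustrative):

```python
import math

AU = 1.496e11  # astronomical unit, meters

def round_trip_time(distance_au, accel):
    """Round-trip time (s) for equidistant out/back legs: accelerate over
    the first half of each leg, decelerate over the second half."""
    leg = distance_au * AU
    t_leg = 2.0 * math.sqrt(leg / accel)   # L/2 accelerating + L/2 braking
    return 2.0 * t_leg                     # outbound leg + inbound leg

def trip_time_days(distance_au, accel):
    return round_trip_time(distance_au, accel) / 86400.0
```

The scaling built into the formula is visible directly: quadrupling the acceleration halves the trip time.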

  13. Approximate relative fatigue life estimation methods for thin-walled monolithic ceramic crowns. (United States)

    Nasrin, Sadia; Katsube, Noriko; Seghi, Robert R; Rokhlin, Stanislav I


    The objective is to establish an approximate relative fatigue life estimation method under simulated mastication load for thin-walled monolithic restorations. Experimentally measured fatigue parameters of fluormica, leucite, lithium disilicate and yttrium-stabilized zirconia in the existing literature were expressed in terms of the maximum cyclic stress and the stress corresponding to the initial crack size prior to N loading cycles to assess their differences. Assuming that failures mostly originate from the high-stress region, an approximate restoration life method was explored by ignoring the multi-axial nature of the stress state. Experiments utilizing a simple trilayer restoration model with the lithium disilicate (LD) ceramic were performed to test the model validity. Ceramic fatigue was found to be similar for the clinically relevant loading range and mastication frequency, resulting in the development of an approximate fatigue equation that is universally applicable to a wide range of dental ceramic materials. The equation was incorporated into the approximate restoration life estimation, leading to a simple expression in terms of fast fracture parameters, the high stress area ΔA, the high stress averaged over ΔA and N. The developed method was preliminarily verified by the experiments. The impact of fast fracture parameters on the restoration life was separated from other factors, and the importance of surface preparation was manifested in the simplified equation. Both the maximum stress and the area of the high stress region were also shown to play critical roles. While nothing can replace actual clinical studies, this method could provide a reasonable preliminary estimation of relative restoration life. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  14. A personalized-model-based central aortic pressure estimation method. (United States)

    Jiang, Sheng; Zhang, Zhi-Qiang; Wang, Fang; Wu, Jian-Kang


    Central Aortic Pressure (CAP) can be used to predict cardiovascular structural damage and cardiovascular events, and the development of simple, well-validated and non-invasive methods for CAP waveform estimation is critical to facilitate the routine clinical application of CAP. Existing widely applied methods, such as the generalized transfer function (GTF-CAP) method and the N-Point Moving Average (NPMA-CAP) method, are based on clinical practice and lack a mathematical foundation. These methods also have an inherent drawback: they are not personalized and therefore miss individual aortic characteristics. To overcome this pitfall, we present a personalized-model-based central aortic pressure estimation method (PM-CAP) in this paper. PM-CAP has a mathematical foundation: a human aortic network model is proposed which is developed based on viscous fluid mechanics theory and can be personalized conveniently. By measuring the pulse wave at the proximal and distal ends of the radial artery, a least-squares method is then used to estimate patient-specific circuit parameters. Thus the central aortic pulse wave can be obtained by calculating the transfer function between the radial artery and the central aorta. An invasive validation study with 18 subjects comparing PM-CAP with direct aortic root pressure measurements during percutaneous transluminal coronary intervention was carried out at the Beijing Hospital. The experimental results show better performance of the PM-CAP method compared to the GTF-CAP and NPMA-CAP methods, which illustrates the feasibility and effectiveness of the proposed method. Copyright © 2016 Elsevier Ltd. All rights reserved.
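    As an illustration of the least-squares identification step only (not the paper's personalized viscous-fluid aortic network model), a transfer function between two simultaneously recorded signals can be fitted as an FIR filter by ordinary least squares. The function name and synthetic signals below are assumptions.

```python
import numpy as np

def fit_fir(u, y, order):
    """Least-squares FIR transfer-function estimate between two
    simultaneously recorded signals: find h minimizing ||y - conv(u, h)||
    over the first `order` taps. A generic system-identification sketch
    of the kind of least-squares fit the abstract describes."""
    n = len(u)
    # Regression matrix: row t holds u[t], u[t-1], ..., u[t-order+1]
    U = np.zeros((n, order))
    for j in range(order):
        U[j:, j] = u[:n - j]
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    return h

# Synthetic check: pass an input through a known 3-tap system and recover it
rng = np.random.default_rng(4)
u = rng.normal(size=500)
h_true = np.array([0.5, 0.3, 0.2])
y = np.convolve(u, h_true)[:500] + 0.01 * rng.normal(size=500)
h_hat = fit_fir(u, y, order=3)
print(np.round(h_hat, 2))
```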

  15. Estimation of water percolation by different methods using TDR

    Directory of Open Access Journals (Sweden)

    Alisson Jadavi Pereira da Silva


    Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate the percolation estimated by time domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods and by the change in water storage in the soil profile at 16 points of moisture measurement at different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was installed in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated based on continuous monitoring with TDR, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation over short time intervals and exemption from predetermining soil hydraulic properties such as water retention and hydraulic conductivity. The percolation estimates obtained by the Darcy-Buckingham equation using the K(θ) function predicted by the method of Hillel et al. (1972) were compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
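    The storage-change estimate described above can be sketched as follows: profile storage is trapezoid-integrated from TDR water contents, and the percolation flux over an interval is the storage loss divided by the elapsed time (evaporation suppressed by the plastic cover). The readings below are hypothetical.

```python
import numpy as np

def storage_mm(theta, depths_cm):
    """Profile water storage (mm) from volumetric water contents theta
    (cm3/cm3) measured at the given depths (cm), by trapezoidal
    integration over depth."""
    th = np.asarray(theta, dtype=float)
    z = np.asarray(depths_cm, dtype=float)
    cm_water = np.sum((th[1:] + th[:-1]) / 2.0 * np.diff(z))
    return float(cm_water) * 10.0            # cm of water -> mm

def percolation_rate(theta_t1, theta_t2, depths_cm, dt_h):
    """Percolation below the profile (mm/h) from the storage change
    between two TDR readings, assuming drainage is the only loss
    (soil covered, no evaporation), as in the lysimeter trial."""
    dS = storage_mm(theta_t1, depths_cm) - storage_mm(theta_t2, depths_cm)
    return dS / dt_h

# Hypothetical readings at 4 of the 16 TDR depths, 1 h apart:
depths = np.array([10.0, 30.0, 50.0, 70.0])   # cm
t1 = np.array([0.40, 0.39, 0.38, 0.37])       # cm3/cm3
t2 = np.array([0.38, 0.38, 0.37, 0.37])
print(percolation_rate(t1, t2, depths, dt_h=1.0), "mm/h")
```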

  16. Lidar method to estimate emission rates from extended sources (United States)

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  17. A simple method for estimating the convection-dispersion equation ...

    African Journals Online (AJOL)

    The convection-dispersion equation (CDE) is the classical approach for modeling solute transport in porous media. So, estimating parameters became a key problem in CDE. For statistical method, some problems such as parameter uniqueness are still unsolved because of more factors. Due to the advantage of clear ...

  18. Hydrological drought. Processes and estimation methods for streamflow and groundwater

    NARCIS (Netherlands)

    Tallaksen, L.; Lanen, van H.A.J.


    Hydrological drought is a textbook for university students, practising hydrologists and researchers. The main scope of this book is to provide the reader with a comprehensive review of processes and estimation methods for streamflow and groundwater drought. It includes a qualitative conceptual

  19. Comparing different methods for estimating radiation dose to the conceptus

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)


    To compare different methods available in the literature for estimating radiation dose to the conceptus (D{sub conceptus}) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D{sub conceptus} was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, dose to the uterus, D{sub uterus}, was calculated as an alternative for D{sub conceptus}, with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D{sub uterus} and D{sub conceptus} was studied. Dose to the conceptus and percent error with respect to D{sub conceptus} was also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5% but applicability was limited. Estimating an accurate D{sub conceptus} requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)

  20. A study of methods to estimate debris flow velocity (United States)

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.


    Debris flow velocities are commonly back-calculated from superelevation events, which require subjective estimates of the radii of curvature of bends in the debris flow channel, or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
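    The superelevation back-calculation the paper critiques is commonly done with the forced-vortex equation; a minimal sketch, with hypothetical channel geometry and the empirical correction factor left at 1:

```python
import math

def superelevation_velocity(radius_m, dh_m, width_m, k=1.0, g=9.81):
    """Debris-flow velocity back-calculated from a superelevation event
    with the standard forced-vortex equation v = sqrt(k g Rc dh / b).
    Rc: radius of curvature of the bend (the subjective input the paper
    discusses), dh: flow-surface elevation difference across the channel,
    b: flow width, k: empirical correction factor (assumed 1 here)."""
    return math.sqrt(k * g * radius_m * dh_m / width_m)

# Hypothetical bend: Rc = 50 m, dh = 1.2 m, b = 8 m
print(superelevation_velocity(50.0, 1.2, 8.0), "m/s")
```

The sensitivity to Rc is immediate from the square root: a 20 % error in the subjectively estimated radius moves the velocity by about 10 %.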

  1. A comparison of two popular statistical methods for estimating the ...

    Indian Academy of Sciences (India)

    Keywords. coalescence; Monte-Carlo simulation; evolution. ... Different evolutionary scenarios were simulated and the estimation procedures were evaluated. We have found that for both methods ... Partha P. Majumder1. Anthropology & Human Genetics Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700 108, India ...

  2. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.


    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  3. Dual ant colony operational modal analysis parameter estimation method (United States)

    Sitarz, Piotr; Powałka, Bartosz


    Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  4. Accurate position estimation methods based on electrical impedance tomography measurements (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.


    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  5. Methods for Measuring and Estimating Methane Emission from Ruminants

    Directory of Open Access Journals (Sweden)

    Jørgen Madsen


    This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods (respiration chambers, the SF6 tracer technique and the in vitro gas production technique) and the newer CO2 methods are described. Model estimations, which are used to calculate national budgets and single-cow enteric emission from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim, equipment, knowledge, time and money available, but interpretation of results obtained with a given method can be improved if knowledge about the disadvantages and advantages is used in the planning of experiments.

  6. Power Source Status Estimation and Drive Control Method for Autonomous Decentralized Hybrid Train (United States)

    Furuya, Takemasa; Ogawa, Kenichi; Yamamoto, Takamitsu; Hasegawa, Hitoshi

    A hybrid control system has two main functions: power sharing and equipment protection. In this paper, we discuss the design, construction and testing of a drive control method for an autonomous decentralized hybrid train with 100-kW-class fuel cells (FC) and 36-kWh lithium-ion batteries (Li-Batt). The main objectives of this study are to identify the operation status of the power sources on the basis of the input voltage of the traction inverter and to estimate the maximum traction power on the basis of the power-source status. The proposed control method is useful in preventing overload operation of the onboard power sources in an autonomous decentralized hybrid system that has a flexible main circuit configuration and few control signal lines. Further, with this method, the initial cost of a hybrid system can be reduced and the retrofit design of the hybrid system can be simplified. The effectiveness of the proposed method is experimentally confirmed by using a real-scale hybrid train system.

  7. Increasing confidence in mass discharge estimates using geostatistical methods. (United States)

    Cai, Zuansi; Wilson, Ryan D; Cardiff, Michael A; Kitanidis, Peter K


    Mass discharge is one metric rapidly gaining acceptance for assessing the performance of in situ groundwater remediation systems. Multilevel sampling transects provide the data necessary to make such estimates, often using the Thiessen Polygon method. This method, however, does not provide a direct estimate of uncertainty. We introduce a geostatistical mass discharge estimation approach that involves a rigorous analysis of data spatial variability and selection of an appropriate variogram model. High-resolution interpolation was applied to create a map of measurements across a transect, and the magnitude and uncertainty of mass discharge were quantified by conditional simulation. An important benefit of the approach is quantified uncertainty of the mass discharge estimate. We tested the approach on data from two sites monitored using multilevel transects. We also used the approach to explore the effect of lower spatial monitoring resolution on the accuracy and uncertainty of mass discharge estimates. This process revealed two important findings: (1) appropriate monitoring resolution is that which yielded an estimate comparable with the full dataset value, and (2) high-resolution sampling yields a more representative spatial data structure descriptor, which can then be used via conditional simulation to make subsequent mass discharge estimates from lower resolution sampling of the same transect. The implication of the latter is that a high-resolution multilevel transect needs to be sampled only once to obtain the necessary spatial data descriptor for a contaminant plume exhibiting minor temporal variability, and thereafter less spatially intensely to reduce costs. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
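    The Thiessen polygon baseline mentioned above can be sketched in a few lines: each multilevel sample's concentration is multiplied by the local specific discharge and the polygon area it represents, then summed across the transect. The geostatistical machinery the paper adds (variogram fitting, conditional simulation) is omitted; all numbers below are hypothetical.

```python
import numpy as np

def thiessen_mass_discharge(conc, darcy_flux, areas):
    """Baseline Thiessen-polygon mass discharge across a transect:
    each sample's concentration (g/m3) times the local Darcy flux (m/d)
    times the polygon area it represents (m2), summed. This is the
    no-uncertainty estimate that the geostatistical approach improves on
    by attaching a quantified uncertainty."""
    return float(np.sum(np.asarray(conc) *
                        np.asarray(darcy_flux) *
                        np.asarray(areas)))

# Hypothetical 4-point multilevel transect:
conc = [12.0, 40.0, 5.0, 0.5]     # g/m3
q    = [0.05, 0.05, 0.08, 0.08]   # m/d
area = [2.0, 2.0, 1.5, 1.5]       # m2
print(thiessen_mass_discharge(conc, q, area), "g/d")
```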

  8. Seasonal adjustment methods and real time trend-cycle estimation

    CERN Document Server

    Bee Dagum, Estela


    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...

  9. Flood risk assessment in France: comparison of extreme flood estimation methods (EXTRAFLO project, Task 7) (United States)

    Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.


    In flood risk assessment the methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community the probabilistic methods are historically preferred to the deterministic ones. Presently a French research project named EXTRAFLO (RiskNat Program of the French National Research Agency) deals with the design values for extreme rainfall and floods. The objective of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, to obtain a better grasp of their respective fields of application. In this framework we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), the Agregee method (Margoum, 1992) and the Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013; Garavaglia et al., 2010) and the Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than standard flood frequency analysis. Another interesting result is that the differences between the extreme flood quantile estimations of the compared methods increase with return period, staying relatively moderate up to 100-year return levels. Results and discussions are here illustrated throughout with the example
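    A minimal sketch of the standard flood frequency analysis baseline (method i above), using a method-of-moments Gumbel fit rather than the maximum-likelihood fits practitioners would normally use; the peak-discharge series is invented for illustration.

```python
import math

def gumbel_return_level(annual_maxima, T):
    """T-year flood quantile from a Gumbel fit to annual maxima,
    using method-of-moments parameter estimates:
        beta = s * sqrt(6) / pi,   mu = mean - 0.5772 * beta
        z_T  = mu - beta * ln(-ln(1 - 1/T))
    (0.5772 is the Euler-Mascheroni constant)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772 * beta
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Hypothetical annual peak discharges (m3/s):
peaks = [120, 95, 180, 150, 210, 130, 160, 105, 140, 175]
print(gumbel_return_level(peaks, T=100))
```

Note how strongly the quantile grows with T: this is the divergence between methods at large return periods that the abstract reports.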

  10. Estimate capital for operational risk using peak over threshold method (United States)

    Saputri, Azizah Anugrahwati; Noviyanti, Lienda; Soleh, Achmad Zanbar


    Operational risk is inherent in bank activities. To cover this risk a bank reserves a fund called capital. A bank often uses the Basic Indicator Approach (BIA), the Standardized Approach (SA) or the Advanced Measurement Approach (AMA) for estimating the capital amount. BIA and SA are less objective than AMA, since BIA and SA use non-actual loss data while AMA uses the actual losses. In this research, we define the capital as an OpVaR (i.e. the worst loss at a given confidence level), which will be estimated by the Peak Over Threshold method.
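    A sketch of the peaks-over-threshold capital estimate: once a generalized Pareto distribution has been fitted to the losses exceeding a threshold (the fit itself, e.g. by maximum likelihood with scipy.stats.genpareto.fit, is omitted here), the OpVaR follows from the standard EVT quantile formula. All parameter values below are assumed, not taken from the paper.

```python
import math

def pot_var(u, xi, beta, n, n_exceed, q):
    """Operational VaR at confidence q from a peaks-over-threshold fit:
    losses above threshold u are modeled as generalized Pareto with
    shape xi and scale beta; n is the total number of losses and
    n_exceed the number above u. Standard EVT quantile formula:
        VaR_q = u + (beta/xi) * (((n/n_exceed) * (1-q))**(-xi) - 1)
    with the exponential-tail limit used when xi is (numerically) 0."""
    tail = (n / n_exceed) * (1.0 - q)
    if abs(xi) < 1e-12:
        return u - beta * math.log(tail)
    return u + (beta / xi) * (tail ** (-xi) - 1.0)

# Hypothetical fitted values for a heavy-tailed loss sample:
capital = pot_var(u=1e6, xi=0.4, beta=5e5, n=2500, n_exceed=120, q=0.999)
print(capital)
```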

  11. IC space radiation effects experimental simulation and estimation methods

    CERN Document Server

    Chumakov, A I; Telets, V A; Gerasimov, V F; Yanenko, A V; Sogoyan, A V


    Laboratory test simulation methods are developed for predicting IC response to space radiation. The minimum set of radiation simulators is proposed to investigate IC failures and upsets under space radiation. An accelerated test technique for estimating MOS IC degradation under low-intensity irradiation is developed, taking into account temperature variations as well as latent degradation effects. Two-parameter cross-section functions are adapted to describe ion- and proton-induced single event upsets. Non-focused laser irradiation is found to be applicable for single event latchup threshold estimation.

  12. An Adaptive Background Subtraction Method Based on Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    Mignon Park


    In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation, is proposed. To reduce the burden of image storage, we modify the original KDE method by using the first frame to initialize it and updating it at every subsequent frame, controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods to verify its performance.
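    The per-pixel KDE test can be sketched as follows; this version uses a fixed threshold and a fixed kernel bandwidth rather than the paper's adaptive versions, and the frame data are synthetic.

```python
import numpy as np

def kde_foreground(frame, history, sigma=5.0, thresh=1e-3):
    """Per-pixel nonparametric background test: estimate the density of
    the current intensity under a Gaussian kernel density built from the
    last N frames, and flag pixels whose density falls below a threshold
    as foreground. A static-threshold sketch of the adaptive scheme."""
    hist = np.asarray(history, dtype=float)          # (N, H, W)
    diff = frame[None, :, :] - hist                  # (N, H, W)
    k = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    density = k.mean(axis=0)                         # (H, W)
    return density < thresh                          # True = foreground

# Tiny synthetic example: background ~100, one pixel jumps to 200
rng = np.random.default_rng(1)
history = 100 + rng.normal(0, 2, size=(20, 4, 4))
frame = 100 + rng.normal(0, 2, size=(4, 4))
frame[2, 2] = 200.0                                  # "moving object"
mask = kde_foreground(frame, history)
print(mask)
```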


    Directory of Open Access Journals (Sweden)

    V. L. Adzhienko


    The purpose of this study was to determine the probability of bankruptcy by various methods in order to predict the financial crisis of a pharmacy organization. The probability of pharmacy organization bankruptcy was estimated using W. Beaver's method as adopted in the Russian Federation, together with an integrated assessment of financial stability based on scoring analysis. The results obtained by the different methods are comparable and show that the risk of bankruptcy of the pharmacy organization is small.

  14. Dental age estimation using Willems method: A digital orthopantomographic study

    Directory of Open Access Journals (Sweden)

    Rezwana Begum Mohammed


    In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who met the study criteria were obtained. Development of the mandibular teeth on the left quadrant (from the central incisor to the second molar) was assessed and DA was estimated using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years, and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years and is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate a mean DA using the Willems method, and also an estimated age range for an individual of unknown CA.

  15. Estimating the Capacity of Urban Transportation Networks with an Improved Sensitivity Based Method

    Directory of Open Access Journals (Sweden)

    Muqing Du


    The throughput of a given transportation network is always of interest to the traffic administrative department, so as to evaluate the benefit of a transportation construction or expansion project before its implementation. The model of transportation network capacity, formulated as a mathematical program with equilibrium constraints (MPEC), well defines this problem. For practical applications, a modified sensitivity analysis based (SAB) method is developed to estimate the solution of this bilevel model. The highly efficient origin-based (OB) algorithm is extended for the precise solution of the combined model which is integrated in the network capacity model. The sensitivity analysis approach is also modified to simplify the inversion of the Jacobian matrix in large-scale problems. The solution produced in every iteration of SAB is restrained to be feasible to guarantee the success of the heuristic search. From the numerical experiments, the accuracy of the derivatives for the linear approximation can significantly affect the convergence of the SAB method. The results also show that the proposed method can obtain good suboptimal solutions from different starting points in the test examples.

  16. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink


    Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
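    One family of double integration schemes benchmarked in such pipelines can be sketched as trapezoidal integration with a linear velocity dedrift anchored on the zero-velocity assumption at both ends of a stride. Orientation estimation and gravity removal are assumed to have happened already, and the stride signal below is synthetic.

```python
import numpy as np

def integrate_stride(acc, fs):
    """Double-integration stage of a foot-trajectory pipeline sketch:
    world-frame, gravity-free acceleration for one stride is integrated
    (trapezoidal rule) to velocity, the velocity drift is removed
    linearly using the zero-velocity assumption at both stride ends
    (foot flat on the ground), and integrated again to position."""
    dt = 1.0 / fs
    vel = np.cumsum((acc[1:] + acc[:-1]) / 2.0, axis=0) * dt
    vel = np.vstack([np.zeros((1, acc.shape[1])), vel])
    # Linear dedrifting: force v = 0 at the last sample as well
    drift = np.linspace(0, 1, len(vel))[:, None] * vel[-1]
    vel = vel - drift
    pos = np.cumsum((vel[1:] + vel[:-1]) / 2.0, axis=0) * dt
    pos = np.vstack([np.zeros((1, acc.shape[1])), pos])
    return vel, pos

# Synthetic 1-D stride at 100 Hz: accelerate forward, then brake
fs = 100
t = np.linspace(0, 1, fs + 1)
acc = np.where(t < 0.5, 2.0, -2.0)[:, None]   # m/s^2, one axis
vel, pos = integrate_stride(acc, fs)
print(pos[-1])   # stride length estimate
```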

  17. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore


    Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
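    The building blocks OLS and ORR can be sketched directly from their definitions; MUR itself is not reproduced here, since its exact form is given in the paper. The collinear example shows the stabilizing shrinkage effect.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: beta = (X'X)^(-1) X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ordinary_ridge(X, y, k):
    """ORR estimator: beta = (X'X + kI)^(-1) X'y. The paper's MUR applies
    the same shrinkage construction to the unbiased ridge estimator URR
    instead of to OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Collinear design: x2 is nearly a copy of x1
rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = x1 + 0.01 * rng.normal(size=100)
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=100)

b_ols = ols(X, y)
b_ridge = ordinary_ridge(X, y, k=1.0)
print(b_ols, b_ridge)
```

Under collinearity the OLS coefficients can split wildly between the two near-duplicate columns, while the ridge penalty pulls them back toward the stable, equally shared solution.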

  18. Hexographic Method of Complex Town-Planning Terrain Estimate (United States)

    Khudyakov, A. Ju


    The article deals with the vital problem of a complex town-planning analysis based on the “hexographic” graphic analytic method, makes a comparison with conventional terrain estimate methods and contains examples of the method's application. It describes the author's procedure for estimating restrictions and building a mathematical model which reflects not only conventional town-planning restrictions, but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory's potential. It is possible to use an unlimited number of estimated factors. The method can be used for the integrated assessment of urban areas. In addition, it is possible to use the method for preliminary evaluation of a territory's commercial attractiveness in the preparation of investment projects. The technique produces simple, informative graphics, whose interpretation is straightforward for experts. A definite advantage is that the results can also be readily perceived by those without professional training. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in the Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also

  19. A Subspace Method for Dynamical Estimation of Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Stefanos D. Georgiadis


    Full Text Available It is a challenge in evoked potential (EP) analysis to incorporate prior physiological knowledge for estimation. In this paper, we address the problem of single-channel trial-to-trial estimation of EP characteristics. Prior information about phase-locked properties of the EPs is assessed by means of an estimated signal subspace and eigenvalue decomposition. Then, for situations in which dynamic fluctuations from stimulus to stimulus can be expected, this prior information can be exploited by means of state-space modeling and recursive Bayesian mean square estimation methods (Kalman filtering and smoothing). We demonstrate that a few dominant eigenvectors of the data correlation matrix are able to model trend-like changes of some components of the EPs, and that the Kalman smoother algorithm is to be preferred in terms of better tracking capabilities and mean square error reduction. We also demonstrate the effect of strong artifacts, particularly eye blinks, on the quality of the signal subspace and the EP estimates by means of independent component analysis applied as a preprocessing step on the multichannel measurements.
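
    The subspace step described above can be sketched as follows. This is an illustrative toy with a synthetic EP waveform, hypothetical trial counts and a made-up amplitude drift, not the authors' code; the state-space Kalman smoothing stage is omitted.

```python
import numpy as np

# Sketch: estimate a low-dimensional signal subspace from single-trial
# epochs via eigendecomposition of the data correlation matrix, then
# denoise each trial by projecting it onto that subspace.

rng = np.random.default_rng(0)
n_trials, n_samples = 60, 128
t = np.linspace(0, 1, n_samples)
template = np.sin(2 * np.pi * 5 * t) * np.exp(-4 * t)   # hypothetical EP waveform

# Trials: slowly drifting amplitude of the template plus additive noise
amps = 1.0 + 0.5 * np.linspace(-1, 1, n_trials)
signal = np.outer(amps, template)
X = signal + 0.3 * rng.standard_normal((n_trials, n_samples))

R = X.T @ X / n_trials                 # sample correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
U = eigvecs[:, -3:]                    # a few dominant eigenvectors

X_denoised = X @ U @ U.T               # projection onto the signal subspace

# Projection should reduce the distance to the noise-free trials
err_raw = np.linalg.norm(X - signal)
err_proj = np.linalg.norm(X_denoised - signal)
print(err_proj < err_raw)
```

    The dominant eigenvectors capture the phase-locked template, so trial-to-trial amplitude trends survive the projection while broadband noise is suppressed.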

  20. A scale for estimating the ergonomics of information display methods

    Directory of Open Access Journals (Sweden)

    B. S. Goryachkin


    Full Text Available Modern automated systems use various methods to display information, so a way to estimate the ergonomics of displayed information is desirable. A number of ergonomics scales already exist and are rather widely used in practice; unfortunately, they have a number of shortcomings. We offer a new scale consisting of six points. Each point is an assertion that the respondent rates on a Likert scale according to the extent of his or her agreement with it. The offered scale consists of two independent subscales: a subscale of usability and a subscale of involvement. The offered scale was tested in a survey of 653 people. A factor analysis based on the survey results confirmed the existence of the two factors. Reliability was evaluated by calculating Cronbach's coefficient, which was 0.82 for the scale as a whole and, for the first and second factors respectively, 0.79 and 0.81. This means that the reliability of the entire scale, and also that of the second factor, can be qualified as good, while the reliability of the first factor can be qualified as sufficient. Sensitivity was evaluated using analysis of variance. According to the results of this analysis, the sensitivity of the entire scale and of the first factor is significant, and the sensitivity of the second factor is sufficient. The scale offered in the article is simple and easy to use, and allows two aspects of ergonomics to be estimated: usability and involvement. The conducted research has shown that the offered scale provides good reliability and rather high sensitivity for estimating the ergonomics of information display methods.
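
    The reliability figure quoted above follows the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum(item variances)/variance(total score)). A minimal sketch; the six-item responses below are hypothetical, not the study's data.

```python
from statistics import pvariance

# Hypothetical responses: each row is one respondent's answers
# to the 6 scale items (1-5 Likert values).
items = [
    [4, 5, 4, 3, 4, 5],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 2, 1],
    [4, 4, 5, 4, 5, 4],
]

k = len(items[0])
item_vars = [pvariance([row[j] for row in items]) for j in range(k)]
total_var = pvariance([sum(row) for row in items])

# Cronbach's alpha: internal consistency of the k-item scale
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
```

    In the study the same computation would be applied separately to the whole scale and to each subscale's items.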



    Татьяна Александровна Коркина; Оксана Анатольевна Лапаева; Ольга Сергеевна Шивырялкина


    Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people that enables them to carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to estimating an employee's professionalism both from the position of the external manifestations of the reliability and effectiveness of the work and from the position of the personal chara...

  2. Simplified Two-Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen (United States)

    Molnar, Melissa; Marek, C. John


    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel/air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (> 1 x 10(exp -20) moles/cc) in the mixture, and gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air and for H2/O2 mixtures. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R2 are obtained.
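
    The two-time-step selection and first-order conversion logic can be sketched as follows. The kinetic times used here are hypothetical stand-ins, not NASA's fitted correlations; only the 1e-20 mol/cc switch comes from the abstract.

```python
import math

WATER_SWITCH = 1e-20  # mol/cc switch between the two steps (from the abstract)

def chemical_time(water_conc, tau_avg, tau_inst):
    """Pick the time-averaged or instantaneous chemical kinetic time."""
    return tau_avg if water_conc < WATER_SWITCH else tau_inst

def first_order_conversion(residence_time, tau_chem):
    """Simple first-order reaction: fraction converted after residence_time."""
    return 1.0 - math.exp(-residence_time / tau_chem)

# Early in the reactor (dry mixture) -> time-averaged step
x_early = first_order_conversion(1e-4, chemical_time(0.0, tau_avg=5e-5, tau_inst=1e-5))
# Later (water present above the switch) -> instantaneous step
x_late = first_order_conversion(1e-4, chemical_time(1e-8, tau_avg=5e-5, tau_inst=1e-5))
print(round(x_early, 3), round(x_late, 3))
```

    In the actual method the two tau values would come from the exponential-fit correlations in reactor conditions, and the chemical times would additionally be compared against turbulent mixing times.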

  3. A new method to estimate genetic gain in annual crops

    Directory of Open Access Journals (Sweden)

    Flávio Breseghello


    Full Text Available The genetic gain obtained by breeding programs to improve quantitative traits may be estimated by using data from regional trials. A new statistical method for this estimate is proposed, comprising four steps: a) joint analysis of regional trial data using a generalized linear model to obtain adjusted genotype means and the covariance matrix of these means for the whole studied period; b) calculation of the arithmetic mean of the adjusted genotype means, exclusively for the group of genotypes evaluated each year; c) direct year-to-year comparison of the arithmetic means so calculated; and d) estimation of mean genetic gain by regression. Using the generalized least squares method, a weighted estimate of mean genetic gain during the period is calculated. This method permits a better cancellation of genotype x year and genotype x trial/year interactions, thus resulting in more precise estimates. It can be applied to unbalanced data, allowing the estimation of genetic gain in series of multilocational trials.

  4. Assessment of a simple ultrasonographic method in estimating fetal weight

    Directory of Open Access Journals (Sweden)

    Beigi A


    Full Text Available Estimating fetal weight in utero is necessary for better management of pregnancy and appropriate timing of delivery, especially in high-risk pregnancies. Our purpose was to evaluate a simple method of estimating fetal weight in Iranian pregnant patients and to compare it with a previous Western study. This study was carried out in Arash hospital, Tehran University of Medical Sciences, in 1996-99. In a descriptive-analytic study performed prospectively on 464 pregnant patients, ultrasonic measurement of biparietal diameter (BPD), mean abdominal diameter (MAD), and femur length (FL) was conducted close to delivery. Birth weight was also recorded. Statistical analysis was done using multiple linear regression on the data and Student's t-test for comparison. Mean birth weight was 2320 gr. The outcome of linear regression analysis was the following model: Weight (gr) = 95.8×FL (cm) + 25×MAD (cm) - 15.6×BPD (cm) - 4632.1. The effects of all parameters were statistically significant (P<0.02). A fetal weight estimation table was also developed. T-test analysis showed a significant difference (P<0.05) in some final ranks of the table (weight estimations > 4000 gr) in comparison with the Rose and McCallum study. Our study showed that ultrasound using the sum of BPD, MAD and FL is a precise method of fetal weight estimation. Application of other biometric measurements may be needed for better elucidation, especially in small- and large-for-gestational-age fetuses.

  5. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server


    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...


    Directory of Open Access Journals (Sweden)

    Andrei V. Panteleev


    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization: “Big Bang - Big Crunch”, “Fireworks Algorithm” and “Grenade Explosion Method” for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based upon observations of the mathematical model's behavior. The parameter values are derived by minimizing a criterion that describes the total squared deviation of the state vector coordinates from the precisely observed values at different moments of time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving such problems do not guarantee an exact result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. An algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second a nonlinear mathematical model describes predator-prey dynamics, which characterize the changes in the two populations. For each of the examples, calculation results from all three optimization methods are given, together with recommendations on how to choose the method parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The derived approximate parameter values differ only slightly from the best known solutions, which were obtained in other ways. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and
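
    The criterion-minimization setup can be sketched with plain random search standing in for the named metaheuristics (an assumption for brevity); the model, observation data and box bounds below are hypothetical.

```python
import random

# Sketch: estimate a model parameter k under a box ("parallelepiped")
# constraint by minimizing the total squared deviation between observed
# and simulated states.

def simulate(k, x0=1.0, dt=0.1, steps=20):
    """Explicit Euler integration of dx/dt = -k * x."""
    x, traj = x0, []
    for _ in range(steps):
        x = x + dt * (-k * x)
        traj.append(x)
    return traj

true_traj = simulate(0.8)  # synthetic "observations" generated with k = 0.8

def criterion(k):
    """Total squared error of simulated states vs. observations."""
    return sum((a - b) ** 2 for a, b in zip(simulate(k), true_traj))

# Random search inside the box constraint k in [0, 2]
random.seed(1)
best_k = min((random.uniform(0.0, 2.0) for _ in range(3000)), key=criterion)
print(abs(best_k - 0.8) < 0.01)
```

    A real Big Bang - Big Crunch or Fireworks run would replace the uniform sampler with its own contraction/explosion rules, but the criterion and box constraint stay the same.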

  7. Simplified Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection (United States)

    Molnar, Melissa; Marek, C. John


    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The work presented here results in a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel/air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used at higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering

  8. Simplified Predictive Models for CO2 Sequestration Performance Assessment: Research Topical Report on Task #4 - Reduced-Order Method (ROM) Based Models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta; Jin, Larry; He, Jincong; Durlofsky, Louis


    Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford’s Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between
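
    The POD ingredient of POD-TPWL can be sketched as a truncated SVD of a snapshot matrix. The snapshot data below are synthetic and the two-mode truncation is an assumption; the TPWL linearization and AD-GPRS coupling are not shown.

```python
import numpy as np

# Sketch: collect full-order solution snapshots, take a truncated SVD,
# and express states in a low-dimensional POD basis.

n_cells, n_snapshots = 500, 40
t = np.linspace(0, 1, n_snapshots)
x = np.linspace(0, 1, n_cells)

# Hypothetical training snapshots: two smooth spatial modes evolving in time
snapshots = (np.outer(np.sin(np.pi * x), t)
             + 0.5 * np.outer(np.sin(3 * np.pi * x), t ** 2))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :2]                 # POD basis: 2 modes capture this synthetic data

state = snapshots[:, -1]       # a full-order state vector (500 unknowns)
reduced = Phi.T @ state        # its 2-parameter reduced representation
reconstructed = Phi @ reduced  # back-projection to full order

rel_err = np.linalg.norm(reconstructed - state) / np.linalg.norm(state)
print(rel_err < 1e-10)
```

    In POD-TPWL the reduced coordinates are then evolved with linearizations around stored training trajectories rather than by re-running the full-order simulator.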

  9. Comparison of two methods of estimation of low density lipoprotein cholesterol, the direct versus Friedewald estimation


    Sahu, Suchanda; Chawla, Rajinder; Uppal, Bharti


    Current recommendations of the Adult Treatment Panel and Adolescents Treatment Panel of National Cholesterol Education Program make the low-density lipoprotein cholesterol (LDL-C) levels in serum the basis of classification and management of hypercholesterolemia. A number of direct homogenous assays based on surfactant/solubility principles have evolved in the recent past. This has made LDL-C estimation less cumbersome than the earlier used methods. Here we compared one of the direct homogeno...
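
    The Friedewald estimate named in the title is the standard formula LDL-C = TC - HDL-C - TG/5 (all in mg/dL), conventionally considered invalid at triglycerides of 400 mg/dL or above. A minimal sketch:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald estimate of LDL-C; all concentrations in mg/dL."""
    if triglycerides >= 400:
        raise ValueError("Friedewald formula is not valid for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

print(friedewald_ldl(200, 50, 150))  # -> 120.0
```

    Direct homogeneous assays measure LDL-C without this arithmetic, which is the comparison the study addresses.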

  10. Estimation method for dynamic ductility index of steel structures by using of equivalent linearization method (United States)

    Nakazawa, Shoji; Maeda, Haruki


    The purpose of this paper is to propose a simple evaluation method for a dynamic ductility index dF and a seismic performance index dIs by using the equivalent linearization method. The indices dF and dIs indicate the ductility and seismic performance of a structure corresponding to the critical deformation of its members, and are normally evaluated through nonlinear seismic response analysis. In this study, therefore, an estimation method for dF and dIs using the equivalent linearization method is proposed. By comparing the value of dF calculated by nonlinear analysis of a single-degree-of-freedom system with the value dFest obtained by the proposed estimation method, the validity of the proposed method is discussed.

  11. Natural interpretations in Tobit regression models using marginal estimation methods. (United States)

    Wang, Wei; Griswold, Michael E


    The Tobit model, also known as a censored regression model to account for left- and/or right-censoring in the dependent variable, has been used in many areas of applications, including dental health, medical research and economics. The reported Tobit model coefficient allows estimation and inference of an exposure effect on the latent dependent variable. However, this model does not directly provide overall exposure effects estimation on the original outcome scale. We propose a direct-marginalization approach using a reparameterized link function to model exposure and covariate effects directly on the truncated dependent variable mean. We also discuss an alternative average-predicted-value, post-estimation approach which uses model-predicted values for each person in a designated reference group under different exposure statuses to estimate covariate-adjusted overall exposure effects. Simulation studies were conducted to show the unbiasedness and robustness properties for both approaches under various scenarios. Robustness appears to diminish when covariates with substantial effects are imbalanced between exposure groups; we outline an approach for model choice based on information criterion fit statistics. The methods are applied to the Genetic Epidemiology Network of Arteriopathy (GENOA) cohort study to assess associations between obesity and cognitive function in the non-Hispanic white participants. © The Author(s) 2015.
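
    For a Tobit outcome left-censored at zero, the observed-scale mean that a marginalized model targets is the standard result E[Y] = Phi(mu/sigma)*mu + sigma*phi(mu/sigma), where mu is the latent linear-predictor mean. A minimal sketch of that quantity (not the authors' estimators):

```python
import math

def norm_pdf(z):
    """Standard normal density phi(z)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    """Standard normal CDF Phi(z) via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tobit_mean(mu, sigma):
    """E[max(0, Y*)] for latent Y* ~ N(mu, sigma^2)."""
    z = mu / sigma
    return norm_cdf(z) * mu + sigma * norm_pdf(z)

# With a strongly positive latent mean, censoring barely matters:
print(round(tobit_mean(3.0, 1.0), 3))   # close to 3.0
# With latent mean 0, half the mass piles up at the censoring point:
print(round(tobit_mean(0.0, 1.0), 3))
```

    A direct-marginalization approach models this censored-scale mean as a function of covariates, rather than reporting effects only on the latent scale.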

  12. Simplifying EU environmental legislation

    DEFF Research Database (Denmark)

    Anker, Helle Tegner


    The recent review of the EIA Directive was launched as part of the ‘better regulation’ agenda with the purpose of simplifying procedures and reducing administrative burdens. This was combined with an attempt to further harmonise procedures in order to address shortcomings in the Directive and to overcome...

  13. Ambit determination method in estimating rice plant population density

    Directory of Open Access Journals (Sweden)

    Abu Bakar, B.,


    Full Text Available Rice plant population density is a key indicator in determining the crop setting and fertilizer application rate. It is therefore essential that the population density is monitored to ensure that correct crop management decisions are taken. The conventional method of determining plant population is to manually count the total number of rice plant tillers in a 25 cm x 25 cm square frame. Sampling is done by randomly choosing several different locations within a plot to perform tiller counting. This sampling method is time consuming, labour intensive and costly. An alternative fast estimation method was developed to overcome this issue. The method relies on measuring the outer circumference, or ambit, of the contained rice plants in a 25 cm x 25 cm square frame to determine the number of tillers within that frame. Data samples of rice variety MR219 were collected from rice plots in the Muda granary area, Sungai Limau Dalam, Kedah. The data were taken at 50 days and 70 days after seeding (DAS). A total of 100 data samples were collected for each sampling day. A good correlation was obtained for both 50 DAS and 70 DAS. The model was then verified by taking 100 samples with the latching strap for 50 DAS and 70 DAS. As a result, this technique can be used as a fast, economical and practical alternative to manual tiller counting. The technique can potentially be used in the development of an electronic sensing system to estimate paddy plant population density.

  14. A new method of SC image processing for confluence estimation. (United States)

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina


    Stem cell images are a strong instrument in the estimation of confluency during culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, affect the image quality and, subsequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed algorithm for enhancing the image is based on coupling a novel image denoising method through a BM3D filter with an adaptive thresholding technique for correcting the uneven background. This algorithm provides a faster, easier and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The present scheme proves to be valid for the prediction of the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method used in this paper is capable of processing images of cells that already contain various defects due to either personnel mishandling or microscope limitations. Therefore, it provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Concordance analysis between estimation methods of milk fatty acid content. (United States)

    Rodriguez, Mary Ana Petersen; Petrini, Juliana; Ferreira, Evandro Maia; Mourão, Luciana Regina Mangeti Barreto; Salvian, Mayara; Cassoli, Laerte Dagher; Pires, Alexandre Vaz; Machado, Paulo Fernando; Mourão, Gerson Barreto


    Considering the influence of milk fatty acids on human health, the aim of this study was to compare gas chromatography (GC) and Fourier transform infrared (FTIR) spectroscopy for the determination of these compounds. Fatty acid contents (g/100 g of fat) were obtained by both methods and compared through Pearson's correlation, linear Bayesian regression, and the Bland-Altman method. Despite the high correlations between the measurements (r=0.60-0.92), the regression coefficient values indicated higher measures for palmitic acid, oleic acid, unsaturated and monounsaturated fatty acids and lower values for stearic acid, saturated and polyunsaturated fatty acids estimated by GC in comparison to FTIR results. This inequality was confirmed in the Bland-Altman test, with an average bias varying from -8.65 to 6.91 g/100 g of fat. However, the inclusion of 94% of the samples within the concordance limits suggested that the variability of the differences between the methods was constant throughout the range of measurement. Therefore, despite the inequality between the estimates, the methods displayed the same pattern of milk fat composition, allowing similar conclusions about the milk samples under evaluation. Copyright © 2014 Elsevier Ltd. All rights reserved.
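
    The Bland-Altman quantities reported above (bias and limits of agreement, bias +/- 1.96 SD of the paired differences) can be computed as follows; the paired values are hypothetical, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical paired determinations of one fatty acid (g/100 g of fat)
gc_vals   = [30.1, 28.5, 31.2, 29.8, 30.5, 27.9, 31.0, 29.2]  # method A (e.g. GC)
ftir_vals = [29.4, 29.1, 30.2, 30.5, 29.8, 28.4, 30.1, 29.9]  # method B (e.g. FTIR)

diffs = [a - b for a, b in zip(gc_vals, ftir_vals)]
bias = mean(diffs)                       # average bias between methods
sd = stdev(diffs)                        # SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

# Fraction of pairs falling inside the concordance limits
inside = sum(loa_low <= d <= loa_high for d in diffs) / len(diffs)
print(round(bias, 3), round(loa_low, 3), round(loa_high, 3), inside)
```

    Plotting the differences against the pairwise means, with the bias and limit lines, gives the usual Bland-Altman chart used to judge whether disagreement is constant across the measurement range.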

  16. The Software Cost Estimation Method Based on Fuzzy Ontology

    Directory of Open Access Journals (Sweden)

    Plecka Przemysław


    Full Text Available In the course of the sales process of Enterprise Resource Planning (ERP) systems, it turns out that the standard system must be extended or changed (modified) according to specific customer requirements. Therefore, suppliers face the problem of determining the cost of the additional work. Most methods of cost estimation bring satisfactory results only at the stage of pre-implementation analysis. However, suppliers need to know the estimated cost as early as at the stage of trade talks. During contract negotiations, they expect not only information about the cost of the work, but also about the risk of exceeding this cost or the margin of safety. One method that gives more accurate results at the stage of trade talks is the method based on an ontology of implementation costs. This paper proposes a modification of that method involving the use of fuzzy attributes, classes, instances and relations in the ontology. The result provides not only information about the value of the work, but also about the minimum and maximum expected cost and the most likely range of costs. This solution allows suppliers to negotiate the contract effectively and increases the chances of successful completion of the project.
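
    One way such fuzzy estimates yield a minimum/most likely/maximum total is triangular fuzzy arithmetic. This is a hedged sketch of the idea only, not the paper's ontology machinery; the task costs are hypothetical.

```python
def fuzzy_add(costs):
    """Sum triangular fuzzy numbers (min, most likely, max) component-wise."""
    lo = sum(c[0] for c in costs)
    ml = sum(c[1] for c in costs)
    hi = sum(c[2] for c in costs)
    return lo, ml, hi

# Hypothetical ERP modification tasks, costs in person-days
tasks = [(2, 3, 6), (5, 8, 13), (1, 2, 4)]
lo, ml, hi = fuzzy_add(tasks)
print(lo, ml, hi)  # minimum, most likely, maximum total cost
```

    Reporting the triple rather than a single number is what lets the supplier discuss both the risk of cost overrun and the margin of safety during trade talks.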

  17. Comparison of methods of estimating creatinine clearance in pediatric patients. (United States)

    Padgett, Danielle; Ostrenga, Andrew; Lepard, Lindsey


    A retrospective study was conducted to compare various methods of measuring serum creatinine (SCr) values for use in pediatric renal function assessments, including a method aligned with a recently implemented national SCr testing standard. Demographic, medication-use, and selected laboratory data were collected from the hospital records of a sample of pediatric patients (n = 91) who underwent 12- or 24-hour timed urine collection for determination of creatinine clearance (CLcr) over a 2-year period. Documented CLcr values measured via the timed urine collection method were compared with investigator-calculated estimates of CLcr or glomerular filtration rate (GFR) derived using 3 SCr-based methods: the Counahan-Barratt equation; the original Schwartz equation; and the "bedside IDMS-traceable Schwartz equation," a modified version of the Schwartz equation reflecting the recent shift toward isotope dilution mass spectrometry (IDMS) methods of SCr measurement, which have been found to yield SCr values 10-20% lower than those derived by older methods, potentially resulting in GFR overestimation if traditional formulas for estimating GFR are used. Comparisons of timed urine collection-derived CLcr values with CLcr values derived from the 3 comparator equations indicated significant levels of bias in all cases, with calculated correlation coefficients of 0.71 for the original Schwartz equation, 0.72 for the bedside IDMS-traceable Schwartz equation, and 0.72 for the Counahan-Barratt equation. Pediatric CLcr values calculated using the original Schwartz, bedside IDMS-traceable Schwartz, and Counahan-Barratt equations were well correlated, but none of the 3 equations yielded values that correlated well with CLcr values derived via the gold-standard method of timed urine collection. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
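
    The "bedside IDMS-traceable Schwartz" estimate compared in the study follows the published formula eGFR (mL/min/1.73 m^2) = 0.413 x height (cm) / SCr (mg/dL), with SCr from an IDMS-traceable assay. A minimal sketch; the patient values are hypothetical.

```python
def bedside_schwartz_egfr(height_cm, scr_mg_dl):
    """Bedside IDMS-traceable Schwartz pediatric eGFR estimate."""
    return 0.413 * height_cm / scr_mg_dl

# Hypothetical child: height 140 cm, IDMS-traceable SCr 0.7 mg/dL
print(round(bedside_schwartz_egfr(140, 0.7), 1))  # -> 82.6
```

    The original Schwartz and Counahan-Barratt equations have the same height/SCr form but different constants, which is why using them with IDMS-calibrated SCr values (10-20% lower than older assays) tends to overestimate GFR.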

  18. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated at two very different real world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
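
    A one-dimensional density estimate of the kind such 1d-decompositions build on can be sketched with a plain Gaussian kernel density estimator; the data and bandwidth below are illustrative, not the authors' framework.

```python
import math

def kde_1d(data, x, bandwidth):
    """Gaussian kernel density estimate at point x."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2)
               for d in data) / (n * bandwidth * math.sqrt(2 * math.pi))

data = [1.1, 1.4, 1.3, 2.8, 3.1, 2.9]
grid = [i * 0.01 for i in range(0, 501)]   # 0.00 .. 5.00 covers the data
density = [kde_1d(data, x, bandwidth=0.3) for x in data and grid]

# A Riemann sum over a grid covering the data should be close to 1
total = sum(d * 0.01 for d in density)
print(round(total, 2))
```

    Because each estimate is one-dimensional, no curse-of-dimensionality penalty applies; the framework's contribution is in how projections decompose a high-dimensional problem into such 1d estimates.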

  19. A projection and density estimation method for knowledge discovery. (United States)

    Stanski, Adam; Hellwich, Olaf


    A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated at two very different real world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  20. A new method for the estimation of cell cycle phases. (United States)

    Zajicek, G; Michaeli, Y; Regev, J


    A method is described, which is applicable to cell renewal systems with an anatomical structure in which all cell locations may be uniquely mapped. Its use is demonstrated on the rat incisor inner enamel epithelium, which forms a one cell thick column in the sagittally sectioned tooth. Cells born in the apical part of the column migrate toward the distal end of the tooth, where they mature. As the cells migrate along the column, they traverse the various cell cycle phases. The present study has been designed to estimate the probability of a cell being in a given phase; all cells touching the basement membrane were numbered, and the number of cells separating any two cells was taken as a measure of distance. Since generally all cells move in one direction (lateral cell migration may occur), it is possible to solve the problem with the aid of functions describing the renewal counting stochastic process in which cell distance serves as an independent variable. The method predicts labelled cell and mitotic rates which agree with those estimated in the usual way. It was then utilized to estimate the fraction of cells in G2.

  1. Non-spirographic or non-invasive methods to estimate anaerobic threshold

    Directory of Open Access Journals (Sweden)

    Najera-Longoria Raul Hose


    Full Text Available In the world of sports research, there are different ways to determine physical conditioning, ranging from expensive laboratory-based invasive methods to cheap field-based non-invasive methods. Field-based non-invasive tests maintain good reliability at low cost using physiological parameters such as heart rate, saliva electrolytes or lactate, perceived exertion, and electromyography, among others. These parameters can be used to estimate the anaerobic threshold (AnT) to predict sport performance and redirect training, and can help coaches and athletes to be more competitive. However, each of these parameters has particularities and controversies owing to differing results reported by specialists. These differences may be explained by the testing protocol used, the athletic level of the sample, the starting intensity, or the number of stages, among other factors. Despite this, the parameters still show good reproducibility and are applicable in field-based test protocols. These tests can be used on a large scale and more frequently, provided attention is paid to their correlation with the original invasive tests and to the possible error in the estimation process. Cheaper and simpler tests (instead of subjective estimation of training load) allow more precise planning and adjustment of training volume and intensity for coaches and athletes who have the necessary education but limited funds. Athlete comfort, and the high frequency of testing that non-invasive tests permit, must also be emphasized as advantages during the training evaluation process.

  2. Methods for cost estimation in software project management (United States)

    Briciu, C. V.; Filip, I.; Indries, I. I.


    The speed at which the processes used in software development have changed makes forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but a group of scientists holds that it can be solved using already-known mathematical methods (e.g., multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of estimating overall project costs is presented, together with a description of existing software development process models. In the last part, a basic mathematical model based on genetic programming is proposed, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked to the current reality of software development, taking the software product life cycle as a basis and considering current challenges and innovations in the field. Based on the authors' experience and an analysis of the existing models and product life cycle, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
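For context on the COCOMO 81 model the record starts from: the basic form of that model estimates effort as a power law of code size, E = a·KLOC^b person-months, with published mode-dependent constants. A minimal sketch (using Boehm's basic-COCOMO constants, not the paper's genetically evolved model):

```python
# Basic COCOMO 81: one of the classical baselines that newer
# genetic-programming and neural-network estimators are compared against.
COCOMO81 = {
    # mode: (a, b, c, d) for effort E = a * KLOC**b and schedule T = c * E**d
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def estimate(kloc, mode="organic"):
    """Return (effort in person-months, schedule in months) for a project size."""
    a, b, c, d = COCOMO81[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

effort, months = estimate(32, "organic")   # a 32 KLOC organic-mode project
print(f"{effort:.1f} person-months over {months:.1f} months")
```

A genetic-programming approach such as the one described would search for a functional form and coefficients that fit the PROMISE data better than these fixed constants.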

  3. Quality-control analytical methods: endotoxins: essential testing for pyrogens in the compounding laboratory, part 3: a simplified endotoxin test method for compounded sterile preparations. (United States)

    Cooper, James F


    The first two parts of the IJPC series on endotoxin testing explained the nature of pyrogenic contamination and described various Limulus amebocyte lysate methods for detecting and measuring endotoxin levels with the bacterial endotoxin test described in the United States Pharmacopeia. This third article in the series describes the endotoxin test that is simplest to perform for pharmacists who prefer to conduct an endotoxin assay at the time of compounding in the pharmacy setting.

  4. Exact Group Sequential Methods for Estimating a Binomial Proportion

    Directory of Open Access Journals (Sweden)

    Zhengjia Chen


    Full Text Available We first review existing sequential methods for estimating a binomial proportion. Afterward, we propose a new family of group sequential sampling schemes for estimating a binomial proportion with prescribed margin of error and confidence level. In particular, we establish the uniform controllability of coverage probability and the asymptotic optimality for such a family of sampling schemes. Our theoretical results establish the possibility that the parameters of this family of sampling schemes can be determined so that the prescribed level of confidence is guaranteed with little waste of samples. Analytic bounds for the cumulative distribution functions and expectations of sample numbers are derived. Moreover, we discuss the inherent connection of various sampling schemes. Numerical issues are addressed for improving the accuracy and efficiency of computation. Computational experiments are conducted for comparing sampling schemes. Illustrative examples are given for applications in clinical trials.
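The fixed-sample benchmark that the group sequential schemes above are designed to improve on is worth making concrete: for a prescribed margin of error and confidence level, the worst-case (p = 0.5) normal-approximation sample size is a one-line computation. This sketch shows only that non-sequential baseline, not the paper's sampling schemes.

```python
import math

# Two-sided standard-normal quantiles for common confidence levels.
Z = {0.90: 1.6449, 0.95: 1.9600, 0.99: 2.5758}

def fixed_sample_size(margin, confidence=0.95, p=0.5):
    """Smallest fixed n whose normal-approximation CI half-width
    is <= margin; worst case is at p = 0.5."""
    z = Z[confidence]
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

print(fixed_sample_size(0.05))        # margin 5% at 95% confidence -> 385
print(fixed_sample_size(0.05, 0.99))  # tighter confidence costs more samples
```

A group sequential scheme can stop earlier when the running estimate is far from 0.5, which is the "little waste of samples" the abstract refers to.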

  5. Estimating return on investment in translational research: methods and protocols. (United States)

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind


    Assessing the value of clinical and translational research funding in accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Science Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, the model produces ROI estimates at multiple levels: investigator, program, and institution. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how the data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  6. Estimation of Lamotrigine by RP-HPLC Method

    Directory of Open Access Journals (Sweden)

    D. Anantha Kumar


    Full Text Available A rapid and reproducible reverse-phase high performance liquid chromatographic method has been developed for the estimation of lamotrigine in its pure form as well as in pharmaceutical dosage forms. Chromatography was carried out on a Luna C18 column using a mixture of potassium dihydrogen phosphate buffer (pH 7.3) and methanol in a ratio of 60:40 v/v as the mobile phase at a flow rate of 1.0 mL/min. Detection was done at 305 nm. The retention time of the drug was 6.1 min. The method produced linear responses in the concentration range of 10 to 70 μg/mL of lamotrigine. The method was found to be reproducible for analysis of the drug in tablet dosage forms.

  7. Robustness of Modal Parameter Estimation Methods Applied to Lightweight Structures

    DEFF Research Database (Denmark)

    Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard


    ... of nominally identical test subjects. However, the literature on modal testing of timber structures is rather limited, and the applicability and robustness of different curve fitting methods for modal analysis of such structures is not described in detail. The aim of this paper is to investigate the robustness of two parameter estimation methods built into the commercial modal testing software B&K Pulse Reflex Advanced Modal Analysis. The investigations are done by means of frequency response functions generated from a finite-element model and subjected to artificial noise before being analyzed with Pulse Reflex. The ability to handle closely spaced modes and broad frequency ranges is investigated for a numerical model of a lightweight junction under different signal-to-noise ratios. The selection of both excitation points and response points is discussed. It is found that both the Rational Fraction Polynomial-Z method ...

  8. A simplified model for estimating health effects due to leaching of long-lived nuclides stored in a radioactive waste repository. (United States)

    Robkin, M


    Evaluation of alternative methods of treating long-lived radionuclides to be stored in a waste repository includes evaluating the differences in health effects from a potential release due to the distribution of nuclides produced by each method. The potential for release and the health effects extend into the indefinite future, so predictions of human behavior become speculative. A simple model is proposed that tracks the nuclides in the repository as they are leached, transported in ground water, discharged to surface water, used for irrigation, taken up into crops, and ingested. The model yields a solution of simple form and permits the evaluation of potential health effects per unit activity of a nuclide initially placed in the repository. The model is applied to calculate the potential health effects due to nuclides occurring in the 4n, 4n + 1, 4n + 2, and 4n + 3 decay chains.

  9. A simplified method for power-law modelling of metabolic pathways from time-course data and steady-state flux profiles

    Directory of Open Access Journals (Sweden)

    Sugimoto Masahiro


    Full Text Available Abstract Background In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. Results The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. Conclusion The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
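For readers unfamiliar with the S-system formalism the record builds on: each metabolite obeys dXi/dt = (production power law) − (consumption power law). A minimal forward simulation of a straight two-step pathway is sketched below; the parameter values are invented for illustration and are not taken from the paper's case studies.

```python
# Euler simulation of a tiny S-system for a straight two-step pathway:
#   dX1/dt = a1 * X0**g10 - b1 * X1**h11
#   dX2/dt = a2 * X1**g21 - b2 * X2**h22
# X0 is a fixed external substrate; all rate constants and kinetic orders
# below are illustrative assumptions.
def simulate(x0=1.0, steps=20000, dt=0.001):
    a1, g10, b1, h11 = 2.0, 0.5, 1.0, 0.8
    a2, g21, b2, h22 = 1.0, 0.8, 1.5, 0.6
    x1, x2 = 0.1, 0.1
    for _ in range(steps):
        dx1 = a1 * x0 ** g10 - b1 * x1 ** h11
        dx2 = a2 * x1 ** g21 - b2 * x2 ** h22
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

x1_ss, x2_ss = simulate()
print(x1_ss, x2_ss)   # steady state, solvable in closed form for S-systems
```

Because production and consumption are single power laws, the steady state is linear in log-space (here X1 = 2**1.25, X2 = (4/3)**(5/3)) — exactly the linearization property the paper exploits to estimate parameters from the Jacobian and steady-state fluxes without iterative curve fitting.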

  10. Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2

    Energy Technology Data Exchange (ETDEWEB)

    Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.


    of complex issues that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC) on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The papers describe the (non-radiological) atmospheric dispersion modeling that the study uses; review much of the relevant literature on ecological and health effects and on the economic valuation of those impacts; discuss some of the more complex and contentious issues in estimating externalities; and describe a method for depicting the quality of the scientific information that a study uses. The analytical methods and issues discussed generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each focusing on a different subject area.

  11. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi


    Full Text Available Introduction: In order to implement watershed practices that decrease the effects of soil erosion, the sediment output of the watershed must be estimated. The sediment rating curve is the most conventional tool for estimating sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using a rating curve. In this research, the bootstrap and the Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations (Motorkhane, Miyane Tonel Shomare 7, Stor, and Glinak) constructed on the Ayghdamosh, Ghrangho, GHezelOzan, and Shahrod rivers, respectively. Data were randomly divided into a training set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R2). In the GLUE methodology, parameter sets were sampled randomly from a priori probability distributions. For each station, using the sampled parameter sets and the selected rating curve equation, suspended sediment concentration values were estimated many times (100000 to 400000 times). With respect to the likelihood function and a subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated.
In the bootstrap methodology, the observed suspended sediment and discharge vectors were resampled with replacement B (set to
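The bootstrap half of the procedure above — resample (discharge, concentration) pairs with replacement, refit the log-log rating curve each time, and read off percentile intervals — can be sketched as follows. The ten data pairs are invented stand-ins for gauged records, not the study's data.

```python
import math, random

# Hypothetical paired discharge Q (m3/s) and suspended sediment C (mg/L).
Q = [5, 8, 12, 20, 35, 50, 80, 120, 150, 200]
C = [40, 55, 90, 160, 300, 420, 700, 1100, 1300, 1800]

def fit_rating_curve(q, c):
    """Least-squares fit of log C = log a + b log Q (power-law rating curve)."""
    x = [math.log(v) for v in q]
    y = [math.log(v) for v in c]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

random.seed(1)
B = 2000                       # number of bootstrap resamples
bs = []
for _ in range(B):
    idx = [random.randrange(len(Q)) for _ in Q]   # resample pairs with replacement
    a, b = fit_rating_curve([Q[i] for i in idx], [C[i] for i in idx])
    bs.append(a * 100 ** b)    # predicted concentration at Q = 100 m3/s
bs.sort()
lo, hi = bs[int(0.025 * B)], bs[int(0.975 * B)]
print(f"95% bootstrap interval for C(Q=100): [{lo:.0f}, {hi:.0f}] mg/L")
```

The width of the percentile interval directly expresses the sampling-error uncertainty that the record says plagues rating-curve estimates from short records.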

  12. Rapid method to estimate temperature changes in electronics elements

    Directory of Open Access Journals (Sweden)

    Oborskii G. A., Savel’eva O. S., Shikhireva Yu. V.


    Full Text Available The thermal behavior of electronic equipment is the determining factor in a rapid assessment of the effectiveness of its design and operation. The assessment method proposed in this article consists of capturing an infrared video stream from the surface of the device with a thermal imager, converting it into a visible-light stream, splitting it into component colors, and processing them using a parabolic transformation. The result of the transformation is a single number used as a rapid criterion for estimating the stability of the heat distribution in the equipment.

  13. Sub-pixel Area Calculation Methods for Estimating Irrigated Areas

    Directory of Open Access Journals (Sweden)

    Suraj Pandey


    Full Text Available The goal of this paper was to develop and demonstrate practical methods for computing sub-pixel areas (SPAs) from coarse-resolution satellite sensor data. The methods were tested and verified using: (a) a global irrigated area map (GIAM) at 10-km resolution based primarily on AVHRR data, and (b) an irrigated area map for India at 500-m resolution based primarily on MODIS data. The sub-pixel irrigated areas (SPIAs) from coarse-resolution satellite sensor data were estimated by multiplying the full-pixel irrigated areas (FPIAs) with irrigated area fractions (IAFs). Three methods were presented for IAF computation: (a) Google Earth estimate (IAF-GEE); (b) high-resolution imagery (IAF-HRI); and (c) sub-pixel de-composition technique (IAF-SPDT). The IAF-GEE involved the use of "zoom-in views" of sub-meter to 4-meter very high resolution imagery (VHRI) from Google Earth and helped determine the total area available for irrigation (TAAI), or net irrigated area, which does not consider intensity or seasonality of irrigation. The IAF-HRI is a well-known method that uses finer-resolution data to determine SPAs of the coarser-resolution imagery. The IAF-SPDT is a unique and innovative method wherein SPAs are determined based on the precise location of every pixel of a class in a 2-dimensional brightness-greenness-wetness (BGW) feature-space plot of red band versus near-infrared band spectral reflectivity. The SPIAs computed using IAF-SPDT for the GIAM were within 2% of the SPIAs computed using the well-known IAF-HRI, and the fractions from the two methods were significantly correlated. The IAF-HRI and IAF-SPDT help determine annualized or gross irrigated areas (AIA), which do consider intensity or seasonality (e.g., the sum of areas from season 1, season 2, and continuous year-round crops). The national census-based irrigated areas for the top 40 irrigated nations (which cover about 90% of global irrigation) were significantly better related (and had lesser uncertainties and errors) when
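The core arithmetic of the record above — sub-pixel area as full-pixel area times an irrigated-area fraction, summed over pixels — is simple enough to state directly. The pixel sizes and fractions below are made-up inputs for illustration only.

```python
# SPIA = sum over pixels of FPIA * IAF, the multiplication described
# in the record; inputs are hypothetical.
def sub_pixel_area(full_pixel_areas_km2, fractions):
    """Total sub-pixel irrigated area from per-pixel areas and fractions."""
    assert len(full_pixel_areas_km2) == len(fractions)
    return sum(a * f for a, f in zip(full_pixel_areas_km2, fractions))

fpia = [100.0, 100.0, 100.0]   # three 10 km x 10 km pixels classed as irrigated
iaf  = [0.35, 0.60, 0.10]      # per-pixel fractions (e.g., from IAF-SPDT)
print(sub_pixel_area(fpia, iaf))   # about 105 km2 instead of 300 km2 full-pixel
```

The contrast between the full-pixel total (300 km2) and the sub-pixel total is precisely the over-estimation that IAF correction removes at coarse resolution.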

  14. A Simplified Method for Three-Dimensional (3-D) Ovarian Tissue Culture Yielding Oocytes Competent to Produce Full-Term Offspring in Mice (United States)

    Higuchi, Carolyn M.; Maeda, Yuuki; Horiuchi, Toshitaka; Yamazaki, Yukiko


    In vitro growth of follicles is a promising technology to generate large quantities of competent oocytes from immature follicles and could expand the potential of assisted reproductive technologies (ART). Isolated follicle culture is currently the primary method used to develop and mature follicles in vitro. However, this procedure typically requires complicated, time-consuming procedures, as well as destruction of the normal ovarian microenvironment. Here we describe a simplified 3-D ovarian culture system that can be used to mature multilayered secondary follicles into antral follicles, generating developmentally competent oocytes in vitro. Ovaries recovered from mice at 14 days of age were cut into 8 pieces and placed onto a thick Matrigel drop (3-D culture) for 10 days of culture. As a control, ovarian pieces were cultured on a membrane filter without any Matrigel drop (Membrane culture). We also evaluated the effect of activin A treatment on follicle growth within the ovarian pieces with or without Matrigel support. Thus we tested four different culture conditions: C (Membrane/activin-), A (Membrane/activin+), M (Matrigel/activin-), and M+A (Matrigel/activin+). We found that the cultured follicles and oocytes steadily increased in size regardless of the culture condition used. However, antral cavity formation occurred only in the follicles grown in the 3-D culture system (M, M+A). Following ovarian tissue culture, full-grown GV oocytes were isolated from the larger follicles to evaluate their developmental competence by subjecting them to in vitro maturation (IVM) and in vitro fertilization (IVF). Maturation and fertilization rates were higher using oocytes grown in 3-D culture (M, M+A) than with those grown in membrane culture (C, A). In particular, activin A treatment further improved 3-D culture (M+A) success. Following IVF, two-cell embryos were transferred to recipients to generate full-term offspring. 
In summary, this simple and easy 3-D ovarian culture
  16. A method for sex estimation using the proximal femur. (United States)

    Curate, Francisco; Coelho, João; Gonçalves, David; Coelho, Catarina; Ferreira, Maria Teresa; Navega, David; Cunha, Eugénia


    The assessment of sex is crucial to the establishment of a biological profile of an unidentified skeletal individual. The best methods currently available for the sexual diagnosis of human skeletal remains generally rely on the presence of well-preserved pelvic bones, which is not always the case. Postcranial elements, including the femur, have been used to accurately estimate sex in skeletal remains from forensic and bioarcheological settings. In this study, we present an approach to estimate sex using two measurements (femoral neck width [FNW] and femoral neck axis length [FNAL]) of the proximal femur. FNW and FNAL were obtained in a training sample (114 females and 138 males) from the Luís Lopes Collection (National History Museum of Lisbon). Logistic regression and the C4.5 algorithm were used to develop models to predict sex in unknown individuals. Proposed cross-validated models correctly predicted sex in 82.5-85.7% of the cases. The models were also evaluated in a test sample (96 females and 96 males) from the Coimbra Identified Skeletal Collection (University of Coimbra), resulting in a sex allocation accuracy of 80.1-86.2%. This study supports the relative value of the proximal femur to estimate sex in skeletal remains, especially when other exceedingly dimorphic skeletal elements are not accessible for analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
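The logistic-regression half of the approach above can be sketched as a scoring function over the two femoral measurements. The intercept and coefficients below are invented for illustration; the paper's fitted models (and its C4.5 trees) are not reproduced here.

```python
import math

# Illustrative only: these coefficients are NOT the paper's fitted values.
B0, B_FNW, B_FNAL = -40.0, 0.55, 0.25   # intercept and per-mm weights

def p_male(fnw_mm, fnal_mm):
    """Posterior probability of male sex from femoral neck width (FNW)
    and femoral neck axis length (FNAL), via the logistic link."""
    z = B0 + B_FNW * fnw_mm + B_FNAL * fnal_mm
    return 1.0 / (1.0 + math.exp(-z))

def classify(fnw_mm, fnal_mm, threshold=0.5):
    return "male" if p_male(fnw_mm, fnal_mm) >= threshold else "female"

print(classify(35.0, 95.0))   # larger measurements push toward "male"
print(classify(28.0, 82.0))   # smaller measurements push toward "female"
```

Reporting the probability rather than only the label is useful in casework, since borderline posteriors (near 0.5) flag individuals where sex should be left undetermined.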

  17. Analytical method to estimate resin cement diffusion into dentin (United States)

    de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa


    This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by micro-Raman spectroscopy (MRS) and scanning electron microscopy. The peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C-O-C, 1113 cm-1) present in the cements and to the mineral content (P-O, 961 cm-1) in dentin, evolved as sigmoid-shaped functions across the interface. A Boltzmann function (BF) was then fitted to the peaks at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
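The fitting step can be sketched with the standard Boltzmann sigmoid, I(x) = A2 + (A1 − A2) / (1 + exp((x − x0)/dx). The profile below is synthetic stand-in data (true transition at 5 μm, slope parameter 0.6 μm), and a coarse grid search stands in for the nonlinear least-squares fitter a real analysis would use.

```python
import math

def boltzmann(x, a1, a2, x0, dx):
    """Sigmoid used to model the Raman C-O-C intensity across the interface."""
    return a2 + (a1 - a2) / (1.0 + math.exp((x - x0) / dx))

# Synthetic intensity profile standing in for an MRS line scan (units: um).
xs = [i * 0.25 for i in range(41)]                 # positions 0..10 um
ys = [boltzmann(x, 1.0, 0.0, 5.0, 0.6) for x in xs]

# Coarse grid search for (x0, dx), minimizing the sum of squared residuals.
best = min(
    ((x0 / 10, dx / 100)
     for x0 in range(30, 71) for dx in range(20, 121)),
    key=lambda p: sum((y - boltzmann(x, 1.0, 0.0, p[0], p[1])) ** 2
                      for x, y in zip(xs, ys)),
)
x0_fit, dx_fit = best
print(f"interface center {x0_fit:.1f} um, slope parameter dx {dx_fit:.2f} um")
```

The fitted slope parameter dx sets the width of the transition region, which is how a diffusion-zone thickness of a few micrometers can be read off the fitted curve.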

  18. A power function method for estimating base flow. (United States)

    Lott, Darline A; Stewart, Mark T


    Analytical base flow separation techniques are often used to determine the base flow contribution to total stream flow. Most analytical methods derive base flow from discharge records alone, without using basin-specific variables other than basin area. This paper derives a power function for estimating base flow, of the form aQ^b + cQ: an analytical method calibrated against an integrated basin variable, specific conductance, relating base flow to total discharge, and consistent with the observed mathematical behavior of dissolved solids in stream flow with varying discharge. The method has the advantages of being uncomplicated, reproducible, and applicable to hydrograph separation in basins with limited specific conductance data. The power function relationship between base flow and discharge holds over a wide range of basin areas. It replicates base flow determined by mass balance methods better than analytical methods such as filters or smoothing routines that are not calibrated to natural tracers or to empirical basin- and gauge-specific variables. It can also be used with discharge during periods without specific conductance values, including separating base flow from quick flow for single events. However, it may overestimate base flow during very high flow events. Application of the geochemical mass balance and power function base flow separation methods to stream flow and specific conductance records from multiple gauges in the same basin suggests that analytical base flow separation methods must be calibrated at each gauge. Using average values of the coefficients introduces a potentially significant and unknown error in base flow as compared with mass balance methods. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
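The functional form aQ^b + cQ is easy to apply once coefficients are in hand. The coefficients in this sketch are hypothetical; as the record stresses, real values must be calibrated at each gauge against specific conductance and do not transfer between gauges.

```python
# Base flow from total discharge Q with the paper's form BF = a*Q**b + c*Q.
# Coefficients here are hypothetical placeholders, not calibrated values.
def base_flow(q, a=0.8, b=0.7, c=0.05):
    bf = a * q ** b + c * q
    return min(bf, q)            # base flow cannot exceed total discharge

for q in (1.0, 10.0, 100.0):
    print(q, round(base_flow(q), 2))
```

The clamp to Q reflects the record's caveat that the power function may overestimate base flow during very high flow events.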

  19. Study on color difference estimation method of medicine biochemical analysis (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun


    Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color that varies with the detection target and the degree of illness. The color difference between a standard threshold and the test-paper color can be used to judge the degree of illness, enabling further analysis and diagnosis. Color is a three-dimensional psychophysical variable, whereas reflectance is one-dimensional; therefore, color-difference estimation in urine testing can achieve better precision and convenience than the conventional one-dimensional reflectance test, allowing a more accurate diagnosis. A digital camera can easily capture an image of the urine test paper and thus carry out the urine biochemical analysis conveniently. In the experiment, color images of urine test paper were taken by a popular color digital camera and saved on a computer running simple color space conversion (RGB -> XYZ -> L*a*b*) and calculation software. Test samples were graded according to quantitative color detection. The images taken at each time point were saved, so the whole course of an illness can be monitored. This method can also be used in other biochemical analyses in medicine that involve color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
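The RGB -> XYZ -> L*a*b* pipeline mentioned above is standard colorimetry, so it can be sketched directly (sRGB primaries, D65 white point, CIE76 color difference). Only the two test-pad colors at the bottom are invented for illustration.

```python
def srgb_to_lab(r, g, b):
    """sRGB (0-255) -> CIE L*a*b* via XYZ, D65 white point."""
    def lin(u):                                  # undo sRGB gamma
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    x = 100 * (0.4124 * rl + 0.3576 * gl + 0.1805 * bl)
    y = 100 * (0.2126 * rl + 0.7152 * gl + 0.0722 * bl)
    z = 100 * (0.0193 * rl + 0.1192 * gl + 0.9505 * bl)
    def f(t):                                    # CIE Lab companding
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 95.047), f(y / 100.0), f(z / 108.883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(lab1, lab2):
    """CIE76 color difference between a test-pad color and a standard."""
    return sum((u - v) ** 2 for u, v in zip(lab1, lab2)) ** 0.5

pad = srgb_to_lab(200, 180, 60)    # hypothetical reading from a urine test pad
ref = srgb_to_lab(230, 220, 120)   # hypothetical standard threshold color
print(round(delta_e(pad, ref), 1))
```

Grading a sample then reduces to finding the standard color whose delta-E distance to the pad color is smallest, which is the three-dimensional comparison the record argues beats one-dimensional reflectance.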

  20. Estimation of citicoline sodium in tablets by difference spectrophotometric method

    Directory of Open Access Journals (Sweden)

    Sagar Suman Panda


    Full Text Available Aim: The present work deals with the development and validation of a novel, precise, and accurate spectrophotometric method for the estimation of citicoline sodium (CTS) in tablets. This spectrophotometric method is based on the principle that CTS exhibits two different forms whose absorption spectra differ in basic and acidic media. Materials and Methods: The work was carried out on a Shimadzu 1800 double-beam UV-visible spectrophotometer. Difference spectra were generated using 10 mm quartz cells over the range of 200-400 nm. The solvents used were 0.1 M NaOH and 0.1 M HCl. Results: The maximum and minimum in the difference spectra of CTS were found at 239 nm and 283 nm, respectively. The amplitude was calculated from the maximum and minimum of the spectrum. The drug follows linearity in the range of 1-50 μg/mL (R2 = 0.999). The average recovery from the tablet formulation was 98.47%. The method was validated as per the ICH Q2(R1) guideline, Validation of Analytical Procedures: Text and Methodology. Conclusion: This method is simple and inexpensive; hence it can be applied for determination of the drug in pharmaceutical dosage forms.
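The quantitation step implied above — regress the difference-spectrum amplitude (reading at 239 nm minus reading at 283 nm) against standard concentrations, then invert for unknowns — can be sketched as a Beer-Lambert calibration. The calibration readings below are invented, not the paper's data.

```python
# Hypothetical calibration: amplitude of the difference spectrum
# (delta-A at 239 nm minus delta-A at 283 nm) for CTS standards.
conc = [10, 20, 30, 40, 50]                # ug/mL standards
amp = [0.121, 0.243, 0.360, 0.484, 0.601]  # measured amplitudes (invented)

# For a Beer-Lambert response a least-squares line through the origin suffices.
slope = sum(c * a for c, a in zip(conc, amp)) / sum(c * c for c in conc)

def concentration(amplitude):
    """Invert the calibration line for an unknown sample."""
    return amplitude / slope

print(round(concentration(0.30), 1), "ug/mL for a sample amplitude of 0.30")
```

Using the amplitude between the maximum and minimum, rather than a single wavelength, is what cancels the common background absorbance in difference spectrophotometry.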

  1. On Density Estimation from Censored Data by Penalized Likelihood Methods. (United States)


    Keywords: density estimation, censored data, Kaplan-Meier estimator, hazard rate, spline functions, reproducing kernel Hilbert spaces. Penalized likelihood estimators of the density function, cumulative distribution function, and hazard function are proposed in the random censorship setting. A classical problem in the independent random censorship model is to estimate the distribution function nonparametrically.

  2. Pollution Load Estimation Based on Characteristic Section Load Method (United States)

    Zhu, Lei; Song, JinXi; Liu, WanQing


    The Weihe River watershed above the Linjiacun section is taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point-source and non-point-source pollution, a new method to estimate pollution loads, the Characteristic Section Load Method (CSLM), is proposed, and the point-source and non-point-source pollution loads of the Weihe River watershed above the Linjiacun section are calculated for the rainy, normal, and dry seasons of 2007. The results show that the monthly point-source pollution loads are discharged stably, the monthly non-point-source pollution loads change greatly, and the proportion of the total COD load contributed by non-point-source pollution gradually decreases across the rainy, normal, and dry periods.

  3. Current methods for estimating the rate of photorespiration in leaves. (United States)

    Busch, F A


    Photorespiration is a process that competes with photosynthesis, in which Rubisco oxygenates, instead of carboxylates, its substrate ribulose 1,5-bisphosphate. The photorespiratory metabolism associated with the recovery of 3-phosphoglycerate is energetically costly and results in the release of previously fixed CO2. The ability to quantify photorespiration is gaining importance as a tool to help improve plant productivity in order to meet the increasing global food demand. In recent years, substantial progress has been made in the methods used to measure photorespiration. Current techniques are able to measure multiple aspects of photorespiration at different points along the photorespiratory C2 cycle. Six different methods used to estimate photorespiration are reviewed, and their advantages and disadvantages discussed. © 2012 German Botanical Society and The Royal Botanical Society of the Netherlands.

  4. Three methods for estimating a range of vehicular interactions (United States)

    Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana


    We present three different approaches to estimating the number of preceding cars that influence the decision-making of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second quantitatively evaluates deviations from the main assumption in the convolution theorem for probability, and the third operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All of the methods introduced reveal that the minimum number of actively followed vehicles is two, which supports the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency between the estimates is surprisingly good. In all cases we found that the interaction range (the number of actively followed vehicles) drops with traffic density: whereas drivers moving in congested regimes of lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to only two predecessors.

  5. A Method to Estimate Shear Quality Factor of Hard Rocks (United States)

    Wang, Xin; Cai, Ming


    Attenuation has a large influence on ground motion intensity. Quality factors are used to measure wave attenuation in a medium, and they are often difficult to estimate due to many factors such as the complex geology and underground mining environment. This study investigates the effect of attenuation on seismic wave propagation and ground motion using an advanced numerical tool—SPECFEM2D. A method, which uses numerical modeling and site-specific scaling laws, is proposed to estimate the shear quality factor of hard rocks in underground mines. In the numerical modeling, the seismic source is represented by a moment tensor model and the medium is assumed isotropic and homogeneous. Peak particle velocities along the strongest wave motion direction are compared with those from a design scaling law. Based on the field data that were used to derive a semi-empirical design scaling law, it is demonstrated that a shear quality factor of 60 seems to be representative of the hard rocks in deep mines for accounting for the attenuation of seismic waves. Using the proposed method, reasonable shear quality factors of hard rocks can be obtained, and this, in turn, will assist accurate ground motion determination for mine design.

  6. Probabilistic seismic loss estimation via endurance time method (United States)

    Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.


    Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA driven response predictions of 34 code conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of damage and loss prediction functions provided by ATC 58.


    Directory of Open Access Journals (Sweden)

    Maria del Pilar Naranjo


    Full Text Available After using simplified Labanotation as a didactic tool for some years, the author can conclude that it accomplishes at least three main functions: efficiency of rehearsing time, social recognition and broadening of the choreographic consciousness of the dancer. The doubts of the dancing community about the issue of ‘to write or not to write’ are highly determined by the contexts and their own choreographic evolution, but the utility of Labanotation, as a tool for knowledge, is undeniable.

  8. Estimation method for mathematical expectation of continuous variable upon ordered sample


    Domchenkov, O. A.


    A method for estimating the mathematical expectation of a continuous variable based on analysis of the ordered sample is proposed. The method admits extension of the estimation class to nonlinear estimation classes.
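
    The record does not spell out the estimator, but one familiar family of ordered-sample (L-) estimators of the expectation is the trimmed mean: average the order statistics after discarding a fixed fraction from each tail. The sketch below is purely illustrative and is not the author's method; the function name and trim fraction are my own choices.

    ```python
    import numpy as np

    def trimmed_mean(sample, trim=0.1):
        """L-estimator of the expectation: average of the ordered sample
        after discarding a fraction `trim` from each tail."""
        x = np.sort(np.asarray(sample, dtype=float))
        k = int(trim * len(x))
        return x[k:len(x) - k].mean() if k > 0 else x.mean()

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=10_000)
    print(round(trimmed_mean(data, trim=0.1), 2))  # close to the true mean, 5.0
    ```

    For symmetric distributions the trimmed mean estimates the same expectation as the sample mean but is more robust to outliers and censored tails.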


    Directory of Open Access Journals (Sweden)

    Sandeep Shukla


    Full Text Available This paper presents an improved robust method for timing offset estimation in preamble-aided OFDM systems. The proposed method aims to provide a low-complexity, high-performance timing estimator under high-frequency-offset conditions. It uses a modified preamble structure and a double-autocorrelation technique to achieve robust timing estimation with only a moderate increase in complexity. We evaluate and compare the performance of the proposed method in terms of mean square error (MSE) in AWGN, Rayleigh fading ISI channels, and the HIPERLAN/2 indoor channel A. The results indicate that the new method has a significantly smaller estimator MSE than previously presented methods.
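
    The paper's modified preamble and double-autocorrelation metric are not reproduced here; the sketch below shows the classic single-autocorrelation timing metric (Schmidl-Cox style) on a repeated-half preamble, which is the building block such estimators refine. All signal parameters are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L = 32                                   # half-length of a repeated preamble
    half = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    preamble = np.concatenate([half, half])  # two identical halves

    offset = 100                             # true timing offset (samples)
    noise = 0.05 * (rng.standard_normal(400) + 1j * rng.standard_normal(400))
    rx = noise.copy()
    rx[offset:offset + 2 * L] += preamble

    # Autocorrelation timing metric: correlate each window's two halves;
    # the metric peaks where the repeated preamble halves line up.
    metric = np.array([
        abs(np.vdot(rx[d:d + L], rx[d + L:d + 2 * L]))
        for d in range(len(rx) - 2 * L)
    ])
    print(int(metric.argmax()))  # estimated timing offset
    ```

    Because the metric depends only on the product of the two halves, it is insensitive to carrier frequency offset, which is why autocorrelation-based estimators remain robust at high frequency offsets.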

  10. Public-Private Investment Partnerships: Efficiency Estimation Methods

    Directory of Open Access Journals (Sweden)

    Aleksandr Valeryevich Trynov


    Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). This article puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to more efficient use of budgetary resources. The author proposes a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author's technique is based upon a synthesis of the approaches used to evaluate projects in the private and public sectors and, in contrast to existing methods, takes into account the indirect (multiplicative) effects arising during project implementation. To estimate the multiplier effect, a model of the regional economy — a social accounting matrix (SAM) — was developed, based on data for the Sverdlovsk region for 2013. The article presents the genesis of balance models of economic systems and traces the evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to now. It is shown that the SAM is widely used worldwide for a broad range of applications, primarily to assess the impact of various exogenous factors on a regional economy. In order to refine the estimates of multiplicative effects, the “industry” account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step makes it possible to consider the particular characteristics of the industry of the investment project being evaluated. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that, due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
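
    The multiplier effects described in this record are conventionally obtained from a social accounting matrix as accounting multipliers M = (I − A)⁻¹, where A holds the average expenditure propensities between accounts. The three-sector matrix below is a toy illustration with made-up numbers, not the Sverdlovsk region SAM.

    ```python
    import numpy as np

    # Toy 3-sector matrix of average expenditure propensities (hypothetical):
    # entry A[i, j] is the share of account j's outlays that flows to account i.
    A = np.array([
        [0.10, 0.20, 0.15],
        [0.25, 0.05, 0.10],
        [0.05, 0.15, 0.20],
    ])

    # Accounting multipliers: M = (I - A)^-1.  M[i, j] gives the total (direct
    # plus induced) increase in account i per unit of exogenous injection into j.
    M = np.linalg.inv(np.eye(3) - A)

    injection = np.array([100.0, 0.0, 0.0])   # e.g. project spending in sector 0
    total_effect = M @ injection
    print(total_effect.round(1))              # direct + multiplicative impact
    ```

    The total effect on the injected sector always exceeds the injection itself (the multiplier is at least 1), which is exactly the indirect effect the article argues should be counted when ranking PPP projects.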

  11. Influencing Factors and Simplified Model of Film Hole Irrigation

    Directory of Open Access Journals (Sweden)

    Yi-Bo Li


    Full Text Available Film hole irrigation is an advanced low-cost and high-efficiency irrigation method, which can improve water conservation and water use efficiency. Given its various advantages and potential applications, we conducted a laboratory study to investigate the effects of soil texture, bulk density, initial soil moisture, irrigation depth, opening ratio (ρ), film hole diameter (D), and spacing on cumulative infiltration using SWMS-2D. We then proposed a simplified model based on the Kostiakov model for infiltration estimation. Error analyses indicated SWMS-2D to be suitable for infiltration simulation of film hole irrigation. Additional SWMS-2D-based investigations indicated that, for a given soil, initial soil moisture and irrigation depth had the weakest effects on cumulative infiltration, whereas ρ and D had the strongest effects. A simplified model with ρ and D was further established, and its use was then extended to different soils. Verification based on seven soil types indicated that the established simplified double-factor model effectively estimates cumulative infiltration for film hole irrigation, with a small mean absolute error of 0.141–2.299 mm, a root mean square error of 0.177–2.722 mm, a percent bias of −2.131% to 1.479%, and a large Nash–Sutcliffe coefficient that is close to 1.0.
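
    A Kostiakov-type cumulative-infiltration fit and the goodness-of-fit statistics named above (RMSE, Nash-Sutcliffe coefficient) can be sketched as follows. The time/infiltration values and the helper names `kostiakov`, `rmse`, and `nse` are my own illustration, not the paper's measurements or code.

    ```python
    import numpy as np

    def kostiakov(t, k, a):
        """Kostiakov cumulative infiltration: I(t) = k * t**a."""
        return k * np.power(t, a)

    def rmse(obs, sim):
        return float(np.sqrt(np.mean((obs - sim) ** 2)))

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency; 1.0 is a perfect fit."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

    t = np.array([5, 10, 20, 40, 60], dtype=float)     # minutes (hypothetical)
    obs = np.array([12.1, 18.0, 26.8, 40.2, 50.3])     # mm (hypothetical)

    # Fit k and a by linearising: log I = log k + a * log t.
    a_hat, logk_hat = np.polyfit(np.log(t), np.log(obs), 1)
    sim = kostiakov(t, np.exp(logk_hat), a_hat)

    print(round(rmse(obs, sim), 3), round(nse(obs, sim), 4))
    ```

    The same statistics (plus mean absolute error and percent bias) are what the paper reports when verifying its double-factor model across seven soils.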

  12. An interactive website for analytical method comparison and bias estimation. (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T


    Regulatory standards mandate that laboratories perform studies to ensure the accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok, and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
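
    Of the regression models listed, Deming regression has a compact closed form. The sketch below implements the standard Deming slope for a given error-variance ratio; the paired measurements are hypothetical, and the site's actual implementation (in R) may differ.

    ```python
    import numpy as np

    def deming(x, y, lam=1.0):
        """Deming regression slope/intercept; `lam` is the ratio of the
        y- to x-measurement error variances (1.0 = orthogonal regression)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y)[0, 1]
        slope = (syy - lam * sxx
                 + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)
                 ) / (2 * sxy)
        return slope, y.mean() - slope * x.mean()

    # Hypothetical paired measurements from two analytical methods.
    x = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0])
    y = np.array([1.1, 2.0, 3.2, 4.1, 5.3, 6.1])
    slope, intercept = deming(x, y)
    print(round(slope, 3), round(intercept, 3))   # near 1 and 0: methods agree
    ```

    Unlike ordinary least squares, Deming regression allows for measurement error in both methods, which is why it (and Passing-Bablok) is preferred for method-comparison studies.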

  13. Rainfall estimation by inverting SMOS soil moisture estimates: A comparison of different methods over Australia (United States)

    Brocca, Luca; Pellarin, Thierry; Crow, Wade T.; Ciabatta, Luca; Massari, Christian; Ryu, Dongryeol; Su, Chun-Hsu; Rüdiger, Christoph; Kerr, Yann


    Remote sensing of soil moisture has reached a level of maturity and accuracy for which the retrieved products can be used to improve hydrological and meteorological applications. In this study, the soil moisture product from the Soil Moisture and Ocean Salinity (SMOS) satellite is used for improving satellite rainfall estimates obtained from the Tropical Rainfall Measuring Mission multisatellite precipitation analysis product (TMPA) using three different "bottom up" techniques: SM2RAIN, Soil Moisture Analysis Rainfall Tool, and Antecedent Precipitation Index Modification. The implementation of these techniques aims at improving the well-known "top down" rainfall estimate derived from TMPA products (version 7) available in near real time. Ground observations provided by the Australian Water Availability Project are considered as a separate validation data set. The three algorithms are calibrated against the gauge-corrected TMPA reanalysis product, 3B42, and used for adjusting the TMPA real-time product, 3B42RT, using SMOS soil moisture data. The study area covers the entire Australian continent, and the analysis period ranges from January 2010 to November 2013. Results show that all the SMOS-based rainfall products improve the performance of 3B42RT, even at daily time scale (differently from previous investigations). The major improvements are obtained in terms of estimation of accumulated rainfall with a reduction of the root-mean-square error of more than 25%. Also, in terms of temporal dynamic (correlation) and rainfall detection (categorical scores) the SMOS-based products provide slightly better results with respect to 3B42RT, even though the relative performance between the methods is not always the same. The strengths and weaknesses of each algorithm and the spatial variability of their performances are identified in order to indicate the ways forward for this promising research activity. 
Results show that the integration of bottom up and top down approaches

  14. Estimating recharge at Yucca Mountain, Nevada, USA: Comparison of methods (United States)

    Flint, A.L.; Flint, L.E.; Kwicklis, E.M.; Fabryka-Martin, J. T.; Bodvarsson, G.S.


    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
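
    One of the techniques listed, chloride mass balance, reduces to a one-line computation: recharge ≈ precipitation × (chloride concentration in precipitation / chloride concentration in groundwater), assuming chloride enters only with precipitation and is concentrated by evapotranspiration. The numbers below are illustrative for an arid site, not Yucca Mountain data.

    ```python
    # Chloride mass balance: R = P * Cl_p / Cl_gw.
    P = 170.0      # mm/yr, mean annual precipitation (hypothetical)
    Cl_p = 0.4     # mg/L chloride in precipitation (hypothetical)
    Cl_gw = 14.0   # mg/L chloride in pore water / groundwater (hypothetical)

    R = P * Cl_p / Cl_gw   # mm/yr of recharge
    print(round(R, 2))  # -> 4.86
    ```

    A result of a few mm/yr is consistent with the 5 mm/yr average net infiltration the record reports; high groundwater chloride implies little water is passing through to recharge.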

  15. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin


    Precise estimates of disease transmission rates are critical for epidemiological simulation models. Most often these rates must be estimated from longitudinal field data, which are costly and time-consuming to conduct. Consequently, measures to reduce cost like increased sampling intervals...... the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most...
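
    The two novel methods are not detailed in this excerpt. A minimal sketch of the baseline idea they are compared against: under a Poisson approximation, expected new infections per interval are β·S·I/N, so the maximum-likelihood estimate of the transmission rate is total cases divided by total exposure. The daily herd counts below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical daily counts from a closed population of N animals.
    N = 100
    S = np.array([95, 90, 82, 70, 55])        # susceptible at start of each day
    I = np.array([5, 9, 15, 24, 35])          # infectious at start of each day
    new_cases = np.array([5, 8, 12, 15, 14])  # infections during each day

    # MLE under the Poisson approximation: expected new cases per day are
    # beta * S * I / N, so beta_hat = total cases / total exposure.
    exposure = S * I / N
    beta_hat = new_cases.sum() / exposure.sum()
    print(round(beta_hat, 3))  # -> 0.882 per day
    ```

    Longer sampling intervals coarsen `S` and `I`, which is precisely the precision loss the record's simulation study quantifies.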

  16. Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt


    In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao...

  17. Estimation of Students’ Graduation Using Multiple Linear Regression Method

    Directory of Open Access Journals (Sweden)

    Bintang Dewi Fajar Kurniatullah


    Full Text Available Utilization of students' academic data produces information used by management in monitoring students' study periods in the Information Systems Department. The multiple linear regression method produces a multiple linear regression equation used for estimating students' graduation, equipped with a prototype. According to the analysis, carried out using nine variables (SKS1, SKS2, SKS3, SKS4, IPS1, IPS2, IPS3, IPS4, and the number of repeated courses) for 2008 to 2012, the multiple linear regression equation is Y = 13.49 + 0.099 X1 + (-0.068) X2 + 0.025 X3 + (-0.059) X4 + (-0.585) X5 + (-0.443) X6 + (-0.155) X7 + (-0.368) X8 + (-0.082) X9. The equation has an MSE of 0.1168 and an RMSE of 0.3418. The prototype is a PHP-based program built using Sublime Text and XAMPP. The prototype for monitoring students' study time in this research is very helpful if supported by management. Keywords: Data mining, multiple linear regression, estimation, monitoring, study time
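
    A regression of this form can be fitted by ordinary least squares. The sketch below uses synthetic data with made-up coefficients and variable ranges (the real student records and the paper's fitted coefficients are not reproduced here) and reports MSE/RMSE the same way.

    ```python
    import numpy as np

    # Synthetic records: [SKS1..SKS4, IPS1..IPS4, repeated courses] -> study
    # time in semesters.  Ranges and coefficients are hypothetical.
    rng = np.random.default_rng(42)
    X = rng.uniform([15, 15, 15, 15, 2.0, 2.0, 2.0, 2.0, 0],
                    [24, 24, 24, 24, 4.0, 4.0, 4.0, 4.0, 6], size=(60, 9))
    true_coef = np.array([0.1, -0.07, 0.03, -0.06, -0.6, -0.4, -0.15, -0.4, -0.08])
    y = 13.5 + X @ true_coef + rng.normal(0, 0.3, size=60)

    # Fit Y = b0 + b1*X1 + ... + b9*X9 by ordinary least squares.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    pred = A @ coef
    mse = float(np.mean((y - pred) ** 2))
    print(round(mse, 4), round(np.sqrt(mse), 4))   # MSE, RMSE
    ```

    As in the paper, MSE/RMSE on the fitted data summarizes how closely the linear model tracks actual study times; a held-out split would give a fairer estimate of predictive error.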


    Directory of Open Access Journals (Sweden)

    Татьяна Александровна Коркина


    Full Text Available Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that the term is interpreted as a specific property of people that enables them to carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to estimating the professionalism of an employee both from the position of the external manifestations of the reliability and effectiveness of the work and from the position of the personal characteristics of the employee that determine the results of his work. This approach includes assessment of the level of qualification and motivation of the employee for each key job function, as well as of the final results of its implementation against the criteria of efficiency and reliability. The proposed methodological approach to the estimation of the professionalism of an employee makes it possible to identify «bottlenecks» in the structure of his labour functions and to define directions for developing the professional qualities of the worker so as to ensure the required level of reliability and efficiency of the results obtained.

  19. Estimating rotavirus vaccine effectiveness in Japan using a screening method. (United States)

    Araki, Kaoru; Hara, Megumi; Sakanishi, Yuta; Shimanoe, Chisato; Nishida, Yuichiro; Matsuo, Muneaki; Tanaka, Keitaro


    Rotavirus gastroenteritis is a highly contagious, acute viral disease that imposes a significant health burden worldwide. In Japan, rotavirus vaccines have been commercially available since 2011 for voluntary vaccination, but vaccine coverage and effectiveness have not been evaluated. In the absence of a vaccination registry in Japan, vaccination coverage in the general population was estimated according to the number of vaccines supplied by the manufacturer, the number of children who received financial support for vaccination, and the size of the target population. Patients with rotavirus gastroenteritis were identified by reviewing the medical records of all children who consulted 6 major hospitals in Saga Prefecture with gastroenteritis symptoms. Vaccination status among these patients was investigated by reviewing their medical records or interviewing their guardians by telephone. Vaccine effectiveness was determined using a screening method. Vaccination coverage increased with time, and it was 2-times higher in municipalities where the vaccination fee was supported. In the 2012/13 season, vaccination coverage in Saga Prefecture was 14.9% whereas the proportion of patients vaccinated was 5.1% among those with clinically diagnosed rotavirus gastroenteritis and 1.9% among those hospitalized for rotavirus gastroenteritis. Thus, vaccine effectiveness was estimated as 69.5% and 88.8%, respectively. This is the first study to evaluate rotavirus vaccination coverage and effectiveness in Japan since vaccination began.
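
    The screening method referred to computes vaccine effectiveness from just two quantities: the proportion of cases vaccinated (PCV) and the population vaccination coverage (PPV), via VE = 1 − [PCV/(1 − PCV)] × [(1 − PPV)/PPV]. Plugging in the abstract's figures reproduces its estimates up to rounding of the inputs:

    ```python
    # Screening method: vaccine effectiveness from the proportion of cases
    # vaccinated (pcv) and the population vaccination coverage (ppv).
    def ve_screening(pcv, ppv):
        return 1.0 - (pcv / (1.0 - pcv)) * ((1.0 - ppv) / ppv)

    ppv = 0.149                                  # coverage, Saga 2012/13 season
    ve_clinical = ve_screening(0.051, ppv)       # clinically diagnosed cases
    ve_hospital = ve_screening(0.019, ppv)       # hospitalized cases
    print(round(ve_clinical * 100, 1), round(ve_hospital * 100, 1))
    # close to the reported 69.5% and 88.8%
    ```

    The method works because, for an ineffective vaccine, the odds of vaccination among cases would match the odds in the population; any shortfall is attributed to protection.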

  20. Intercomparison of methods to estimate black carbon emissions from cookstoves. (United States)

    de la Sota, Candela; Kane, Moustapha; Mazorra, Javier; Lumbreras, Julio; Youm, Issakha; Viana, Mar


    Black carbon is the second largest contributor to climate change and also poses risks to human health. Despite the need for black carbon (BC) emissions estimates from residential biomass burning for cooking, quantitative data are still scarce. This scarcity is mainly due to the scattered location of the stoves, as well as the relatively costly and complex analytical methods available. Two low-cost and easy-to-use optical methods, a cell-phone based system and smoke stain reflectometry, were compared to elemental carbon (EC) concentrations determined by the Sunset OCEC Analyzer (TOT). The three techniques were challenged with different aerosol types (urban and biomass cookstoves) and different filter substrates (quartz and glass fibre). Good agreement was observed between the two low-cost techniques and the reference system for the aerosol types and concentrations assessed, although the relationship was statistically different for each type of aerosol. The quantification of correction factors with respect to the reference method for the specific conditions under study is essential with either of the low-cost techniques. BC measurements from the cell-phone system and the reflectometer were moderately affected by the filter substrate. The ease of use of the cell-phone based system may allow engaging cookstove users in the data collection process, increasing the amount and frequency of data collection, which may otherwise not be feasible in resource-constrained locations. This would help to raise public awareness about environmental and health issues related to cookstoves. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Polynomial probability distribution estimation using the method of moments (United States)

    Mattsson, Lars; Rydén, Jesper


    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
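
    For a density supported on a known interval, the method-of-moments fit described above reduces to a linear system: matching the first N raw moments of a degree-N polynomial gives a Gram-type matrix equation for its coefficients. A minimal sketch on [0, 1] (my own illustration, not the paper's full algorithm):

    ```python
    import numpy as np

    def poly_pdf_moments(moments, a=0.0, b=1.0):
        """Coefficients c_0..c_N of p(x) = sum c_k x^k on [a, b] whose raw
        moments match m_0..m_N (supplying m_0 = 1 enforces normalisation)."""
        n = len(moments)
        # G[j, k] = integral of x^(j+k) over [a, b]
        G = np.array([[(b ** (j + k + 1) - a ** (j + k + 1)) / (j + k + 1)
                       for k in range(n)] for j in range(n)])
        return np.linalg.solve(G, np.asarray(moments, float))

    # Sanity check: Uniform(0, 1) has raw moments m_j = 1/(j+1); the
    # degree-2 fit should recover the flat density p(x) = 1.
    coef = poly_pdf_moments([1.0, 0.5, 1.0 / 3.0])
    print(np.allclose(coef, [1.0, 0.0, 0.0]))  # -> True
    ```

    The Gram matrix here is the Hilbert matrix, which is ill-conditioned for large N; that conditioning issue is one reason the paper sets the procedure up algorithmically rather than by naive inversion.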

  2. Polynomial probability distribution estimation using the method of moments. (United States)

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper


    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  3. Comparison of NCCLS and 3-(4,5-Dimethyl-2-Thiazyl)-2,5-Diphenyl-2H-Tetrazolium Bromide (MTT) Methods of In Vitro Susceptibility Testing of Filamentous Fungi and Development of a New Simplified Method (United States)

    Meletiadis, Joseph; Meis, Jacques F. G. M.; Mouton, Johan W.; Donnelly, J. Peter; Verweij, Paul E.


    The susceptibility of 30 clinical isolates belonging to six different species of filamentous fungi (Aspergillus fumigatus, Aspergillus flavus, Scedosporium prolificans, Scedosporium apiospermum, Fusarium solani, and Fusarium oxysporum) was tested against six antifungal drugs (miconazole, voriconazole, itraconazole, UR9825, terbinafine, and amphotericin B) with the microdilution method recommended by the National Committee for Clinical Laboratory Standards (NCCLS) (M38-P). The MICs were compared with the MICs obtained by a colorimetric method measuring the reduction of the dye 3-(4,5-dimethyl-2-thiazyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) to formazan by viable fungi. The levels of agreement between the two methods were 96 and 92% for MIC-0 (clear wells) and MIC-1 (75% growth reduction), respectively. The levels of agreement were always highest for Aspergillus spp. (97% ± 2.5%), followed by Scedosporium spp. (87% ± 10.3%) and Fusarium spp. (78% ± 7.8%). The NCCLS method was more reproducible than the MTT method: 98 versus 95% for MIC-0 and 97 versus 90% for MIC-1. However, the percentages of hyphal growth determined visually by the NCCLS method showed several discrepancies when compared with the percentages of MTT reduction. A new simplified assay that incorporates the dye MTT with the initial inoculum, in which the fungi are incubated with the dye for 48 h or more, was developed, showing comparable levels of agreement and reproducibility with the other two methods. Furthermore, the new assay was easier to perform and more sensitive than the MTT method. PMID:10921957

  4. Geostatistic in Reservoir Characterization: from estimation to simulation methods

    Directory of Open Access Journals (Sweden)

    Mata Lima, H.


    Full Text Available This article reviews the different geostatistical methods available to estimate and simulate petrophysical properties (porosity and permeability) of a reservoir. Different geostatistical techniques that allow the combination of hard and soft data are considered, and the main reasons for preferring geostatistical simulation over estimation are discussed. Uncertainty in reservoir characterization due to the variogram assumption, a strict mathematical construct that can lead to serious simplification in the description of the natural processes or phenomena under consideration, is treated here. Multiple-point geostatistics methods based on the concept of training images, suggested by Strebelle (2000) and Caers (2003) owing to the variogram's limitation in capturing complex heterogeneity, are another subject presented. This article is intended to provide a review of geostatistical methods to serve the interests of students and researchers.
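
    The variogram at the center of this discussion is estimated in practice with the classical Matheron (method-of-moments) semivariogram. A self-contained sketch on synthetic well data (the coordinates and porosity values below are invented, and the function name is my own):

    ```python
    import numpy as np

    def empirical_semivariogram(coords, values, bins):
        """Classical Matheron estimator: gamma(h) = half the mean squared
        value difference over point pairs whose separation falls in each bin."""
        coords = np.asarray(coords, float)
        values = np.asarray(values, float)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        iu = np.triu_indices(len(values), k=1)        # count each pair once
        dists = d[iu]
        sqdiff = (values[:, None] - values[None, :])[iu] ** 2
        gamma = []
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (dists >= lo) & (dists < hi)
            gamma.append(0.5 * sqdiff[mask].mean() if mask.any() else np.nan)
        return np.array(gamma)

    rng = np.random.default_rng(3)
    pts = rng.uniform(0, 100, size=(200, 2))          # hypothetical well locations
    poro = 0.2 + 0.02 * np.sin(pts[:, 0] / 15) + rng.normal(0, 0.005, 200)
    print(empirical_semivariogram(pts, poro, np.array([0.0, 10, 20, 40])))
    ```

    Fitting a parametric model (spherical, exponential, ...) through these binned points is exactly the step whose simplifying assumptions motivate the multiple-point alternatives discussed in the article.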

  5. Office 2013 simplified

    CERN Document Server

    Marmel, Elaine


    A basic introduction to learn Office 2013 quickly, easily, and in full color Office 2013 has new features and tools to master, and whether you're upgrading from an earlier version or using the Office applications for the first time, you'll appreciate this simplified approach. Offering a clear, visual style of learning, this book provides you with concise, step-by-step instructions and full-color screen shots that walk you through the applications in the Microsoft Office 2013 suite: Word, Excel, PowerPoint, Outlook, and Publisher.Shows you how to tackle dozens of Office 2013

  6. Implant success!!!.....simplified

    Directory of Open Access Journals (Sweden)

    Luthra Kaushal


    Full Text Available The endeavor towards life-like restoration has helped nurture new vistas in the art and science of implant dentistry. The protocol of "restoration-driven implant placement" ensures that the implant is an apical extension of the ideal future restoration and not the opposite. Meticulous pre-implant evaluation of soft and hard tissues, diagnostic cast and use of aesthetic wax-up and radiographic template combined with surgical template can simplify the intricate roadmap for appropriate implant treatment. By applying the harmony of artistic skill, scientific knowledge and clinical expertise, we can simply master the outstanding implant success in requisites of aesthetics, phonetics and function.

  7. Implant success!!!.....simplified. (United States)

    Luthra, Kaushal K


    The endeavor towards life-like restoration has helped nurture new vistas in the art and science of implant dentistry. The protocol of "restoration-driven implant placement" ensures that the implant is an apical extension of the ideal future restoration and not the opposite. Meticulous pre-implant evaluation of soft and hard tissues, diagnostic cast and use of aesthetic wax-up and radiographic template combined with surgical template can simplify the intricate roadmap for appropriate implant treatment.By applying the harmony of artistic skill, scientific knowledge and clinical expertise, we can simply master the outstanding implant success in requisites of aesthetics, phonetics and function.

  8. Creating Web Pages Simplified

    CERN Document Server

    Wooldridge, Mike


    The easiest way to learn how to create a Web page for your family or organization Do you want to share photos and family lore with relatives far away? Have you been put in charge of communication for your neighborhood group or nonprofit organization? A Web page is the way to get the word out, and Creating Web Pages Simplified offers an easy, visual way to learn how to build one. Full-color illustrations and concise instructions take you through all phases of Web publishing, from laying out and formatting text to enlivening pages with graphics and animation. This easy-to-follow visual guide sho

  9. Windows 10 simplified

    CERN Document Server

    McFedries, Paul


    Learn Windows 10 quickly and painlessly with this beginner's guide Windows 10 Simplified is your absolute beginner's guide to the ins and outs of Windows. Fully updated to cover Windows 10, this highly visual guide covers all the new features in addition to the basics, giving you a one-stop resource for complete Windows 10 mastery. Every page features step-by-step screen shots and plain-English instructions that walk you through everything you need to know, no matter how new you are to Windows. You'll master the basics as you learn how to navigate the user interface, work with files, create

  10. Simplified Monitoring System

    CERN Document Server

    Jelinskas, Adomas


    This project can be considered as a model for simplified grid monitoring. In particular, I created a specific monitoring instance which can be easily set up on a machine and which, depending on input information, automatically starts monitoring services using the Nagios software application. I had to automate the set-up process and configuration of the monitoring system so that users can operate it easily. I developed a script which automatically sets up the monitoring system, configures it and starts monitoring. I put the script, files and instructions in the repository, under the sub-directory called SNCG.

  11. Windows 8 simplified

    CERN Document Server

    McFedries, Paul


    The easiest way for visual learners to get started with Windows 8 The popular Simplified series makes visual learning easier than ever, and with more than 360,000 copies sold, previous Windows editions are among the bestselling Visual books. This guide goes straight to the point with easy-to-follow, two-page tutorials for each task. With full-color screen shots and step-by-step directions, it gets beginners up and running on the newest version of Windows right away. Learn to work with the new interface and improved Internet Explorer, manage files, share your computer, and much more. Perfect fo


    Directory of Open Access Journals (Sweden)



    Full Text Available The article presents the results of studies of freight transportation by unit trains. The article is aimed at developing methods for evaluating the efficiency of unit train dispatch on the basis of full-scale experiments. The duration of car turnover is a random variable when dispatching single cars and car groups, as well as when dispatching them as part of a unit train. The existing methodologies for evaluating the efficiency of unit train make-up are based on calculation methodologies whose results can contain significant errors. The work presents a methodology that makes it possible to evaluate the efficiency of unit train shipments based on processing the results of experimental trips using the methods of mathematical statistics. This approach provides probabilistic estimates of rolling stock use efficiency for different approaches to the organization of car traffic volumes, and establishes the effect for each of the participants in the transportation process.

  13. Methane exchange in a boreal forest estimated by gradient method

    Directory of Open Access Journals (Sweden)

    Elin Sundqvist


    Full Text Available Forests are generally considered to be net sinks of atmospheric methane (CH4), owing to oxidation by methanotrophic bacteria in well-aerated forest soils. However, emissions from wet forest soils, and sometimes canopy fluxes, are often neglected when quantifying the CH4 budget of a forest. We used a modified Bowen ratio method and combined eddy covariance and gradient methods to estimate net CH4 exchange at a boreal forest site in central Sweden. Results indicate that the site is a net source of CH4. This contrasts with soil, branch and leaf chamber measurements, which show uptake of CH4. Wetter soils within the footprint of the canopy are thought to be responsible for the discrepancy. We found no evidence for canopy emissions per se. However, the diel pattern of the CH4 exchange, with minimum emissions at daytime, correlated well with gross primary production, which supports an uptake in the canopy. More distant source areas could also contribute to the diel pattern; their contribution might be greater at night during stable boundary layer conditions.

  14. Pipeline heating method based on optimal control and state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail:; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail:; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail:


    In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, known as an active heating system, aims at keeping the temperature of the produced fluid above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
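The Bayesian filtering step described in this record can be illustrated with a minimal sketch that is not the authors' finite-volume model: a scalar Kalman filter tracking a lumped pipe-wall temperature from noisy external-surface readings. The cooling rate, ambient temperature, and noise levels below are invented for illustration.

```python
import numpy as np

def kalman_temperature(z, T_amb=4.0, alpha=0.05, q=0.01, r=0.25):
    """Scalar Kalman filter for a lumped cooling model:
    T[k+1] = T[k] + alpha*(T_amb - T[k]) + w,   z[k] = T[k] + v."""
    a = 1.0 - alpha                     # transition for x = T - T_amb
    x, p = z[0] - T_amb, 1.0            # initial state and variance
    est = []
    for zk in z:
        x, p = a * x, a * a * p + q     # predict
        k = p / (p + r)                 # Kalman gain
        x += k * ((zk - T_amb) - x)     # update with measurement
        p *= (1.0 - k)
        est.append(x + T_amb)
    return np.array(est)

rng = np.random.default_rng(0)
t = np.arange(200)
true_T = 4.0 + (60.0 - 4.0) * (1 - 0.05) ** t       # true cooling curve
z = true_T + rng.normal(0.0, 0.5, t.size)           # noisy surface readings
est = kalman_temperature(z)
print(abs(est - true_T).mean(), abs(z - true_T).mean())  # filter error < raw noise
```

The filtered estimate could then feed a controller that switches the heating on whenever the estimate approaches the deposit-formation temperature, in the spirit of the paper's control design.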

  15. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher


    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...... and the heave acceleration. Resonance excitation, parametric roll and forced roll are all included in the model, albeit with some simplifications. The result is the mean out-crossing rate of the roll angle together with the corresponding most probable wave scenarios (critical wave episodes), leading to user...
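FORM reduces an extreme-response probability to finding the most probable failure point in standard normal space; its reliability index is the distance from the origin to that point. Below is a generic sketch of the Hasofer-Lind/Rackwitz-Fiessler (HLRF) iteration on a made-up linear limit state, not the paper's roll-motion model:

```python
import numpy as np

def hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    """HLRF iteration: find the design point u* minimizing ||u||
    subject to g(u) = 0; beta = ||u*|| is the reliability index."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        u_new = (gr @ u - gv) / (gr @ gr) * gr   # standard HLRF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u)

# illustrative linear limit state g(u) = 4 - u1 - u2: exact beta = 4/sqrt(2)
g = lambda u: 4.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
beta = hlrf(g, grad_g, u0=[0.0, 0.0])
print(beta)  # ≈ 2.8284
```

In an application like the one above, beta feeds the mean out-crossing rate (roughly proportional to exp(-beta^2/2)), and the design point corresponds to the critical wave episode.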

  16. A Practical Method to Estimate the Aerodynamic Coefficients of a Small-Scale Paramotor

    Directory of Open Access Journals (Sweden)

    Razvan-Viorel MIHAI


    Full Text Available There are few aircraft other than lighter-than-air vehicles that have the payload carrying capability, short field take-off, and slow speed ranges afforded by a powered parafoil. One very interesting aspect of powered parafoils, or paramotors, is their tendency to fly at a constant airspeed whether climbing, descending, or flying straight-and-level. Not only is the aircraft speed stable, but it has pendulum stability as well, due to the mass of the airframe suspended significantly below the canopy. This allows the aircraft to maintain a safe roll attitude and effectively turn in a coordinated manner when the steering pedals are deflected. One of the challenges of flying these aircraft is the necessity of controlling altitude with thrust, and direction with asymmetric drag. The paper presents a practical method to estimate the aerodynamic coefficients of a small-scale paramotor in order to obtain a suitable mathematical model for the aerial vehicle. Thus, a reduced-state linear model based on a simplified nonlinear six-degree-of-freedom (6 DOF) model is described. The autonomous control relies on the paramotor dynamics, and those equations depend on the aerodynamic coefficients. The task in this paper is to record data from the steady-state flight regime and to process it offline. Therefore, the system identification of the small-scale aerial vehicle can be done using the Two-Step Method, resulting in an efficient six-degree-of-freedom mini-paramotor model. The current work will permit the implementation of the control architecture in order to achieve the autonomous control of the small-scale paramotor through waypoints.
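Offline identification of aerodynamic coefficients from recorded steady-state data typically reduces to a least-squares regression. The sketch below recovers a hypothetical drag polar CD = CD0 + k*CL^2 from synthetic measurements; the coefficient values and noise level are invented, and this is not the paper's 6-DOF Two-Step Method itself:

```python
import numpy as np

rng = np.random.default_rng(1)
CD0_true, k_true = 0.08, 0.05           # hypothetical drag-polar coefficients
CL = rng.uniform(0.4, 1.2, 50)          # lift coefficients in steady flight
CD = CD0_true + k_true * CL**2 + rng.normal(0.0, 1e-3, CL.size)

# model linear in the parameters: CD = [1, CL^2] @ [CD0, k]
A = np.column_stack([np.ones_like(CL), CL**2])
(CD0_est, k_est), *_ = np.linalg.lstsq(A, CD, rcond=None)
print(CD0_est, k_est)  # close to 0.08 and 0.05
```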

  17. Child mortality estimation 2013: an overview of updates in estimation methods by the United Nations Inter-agency Group for Child Mortality Estimation. (United States)

    Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen


    In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues.

  18. Simplified Model of Brushless Synchronous Generator for Real Time Simulation

    CERN Document Server

    Lopez, M D; Rebollo, E; Blanquez, F R


    This paper presents a simplified model of a brushless synchronous machine for saving hardware resources in a real-time simulation system. Firstly, a brushless excitation system model is described. Thereafter, the simplified transfer function of the AC exciter and rotating diodes of the brushless excitation system is estimated. Finally, the complete system is simulated, comparing the main generator's voltage with both the detailed and the simplified excitation systems in several scenarios. The results show the accuracy of the simplified model against the detailed simulation model, resulting in important hardware resource savings.
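Approximating the exciter and rotating diodes by a low-order transfer function, say a first-order lag K/(Ts+1), can be sketched with a simple Euler simulation. The gain and time constant below are arbitrary placeholders, not values from the paper's machine:

```python
import numpy as np

def first_order_step(K=1.5, T=0.4, dt=1e-3, t_end=3.0):
    """Euler simulation of y' = (K*u - y)/T for a unit step input u = 1."""
    n = int(t_end / dt)
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = y[i - 1] + dt * (K * 1.0 - y[i - 1]) / T
    return y

y = first_order_step()
print(y[-1])  # settles near the gain K = 1.5
```

A real-time simulator benefits from such a reduction because one difference equation per step replaces the full electromagnetic model of the exciter.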

  19. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker


    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...... results for the period before thermal penetration (tp) has occurred. The analysis is also done for all combinations of two parameters in order to find the combination with the largest effect. The Sobol total for pairs had the highest value for the combination of energy release rate and area of opening...
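Sobol sensitivity indices of the kind used in the analysis can be estimated by Monte Carlo "pick-freeze" sampling. The toy model below (a linear function, not the fire model) has known analytic indices, so the estimate can be checked:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
f = lambda x1, x2: 2.0 * x1 + 1.0 * x2   # toy model: S1 = 4/5, S2 = 1/5

x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
x2b = rng.standard_normal(N)             # resample x2, "freeze" x1

y  = f(x1, x2)
y1 = f(x1, x2b)                          # shares only x1 with y

# first-order Sobol index: S1 = Cov(y, y1) / Var(y)
S1 = np.cov(y, y1)[0, 1] / np.var(y)
print(S1)  # ≈ 0.8
```

Total indices and pair effects, as examined in the paper, follow the same sampling idea with different freezing patterns.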

  20. Hardware architecture design of a fast global motion estimation method (United States)

    Liang, Chaobing; Sang, Hongshi; Shen, Xubang


    VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and the reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3x3 patch in the template image, a 5x5 area containing the warped 3x3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps to keep the off-chip memory bandwidth requirement to a minimum. Although patch size varies at different pyramid levels, all patches are processed in terms of 3x3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design utilizes 24,080 bits of on-chip memory and, for a sequence with a resolution of 352x288 and a frequency of 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications like video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimum memory bandwidth requirement is appreciated.

  1. Infrared thermography method for fast estimation of phase diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Palomo Del Barrio, Elena [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Cadoret, Régis [Centre National de la Recherche Scientifique, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Daranlot, Julien [Solvay, Laboratoire du Futur, 178 Av du Dr Schweitzer, 33608 Pessac (France); Achchaq, Fouzia, E-mail: [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France)


    Highlights: • Infrared thermography is proposed to determine phase diagrams in record time. • Phase boundaries are detected by means of emissivity changes during heating. • Transition lines are identified by using singular value decomposition techniques. • Different binary systems have been used for validation purposes. - Abstract: Phase change materials (PCM) are widely used today in thermal energy storage applications. Pure PCMs are rarely used because of non-adapted melting points; instead, mixtures are preferred. The search for suitable mixtures, preferably eutectics, is often a tedious and time-consuming task which requires the determination of phase diagrams. In order to accelerate this screening step, a new method for estimating phase diagrams in record time (1–3 h) has been established and validated. A sample composed of small droplets of mixtures with different compositions (as many as necessary to have good coverage of the phase diagram) deposited on a flat substrate is first prepared and cooled down to ambient temperature so that all droplets crystallize. The plate is then heated at a constant heating rate up to a temperature sufficiently high to melt all the small crystals. The heating process is imaged using an infrared camera. An appropriate method based on the singular value decomposition technique has been developed to analyze the recorded images and to determine the transition lines of the phase diagram. The method has been applied to determine several simple eutectic phase diagrams, and the results have been validated by comparison with phase diagrams obtained by Differential Scanning Calorimetry measurements and by thermodynamic modelling.

  2. Seismic Methods of Identifying Explosions and Estimating Their Yield (United States)

    Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.


    Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al. 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However, a lesson from the three declared DPRK nuclear explosions since 2006 is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and in accurately estimating their yield, we need to put our empirical methods on a firmer physical footing. The goals of current research are to improve our physical understanding of the mechanisms by which explosions generate S- and surface-waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites, including the DPRK and the former Nevada Test Site (now the Nevada National Security Site, NNSS). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and to understand those differences in terms of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at the NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally, we are also exploring the value of combining seismic information with other technologies, including acoustic and InSAR techniques, to better understand source characteristics. Our goal is to improve our explosion models

  3. A non-destructive method for estimating onion leaf area

    Directory of Open Access Journals (Sweden)

    Córcoles J.I.


    Full Text Available Leaf area is one of the most important parameters for characterizing crop growth and development, and its measurement is useful for examining the effects of agronomic management on crop production. It is related to interception of radiation, photosynthesis, biomass accumulation, transpiration and gas exchange in crop canopies. Several direct and indirect methods have been developed for determining leaf area. The aim of this study is to develop an indirect method, based on the use of a mathematical model, to compute leaf area in an onion crop using non-destructive measurements, with the condition that the model must be practical and useful as a Decision Support System tool to improve crop management. A field experiment was conducted in a 4.75 ha commercial onion plot irrigated with a centre pivot system in Aguas Nuevas (Albacete, Spain) during the 2010 irrigation season. To determine onion crop leaf area in the laboratory, the crop was sampled on four occasions between 15 June and 15 September. At each sampling event, eight experimental plots of 1 m2 were used, and the leaf area of individual leaves was computed using two indirect methods, one based on the use of an automated infrared imaging system, LI-COR-3100C, and the other using a digital scanner, EPSON GT-8000, obtaining several images that were processed using ImageJ v1.43 software. A total of 1146 leaves were used. Before measuring the leaf area, 25 parameters related to leaf length and width were determined for each leaf. The combined application of principal component analysis and cluster analysis for grouping leaf parameters was used to reduce the number of variables from 25 to 12. The parameter derived from the product of the total leaf length (L) and the leaf diameter at a distance of 25% of the total leaf length (A25) gave the best results for estimating leaf area using a simple linear regression model. The model obtained was useful for computing leaf area using a non-destructive method.
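The final model, leaf area regressed on the product L*A25, is an ordinary linear regression. A sketch with synthetic leaf measurements (the ranges, proportionality constant, and noise level below are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
L   = rng.uniform(20.0, 60.0, 100)     # total leaf length, cm (synthetic)
A25 = rng.uniform(0.5, 1.5, 100)       # diameter at 25% of length, cm
area = 0.9 * L * A25 + rng.normal(0.0, 1.0, L.size)  # assumed relation + noise

# fit leaf_area = b0 + b1 * (L * A25)
b1, b0 = np.polyfit(L * A25, area, 1)
print(b0, b1)  # b1 close to 0.9, b0 close to 0
```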

  4. A review of structure-based biodegradation estimation methods. (United States)

    Raymond, J W; Rogers, T N; Shonnard, D R; Kline, A A


    Biodegradation, being the principal abatement process in the environment, is the most important parameter influencing the toxicity, persistence, and ultimate fate of organic chemicals in aquatic and terrestrial ecosystems. Biodegradation of an organic chemical in natural systems may be classified as primary (alteration of molecular integrity), ultimate (complete mineralization, i.e. conversion to inorganic compounds and/or normal metabolic processes), or acceptable (toxicity ameliorated). Most of the biodegradation correlations presented in the literature focus on the characterization of primary or ultimate aerobic degradation. The US Environmental Protection Agency (USEPA) is charged with determining the risks associated with the thousands of chemicals employed in commerce, an effort that is being facilitated through much research aimed at reliable structure-activity relationships (SARs) to predict biodegradation of chemicals in natural systems. To this end, models are needed to understand the mechanisms of biodegradation, to classify chemicals according to relative biodegradability, and to develop reliable biodegradation estimation methods for new chemicals. Frequently, published correlations associating molecular structure with biodegradation attempt to quantify the degradability of a limited set of homologous chemicals. These correlations have been dubbed quantitative structure-biodegradability relationships (QSBRs). More scarce, and more valuable to researchers, are those models that predict the biodegradability of compounds possessing a wide variety of chemical structures. The latter may use any of several techniques and molecular descriptors to correlate biodegradability: QSBRs, pattern recognition, discriminant analysis, and principal component analysis (PCA), to name several. Generally, models either predict the propensity of a chemical to biodegrade using Boolean-type logic (i.e. whether a chemical will "readily biodegrade" or not), or else they quantify the degree of

  5. Estimation of preterm labor immediacy by nonlinear methods.

    Directory of Open Access Journals (Sweden)

    Iker Malaina

    Full Text Available Preterm delivery affects about one tenth of human births and is associated with increased perinatal morbimortality as well as with remarkable costs. Even though there are a number of predictors and markers of preterm delivery, none of them has high accuracy. In order to find quantitative indicators of the immediacy of labor, 142 cardiotocographies (CTGs) recorded from women consulting for suspected threatened premature delivery, with gestational ages between 24 and 35 weeks, were collected and analyzed. These 142 samples were divided into two groups: the delayed labor group (n = 75), formed by the women who delivered more than seven days after the tocography was performed, and the anticipated labor group (n = 67), which corresponded to the women whose labor took place during the seven days following the recording. As a means of finding significant differences between the two groups, some key informational properties were analyzed by applying nonlinear techniques to the tocography recordings. Both the regularity and the persistence levels of the delayed labor group, measured by Approximate Entropy (ApEn) and the Generalized Hurst Exponent (GHE) respectively, were found to be significantly different from those of the anticipated labor group. As delivery approached, the values of ApEn tended to increase while the values of GHE tended to decrease, suggesting that these two methods are sensitive to labor immediacy. In this paper, for the first time, we have been able to estimate childbirth immediacy by applying nonlinear methods to tocographies. We propose the use of the techniques herein described as new quantitative diagnostic tools for premature birth that significantly improve the current protocols for preterm labor prediction worldwide.
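Approximate Entropy, one of the two indicators used in the study, can be computed directly from its definition. The sketch below is a generic implementation with arbitrary m and r, not the study's CTG settings; it shows the expected behavior that a regular signal scores lower than a random one:

```python
import math
import numpy as np

def approx_entropy(u, m=2, r=None):
    """ApEn(m, r): regularity statistic; lower values = more regular."""
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * u.std()               # common convention: r = 0.2 * SD
    def phi(mm):
        n = len(u) - mm + 1
        x = np.array([u[i:i + mm] for i in range(n)])
        # fraction of templates within Chebyshev distance r of each template
        c = [(np.max(np.abs(x - xi), axis=1) <= r).mean() for xi in x]
        return sum(math.log(ci) for ci in c) / n
    return phi(m) - phi(m + 1)

t = np.arange(300)
regular = np.sin(0.2 * t)                               # highly regular signal
noise = np.random.default_rng(4).standard_normal(300)   # irregular signal
print(approx_entropy(regular), approx_entropy(noise))   # regular < random
```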

  6. Simplifying massive planar subdivisions

    DEFF Research Database (Denmark)

    Arge, Lars; Truelsen, Jakob; Yang, Jungwoo


    We present the first I/O- and practically-efficient algorithm for simplifying a planar subdivision, such that no point is moved more than a given distance εxy and such that neighbor relations between faces (homotopy) are preserved. Under some practically realistic assumptions, our algorithm uses O(SORT(N)) I/Os, where N is the size of the decomposition and SORT(N) is the number of I/Os needed to sort in the standard external-memory model of computation. Previously, such an algorithm was only known for the special case of contour map simplification. Our algorithm is simple enough to be of practical interest. In fact, although more general, it is significantly simpler than the previous contour map simplification algorithm. We have implemented our algorithm and present results of experimenting with it on massive real-life data. The experiments confirm that the algorithm is efficient in practice.
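The distance-bounded guarantee (no point moves more than a given ε) is the same criterion behind the classic in-memory Douglas-Peucker algorithm, sketched below for a single polyline. This is only an illustration of the distance criterion; the paper's contribution is the I/O-efficient, homotopy-preserving version for whole subdivisions, which this sketch does not attempt:

```python
import numpy as np

def douglas_peucker(pts, eps):
    """Simplify polyline pts (n x 2) so every dropped point lies
    within perpendicular distance eps of the simplified chain."""
    pts = np.asarray(pts, dtype=float)
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]
    ab = b - a
    denom = np.hypot(ab[0], ab[1]) or 1.0
    q = pts[1:-1]
    # perpendicular distance of each interior point to segment a-b
    d = np.abs(ab[0] * (q[:, 1] - a[1]) - ab[1] * (q[:, 0] - a[0])) / denom
    i = int(np.argmax(d)) + 1
    if d[i - 1] <= eps:
        return np.vstack([a, b])        # all interior points within eps: drop them
    left = douglas_peucker(pts[:i + 1], eps)   # split at farthest point, recurse
    right = douglas_peucker(pts[i:], eps)
    return np.vstack([left[:-1], right])

line = np.array([[0, 0], [1, 0.05], [2, -0.04], [3, 0.9], [4, 0]])
simplified = douglas_peucker(line, eps=0.1)
print(simplified)
```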

  7. A novel optical method for estimating the near-wall volume fraction in granular flows (United States)

    Sarno, Luca; Nicolina Papa, Maria; Carleo, Luigi; Tai, Yih-Chin


    Geophysical phenomena such as debris flows, pyroclastic flows and rock avalanches involve the rapid flow of granular mixtures. The dynamics of these flows is still far from fully understood, owing to their complexity compared with clear water or other monophasic fluids. In this regard, physical models at laboratory scale are important tools for understanding the still unclear properties of granular flows and their constitutive laws under simplified experimental conditions. Besides the velocity and the shear rate, the volume fraction is also strongly interlinked with the rheology of granular materials; yet a reliable estimation of this quantity is not easy through non-invasive techniques. In this work a novel cost-effective optical method for estimating the near-wall volume fraction is presented and then applied to a laboratory study of steady-state granular flows. A preliminary numerical investigation, through Monte Carlo generations of grain distributions under controlled illumination conditions, allowed us to find the stochastic relationship between the near-wall volume fraction, c3D, and a measurable quantity (the two-dimensional volume fraction), c2D, obtainable through an appropriate binarization of gray-scale images captured by a camera placed in front of the transparent boundary. Such a relation can be well described by c3D = a exp(b c2D), with parameters depending only on the angle of incidence of light, ζ. An experimental validation of the proposed approach is carried out on dispersions of white plastic grains immersed in various ambient fluids. The mixture, confined in a box with a transparent window, is illuminated by a flicker-free LED lamp, placed so as to form a given ζ with the measuring surface, and is photographed by a camera placed in front of the same window. The predicted exponential law is found to be in sound agreement with experiments for a wide range of ζ (10° digital scale at the channel outlet for measuring the mass flow
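The reported law c3D = a·exp(b·c2D) is linear in log-space, so a and b can be recovered by ordinary least squares on log(c3D). A sketch on synthetic calibration pairs (the constants, ranges, and noise level below are invented, not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(5)
a_true, b_true = 0.12, 2.4               # hypothetical calibration constants
c2d = rng.uniform(0.1, 0.7, 80)          # measured 2-D (areal) fractions
c3d = a_true * np.exp(b_true * c2d) * np.exp(rng.normal(0.0, 0.01, 80))

# log-linear fit: log(c3d) = log(a) + b * c2d
b_est, log_a = np.polyfit(c2d, np.log(c3d), 1)
print(np.exp(log_a), b_est)  # close to 0.12 and 2.4
```

In practice one such fit would be repeated per angle of incidence ζ, since the paper reports that a and b depend only on ζ.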

  8. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.


    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is formulated as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through numerical simulations. The robustness of the method in the presence of noise is also studied.
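The modulating-function idea can be shown on a much simpler problem than the damped wave equation: estimating the decay rate a in y' = -a·y. Multiplying by a function φ that vanishes at both endpoints and integrating by parts moves the derivative from the (possibly noisy) data onto the known φ, giving a = ∫φ'y / ∫φy. All signals and values below are illustrative:

```python
import numpy as np

def trapz(f, t):
    """Trapezoidal integration (kept local for NumPy-version independence)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

a_true, T = 2.0, 1.0
t = np.linspace(0.0, T, 2001)
y = np.exp(-a_true * t)                      # data generated by y' = -a*y

phi = np.sin(np.pi * t / T) ** 2             # modulating function: phi(0) = phi(T) = 0
dphi = (np.pi / T) * np.sin(2 * np.pi * t / T)

# integration by parts: ∫phi*y' = -∫phi'*y, and the ODE gives ∫phi*y' = -a∫phi*y
a_est = trapz(dphi * y, t) / trapz(phi * y, t)
print(a_est)  # ≈ 2.0
```

The paper's method applies the same integral-projection trick to the damped wave equation, where it turns the source (input) estimation into solvable algebraic equations.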

  9. Stability over Time of Different Methods of Estimating School Performance (United States)

    Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu


    This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…

  10. Accuracy of a new bedside method for estimation of circulating blood volume

    DEFF Research Database (Denmark)

    Christensen, P; Waever Rasmussen, J; Winther Henneberg, S


    To evaluate the accuracy of a modification of the carbon monoxide method of estimating the circulating blood volume.

  11. A simplified method for the assessment of carbon balance in agriculture: an application in organic and conventional micro-agroecosystems in a long-term experiment in Tuscany, Italy

    Directory of Open Access Journals (Sweden)

    Giulio Lazzerini


    Full Text Available Many research works propose sophisticated methods to analyse the carbon balance, while only a few tools are available for the calculation of both greenhouse gas emissions and carbon sequestration with simplified methods. This paper describes a carbon balance assessment conducted at farm level with a simplified methodology, which includes calculations of both CO2 emissions and carbon sequestration in crop rotations. This carbon balance was tested in the Montepaldi Long Term Experiment (MOLTE) trial in central Italy, where two agroecosystems managed with two different farming practices (organic vs conventional) are compared. Both in terms of CO2eq emissions and carbon sequestration, this simplified method applied in our experiment provided results comparable to those yielded by the complex methodologies reported in the literature. With regard to the crop rotation scheme applied in the reference period (2003-2007), CO2 emissions from various farm inputs were found to be significantly lower (0.74 Mg ha-1) in the organically managed system than in the conventionally managed system (1.76 Mg ha-1). The same trend was observed in terms of CO2eq per unit of product (0.30 Mg kg-1 in the organic system and 0.78 Mg kg-1 in the conventional system). In the conventional system the sources that contributed most to total emissions were direct and indirect emissions associated with the use of fertilisers and diesel fuel. The stock of sequestered carbon was also significantly higher in the organic system (27.9 Mg ha-1 of C) than in the conventional system (24.5 Mg ha-1 of C). The carbon sequestration rate did not show any significant difference between the two systems. It will be necessary to further test this methodology in commercial farms and to validate the indicators used to monitor carbon fluxes at farm level.

  12. Analytic Method to Estimate Particle Acceleration in Flux Ropes (United States)

    Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.


    The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
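
The island-count claim follows from simple compounding: if each contracting island multiplies a particle's energy by a factor of 2-5, the number of transits needed for a total gain of ~100 can be checked directly (a back-of-the-envelope sketch, not part of the authors' model):

```python
import math

def islands_needed(gain_per_island, total_gain=100.0):
    """Number of island transits needed to reach total_gain when each
    contracting island multiplies particle energy by gain_per_island."""
    return math.ceil(math.log(total_gain) / math.log(gain_per_island))

# Per-island gains of 5 and 2 bracket the 3-7 island range quoted above.
print(islands_needed(5), islands_needed(2))  # → 3 7
```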

  13. Method for estimating off-axis pulse tube losses (United States)

    Fang, T.; Mulcahey, T. I.; Taylor, R. P.; Spoor, P. S.; Conrad, T. J.; Ghiaasiaan, S. M.


Some Stirling-type pulse tube cryocoolers (PTCs) are sensitive to gravitational orientation and often suffer significant cooling performance losses unless situated with the cold end pointing downward. Prior investigations have indicated that some coolers exhibit this sensitivity while others do not; however, a reliable method of predicting the level of sensitivity during the design process has not been developed. In this study, we present a relationship that estimates an upper limit to gravitationally induced losses as a function of the dimensionless pulse tube convection number (NPTC) and that can be used to ensure that a PTC remains functional under adverse static tilt conditions. The empirical relationship is based on experimental data as well as experimentally validated 3-D computational fluid dynamics (CFD) simulations that examine the effects of frequency, mass flow rate, pressure ratio, mass-pressure phase difference, hot- and cold-end temperatures, and static tilt angle. The validation of the computational model is based on experimental data collected from six commercial pulse tube cryocoolers. The simulation results are obtained from component-level models of the pulse tube and heat exchangers. Parameter ranges covered in the component-level simulations are 0-180° for tilt angle, 4-8 for length-to-diameter ratio, 4-80 K for cold-tip temperature, -30° to +30° for mass-flow-to-pressure phase angle, and 25-60 Hz for operating frequency. Simulation results and experimental data are aggregated to yield the relationship between inclined PTC performance and the pulse tube convection number. The results indicate that the pulse tube convection number can be used as an order-of-magnitude indicator of orientation sensitivity, but CFD simulations should be used to calculate the change in energy flow more accurately.

  14. Performance Analysis of Methods for Estimating Weibull Parameters ...

    African Journals Online (AJOL)

    In this study, five numerical Weibull distribution methods, namely, the maximum likelihood method, the modified maximum likelihood method (MLM), the energy pattern factor method (EPF), the graphical method (GM), and the empirical method (EM) were explored using hourly synoptic data collected from 1985 to 2013 in the ...
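
As a concrete illustration of one of the methods named above, the empirical method is often implemented with Justus's formulas, which take the Weibull shape k from the coefficient of variation of the wind speeds and the scale c from the sample mean; a minimal sketch, assuming this common variant of the EM:

```python
import math
import statistics

def weibull_empirical(speeds):
    """Empirical (Justus) method: shape k from the coefficient of
    variation, scale c from the mean via the gamma function.
    Valid roughly for 1 <= k <= 10."""
    mean = statistics.fmean(speeds)
    std = statistics.stdev(speeds)
    k = (std / mean) ** -1.086
    c = mean / math.gamma(1.0 + 1.0 / k)
    return k, c
```

Given a large sample of hourly wind speeds, the estimates recover the underlying Weibull shape and scale closely; the maximum likelihood and graphical methods compared in the study trade accuracy against this simplicity.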

  15. Simplifying Nondeterministic Finite Cover Automata

    Directory of Open Access Journals (Sweden)

    Cezar Câmpeanu


Full Text Available The concept of Deterministic Finite Cover Automata (DFCA) was introduced at WIA '98 as a more compact representation than Deterministic Finite Automata (DFA) for finite languages. In some cases, representing a finite language with a Nondeterministic Finite Automaton (NFA) may significantly reduce the number of states used. The combined power of the succinctness of representing finite languages using both cover languages and non-determinism has been suggested, but never systematically studied. In the present paper, for nondeterministic finite cover automata (NFCA) and l-nondeterministic finite cover automata (l-NFCA), we show that minimization can be as hard as minimizing NFAs for regular languages, even in the case of NFCAs over unary alphabets. Moreover, we show how the methods used to reduce or minimize the size of NFAs/DFCAs/l-DFCAs can be adapted to simplify NFCAs/l-NFCAs.
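
The cover-automaton idea itself can be illustrated with a brute-force check (a toy sketch, not the minimization algorithms studied in the paper): an automaton is a cover automaton for a finite language L with cover length l if it agrees with L on every word of length at most l, while longer words may be accepted or rejected freely — which is exactly what allows fewer states.

```python
from itertools import product

def accepts(delta, start, finals, word):
    """Run a complete DFA given as a dict delta[(state, symbol)] -> state."""
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in finals

def is_cover_automaton(delta, start, finals, alphabet, language, l):
    """Check the cover property: acceptance must match membership in
    the finite language for all words of length <= l; behaviour on
    longer words is unconstrained."""
    for n in range(l + 1):
        for w in product(alphabet, repeat=n):
            word = "".join(w)
            if accepts(delta, start, finals, word) != (word in language):
                return False
    return True
```

For L = {a, aa} over the unary alphabet {a}, the two-state automaton q0 -a-> q1 -a-> q1 (with q1 accepting) is a valid cover automaton with l = 2, even though it also accepts aaa, aaaa, ...; rejecting those longer words is precisely what would force a larger minimal DFA.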

  16. Estimating Semiparametric Econometrics Models by Local Linear Method: With an Application to Cross-Country Growth


    Qi Li; Jeffrey Wooldridge


It is well established that the local linear method dominates the conventional local constant method in estimating nonparametric regression models by kernel methods. In this paper we consider the problem of estimating semiparametric econometric models by the local linear method. We provide a simple proof establishing the joint asymptotic normality of the local linear estimator. We then show that our results can be used to easily derive the asymptotic distributions of local linear estimators for sev...
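
For readers unfamiliar with the estimator under discussion, a minimal local linear fit at a point x0 solves a kernel-weighted least squares problem in (1, x − x0); the Gaussian kernel and the bandwidth below are illustrative assumptions:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of m(x0): weighted least squares of y on
    (1, x - x0) with Gaussian kernel weights of bandwidth h. The fitted
    intercept is the estimate; unlike the local constant (Nadaraya-
    Watson) estimator, the linear term removes first-order bias,
    notably at the boundary."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]
```

Because the fit is exact for linear data, the estimator reproduces a linear regression function without smoothing bias, which is the intuition behind its dominance over the local constant method.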

  17. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll


    as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity...... with several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can...

  18. Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study (United States)

    Suero, Manuel; Privado, Jesús; Botella, Juan


A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters "d'" and "C" of signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, "d'" and "c." Those methods have been mostly assessed by…
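
The quantities involved can be made concrete with a small sketch; the log-linear correction and the Monte Carlo variance estimate below are common choices for illustration, not necessarily the three methods compared in the study:

```python
import random
from statistics import NormalDist, variance

Z = NormalDist().inv_cdf  # inverse standard normal CDF

def sdt_indices(hits, misses, fas, crs):
    """Point estimates of d' and C from a 2x2 detection table, with the
    log-linear correction (add 0.5 per cell) so that perfect hit or
    false-alarm rates stay finite."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (fas + 0.5) / (fas + crs + 1.0)
    return Z(h) - Z(f), -0.5 * (Z(h) + Z(f))

def var_dprime_mc(h_rate, f_rate, n, reps=2000, seed=1):
    """Monte Carlo estimate of Var(d'): resample binomial hit and
    false-alarm counts at the true rates and take the empirical
    variance of the resulting d' values."""
    rng = random.Random(seed)
    ds = []
    for _ in range(reps):
        hits = sum(rng.random() < h_rate for _ in range(n))
        fas = sum(rng.random() < f_rate for _ in range(n))
        d, _ = sdt_indices(hits, n - hits, fas, n - fas)
        ds.append(d)
    return variance(ds)
```

Analytical variance formulas (e.g., the delta method) can be checked against such Monte Carlo estimates, which is essentially what a simulation study of this kind does at scale.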

  19. Development, Validation and Application of a Novel Method for Estimating the Thermal Conductance of Critical Interfaces in the Jaws of the LHC Collimation System

    CERN Document Server

    Leitao, I V


    The motivation for this project arises from the difficulty in quantifying the manufacturing quality of critical interfaces in the water cooled jaws of the TCTP and TCSP (Target Collimator Tertiary Pickup and Target Collimator Secondary Pickup) collimators. These interfaces play a decisive role in the transfer of heat deposited by the beam towards the cooling system avoiding excessive deformation of the collimator. Therefore, it was necessary to develop a non-destructive method that provides an estimation of the thermal conductance during the acceptance test of the TCTP and TCSP jaws. The method is based on experimental measurements of temperature evolution and numerical simulations. By matching experimental and numerical results it is possible to estimate the thermal conductance in several sections of the jaw. A simplified experimental installation was built to validate the method, then a fully automatic Test-Bench was developed and built for the future acceptance of the TCTP/TCSP jaws which will be manufactu...

  20. Method for Estimating the Parameters of LFM Radar Signal

    Directory of Open Access Journals (Sweden)

    Tan Chuan-Zhang


Full Text Available In order to obtain reliable parameter estimates, it is very important to protect the integrity of the linear frequency modulation (LFM) signal. Therefore, in practical LFM radar signal processing, the length of the data frame is often greater than the pulse width (PW) of the signal. In this condition, estimating the parameters by the fractional Fourier transform (FrFT) will cause the signal-to-noise ratio (SNR) to decrease. Aiming at this problem, we multiply the data frame by a Gaussian window to improve the SNR. Besides, to further improve parameter estimation precision, a novel algorithm is derived via Lagrange interpolation polynomials, and we enhance the algorithm with a logarithmic transformation. Simulation results demonstrate that the derived algorithm significantly reduces the estimation errors of the chirp rate and initial frequency.
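
For a clean, noise-free analytic chirp, the two parameters being estimated can be recovered by a least-squares quadratic fit to the unwrapped phase; this toy sketch is not the authors' FrFT/Lagrange algorithm, just a way to make the chirp-rate and initial-frequency definitions concrete:

```python
import numpy as np

def chirp_params_phase_fit(signal, fs):
    """Estimate initial frequency f0 and chirp rate k of a clean
    analytic LFM signal exp(j*2*pi*(f0*t + 0.5*k*t**2)) by fitting a
    quadratic to its unwrapped phase. The phase is
    (pi*k)*t**2 + (2*pi*f0)*t + const, so the polynomial coefficients
    map directly to the parameters. Works only when sampling is fast
    enough for unwrapping (phase steps below pi) and noise is low."""
    t = np.arange(len(signal)) / fs
    phase = np.unwrap(np.angle(signal))
    c2, c1, _ = np.polyfit(t, phase, 2)
    return c1 / (2.0 * np.pi), c2 / np.pi  # (f0, k)
```

In noise, this naive fit degrades quickly, which is why FrFT-based estimators with windowing, as in the paper, are preferred in practice.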

  1. Improving methods estimation of the investment climate of the country

    Directory of Open Access Journals (Sweden)

    E. V. Ryabinin


Investors require the most objective possible assessment of a country's investment climate in order to build their market strategies. The article describes two methods for estimating the investment climate: a fundamental method and an expert method. Studies have shown that the fundamental method provides the most accurate and objective assessment, but not all investment-potential factors can be subjected to mathematical evaluation. Expert assessment, in practice, is difficult to free from subjectivity, so its use requires special care. Modern economic practice has shown that elements of the investment climate directly affect companies' investment decisions. Improving the methodology for assessing the investment climate makes it possible to build optimal forms of cooperation between investors and the host country. Amid today's political tensions, this path requires clear cooperation between actors at both the domestic and international levels; such measures would help avoid destabilizing Russia's relations with foreign investors.

  2. Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods (United States)

    Zhaodi Guo; Jingyun Fang; Yude Pan; Richard. Birdsey


    Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...

  3. Simplified fractional Fourier transforms. (United States)

    Pei, S C; Ding, J J


The fractional Fourier transform (FRFT) has been used for many years, and it is useful in many applications. Most applications of the FRFT are based on the design of fractional filters (such as removal of chirp noise and the fractional Hilbert transform) or on fractional correlation (such as scaled space-variant pattern recognition). In this study we introduce several types of simplified fractional Fourier transform (SFRFT). Such transforms are all special cases of a linear canonical transform (an affine Fourier transform or an ABCD transform). They have the same capabilities as the original FRFT for design of fractional filters or for fractional correlation. But they are simpler than the original FRFT in terms of digital computation, optical implementation, implementation of gradient-index media, and implementation of radar systems. Our goal is to search for the simplest transform that has the same capabilities as the original FRFT. Thus we discuss not only the formulas and properties of the SFRFTs but also their implementation. Although these SFRFTs usually have no additivity properties, they are useful for practical applications. They have great potential for replacing the original FRFT in many applications.


    Directory of Open Access Journals (Sweden)

    Taylor Mac Intyer Fonseca Junior


Full Text Available This work evaluates seven estimation methods of fatigue properties applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared to the estimates obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior during monotonic and cyclic loading are probably the reason for the absence of a reliable method for estimating fatigue behavior from monotonic properties for a group of materials.

  5. Fast, moment-based estimation methods for delay network tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, Earl Christophre [Los Alamos National Laboratory; Michailidis, George [U OF MICHIGAN; Nair, Vijayan N [U OF MICHIGAN


    Consider the delay network tomography problem where the goal is to estimate distributions of delays at the link-level using data on end-to-end delays. These measurements are obtained using probes that are injected at nodes located on the periphery of the network and sent to other nodes also located on the periphery. Much of the previous literature deals with discrete delay distributions by discretizing the data into small bins. This paper considers more general models with a focus on computationally efficient estimation. The moment-based schemes presented here are designed to function well for larger networks and for applications like monitoring that require speedy solutions.
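
The first-moment version of the idea is easy to sketch: mean end-to-end delays are linear in the mean link delays through the routing matrix, so the link means can be recovered by least squares. This is a toy illustration of moment-based estimation; the paper's schemes handle full delay distributions, not just means:

```python
import numpy as np

def link_delay_means(routing, path_means):
    """Moment-based recovery of per-link mean delays: each end-to-end
    mean is the sum of the means of the links on that path, so the
    first moments satisfy the linear system R @ mu = m, solved here in
    the least-squares sense. Higher moments (variances of independent
    link delays also add along a path) can be handled the same way."""
    R = np.asarray(routing, dtype=float)
    m = np.asarray(path_means, dtype=float)
    mu, *_ = np.linalg.lstsq(R, m, rcond=None)
    return mu
```

For three links probed by three overlapping paths, e.g. R = [[1,1,0],[1,0,1],[0,1,1]], the system is identifiable and the link means are recovered exactly; with fewer paths than links, only a least-squares projection is available.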

  6. Detection and identification of dengue virus isolates from Brazil by a simplified reverse transcription - polymerase chain reaction (RT-PCR) method

    Directory of Open Access Journals (Sweden)

    FIGUEIREDO Luiz Tadeu Moraes


Full Text Available We show here a simplified RT-PCR for identification of dengue virus types 1 and 2. Five dengue virus strains, isolated from Brazilian patients, and yellow fever vaccine 17DD as a negative control, were used in this study. C6/36 cells were infected and supernatants were collected after 7 days. The RT-PCR, done in a single reaction vessel, was carried out following a 1/10 dilution of virus in distilled water or in a detergent mixture containing Nonidet P40. The 50 µl assay reaction mixture included 50 pmol of specific primers amplifying a 482 base pair sequence for dengue type 1 and a 210 base pair sequence for dengue type 2. In other assays, we used dengue virus consensus primers having maximum sequence similarity to the four serotypes, amplifying a 511 base pair sequence. The reaction mixture also contained 0.1 mM of the four deoxynucleoside triphosphates, 7.5 U of reverse transcriptase, and 1 U of thermostable Taq DNA polymerase. The mixture was incubated for 5 minutes at 37ºC for reverse transcription, followed by 30 cycles of two-step PCR amplification (92ºC for 60 seconds, 53ºC for 60 seconds with slow temperature increment). The PCR products were subjected to 1.7% agarose gel electrophoresis and visualized by UV light after staining with ethidium bromide solution. A low virus titer, around 10^3.6 TCID50/ml, was detected by RT-PCR for dengue type 1. Specific DNA amplification was observed with all the Brazilian dengue strains using the dengue virus consensus primers. Compared to other RT-PCRs, this assay is less laborious, takes less time, and has a reduced risk of contamination.

  7. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation. (United States)

    Harbert, Robert S; Nixon, Kevin C


    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate.• Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate.• Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods.• CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
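
A drastically simplified, one-variable version of the joint-likelihood step can be sketched as follows. Modeling each coexisting species' climate tolerance as a normal distribution is an illustrative assumption: CRACLE itself characterizes tolerances nonparametrically from specimen collection data.

```python
import numpy as np

def joint_likelihood_climate(mus, sigmas):
    """Toy version of the coexistence-likelihood idea: model each
    coexisting species' tolerance for one climate variable as
    Normal(mu_i, sigma_i). The climate value maximizing the joint
    (product) likelihood is then the precision-weighted mean of the
    species optima."""
    mus = np.asarray(mus, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return float(np.sum(w * mus) / np.sum(w))
```

Species with narrow tolerances (small sigma) pull the estimate toward their optima, which matches the intuition that climatically specialized species are the most informative members of a community.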

  8. Evaluation of some Conventional methods for Estimating Available ...

    African Journals Online (AJOL)

The results of the selective tests showed that Olsen's procedure gave the most reliable estimate of available P. Fractionation of the forms of inorganic phosphorus in the phosphate-treated mud showed that the extracting solutions removed chiefly aluminium phosphate. In the absence of facilities for field, the relationship ...

  9. Estimating forest canopy bulk density using six indirect methods (United States)

    Robert E. Keane; Elizabeth D. Reinhardt; Joe Scott; Kathy Gray; James Reardon


    Canopy bulk density (CBD) is an important crown characteristic needed to predict crown fire spread, yet it is difficult to measure in the field. Presented here is a comprehensive research effort to evaluate six indirect sampling techniques for estimating CBD. As reference data, detailed crown fuel biomass measurements were taken on each tree within fixed-area plots...

  10. An alternative method for the estimation of greenhouse gas ...

    African Journals Online (AJOL)


    was to estimate methane (CH4) and nitrous oxide (N2O) emissions from privately owned game animals based on international recognized methodologies. ..... National inventory report. Department of climate change and energy efficiency, Australian National Inventory Report, Commonwealth of Australia,. Canberra, ACT.

  11. A novel parameter estimation method for metal oxide surge arrester ...

    Indian Academy of Sciences (India)

    In this paper, a new technique, which is the combination of Adaptive Particle Swarm Optimization (APSO) and Ant Colony Optimization (ACO) algorithms and linking the MATLAB and EMTP, is proposed to estimate the parameters of MO surge arrester models. The proposed algorithm is named Modified Adaptive Particle ...

  12. Estimation of protein degradation in rumen by three methods

    African Journals Online (AJOL)

The rumen degradability of protein in diets containing maize straw, fish meal and 0, 30 and 60% maize grain was estimated in three ways: (i) from the difference between the total non-ammonia N and microbial N entering the duodenum over a 24-hour period, using 35S and DAPA as microbial markers, (ii) from the ...

  13. Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations (United States)

    Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.


    Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…

  14. Development of a method to estimate coal pillar loading

    CSIR Research Space (South Africa)

    Roberts, DP


    Full Text Available The primary goal of this project was to determine the accuracy and validity of the Tributary Area Theory (TAT) and to provide better estimates of pillar load using numerical modelling and other tools. Literature review highlighted that previous work...

  15. Estimation of protein degradation in rumen by three methods | Meyer ...

    African Journals Online (AJOL)

The rumen degradability of protein in diets containing maize straw, fish meal and 0, 30 and 60% maize grain was estimated in three ways: (i) from the difference between the total non-ammonia N and microbial N entering the duodenum over a 24-hour period, using 35S and DAPA as microbial markers, (ii) from the ...

  16. A Simple Method for Estimation of Parameters in First order Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Miklos, Robert


    A simple method for estimation of parameters in first order systems with time delays is presented in this paper. The parameter estimation approach is based on a step response for the open loop system. It is shown that the estimation method does not require a complete step response, only a part of...
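
One common concrete realization of this idea is the two-point method, which needs only the times at which a unit step response crosses 28.3% and 63.2% of its final value, i.e. only an early portion of the response; this is an illustrative sketch, and the paper's exact construction may differ:

```python
def first_order_two_point(t28, t63):
    """Two-point estimate of a first-order-plus-dead-time model
    K*exp(-L*s)/(tau*s + 1) from a unit step response. t28 and t63 are
    the times at which the output reaches 28.3% and 63.2% of its final
    value; for this model t63 = L + tau and t63 - t28 = (2/3)*tau, so
    tau = 1.5*(t63 - t28) and L = t63 - tau."""
    tau = 1.5 * (t63 - t28)
    L = t63 - tau
    return tau, L
```

Because both crossing times occur well before the response settles, the method is consistent with the observation above that a complete step response is not required.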

  17. A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design (United States)

    Wang, Tianyou; Brennan, Robert L.


    Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…

  18. Simplified Freeman-Tukey test statistics for testing probabilities in ...

    African Journals Online (AJOL)

    This paper presents the simplified version of the Freeman-Tukey test statistic for testing hypothesis about multinomial probabilities in one, two and multidimensional contingency tables that does not require calculating the expected cell frequencies before test of significance. The simplified method established new criteria of ...
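
For reference, the classical (unsimplified) Freeman-Tukey statistic for a multinomial sample is straightforward to compute; the paper's contribution is a rearrangement that avoids forming the expected cell frequencies explicitly, which this sketch still does:

```python
import math

def freeman_tukey(observed, probs):
    """Classical Freeman-Tukey statistic for a multinomial sample:
    T = sum_i (sqrt(O_i) + sqrt(O_i + 1) - sqrt(4*E_i + 1))^2,
    with E_i = n * p_i; asymptotically chi-square with k-1 degrees of
    freedom under the null hypothesis."""
    n = sum(observed)
    t = 0.0
    for o, p in zip(observed, probs):
        e = n * p
        t += (math.sqrt(o) + math.sqrt(o + 1) - math.sqrt(4 * e + 1)) ** 2
    return t
```

Counts that match the hypothesized probabilities give a statistic near zero, while a badly fitting cell inflates it, just as with the Pearson chi-square statistic.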


    Directory of Open Access Journals (Sweden)

    Brindusa Tudose


Full Text Available In recent decades, tax fraud has grown and has been catalogued as a serious impediment to economic development. The paper aims to make contributions on two levels: (a) a theoretical level, by synthesizing methodologies for estimating tax fraud, and (b) an empirical level, by analyzing fraud mechanisms and the dynamics of this phenomenon using appropriately established methodologies. To achieve this objective, we employed qualitative and quantitative analysis. Whatever the context that generates them, the ultimate goal of fraud mechanisms is the same: total or partial avoidance of taxation, that is, unduly obtaining public funds. The increasing complexity of business (regarded as a tax base) and the failure of legal regulations to adapt promptly to new contexts have allowed the diversification and "improvement" of fraud mechanisms, creating additional risks for the accuracy of tax fraud estimates.

  20. Method for estimating glueball-meson mixing in lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Mandula, J.E.; Papanicolaou, N. (Washington Univ., St. Louis, MO (USA). Dept. of Physics)


    A systematic expansion of lattice QCD amplitudes based on the replica trick is discussed, the leading term of which is the quenched approximation. A parameter is defined that estimates the mixing between glueball and q anti q meson states and provides a test for the reliability of the quenched approximation. The procedure is illustrated by an explicit Monte Carlo calculation for a model system on a one-dimensional lattice.