WorldWideScience

Sample records for simple analytical estimates

  1. Simple analytical expression for crosstalk estimation in homogeneous trench-assisted multi-core fibers

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2014-01-01

    An analytical expression for the mode coupling coefficient in homogeneous trench-assisted multi-core fibers is derived, which has a simple relationship with the one in normal step-index structures. The amount of inter-core crosstalk reduction (in dB) with trench-assisted structures compared to normal step-index structures can then be written as a simple expression. Comparison with numerical simulations confirms that the obtained analytical expression has very good accuracy for crosstalk estimation. The crosstalk properties of trench-assisted multi-core fibers, such as the crosstalk dependence on core pitch and the wavelength dependence of crosstalk, can be obtained from this simple analytical expression.
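
    For context, a widely quoted closed-form estimate of this kind (stated here as background, not as the record's derivation) gives the mean inter-core crosstalk of a homogeneous multi-core fiber under bending and twisting in terms of the mode coupling coefficient κ, the propagation constant β, the core pitch Λ and the bending radius R; the record's contribution is a simple expression for how the trench modifies κ:

    ```latex
    % Mean crosstalk accumulated over fiber length L (coupled-power result):
    \mathrm{XT} \approx \frac{2\,\kappa^{2} R L}{\beta\,\Lambda}
    ```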

  2. Application of a simple analytical model to estimate effectiveness of radiation shielding for neutrons

    International Nuclear Information System (INIS)

    Frankle, S.C.; Fitzgerald, D.H.; Hutson, R.L.; Macek, R.J.; Wilkinson, C.A.

    1993-01-01

    Neutron dose equivalent rates have been measured for 800-MeV proton beam spills at the Los Alamos Meson Physics Facility. Neutron detectors were used to measure the neutron dose levels at a number of locations for each beam-spill test, and neutron energy spectra were measured for several beam-spill tests. Estimates of expected levels for various detector locations were made using a simple analytical model developed for 800-MeV proton beam spills. A comparison of measurements and model estimates indicates that the model is reasonably accurate in estimating the neutron dose equivalent rate for simple shielding geometries. The model fails for more complicated shielding geometries, where indirect contributions to the dose equivalent rate can dominate.
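
    A minimal sketch of the kind of point-kernel estimate such simple shielding models reduce to, assuming a Moyer-type form (source term, angular relaxation, exponential attenuation along the line of sight, inverse-square distance); all parameter values below are illustrative assumptions, not the paper's:

    ```python
    import math

    def dose_rate(H0, beta, theta, d, lam, r):
        """Moyer-type point-kernel estimate of neutron dose equivalent rate.

        H0    -- source term at 1 m per unit beam loss (Sv.m^2 per proton)
        beta  -- angular relaxation parameter (1/rad)
        theta -- angle from the beam direction (rad)
        d     -- shield thickness along the line of sight (g/cm^2)
        lam   -- attenuation length in the shield (g/cm^2)
        r     -- source-to-detector distance (m)
        """
        return H0 * math.exp(-beta * theta) * math.exp(-d / lam) / r**2

    # Illustrative numbers only: 90 degrees, 500 g/cm^2 of concrete, 10 m away.
    print(dose_rate(H0=2.8e-14, beta=2.3, theta=math.pi / 2, d=500.0,
                    lam=117.0, r=10.0))
    ```

    A line-of-sight kernel of this sort captures the "simple geometry" cases; the indirect contributions that the abstract says break the model are exactly what this form leaves out.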

  3. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    Science.gov (United States)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
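
    A hedged sketch of the kind of first-order sizing such a model enables: the ideal rocket equation plus a lumped loss budget, applied stage by stage. All numbers (Δv budget, Isp, payload, structural fraction) are illustrative assumptions, not the paper's coefficients:

    ```python
    import math

    G0 = 9.80665  # standard gravity, m/s^2

    def stage_propellant(m_final, dv, isp):
        """Propellant mass for one stage to deliver dv (ideal rocket equation)."""
        return m_final * (math.exp(dv / (isp * G0)) - 1.0)

    # Illustrative two-stage MAV: orbital dv plus lumped finite-burn, steering
    # and drag losses, split evenly between the stages.
    dv_total = 4100.0 + 350.0   # m/s (assumed budget)
    isp = 285.0                 # s, solid-motor class (assumed)
    mass = 14.0                 # kg, payload/sample container (assumed)

    for stage in (2, 1):        # size the upper stage first, then the lower
        m_prop = stage_propellant(mass, dv_total / 2.0, isp)
        m_dry = 0.15 * m_prop   # assumed structural mass fraction
        mass += m_prop + m_dry
        print(f"stage {stage}: propellant {m_prop:.1f} kg, "
              f"gross mass so far {mass:.1f} kg")
    ```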

  4. Comparison of maximum runup through analytical and numerical approaches for different fault parameter estimates

    Science.gov (United States)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application to realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetric domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variations in fault source parameters and near-shore bathymetric features. To do this, we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run a numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Varying the dip angle of the fault plane shows that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates is a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
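
    For orientation, the best-known closed-form result of the one-dimensional analytical runup theory is Synolakis' runup law for a non-breaking solitary wave of height H on a plane beach of slope β fronted by water of depth d; the formulae applied in this study are of this family, though not necessarily this exact expression:

    ```latex
    % Synolakis (1987) runup law, non-breaking solitary wave on a plane beach:
    \frac{R}{d} = 2.831\,\sqrt{\cot\beta}\,\left(\frac{H}{d}\right)^{5/4}
    ```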

  5. Drift estimation from a simple field theory

    International Nuclear Information System (INIS)

    Mendes, F. M.; Figueiredo, A.

    2008-01-01

    Given the outcome of a Wiener process, what can be said about the drift and diffusion coefficients? If the process is stationary, these coefficients are related to the mean and variance of the position displacement distribution. However, if either drift or diffusion is time-dependent, very little can be said unless some assumption about that dependency is made. In Bayesian statistics, this should be translated into some specific prior probability. We use Bayes' rule to estimate these coefficients from a single trajectory. This defines a simple and analytically tractable field theory.
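
    A minimal sketch of the stationary special case mentioned above, where constant drift and diffusion follow directly from the mean and variance of the increments; the paper's Bayesian field theory generalizes this to time-dependent coefficients:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate a Wiener process with drift: dx = mu*dt + sigma*dW.
    mu_true, sigma_true, dt, n = 0.5, 0.3, 1e-3, 100_000
    dx = mu_true * dt + sigma_true * np.sqrt(dt) * rng.standard_normal(n)

    # For constant coefficients the estimates come straight from the
    # increment statistics: E[dx] = mu*dt and Var[dx] = sigma^2*dt.
    mu_hat = dx.mean() / dt
    sigma_hat = np.sqrt(dx.var() / dt)
    print(f"mu ~ {mu_hat:.3f}, sigma ~ {sigma_hat:.3f}")
    ```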

  6. Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem

    KAUST Repository

    Younis, Mohammad I.

    2014-08-17

    We present analytical solutions of the electrostatically actuated initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beams configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams of initial tip deflections of few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximations they are based on. In such cases, multi-mode reduced order models need to be utilized.

  7. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    The aim was to develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter; with further acceleration and a method to account for multiple scatter, it may be useful for practical scatter correction schemes.

  8. Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem

    KAUST Repository

    Younis, Mohammad I.

    2014-01-01

    We present analytical solutions of the electrostatically actuated initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions

  9. Aquatic concentrations of chemical analytes compared to ecotoxicity estimates

    Science.gov (United States)

    Kostich, Mitchell S.; Flick, Robert W.; Batt, Angela L.; Mash, Heath E.; Boone, J. Scott; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.

    2017-01-01

    We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes may be warranted.
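
    A sketch of the screening comparison described here, phrased as a hazard-quotient calculation (measured concentration divided by an effect concentration, with the 1/10th-of-EC screening band); analyte names and all numbers below are invented for illustration, not the study's data:

    ```python
    # Screening-level comparison: hazard quotient HQ = concentration / EC.
    measured_ug_per_l = {"copper": 4.1, "atrazine": 0.12, "triclosan": 0.05}
    effect_conc_ug_per_l = {"copper": 3.1, "atrazine": 1.5, "triclosan": 0.4}

    for analyte, conc in measured_ug_per_l.items():
        hq = conc / effect_conc_ug_per_l[analyte]
        band = ("exceeds EC" if hq >= 1.0
                else "within 10x of EC" if hq >= 0.1 else "low")
        print(f"{analyte}: HQ = {hq:.2f} ({band})")
    ```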

  10. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized, simple but fast, semi-analytical algorithm for computation of stationary wind farm wind fields, with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed … non-linear. With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind located turbines and subsequently solved successively in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit…

  11. Estimation of the simple correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating the recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
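
    A sketch of the kind of comparison the study performs, using the well-known first-order Olkin-Pratt correction as the "nearly unbiased" competitor; the paper's exact set of formulas may differ:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    rho, n, reps = 0.5, 15, 20_000

    # Draw `reps` bivariate-normal samples of size n with correlation rho.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, n))  # (reps, n, 2)

    xc = x - x.mean(axis=1, keepdims=True)
    r = (xc[..., 0] * xc[..., 1]).sum(axis=1) / np.sqrt(
        (xc[..., 0] ** 2).sum(axis=1) * (xc[..., 1] ** 2).sum(axis=1))

    r_adj = r * (1.0 + (1.0 - r**2) / (2.0 * (n - 3)))  # first-order Olkin-Pratt

    for name, est in [("r", r), ("adjusted r", r_adj)]:
        print(f"{name}: bias {est.mean() - rho:+.4f}, "
              f"MSE {np.mean((est - rho) ** 2):.5f}")
    ```

    Runs of this kind reproduce the qualitative point of the abstract: the correction removes most of the bias, yet in mean-squared-error terms plain r can still win for some combinations of correlation and sample size.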

  12. Simple analytical relations for ship bow waves

    Science.gov (United States)

    Noblesse, Francis; Delhommeau, Gérard; Guilbaud, Michel; Hendrix, Dane; Yang, Chi

    Simple analytical relations for the bow wave generated by a ship in steady motion are given. Specifically, simple expressions that define the height of a ship bow wave, the distance between the ship stem and the crest of the bow wave, the rise of water at the stem, and the bow wave profile, explicitly and without calculations, in terms of the ship speed, draught, and waterline entrance angle, are given. Another result is a simple criterion that predicts, also directly and without calculations, when a ship in steady motion cannot generate a steady bow wave. This unsteady-flow criterion predicts that a ship with a sufficiently fine waterline, specifically with waterline entrance angle 2αE ≤ 25°, may generate a steady bow wave at any speed. However, a ship with a fuller waterline (2αE > 25°) can only generate a steady bow wave if the ship speed is higher than a critical speed, defined in terms of αE by a simple relation. No alternative criterion for predicting when a ship in steady motion does not generate a steady bow wave appears to exist. A simple expression for the height of an unsteady ship bow wave is also given. In spite of their remarkable simplicity, the relations for ship bow waves obtained in the study (using only rudimentary physical and mathematical considerations) are consistent with experimental measurements for a number of hull forms having non-bulbous wedge-shaped bows with small flare angle, and with the authors' measurements and observations for a rectangular flat plate towed at a yaw angle.

  13. An Analytical Cost Estimation Procedure

    National Research Council Canada - National Science Library

    Jayachandran, Toke

    1999-01-01

    Analytical procedures that can be used to do a sensitivity analysis of a cost estimate, and to perform tradeoffs to identify input values that can reduce the total cost of a project, are described in the report...

  14. Analytical estimates of structural behavior

    CERN Document Server

    Dym, Clive L

    2012-01-01

    Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys

  15. Analyticity estimates for the Navier-Stokes equations

    DEFF Research Database (Denmark)

    Herbst, I.; Skibsted, Erik

    We study spatial analyticity properties of solutions of the Navier-Stokes equation and obtain new growth rate estimates for the analyticity radius. We also study stability properties of strong global solutions of the Navier-Stokes equation with data in and prove a stability result...

  16. Development of a simple estimation tool for LMFBR construction cost

    International Nuclear Information System (INIS)

    Yoshida, Kazuo; Kinoshita, Izumi

    1999-01-01

    A simple tool for estimating the construction costs of liquid-metal-cooled fast breeder reactors (LMFBRs), 'Simple Cost', was developed in this study. Simple Cost is based on a new estimation formula that reduces the amount of design data required to estimate construction costs. Consequently, Simple Cost can be used to estimate the construction costs of innovative LMFBR concepts for which detailed design has not been carried out. The results of test calculations show that Simple Cost provides cost estimates equivalent to those obtained with conventional methods within the range of plant power from 325 to 1500 MWe. Sensitivity analyses for typical design parameters were conducted using Simple Cost. The effects of four major parameters - reactor vessel diameter, core outlet temperature, sodium handling area and number of secondary loops - on the construction costs of LMFBRs were evaluated quantitatively. The results show that the reduction of sodium handling area is particularly effective in reducing construction costs. (author)

  17. A simple analytical model of single-event upsets in bulk CMOS

    Energy Technology Data Exchange (ETDEWEB)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A., E-mail: aasmol@spels.ru; Ulanova, Anastasia V.; Boruzdina, Anna B.

    2017-06-01

    During the last decade, multiple new methods of single event upset (SEU) rate prediction for aerospace systems have been proposed. Despite different models and approaches being employed in these methods, they all share relatively high usage complexity and require information about a device that is not always available to an end user. This work presents an alternative approach to estimating SEU cross-section as a function of linear energy transfer (LET) that can be further developed into a method of SEU rate prediction. The goal is to propose a simple, yet physics-based, approach with just two parameters that can be used even in situations when only a process node of the device is known. The developed approach is based on geometrical interpretation of SEU cross-section and an analytical solution to the diffusion problem obtained for a simplified IC topology model. A good fit of the model to the experimental data encompassing 7 generations of SRAMs is demonstrated.

  18. A simple analytical model of single-event upsets in bulk CMOS

    International Nuclear Information System (INIS)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.; Ulanova, Anastasia V.; Boruzdina, Anna B.

    2017-01-01

    During the last decade, multiple new methods of single event upset (SEU) rate prediction for aerospace systems have been proposed. Despite different models and approaches being employed in these methods, they all share relatively high usage complexity and require information about a device that is not always available to an end user. This work presents an alternative approach to estimating SEU cross-section as a function of linear energy transfer (LET) that can be further developed into a method of SEU rate prediction. The goal is to propose a simple, yet physics-based, approach with just two parameters that can be used even in situations when only a process node of the device is known. The developed approach is based on geometrical interpretation of SEU cross-section and an analytical solution to the diffusion problem obtained for a simplified IC topology model. A good fit of the model to the experimental data encompassing 7 generations of SRAMs is demonstrated.

  19. Radiative heat transfer in honeycomb structures - New simple analytical and numerical approaches

    International Nuclear Information System (INIS)

    Baillis, D; Coquard, R; Randrianalisoa, J

    2012-01-01

    Porous honeycomb structures offer the combination of high thermal insulation, low density and sufficient mechanical resistance. However, their thermal properties remain relatively unexplored. The aim of this study is the modelling of the combined heat transfer, and especially the radiative heat transfer, through this type of anisotropic porous material. The equivalent radiative properties of the material are determined using ray-tracing procedures inside the honeycomb porous structure. From the computational ray-tracing results, simple new analytical relations have been deduced. These useful analytical relations permit the determination of radiative properties such as the extinction, absorption and scattering coefficients and the phase function as functions of the cell dimensions and the optical properties of the cell walls. The radiative properties of honeycomb material strongly depend on the direction of propagation. From the computed radiative properties, we have estimated the radiative heat flux passing through slabs of honeycomb core materials subjected to a 1-D temperature difference between a hot and a cold plate. We have compared numerical results obtained from the Discrete Ordinates Method with analytical results obtained from the Rosseland-Deissler approximation. This approximation is usually used in the case of isotropic materials; we have extended it to anisotropic honeycomb materials. Indeed, a mean of the Rosseland extinction coefficient over incident directions is proposed. The results tend to show that the extended Rosseland-Deissler approximation can be used as a first approximation. Deviations in the radiative conductivity between the Rosseland-Deissler approximation and the Discrete Ordinates Method are lower than 6.7% for all the cases studied.

  20. Simple apparatus for polarization sensing of analytes

    Science.gov (United States)

    Gryczynski, Zygmunt; Gryczynski, Ignacy; Lakowicz, Joseph R.

    2000-09-01

    We describe a simple device for fluorescence sensing based on an inexpensive light source, a dual photocell and a Wheatstone bridge. The emission is detected from two fluorescent samples, one of which changes intensity in response to the analyte. The emission from these two samples is observed through two orthogonally oriented polarizers and an analyzer polarizer. The latter polarizer is rotated to yield equal intensities from both sides of the dual photocell, as determined by a zero voltage from the Wheatstone bridge. Using this device, we are able to measure fluorescein concentration to an accuracy near 2% at 1 μM fluorescein, and pH values accurate to ±0.02 pH units. We also use this approach with a UV hand lamp and a glucose-sensitive protein to measure glucose concentrations near 2 μM to an accuracy of ±0.1 μM. This approach requires only simple electronics, which can be battery powered. Additionally, the method is generic and can be applied with any fluorescent sample that displays a change in intensity. One can imagine this approach being used to develop portable point-of-care clinical devices.

  1. An analytical solution for improved HIFU SAR estimation

    International Nuclear Information System (INIS)

    Dillon, C R; Vyas, U; Christensen, D A; Roemer, R B; Payne, A

    2012-01-01

    Accurate determination of the specific absorption rates (SARs) present during high intensity focused ultrasound (HIFU) experiments and treatments provides a solid physical basis for scientific comparison of results among HIFU studies and is necessary to validate and improve SAR predictive software, which will improve patient treatment planning, control and evaluation. This study develops and tests an analytical solution that significantly improves the accuracy of SAR values obtained from HIFU temperature data. SAR estimates are obtained by fitting the analytical temperature solution for a one-dimensional radial Gaussian heating pattern to the temperature versus time data following a step in applied power and evaluating the initial slope of the analytical solution. The analytical method is evaluated in multiple parametric simulations for which it consistently (except at high perfusions) yields maximum errors of less than 10% at the center of the focal zone compared with errors up to 90% and 55% for the commonly used linear method and an exponential method, respectively. For high perfusion, an extension of the analytical method estimates SAR with less than 10% error. The analytical method is validated experimentally by showing that the temperature elevations predicted using the analytical method's SAR values determined for the entire 3D focal region agree well with the experimental temperature elevations in a HIFU-heated tissue-mimicking phantom. (paper)
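
    The physical basis of the initial-slope idea is standard: immediately after the power step, conduction and perfusion have not yet acted, so the local temperature rise is adiabatic and the SAR follows from the tissue specific heat capacity c:

    ```latex
    % SAR from the initial slope of the heating curve after a power step:
    \mathrm{SAR} = c \left. \frac{\partial T}{\partial t} \right|_{t \to 0^{+}}
    ```

    The paper's contribution is to evaluate this slope from an analytical solution fitted to the whole heating curve rather than from a finite-difference fit, which is what makes the estimate far less sensitive to noise and to conduction errors.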

  2. Analytical estimations of limit cycle amplitude for delay-differential equations

    Directory of Open Access Journals (Sweden)

    Tamás Molnár

    2016-09-01

    The amplitude of limit cycles arising from Hopf bifurcation is estimated for nonlinear delay-differential equations by means of analytical formulas. An improved analytical estimation is introduced, which allows more accurate quantitative prediction of periodic solutions than the standard approach that formulates the amplitude as a square-root function of the bifurcation parameter. The improved estimation is based on special global properties of the system: the method can be applied if the limit cycle blows up and disappears at a certain value of the bifurcation parameter. As an illustrative example, the improved analytical formula is applied to the problem of stick balancing.
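
    For reference, the standard local estimate that the paper improves upon is the square-root scaling of the Hopf normal form: near the critical value of the bifurcation parameter, the limit-cycle amplitude grows like

    ```latex
    % Standard (local) Hopf estimate of limit-cycle amplitude near p_cr:
    A(p) \approx C\,\sqrt{p - p_{\mathrm{cr}}}
    ```

    with C determined by the normal-form coefficients; the improved estimate adds global information from the parameter value at which the limit cycle blows up and disappears.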

  3. The stationary sine-Gordon equation on metric graphs: Exact analytical solutions for simple topologies

    Science.gov (United States)

    Sabirov, K.; Rakhmanov, S.; Matrasulov, D.; Susanto, H.

    2018-04-01

    We consider the stationary sine-Gordon equation on metric graphs with simple topologies. Exact analytical solutions are obtained for different vertex boundary conditions. It is shown that the method can be extended for tree and other simple graph topologies. Applications of the obtained results to branched planar Josephson junctions and Josephson junctions with tricrystal boundaries are discussed.

  4. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. Firstly, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling, Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than different known estimators, including the Gupta and Shabbir (2008) estimator.
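
    For orientation, the classical ratio estimator in simple random sampling, of which estimators like Gupta and Shabbir's are refinements (ȳ, x̄: sample means; X̄: known population mean of the auxiliary variable; f = n/N; standard first-order MSE):

    ```latex
    \bar{y}_{R} = \bar{y}\,\frac{\bar{X}}{\bar{x}}, \qquad
    \mathrm{MSE}(\bar{y}_{R}) \approx \frac{1-f}{n}
    \left( S_{y}^{2} + R^{2} S_{x}^{2} - 2 R S_{yx} \right),
    \quad R = \bar{Y}/\bar{X}
    ```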

  5. A simple analytical model for reactive particle ignition in explosives

    Energy Technology Data Exchange (ETDEWEB)

    Tanguay, Vincent [Defence Research and Development Canada - Valcartier, 2459 Pie XI Blvd. North, Quebec, QC, G3J 1X5 (Canada); Higgins, Andrew J. [Department of Mechanical Engineering, McGill University, 817 Sherbrooke St. West, Montreal, QC, H3A 2K6 (Canada); Zhang, Fan [Defence Research and Development Canada - Suffield, P. O. Box 4000, Stn Main, Medicine Hat, AB, T1A 8K6 (Canada)

    2007-10-15

    A simple analytical model is developed to predict ignition of magnesium particles in nitromethane detonation products. The flow field is simplified by considering the detonation products as a perfect gas expanding in a vacuum in a planar geometry. This simplification allows the flow field to be solved analytically. A single particle is then introduced in this flow field. Its trajectory and heating history are computed. It is found that most of the particle heating occurs in the Taylor wave and in the quiescent flow region behind it, shortly after which the particle cools. By considering only these regions, thereby considerably simplifying the problem, the flow field can be solved analytically with a more realistic equation of state (such as JWL) and a spherical geometry. The model is used to compute the minimum charge diameter for particle ignition to occur. It is found that the critical charge diameter for particle ignition increases with particle size. These results are compared to experimental data and show good agreement. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  6. Light Source Estimation with Analytical Path-tracing

    OpenAIRE

    Kasper, Mike; Keivan, Nima; Sibley, Gabe; Heckman, Christoffer

    2017-01-01

    We present a novel algorithm for light source estimation in scenes reconstructed with a RGB-D camera based on an analytically-derived formulation of path-tracing. Our algorithm traces the reconstructed scene with a custom path-tracer and computes the analytical derivatives of the light transport equation from principles in optics. These derivatives are then used to perform gradient descent, minimizing the photometric error between one or more captured reference images and renders of our curre...

  7. Erratum: A Simple, Analytical Model of Collisionless Magnetic Reconnection in a Pair Plasma

    Science.gov (United States)

    Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha; Klimas, Alex

    2011-01-01

    The following describes a list of errata in our paper, "A simple, analytical model of collisionless magnetic reconnection in a pair plasma." It supersedes an earlier erratum. We recently discovered an error in the derivation of the outflow-to-inflow density ratio.

  8. The estimated possibilities of process monitoring in milk production by the simple thermodynamic sensors

    Directory of Open Access Journals (Sweden)

    Martin Adámek

    2016-12-01

    The characterization and monitoring of thermal processes in thermodynamic systems can be performed using thermodynamic sensors (TDS). The basic idea of the thermodynamic sensor can be used in many different applications (e.g. monitoring of frictional heat, thermal radiation, pollution of cleaning fluid, etc.). One application area where the thermodynamic sensor can find new uses is the production of milk products such as cheese, yogurt and kefir. This paper describes the estimated possibilities, advantages and disadvantages of the use of thermodynamic sensors in dairy production, together with simple experiments for characterization and monitoring of basic operations in the milk production process. Milk products are often made by fermentation or renneting processes. The final stages of fermentation and renneting are often determined on the basis of sensory evaluation, pH measurement or analytical methods. The exact completion time of the fermentation process depends on various parameters and is often company know-how. A fast, clean and simple non-analytical, non-contact method for monitoring and determining the final stages of these processes does not currently exist. Fermentation, renneting and yoghurt processes were characterized and measured with thermodynamic sensors in this work. Measurement of yeast activity was tested in the first series of experiments; in the second series, processes in milk production were tested. First results of simple experiments show that thermodynamic sensors might be used to determine the time behaviour of these processes. Milk production (cheese, yogurt, kefir, etc.) is therefore opened as one of the new application areas where the thermodynamic sensor can be used.

  9. Analytic continuation by duality estimation of the S parameter

    International Nuclear Information System (INIS)

    Ignjatovic, S. R.; Wijewardhana, L. C. R.; Takeuchi, T.

    2000-01-01

    We investigate the reliability of the analytic continuation by duality (ACD) technique in estimating the electroweak S parameter for technicolor theories. The ACD technique, which is an application of finite energy sum rules, relates the S parameter for theories with unknown particle spectra to known OPE coefficients. We identify the sources of error inherent in the technique and evaluate them for several toy models to see if they can be controlled. The evaluation of errors is done analytically and all relevant formulas are provided in appendixes including analytical formulas for approximating the function 1/s with a polynomial in s. The use of analytical formulas protects us from introducing additional errors due to numerical integration. We find that it is very difficult to control the errors even when the momentum dependence of the OPE coefficients is known exactly. In realistic cases in which the momentum dependence of the OPE coefficients is only known perturbatively, it is impossible to obtain a reliable estimate. (c) 2000 The American Physical Society

  10. A simple method for estimating the entropy of neural activity

    International Nuclear Information System (INIS)

    Berry II, Michael J; Tkačik, Gašper; Dubuis, Julien; Marre, Olivier; Da Silveira, Rava Azeredo

    2013-01-01

    The number of possible activity patterns in a population of neurons grows exponentially with the size of the population. Typical experiments explore only a tiny fraction of the large space of possible activity patterns in the case of populations with more than 10 or 20 neurons. It is thus impossible, in this undersampled regime, to estimate the probabilities with which most of the activity patterns occur. As a result, the corresponding entropy—which is a measure of the computational power of the neural population—cannot be estimated directly. We propose a simple scheme for estimating the entropy in the undersampled regime, which bounds its value from both below and above. The lower bound is the usual ‘naive’ entropy of the experimental frequencies. The upper bound results from a hybrid approximation of the entropy which makes use of the naive estimate, a maximum entropy fit, and a coverage adjustment. We apply our simple scheme to artificial data, in order to check their accuracy; we also compare its performance to those of several previously defined entropy estimators. We then apply it to actual measurements of neural activity in populations with up to 100 cells. Finally, we discuss the similarities and differences between the proposed simple estimation scheme and various earlier methods. (paper)
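
    A minimal sketch of the "naive" plug-in lower bound mentioned above, computed from the observed pattern frequencies; the paper's upper bound additionally uses a maximum-entropy fit and a coverage adjustment, which are not reproduced here:

    ```python
    from collections import Counter

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy data: binary activity words from 20 neurons, heavily undersampled.
    patterns = [tuple(rng.random(20) < 0.1) for _ in range(5000)]

    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    naive_entropy_bits = -(p * np.log2(p)).sum()  # plug-in estimate
    print(f"naive entropy (lower bound): {naive_entropy_bits:.2f} bits")
    ```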

  11. A SIMPLE AND STRONGLY CONSISTENT ESTIMATOR OF STABLE DISTRIBUTIONS

    Directory of Open Access Journals (Sweden)

    Cira E. Guevara Otiniano

    2016-06-01

    Stable distributions are extensively used to analyze returns of financial assets, such as exchange rates and stock prices. In this paper we propose a simple and strongly consistent estimator for the scale parameter of a symmetric stable Lévy distribution. The advantage of this estimator is that its computational time is minimal, so it can be used to initialize computationally intensive procedures such as maximum likelihood. With random samples of size n, we tested the efficacy of the estimator by the Monte Carlo method. We also include applications to three data sets.

  12. A simple procedure to estimate reactivity with good noise filtering characteristics

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2014-01-01

    Highlights:
    • A new and simple on-line reactivity estimation method is proposed.
    • The estimator has robust noise filtering characteristics.
    • The noise filtering is equivalent to that of conventional reactivity meters.
    • The new estimator eliminates the burden of selecting optimum filter constants.
    • The estimation performance is assessed without and with measurement noise.

    Abstract: A new and simple on-line reactivity estimation method is proposed. The estimator has robust noise filtering characteristics without the use of complex filters. The noise filtering capability is equivalent to or better than that of a conventional estimator based on Inverse Point Kinetics (IPK). The new estimator can also eliminate the burden of selecting optimum filter time constants, such as would be required for the IPK-based estimator, or noise covariance matrices, which are needed if the extended Kalman filter (EKF) technique is used. In this paper, the new estimation method is introduced and its performance assessed without and with measurement noise.
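
    For context, the conventional IPK estimator mentioned above inverts the point kinetics equations to express reactivity in terms of the measured neutron density history n(t) (β_i, λ_i: delayed-neutron fractions and decay constants; Λ: prompt generation time); a standard form is:

    ```latex
    \rho(t) = \beta + \frac{\Lambda}{n(t)}\,\frac{dn}{dt}
      - \frac{1}{n(t)} \sum_{i} \beta_{i}\,\lambda_{i}
        \int_{-\infty}^{t} e^{-\lambda_{i}(t-s)}\, n(s)\, ds
    ```

    The derivative term is what makes IPK sensitive to measurement noise and motivates the filtering that the proposed estimator avoids.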

  13. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
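
    A minimal sketch of the "simple analytic geometry" idea: with a calibrated pinhole camera at a known height looking across the ground plane, a touched pixel back-projects along a ray that intersects the plane at a unique 3D point. The intrinsics, camera height and axis convention below are illustrative assumptions, not the paper's calibration:

    ```python
    import numpy as np

    # Assumed pinhole calibration: focal lengths and principal point (pixels).
    fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
    cam_height = 0.25  # metres above the ground plane (assumed)

    def pixel_to_ground(u, v):
        """Intersect the viewing ray of pixel (u, v) with the plane z = 0.

        Simplifying assumption: the camera looks along world +x with its
        optical axis parallel to the ground; image v grows downward.
        """
        ray = np.array([1.0, -(u - cx) / fx, -(v - cy) / fy])
        if ray[2] >= 0.0:
            raise ValueError("pixel ray does not hit the ground plane")
        t = cam_height / -ray[2]              # scale factor to reach z = 0
        return np.array([0.0, 0.0, cam_height]) + t * ray

    print(pixel_to_ground(400.0, 400.0))  # world (x, y, z) of the clicked point
    ```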

  14. Entropy estimates for simple random fields

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    1995-01-01

    We consider the problem of determining the maximum entropy of a discrete random field on a lattice subject to certain local constraints on symbol configurations. The results are expected to be of interest in the analysis of digitized images and two-dimensional codes. We shall present some examples of binary and ternary fields with simple constraints. Exact results on the entropies are known only in a few cases, but we shall present close bounds and estimates that are computationally efficient...

  15. Aquatic concentrations of chemical analytes compared to ecotoxicity estimates

    Data.gov (United States)

    U.S. Environmental Protection Agency — We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part...

  16. A simple and rapid method to estimate radiocesium in man

    International Nuclear Information System (INIS)

    Kindl, P.; Steger, F.

    1990-09-01

    A simple and rapid method for monitoring internal contamination by radiocesium in man was developed. The method is based on measurements of the γ-rays emitted from the muscular parts between the thighs, using a simple NaI(Tl) system. The experimental procedure, the calibration, the estimation of the body activity and results are explained and discussed. (Authors)

  17. A simple analytical thermo-mechanical model for liquid crystal elastomer bilayer structures

    Directory of Open Access Journals (Sweden)

    Yun Cui

    2018-02-01

    The bilayer structure consisting of thermally responsive liquid crystal elastomers (LCEs) and other polymer materials with stretchable heaters has attracted much attention in applications of soft actuators and soft robots due to its ability to generate large deformations when subjected to heat stimuli. A simple analytical thermo-mechanical model, accounting for the non-uniform temperature/strain distribution along the thickness direction, is established for this type of bilayer structure. The analytical predictions of the temperature and bending curvature radius agree well with finite element analysis and experiments. The influences of the LCE thickness and the heat generation power on the bending deformation of the bilayer structure are fully investigated. It is shown that a thinner LCE layer and a higher heat generation power yield more bending deformation. These results may help the design of soft actuators and soft robots involving thermally responsive LCEs.

  18. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  19. Image Analytical Approach for Needle-Shaped Crystal Counting and Length Estimation

    DEFF Research Database (Denmark)

    Wu, Jian X.; Kucheryavskiy, Sergey V.; Jensen, Linda G.

    2015-01-01

    Estimation of nucleation and crystal growth rates from microscopic information is of critical importance. This can be an especially challenging task if needle growth of crystals is observed. To address this challenge, an image analytical method for counting of needle-shaped crystals and estimating...

  20. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    Science.gov (United States)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far away from the mainland; reefs make up more than 95% of the South Sea, and most reefs are scattered over sensitive disputed areas. Thus, methods of obtaining reef bathymetry accurately urgently need to be developed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, through the relationship between spectral information and bathymetry. Aimed at the water quality of the South Sea of China, our paper develops a bathymetry estimation method that requires no measured water depth. First, a semi-analytical optimization model of the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. Meanwhile, an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization model. One island of the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm gives good results in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow water area is acceptable. The semi-analytical optimization model based on the genetic algorithm thus solves the problem of bathymetry estimation without water depth measurements. Overall, our paper provides a new bathymetry estimation method for sensitive reefs far away from the mainland.

  1. Efficacy of bi-component cocrystals and simple binary eutectics screening using heat of mixing estimated under super cooled conditions.

    Science.gov (United States)

    Cysewski, Piotr

    2016-07-01

    The values of excess heat characterizing sets of 493 simple binary eutectic mixtures and 965 cocrystals were estimated under supercooled liquid conditions. A confusion matrix was applied as a predictive analytical tool for distinguishing between the two subsets. Among the seven considered levels of computation, the BP-TZVPD-FINE approach was found to be the most precise in terms of the lowest percentage of misclassified positive cases. The much less computationally demanding AM1 and PM7 semiempirical quantum chemistry methods are likewise worth considering for estimation of heat-of-mixing values. Despite the intrinsic limitations of modelling miscibility in the solid state from component affinities in liquids under supercooled conditions, it is possible to define adequate criteria for classifying coformer pairs as simple binary eutectics or cocrystals. The prediction precision was found to be 12.8%, which is quite acceptable bearing in mind the simplicity of the approach. However, tuning the theoretical screening to such precision implies the exclusion of many positive cases, and this wastage exceeds 31% of cocrystals classified as false negatives. Copyright © 2016 Elsevier Inc. All rights reserved.
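
    A small sketch of the confusion-matrix bookkeeping behind this kind of screening: classify a coformer pair as a cocrystal (positive) when its estimated heat of mixing is sufficiently exothermic, then score the predictions. The threshold and the toy data are invented for illustration:

    ```python
    # (estimated excess heat in kJ/mol, is the pair actually a cocrystal?)
    pairs = [
        (-2.1, True), (-0.4, True), (0.3, False), (-1.5, True), (0.8, False),
        (-0.1, False), (-3.0, True), (0.2, True), (1.1, False), (-0.6, True),
    ]
    threshold = -0.5  # predict "cocrystal" when mixing is exothermic enough

    tp = sum(h <= threshold and c for h, c in pairs)
    fp = sum(h <= threshold and not c for h, c in pairs)
    fn = sum(h > threshold and c for h, c in pairs)

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"precision {precision:.2f}, recall {recall:.2f}, false negatives {fn}")
    ```

    Tightening the threshold raises precision but pushes more true cocrystals into the false-negative bin, which is the trade-off the abstract quantifies.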

  2. Fill rate estimation in periodic review policies with lost sales using simple methods

    Energy Technology Data Exchange (ETDEWEB)

    Cardós, M.; Guijarro Tarradellas, E.; Babiloni Griñón, E.

    2016-07-01

    Purpose: The exact estimation of the fill rate in the lost sales case is complex and time consuming; simple and suitable methods are therefore needed that inventory managers can actually use. Design/methodology/approach: Instead of trying to compute the fill rate in one step, this paper focuses first on estimating the probabilities of the different on-hand stock levels, from which the fill rate is computed afterwards. Findings: As a result, the proposed novel method outperforms the other methods while remaining relatively simple to compute. Originality/value: Existing methods for estimating stock levels are examined, new procedures are proposed and their performance is assessed.
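
    A sketch of the two-step idea in its simplest setting: once the distribution of on-hand stock at the start of a period is known, the fill rate follows as expected satisfied demand over expected demand. Here an order-up-to-S policy with zero lead time and Poisson demand is assumed, so the on-hand level at each review is simply S; the paper's procedures handle the harder general case:

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Fill rate = E[min(D, S)] / E[D] with D ~ Poisson(lam), on-hand = S.
    S, lam = 20, 12.0
    d = np.arange(0, 200)                  # truncation far into the tail
    pmf = poisson.pmf(d, lam)
    fill_rate = (np.minimum(d, S) * pmf).sum() / lam
    print(f"fill rate: {fill_rate:.4f}")
    ```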

  3. Analytic model of heat deposition in spallation neutron target

    International Nuclear Information System (INIS)

    Findlay, D.J.S.

    2015-01-01

    A simple analytic model for estimating deposition of heat in a spallation neutron target is presented—a model that can readily be realised in an unambitious spreadsheet. The model is based on simple representations of the principal underlying physical processes, and is intended largely as a ‘sanity check’ on results from Monte Carlo codes such as FLUKA or MCNPX.

  4. Analytic model of heat deposition in spallation neutron target

    Energy Technology Data Exchange (ETDEWEB)

    Findlay, D.J.S.

    2015-12-11

    A simple analytic model for estimating deposition of heat in a spallation neutron target is presented—a model that can readily be realised in an unambitious spreadsheet. The model is based on simple representations of the principal underlying physical processes, and is intended largely as a ‘sanity check’ on results from Monte Carlo codes such as FLUKA or MCNPX.

  5. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Science.gov (United States)

    Smaluk, Victor; Fielder, Richard; Blednykh, Alexei; Rehm, Guenther; Bartolini, Riccardo

    2014-07-01

    One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimates, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  6. A simple route to maximum-likelihood estimates of two-locus recombination fractions under inequality restrictions

    Indian Academy of Sciences (India)

    Macdonald, Iain L.; Nkalashe, Philasande. Research Note, Journal of Genetics, Volume 94, Issue 3, September 2015, pp. 479-481.

  7. Observation of lens aberrations for high resolution electron microscopy II: Simple expressions for optimal estimates

    Energy Technology Data Exchange (ETDEWEB)

    Saxton, W. Owen, E-mail: wos1@cam.ac.uk

    2015-04-15

    This paper lists simple closed-form expressions estimating aberration coefficients (defocus, astigmatism, three-fold astigmatism, coma/misalignment, spherical aberration) on the basis of image shift or diffractogram shape measurements as a function of injected beam tilt. Simple estimators are given for a large number of injected tilt configurations, optimal in the sense of least-squares fitting of all the measurements, and so better than most reported previously. Standard errors are given for most, allowing different approaches to be compared. Special attention is given to the measurement of the spherical aberration, for which several simple procedures are given, and the effect of foreknowledge of this on other aberration estimates is noted. Details and optimal expressions are also given for a new and simple method of analysis, requiring measurements of the diffractogram mirror axis direction only, which are simpler to make than the focus and astigmatism measurements otherwise required.

    Highlights:
    • Optimal estimators for CTEM lens aberrations are more accurate and/or use fewer observations.
    • Estimators have been found for defocus, astigmatism, three-fold astigmatism, coma and spherical aberration.
    • Estimators have been found relying on diffractogram shape, image shift and diffractogram orientation only, for a variety of beam tilts.
    • The standard error for each estimator has been found.

  8. Interlaboratory analytical performance studies; a way to estimate measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Elżbieta Łysiak-Pastuszak

    2004-09-01

    Comparability of data collected within collaborative programmes became the key challenge of analytical chemistry in the 1990s, including monitoring of the marine environment. To obtain relevant and reliable data, the analytical process has to proceed under a well-established Quality Assurance (QA) system with external analytical proficiency tests as an inherent component. A programme called Quality Assurance in Marine Monitoring in Europe (QUASIMEME) was established in 1993 and evolved over the years as the major provider of QA proficiency tests for nutrients, trace metals and chlorinated organic compounds in marine environment studies. The article presents an evaluation of results obtained in QUASIMEME Laboratory Performance Studies by the monitoring laboratory of the Institute of Meteorology and Water Management (Gdynia, Poland) in exercises on nutrient determination in seawater. The measurement uncertainty estimated from routine internal quality control measurements and from results of analytical performance exercises is also presented in the paper.

  9. Analytical Estimation of Water-Oil Relative Permeabilities through Fractures

    Directory of Open Access Journals (Sweden)

    Saboorian-Jooybari Hadi

    2016-05-01

    Modeling multiphase flow through fractures is a key issue for understanding the flow mechanism and performance prediction of fractured petroleum reservoirs, geothermal reservoirs, underground aquifers and carbon-dioxide sequestration. One of the most challenging subjects in modeling fractured petroleum reservoirs is quantifying the competition of fluids for flow in the fracture network (relative permeability curves). Unfortunately, there is no standard technique for experimental measurement of relative permeabilities through fractures, and the existing methods are very expensive, time consuming and error-prone. Although several formulations were presented to calculate fracture relative permeability curves in the form of linear and power functions of flowing fluid saturation, it is still unclear what form of relative permeability curves must be used for proper modeling of flow through fractures and, consequently, accurate reservoir simulation. Basically, the classic linear relative permeability (X-type) curves are used in almost all reservoir simulators. In this work, basic fluid flow equations are combined to develop a new simple analytical model for water-oil two-phase flow in a single fracture. The model gives rise to simple analytic formulations for fracture relative permeabilities. The model explicitly proves that water-oil relative permeabilities in a fracture network are functions of fluid saturation, viscosity ratio, fluid density, inclination of the fracture plane from the horizon, pressure gradient along the fracture and rock matrix wettability; however, they were considered to be functions of saturation only in the classic X-type and power (Corey [35]; Honarpour et al. [28, 29]) models. Eventually, the validity of the proposed formulations is checked against experimental data from the literature. The proposed fracture relative permeability functions have several advantages over the existing ones. Firstly, they are explicit functions of the parameters which are known for…
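
    For reference, the two saturation-only families named above, written for the wetting (w) and non-wetting (o) phases in terms of the normalized water saturation S_w; the end points k⁰ and exponents n are fitting parameters:

    ```latex
    % X-type (linear) curves:
    k_{rw} = S_{w}, \qquad k_{ro} = 1 - S_{w}
    % Corey-type power-law curves:
    k_{rw} = k_{rw}^{0}\, S_{w}^{\,n_{w}}, \qquad
    k_{ro} = k_{ro}^{0}\, \left(1 - S_{w}\right)^{\,n_{o}}
    ```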

  10. A simple method for estimating thermal response of building ...

    African Journals Online (AJOL)

    This paper develops a simple method for estimating the thermal response of building materials in the tropical climatic zone using the basic heat equation. The efficacy of the developed model has been tested with data from three West African cities, namely Kano (lat. 12.1 ºN), Nigeria, Ibadan (lat. 7.4 ºN), Nigeria, and Cotonou ...

  11. Fabricating Simple Wax Screen-Printing Paper-Based Analytical Devices to Demonstrate the Concept of Limiting Reagent in Acid- Base Reactions

    Science.gov (United States)

    Namwong, Pithakpong; Jarujamrus, Purim; Amatatongchai, Maliwan; Chairam, Sanoe

    2018-01-01

    In this article, a low-cost, simple, and rapid fabrication of paper-based analytical devices (PADs) using a wax screen-printing method is reported here. The acid-base reaction is implemented in the simple PADs to demonstrate to students the chemistry concept of a limiting reagent. When a fixed concentration of base reacts with a gradually…

  12. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Directory of Open Access Journals (Sweden)

    Victor Smaluk

    2014-07-01

    One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimates, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  13. Estimates of emittance dilution and stability in high-energy linear accelerators

    Directory of Open Access Journals (Sweden)

    T. O. Raubenheimer

    2000-12-01

    In this paper, we present a series of analytic expressions to predict the beam dynamics in a long linear accelerator. These expressions can be used to model the linac optics, calculate the magnitude of the wakefields, estimate the emittance dilution due to misaligned accelerator components, and estimate the stability and jitter limitations. The analytic expressions are based on the results of simple physics models and are useful to understand the parameter sensitivities. They are also useful when using simple codes or spreadsheets to optimize a linac system.

  14. Simple analytical model reveals the functional role of embodied sensorimotor interaction in hexapod gaits

    Science.gov (United States)

    Aoi, Shinya; Nachstedt, Timo; Manoonpong, Poramate; Wörgötter, Florentin; Matsuno, Fumitoshi

    2018-01-01

    Insects have various gaits with specific characteristics and can change their gaits smoothly in accordance with their speed. These gaits emerge from the embodied sensorimotor interactions that occur between the insect’s neural control and body dynamic systems through sensory feedback. Sensory feedback plays a critical role in coordinated movements such as locomotion, particularly in stick insects. While many previously developed insect models can generate different insect gaits, the functional role of embodied sensorimotor interactions in the interlimb coordination of insects remains unclear because of their complexity. In this study, we propose a simple physical model that is amenable to mathematical analysis to explain the functional role of these interactions clearly. We focus on a foot contact sensory feedback called phase resetting, which regulates leg retraction timing based on touchdown information. First, we used a hexapod robot to determine whether the distributed decoupled oscillators used for legs with the sensory feedback generate insect-like gaits through embodied sensorimotor interactions. The robot generated two different gaits and one had similar characteristics to insect gaits. Next, we proposed the simple model as a minimal model that allowed us to analyze and explain the gait mechanism through the embodied sensorimotor interactions. The simple model consists of a rigid body with massless springs acting as legs, where the legs are controlled using oscillator phases with phase resetting, and the governed equations are reduced such that they can be explained using only the oscillator phases with some approximations. This simplicity leads to analytical solutions for the hexapod gaits via perturbation analysis, despite the complexity of the embodied sensorimotor interactions. This is the first study to provide an analytical model for insect gaits under these interaction conditions. Our results clarified how this specific foot contact sensory

  15. Evaluating the performance of simple estimators for probit models with two dummy endogenous regressors

    DEFF Research Database (Denmark)

    Holm, Anders; Nielsen, Jacob Arendt

    2013-01-01

    This study considers the small-sample performance of approximate but simple two-stage estimators for probit models with two endogenous binary covariates. Monte Carlo simulations show that all the considered estimators, including simulated maximum-likelihood (SML) estimation, of the trivariate ...

  16. Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling

    OpenAIRE

    Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.

    2013-01-01

    The Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general, the simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples must be drawn via a simple random sampling procedure, though it makes calculations much simpler, is often violated in practice as the data do not always yield a simple random ...

  17. Analytical Estimates of the Dispersion Curve in Planar Ionization Fronts

    International Nuclear Information System (INIS)

    Arrayas, Manuel; Trueba, Jose L.; Betelu, Santiago; Fontelos, Marco A.

    2009-01-01

    Fingers from ionization fronts for a hydrodynamic plasma model result from a balance between impact ionization and electron diffusion in a non-attaching gas. An analytical estimation of the size of the fingers and its dependence on both the electric field and electron diffusion coefficient can be done when the diffusion is low and the electric field is strong.

  18. A simple, direct method for x-ray scatter estimation and correction in digital radiography and cone-beam CT

    International Nuclear Information System (INIS)

    Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.

    2006-01-01

    X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling

  19. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    Science.gov (United States)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the model using a proposed semi-analytical model (SAA). The ap(531) and ag(531) derived semi-analytically using the SAA model differ from the QAA retrieval procedure, in which ap(531) and ag(531) are derived from the empirical retrieval results of a(531) and a(551). The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model performs better than the QAA model in absorption retrieval. Using the SAA model to retrieve absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  20. Computationally simple, analytic, closed form solution of the Coulomb self-interaction problem in Kohn Sham density functional theory

    International Nuclear Information System (INIS)

    Gonis, Antonios; Daene, Markus W.; Nicholson, Don M.; Stocks, George Malcolm

    2012-01-01

    We have developed and tested in terms of atomic calculations an exact, analytic and computationally simple procedure for determining the functional derivative of the exchange energy with respect to the density in the implementation of the Kohn Sham formulation of density functional theory (KS-DFT), providing an analytic, closed-form solution of the self-interaction problem in KS-DFT. We demonstrate the efficacy of our method through ground-state calculations of the exchange potential and energy for atomic He and Be atoms, and comparisons with experiment and the results obtained within the optimized effective potential (OEP) method.

  1. A Simple Model of Global Aerosol Indirect Effects

    Science.gov (United States)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but they lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of the AIE that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to the preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of the present-day AIE as low as −5 W/sq m and as high as −0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.

  2. Models for the analytic estimation of low energy photon albedo

    International Nuclear Information System (INIS)

    Simovic, R.; Markovic, S.; Ljubenov, V.

    2005-01-01

    This paper presents some monoenergetic models for estimating photon reflection in the energy range from 20 keV to 80 keV. Using the DP0 approximation of the H-function, we have derived analytic expressions for the η and R functions in order to facilitate photon reflection analyses as well as radiation shield design. (author)

  3. Chapter 16 - Predictive Analytics for Comprehensive Energy Systems State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Yang, Rui [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Jie [University of Texas at Dallas; Weng, Yang [Arizona State University

    2017-12-01

    Energy sustainability is a subject of concern to many nations in the modern world. It is critical for electric power systems to diversify energy supply to include systems with different physical characteristics, such as wind energy, solar energy, electrochemical energy storage, thermal storage, bio-energy systems, geothermal, and ocean energy. Each system has its own range of control variables and targets. To be able to operate such a complex energy system, big-data analytics become critical to achieve the goal of predicting energy supplies and consumption patterns, assessing system operation conditions, and estimating system states - all providing situational awareness to power system operators. This chapter presents data analytics and machine learning-based approaches to enable predictive situational awareness of the power systems.

  4. A simple model to estimate the optimal doping of p - Type oxide superconductors

    Directory of Open Access Journals (Sweden)

    Adir Moysés Luiz

    2008-12-01

    Full Text Available Oxygen doping of superconductors is discussed. Doping high-Tc superconductors with oxygen seems to be more efficient than other doping procedures. Using the assumption of double valence fluctuations, we present a simple model to estimate the optimal doping of p-type oxide superconductors. The experimental values of oxygen content for optimal doping of the most important p-type oxide superconductors can be accounted for adequately using this simple model. We expect that our simple model will encourage further experimental and theoretical research on superconducting materials.

  5. Estimating the uncertainty of damage costs of pollution: A simple transparent method and typical results

    International Nuclear Information System (INIS)

    Spadaro, Joseph V.; Rabl, Ari

    2008-01-01

    Whereas the uncertainty of environmental impacts and damage costs is usually estimated by means of a Monte Carlo calculation, this paper shows that most (and in many cases all) of the uncertainty calculation involves products and/or sums of products and can be accomplished with an analytic solution which is simple and transparent. We present our own assessment of the component uncertainties and calculate the total uncertainty for the impacts and damage costs of the classical air pollutants; results of a Monte Carlo calculation for the dispersion part are also shown. The distribution of the damage costs is approximately lognormal and can be characterized in terms of the geometric mean μg and geometric standard deviation σg, implying that the confidence interval is multiplicative. We find that for the classical air pollutants σg is approximately 3 and the 68% confidence interval is [μg/σg, μg·σg]. Because the lognormal distribution is highly skewed for large σg, the median is significantly smaller than the mean. We also consider the case where several lognormally distributed damage costs are added, for example to obtain the total damage cost due to all the air pollutants emitted by a power plant, and we find that the relative error of the sum can be significantly smaller than the relative errors of the summands. Even though the distribution of such sums is not exactly lognormal, we present a simple lognormal approximation that is quite adequate for most applications.
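
    The multiplicative arithmetic behind such a confidence interval is easy to reproduce. The sketch below (not the paper's own code; the factor values are invented) propagates lognormal uncertainty through a product of independent components, where geometric means multiply and squared log-geometric-standard-deviations add:

```python
import numpy as np

# Minimal sketch, assuming damage cost = product of independent lognormal
# factors (e.g. emission * dispersion * dose-response * monetary valuation).
# Geometric means multiply; (ln sigma_g)^2 values add.
factors_mu_g = [1.0, 2.5e-6, 0.04, 1.5e6]   # hypothetical geometric means
factors_sigma_g = [1.1, 2.0, 1.9, 1.6]      # hypothetical geometric std devs

mu_g = np.prod(factors_mu_g)
sigma_g = np.exp(np.sqrt(sum(np.log(s) ** 2 for s in factors_sigma_g)))

print(f"geometric mean  = {mu_g:.3g}")
print(f"geometric sigma = {sigma_g:.3g}")          # ~2.9, near the paper's ~3
print(f"68% CI          = [{mu_g / sigma_g:.3g}, {mu_g * sigma_g:.3g}]")
```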

  6. A New and Simple Method for Crosstalk Estimation in Homogeneous Trench-Assisted Multi-Core Fibers

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2014-01-01

    A new and simple method for inter-core crosstalk estimation in homogeneous trench-assisted multi-core fibers is presented. The crosstalk calculated by this method agrees well with experimental measurement data for two kinds of fabricated 12-core fibers.

  7. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    Science.gov (United States)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.

  8. Simple method for the estimation of glomerular filtration rate

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden); Tengstroem, B [District General Hospital, Skoevde (Sweden)

    1977-02-01

    A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (125I)sodium iothalamate (10 µCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of the glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve-fitting techniques, with regard to predictive power and clinical applicability.
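
    A minimal sketch of the normalized single-point idea follows; the regression coefficients a and b are hypothetical placeholders, not the paper's fitted values:

```python
import numpy as np

# Illustrative sketch: the 2-4 h sample is normalized by the early (~5 min)
# sample, so the injected dose cancels out, and GFR is predicted linearly
# from the log of that ratio. Coefficients a, b are invented for illustration.
def gfr_single_point(c_early, c_late, a=-80.0, b=-60.0):
    """Predict GFR (ml/min/1.73 m^2) from two plasma activities."""
    ratio = c_late / c_early          # dose-independent normalized value
    return a + b * np.log(ratio)

print(gfr_single_point(c_early=1000.0, c_late=120.0))   # ~47 with these toys
```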

  9. Use of eddy-covariance methods to "calibrate" simple estimators of evapotranspiration

    Science.gov (United States)

    Sumner, David M.; Geurink, Jeffrey S.; Swancar, Amy

    2017-01-01

    Direct measurement of actual evapotranspiration (ET) provides quantification of this large component of the hydrologic budget, but typically requires long periods of record and large instrumentation and labor costs. Simple surrogate methods of estimating ET, if “calibrated” against direct measurements of ET, provide a reliable means to quantify ET. Eddy-covariance measurements of ET were made for 12 years (2004-2015) at an unimproved bahiagrass (Paspalum notatum) pasture in Florida. These measurements were compared to annual rainfall derived from rain gage data and to monthly potential ET (PET) obtained from a long-term (since 1995) U.S. Geological Survey (USGS) statewide, 2-kilometer, daily PET product. The annual ratio of ET to rainfall correlates strongly (r² = 0.86) with annual rainfall, increasing linearly as rainfall decreases. Monthly ET rates correlated closely (r² = 0.84) with the USGS PET product. The results indicate that simple surrogate methods of estimating actual ET show positive potential in the humid Florida climate, given the ready availability of historical rainfall and PET.
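
    The calibration itself amounts to a one-variable linear regression. A sketch with invented annual totals, assuming the fitted relation is the ET/rainfall ratio as a linear function of annual rainfall:

```python
import numpy as np

# Hypothetical annual data illustrating the "calibration" idea: regress the
# ratio ET/rainfall on annual rainfall (r^2 ~ 0.86 in the study), then
# estimate ET from rainfall alone. All values below are invented.
rain = np.array([900., 1100., 1250., 1400., 1600.])   # mm/yr
et = np.array([810., 890., 950., 990., 1060.])        # mm/yr, eddy covariance

ratio = et / rain                      # increases as rainfall decreases
slope, intercept = np.polyfit(rain, ratio, 1)

def et_from_rain(annual_rain_mm):
    return (intercept + slope * annual_rain_mm) * annual_rain_mm

print(et_from_rain(1200.0))            # surrogate ET estimate, mm/yr
```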

  10. A simple analytical infiltration model for short-duration rainfall

    Science.gov (United States)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

    Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. The infiltration simulated by 5 models (SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the values of the Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.

  11. Understanding Business Analytics

    Science.gov (United States)

    2015-01-05

    Analytics have been used in organizations for a variety of reasons for quite some time, ranging from the simple (generating and understanding business analytics ...) ... How well these two components are orchestrated will determine the level of success an organization has in ...

  12. Analytical estimation of the dynamic apertures of circular accelerators

    International Nuclear Information System (INIS)

    Gao, J.

    2000-02-01

    By considering delta-function sextupole, octupole, and decapole perturbations and using difference action-angle variable equations, we find some useful analytical formulae for estimating the dynamic apertures of circular accelerators due to a single sextupole, a single octupole, or a single decapole (a single 2m-pole in general). Their combined effects are derived based on the Chirikov criterion for the onset of stochastic motion. Comparisons with numerical simulations are made, and the agreement is quite satisfactory. These formulae have been applied to determine the beam-beam-limited dynamic aperture in a circular collider. (author)

  13. Analytical solution for the electrical properties of a radio-frequency quadrupole (RFQ) with simple vanes

    International Nuclear Information System (INIS)

    Lancaster, H.

    1982-01-01

    Although the SUPERFISH program is used for calculating the design parameters of an RFQ structure with complex vanes, an analytical solution for the electrical properties of an RFQ with simple vanes provides insight into the parametric behavior of these more complicated resonators. The fields in an inclined-plane waveguide with proper boundary conditions match those in one quadrant of an RFQ. The principle of duality is used to exploit the solutions of a radial transmission line in solving the field equations. The frequency equation, frequency sensitivity factors, electric field, magnetic field, stored energy (U), power dissipation, and quality factor are calculated.

  14. Analytical method for estimating the thermal expansion coefficient of metals at high temperature

    International Nuclear Information System (INIS)

    Takamoto, S; Izumi, S; Nakata, T; Sakai, S; Oinuma, S; Nakatani, Y

    2015-01-01

    In this paper, we propose an analytical method for estimating the thermal expansion coefficient (TEC) of metals in high-temperature ranges. Although the conventional method based on the quasiharmonic approximation (QHA) gives good results at low temperatures, anharmonic effects caused by large-amplitude thermal vibrations reduce its accuracy at high temperatures. Molecular dynamics (MD) naturally includes the anharmonic effect, but since its computational cost is relatively high, an analytical method is essential for constructing an interatomic potential capable of reproducing the TEC. In our method, an analytical formulation of the radial distribution function (RDF) at finite temperature enables estimation of the TEC. Each peak of the RDF is approximated by a Gaussian distribution, whose mean and variance are formulated by decomposing the fluctuation of the interatomic distance into independent elastic waves. We incorporated two significant anharmonic effects into the method: the increase in the averaged interatomic distance caused by large-amplitude vibration, and the variation in the frequency of the elastic waves. As a result, the TECs of fcc and bcc crystals estimated by our method show good agreement with those from MD. Our method enables us to construct an interatomic potential that reproduces the TEC at high temperature; we developed such a GEAM potential for nickel, whose TEC showed good agreement with experimental data from room temperature to 1000 K. Compared with the original potential, it was found that the third derivative of the wide-range curve was modified, while the zeroth, first and second derivatives were unchanged. This result supports the conventional theory of solid-state physics. We believe our analytical method and the developed interatomic potential will contribute to future high-temperature material development. (paper)

  15. Simple method for quick estimation of aquifer hydrogeological parameters

    Science.gov (United States)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To estimate aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function is proposed using a fitting optimization method, and a unitary linear regression equation is then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
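
    The paper's specific fitting function is not reproduced in the abstract, but the underlying task can be illustrated by fitting drawdown data directly to the Theis solution; the sketch below uses synthetic data and SciPy rather than the paper's regression:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

# Sketch of the underlying idea (not the paper's regression): fit observed
# drawdowns s(t) at radius r to the Theis solution
#   s = Q/(4*pi*T) * W(u),  u = r^2*S/(4*T*t),  W(u) = exp1(u)
# to recover transmissivity T and storativity S. Data are synthetic.
Q, r = 0.01, 50.0                                 # m^3/s, m
t = np.array([60., 300., 900., 3600., 10800.])    # s
s_obs = np.array([0.013, 0.13, 0.27, 0.48, 0.65]) # m, generated from the model

def theis(t, T, S):
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

(T_fit, S_fit), _ = curve_fit(theis, t, s_obs, p0=(1e-3, 1e-4))
print(f"T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}")  # ~5e-3 and ~8e-4 here
```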

  16. Limited analytical capacity for cyanotoxins in developing countries may hide serious environmental health problems: simple and affordable methods may be the answer.

    Science.gov (United States)

    Pírez, Macarena; Gonzalez-Sapienza, Gualberto; Sienra, Daniel; Ferrari, Graciela; Last, Michael; Last, Jerold A; Brena, Beatriz M

    2013-01-15

    In recent years, the international demand for commodities has prompted enormous growth in agriculture in most South American countries. Due to intensive use of fertilizers, cyanobacterial blooms have become a recurrent phenomenon throughout the continent, but their potential health risk remains largely unknown due to the lack of analytical capacity. In this paper we report the main results and conclusions of more than five years of systematic monitoring of cyanobacterial blooms at 20 beaches of Montevideo, Uruguay, on the Rio de la Plata, the fifth largest basin in the world. A locally developed microcystin ELISA was used to establish a sustainable monitoring program that revealed seasonal peaks of extremely high toxicity, more than one thousand times the WHO limit for recreational water. Comparison with cyanobacterial cell counts and chlorophyll-a determination, two commonly used parameters for indirect estimation of toxicity, showed that such indicators can be highly misleading. On the other hand, the accumulated experience led to the definition of a simple criterion for visual classification of blooms that can be used by trained lifeguards and technicians to take rapid on-site decisions on beach management. The simple and low-cost approach is broadly applicable to risk assessment and risk management in developing countries.

  17. RSMASS: A simple model for estimating reactor and shield masses

    International Nuclear Information System (INIS)

    Marshall, A.C.; Aragon, J.; Gallup, D.

    1987-01-01

    A simple mathematical model (RSMASS) has been developed to provide rapid estimates of reactor and shield masses for space-based reactor power systems. Approximations are used rather than correlations or detailed calculations to estimate the reactor fuel mass and the masses of the moderator, structure, reflector, pressure vessel, miscellaneous components, and the reactor shield. The fuel mass is determined either by neutronics limits, thermal/hydraulic limits, or fuel damage limits, whichever yields the largest mass. RSMASS requires the reactor power and energy, 24 reactor parameters, and 20 shield parameters to be specified. This parametric approach should be applicable to a very broad range of reactor types. Reactor and shield masses calculated by RSMASS were found to be in good agreement with the masses obtained from detailed calculations
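
    The sizing logic described, taking the largest of the limit-driven fuel masses and scaling the remaining components from it, can be caricatured in a few lines; all numbers and scale factors below are invented stand-ins, not RSMASS's actual approximations:

```python
# Toy illustration of the RSMASS-style sizing logic (values invented):
# fuel mass is the largest of the masses required by the neutronics,
# thermal/hydraulic, and fuel-damage limits; other components scale from it.
def fuel_mass(m_neutronics, m_thermal, m_damage):
    return max(m_neutronics, m_thermal, m_damage)

def reactor_mass(m_fuel, scale_structure=0.6, scale_reflector=0.4,
                 scale_vessel=0.3):
    # simple multiplicative approximations, purely for illustration
    return m_fuel * (1 + scale_structure + scale_reflector + scale_vessel)

m_f = fuel_mass(120.0, 95.0, 140.0)   # kg, hypothetical limit-driven masses
print(reactor_mass(m_f))              # ~322 kg with these toy factors
```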

  18. Analytical estimates show low, depth-independent water loss due to vapor flux from deep aquifers

    Science.gov (United States)

    Selker, John S.

    2017-06-01

    Recent articles have provided estimates of evaporative flux from water tables in deserts that span 5 orders of magnitude. In this paper, we present an analytical calculation indicating that aquifer vapor flux is limited to 0.01 mm/yr at sites where there is negligible recharge and the water table is well over 20 m below the surface. This value arises from the geothermal gradient and is therefore nearly independent of the actual depth of the aquifer. The value is in agreement with several numerical studies, but is 500 times lower than recently reported experimental values and 100 times larger than an earlier analytical estimate.

  19. Estimation of a simple agent-based model of financial markets: An application to Australian stock and foreign exchange data

    Science.gov (United States)

    Alfarano, Simone; Lux, Thomas; Wagner, Friedrich

    2006-10-01

    Following Alfarano et al. [Estimation of agent-based models: the case of an asymmetric herding model, Comput. Econ. 26 (2005) 19-49; Excess volatility and herding in an artificial financial market: analytical approach and estimation, in: W. Franz, H. Ramser, M. Stadler (Eds.), Funktionsfähigkeit und Stabilität von Finanzmärkten, Mohr Siebeck, Tübingen, 2005, pp. 241-254], we consider a simple agent-based model of a highly stylized financial market. The model takes Kirman's ant process [A. Kirman, Epidemics of opinion and speculative bubbles in financial markets, in: M.P. Taylor (Ed.), Money and Financial Markets, Blackwell, Cambridge, 1991, pp. 354-368; A. Kirman, Ants, rationality, and recruitment, Q. J. Econ. 108 (1993) 137-156] of mimetic contagion as its starting point, but allows for asymmetry in the attractiveness of both groups. Embedding the contagion process into a standard asset-pricing framework, and identifying the abstract groups of the herding model as chartists and fundamentalist traders, a market with periodic bubbles and bursts is obtained. Taking stock of the availability of a closed-form solution for the stationary distribution of returns for this model, we can estimate its parameters via maximum likelihood. Expanding our earlier work, this paper presents pertinent estimates for the Australian dollar/US dollar exchange rate and the Australian stock market index. As it turns out, our model indicates dominance of fundamentalist behavior in both the stock and foreign exchange market.

  20. Debye potentials, electromagnetic reciprocity and impedance boundary conditions for efficient analytic approximation of coupling impedances in complex heterogeneous accelerator pipes

    Energy Technology Data Exchange (ETDEWEB)

    Petracca, S [Salerno Univ. (Italy)

    1996-08-01

    Debye potentials, the Lorentz reciprocity theorem, and (extended) Leontovich boundary conditions can be used to obtain simple and accurate analytic estimates of the longitudinal and transverse coupling impedances of (piecewise longitudinally uniform) multi-layered pipes with non-simple transverse geometry and/or (spatially inhomogeneous) boundary conditions. (author)

  1. A simple method for estimating potential source term bypass fractions from confinement structures

    International Nuclear Information System (INIS)

    Kalinich, D.A.; Paddleford, D.F.

    1997-01-01

    Confinement structures house many of the operating processes at the Savannah River Site (SRS). Under normal operating conditions, a confinement structure in conjunction with its associated ventilation systems prevents the release of radiological material to the environment. Under potential accident conditions, however, the performance of the ventilation systems and the integrity of the structure may be challenged. In order to calculate the radiological consequences associated with a potential accident (e.g., fires, explosions, spills), it is necessary to determine the fraction of the source term initially generated by the accident that escapes from the confinement structure to the environment. While it would be desirable to estimate the potential bypass fraction using sophisticated control-volume/flow-path computer codes (e.g., CONTAIN, MELCOR) in order to take as much credit as possible for the mitigative effects of the confinement structure, there are many instances where using such codes is not tractable due to limits on the level of effort allotted to the analysis. Moreover, the current review environment, with its emphasis on deterministic/bounding versus probabilistic/best-estimate analysis, discourages analytical techniques that require consideration of a large number of parameters. Discussed herein is a simplified control-volume/flow-path approach for calculating the source term bypass fraction that is amenable to solution in a spreadsheet or with a commercial mathematical solver (e.g., MathCad or Mathematica). It considers the effects of wind and fire pressure gradients on the structure, ventilation system operation, and Halon discharges. Simple models are used to characterize the engineered and non-engineered flow paths. By making judicious choices for the limited set of problem parameters, the results from this approach can be defended as bounding and conservative.

  2. Simple analytical method for acrylamide in the workplace air adsorbed by charcoal tube

    Energy Technology Data Exchange (ETDEWEB)

    Yang, J S; Lee, M Y; Park, I J; Kang, S K [Korea Industrial Safety corporation, Inchon (Korea, Republic of)

    1998-04-01

    For the ambient monitoring of acrylamide, suitable sampling and analysis conditions were determined. Several adsorbents and desorption solvents were tested; the combination of a charcoal tube as the adsorbent and acetone as the desorption solvent showed 87% desorption efficiency. A flame ionization detector was used to detect acrylamide. The detection limit was 0.814 mg acrylamide per 1 L acetone, equivalent to a concentration of 0.0203 mg acrylamide per 1 m{sup 3} of air if the volume of air collected was 40 L. The permissible exposure limit (PEL) for acrylamide in workplace air recommended by the Occupational Safety and Health Administration (OSHA, USA) is 0.3 mg acrylamide per 1 m{sup 3} of air. This is therefore a very simple and economical analytical method for acrylamide, suitable for industrial hygiene laboratories.
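
    Assuming the standard 1 mL desorption volume (which the abstract does not state explicitly), the reported air-equivalent detection limit can be checked with simple unit arithmetic:

```python
# Worked check of the reported detection limit, assuming a 1 mL desorption
# volume (an assumption, not stated in the abstract):
dl_solution = 0.814        # mg acrylamide per L of acetone
v_desorption = 0.001       # L of acetone used to desorb the charcoal
v_air = 0.040              # m^3 of air sampled (40 L)

dl_air = dl_solution * v_desorption / v_air
print(round(dl_air, 4))    # 0.0203 mg/m^3, matching the reported value
```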

  3. An interactive website for analytical method comparison and bias estimation.

    Science.gov (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T

    2017-12-01

    Regulatory standards mandate that laboratories perform studies to ensure the accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods.
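
    As one example of the listed regression models, a minimal Deming regression (error-variance ratio λ = 1) can be written in a few lines; the paired measurements below are invented:

```python
import numpy as np

# Minimal Deming regression (lambda = 1, i.e. equal error variances in both
# methods), one of the models the site offers; data are invented pairs of
# results from two analytical methods measuring the same specimens.
def deming(x, y, lam=1.0):
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

x = np.array([1.2, 2.3, 3.1, 4.8, 6.0, 7.4])
y = np.array([1.4, 2.2, 3.4, 4.9, 6.3, 7.3])
print(deming(x, y))   # (slope, intercept); near (1, 0) for these data
```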

  4. Tax planning: analysis between national simple and the estimated gain

    OpenAIRE

    Bassoli, Marlene Kempfer; Somma, Giovana Mattioli

    2010-01-01

    This study was initiated by the need to define the actual legal status of tax planning in Brazil. It uses a comparative method between the estimated-gain and national-simple regimes to clarify avoidance induced by the law and, mainly, to demonstrate the possibility of a reduced tax burden and tax savings for companies, under the focus of the rule of law, which honors the principles of strict legality and closed typicality. At first, the article focuses on tax planning, tal...

  5. Simple robust technique using time delay estimation for the control and synchronization of Lorenz systems

    International Nuclear Information System (INIS)

    Jin, Maolin; Chang, Pyung Hun

    2009-01-01

    This work presents two simple and robust techniques based on time delay estimation for the control and synchronization, respectively, of chaotic systems. First, one of these techniques is applied to the control of a chaotic Lorenz system with both matched and mismatched uncertainties. The nonlinearities in the Lorenz system are cancelled by time delay estimation and the desired error dynamics are inserted. Second, the other technique is applied to the synchronization of the Lü system and the Lorenz system with uncertainties. The synchronization input consists of three elements that have transparent and clear meanings. Since time delay estimation enables very effective and efficient cancellation of disturbances and nonlinearities, the techniques turn out to be simple and robust. Numerical simulation results show the fast, accurate and robust performance of the proposed techniques, thereby demonstrating their effectiveness for the control and synchronization of Lorenz systems.

  6. Optimized theory for simple and molecular fluids.

    Science.gov (United States)

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between the Percus-Yevick and hypernetted-chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in the proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool for estimating, from first principles, the numerical values of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  7. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Directory of Open Access Journals (Sweden)

    Samir Saoudi

    2008-07-01

    Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
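
    The flavor of the plug-in approach can be sketched with a Gaussian kernel: the AMISE-optimal bandwidth depends on J(f), and substituting a normal-reference value for J(f), as below, recovers the familiar rule-of-thumb bandwidth; the paper instead replaces J(f) with its own analytical approximation:

```python
import numpy as np

# Plug-in sketch with a Gaussian kernel. The AMISE-optimal bandwidth is
#   h = (R(K) / (n * mu2(K)^2 * J(f)))^(1/5),  J(f) = integral of f''(x)^2.
# Using the normal reference J(f) = 3/(8*sqrt(pi)*sigma^5) gives the
# classic ~1.06*sigma*n^(-1/5) rule; this is a stand-in, not the paper's J(f).
def plugin_bandwidth(x):
    n, sigma = len(x), np.std(x, ddof=1)
    J = 3.0 / (8.0 * np.sqrt(np.pi) * sigma ** 5)   # normal-reference J(f)
    RK, mu2 = 1.0 / (2.0 * np.sqrt(np.pi)), 1.0     # Gaussian kernel constants
    return (RK / (n * mu2 ** 2 * J)) ** 0.2

rng = np.random.default_rng(0)
print(plugin_bandwidth(rng.normal(size=500)))       # ~0.31 for N(0,1), n=500
```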

  8. Simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation anisotropy

    International Nuclear Information System (INIS)

    Wang, Y.

    1996-01-01

    We present two simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation (CBR) anisotropy in inflationary models; one method uses a time-dependent transfer function, the other uses an approximate gravity-wave mode function which is a simple combination of the lowest-order spherical Bessel functions. We compare the CBR anisotropy tensor multipole spectrum computed using our methods with the previous result of the highly accurate numerical method, the "Boltzmann" method. Our time-dependent transfer function is more accurate than the time-independent transfer function found by Turner, White, and Lidsey; however, we find that the transfer function method is only good for l ≲ 120. Using our approximate gravity-wave mode function, we obtain much better accuracy; the tensor multipole spectrum we find differs from the "Boltzmann" result by less than 2% for l ≲ 50, less than 10% for l ≲ 120, and less than 20% for l ≤ 300. Our approximate graviton mode function should be quite useful in studying tensor perturbations in inflationary models.

  9. Utilising temperature differences as constraints for estimating parameters in a simple climate model

    International Nuclear Information System (INIS)

    Bodman, Roger W; Karoly, David J; Enting, Ian G

    2010-01-01

    Simple climate models can be used to estimate the global temperature response to increasing greenhouse gases. Changes in the energy balance of the global climate system are represented by equations that necessitate the use of uncertain parameters. The values of these parameters can be estimated from historical observations, model testing, and tuning to more complex models. Efforts have been made at estimating the possible ranges for these parameters. This study continues this process, but demonstrates two new constraints. Previous studies have shown that land-ocean temperature differences are only weakly correlated with global mean temperature for natural internal climate variations. Hence, these temperature differences provide additional information that can be used to help constrain model parameters. In addition, an ocean heat content ratio can also provide a further constraint. A pulse response technique was used to identify relative parameter sensitivity which confirmed the importance of climate sensitivity and ocean vertical diffusivity, but the land-ocean warming ratio and the land-ocean heat exchange coefficient were also found to be important. Experiments demonstrate the utility of the land-ocean temperature difference and ocean heat content ratio for setting parameter values. This work is based on investigations with MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) as the simple climate model.

  10. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Science.gov (United States)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.

  11. Simple Analytic Models of Gravitational Collapse

    Energy Technology Data Exchange (ETDEWEB)

    Adler, R.

    2005-02-09

    Most general relativity textbooks devote considerable space to the simplest example of a black hole containing a singularity, the Schwarzschild geometry. However only a few discuss the dynamical process of gravitational collapse, by which black holes and singularities form. We present here two types of analytic models for this process, which we believe are the simplest available; the first involves collapsing spherical shells of light, analyzed mainly in Eddington-Finkelstein coordinates; the second involves collapsing spheres filled with a perfect fluid, analyzed mainly in Painleve-Gullstrand coordinates. Our main goal is pedagogical simplicity and algebraic completeness, but we also present some results that we believe are new, such as the collapse of a light shell in Kruskal-Szekeres coordinates.

  12. Magnetic anomaly depth and structural index estimation using different height analytic signals data

    Science.gov (United States)

    Zhou, Shuai; Huang, Danian; Su, Chao

    2016-09-01

    This paper proposes a new semi-automatic inversion method for magnetic anomaly data interpretation that uses a combination of analytic signals of the anomaly at different heights to determine the depth and the structural index N of the sources. The new method utilizes analytic signals of the original anomaly at different heights to effectively suppress the noise contained in the anomaly. Compared with other high-order-derivative calculation methods based on analytic signals, our method computes only first-order derivatives of the anomaly, which yields more stable and accurate results. Tests on synthetic noise-free and noise-corrupted magnetic data indicate that the new method can estimate the depth and N efficiently. The technique is applied to a real measured magnetic anomaly in southern Illinois caused by a known dike, and the result is in agreement with the drilling information and with inversion results within acceptable calculation error.
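
    For intuition, a toy version of the height-based idea: for a simple 2D source, the peak analytic-signal amplitude decays with continuation height h as alpha/(z + h)^(N + 1), so amplitudes at several heights constrain both the depth z and the structural index N. The paper's actual combination of analytic signals differs; this sketch fits synthetic, noise-free data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy model: peak analytic-signal amplitude of a 2D source at depth z with
# structural index N, observed at upward-continuation height h. Source
# parameters below are synthetic (z = 10, N = 1, a dike-like body).
def peak_amp(h, alpha, z, N):
    return alpha / (z + h) ** (N + 1.0)

h = np.array([0., 5., 10., 20., 40.])
A = peak_amp(h, alpha=2.0e4, z=10.0, N=1.0)   # synthetic "observations"

(alpha_f, z_f, N_f), _ = curve_fit(peak_amp, h, A, p0=(1e4, 5.0, 0.5))
print(f"depth = {z_f:.1f}, structural index = {N_f:.2f}")   # 10.0 and 1.00
```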

  13. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    International Nuclear Information System (INIS)

    Romero, Rodolfo H.; Gomez, Sergio S.

    2006-01-01

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown

  14. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]. E-mail: rhromero@exa.unne.edu.ar; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)

    2006-04-24

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.

  15. Heat as a groundwater tracer in shallow and deep heterogeneous media: Analytical solution, spreadsheet tool, and field applications

    Science.gov (United States)

    Kurylyk, Barret L.; Irvine, Dylan J.; Carey, Sean K.; Briggs, Martin A.; Werkema, Dale D.; Bonham, Mariah

    2017-01-01

    Groundwater flow advects heat, and thus the deviation of subsurface temperatures from an expected conduction-dominated regime can be analysed to estimate vertical water fluxes. A number of analytical approaches have been proposed for using heat as a groundwater tracer, and these have typically assumed a homogeneous medium. However, heterogeneous thermal properties are ubiquitous in subsurface environments, both at the scale of geologic strata and at finer scales in streambeds. Herein, we apply the analytical solution of Shan and Bodvarsson (2004), developed for estimating vertical water fluxes in layered systems, in 2 new environments distinct from previous vadose-zone applications. The utility of the solution for studying groundwater-surface water exchange is demonstrated using temperature data collected from an upwelling streambed with sediment layers, and a simple sensitivity analysis using these data indicates the solution is relatively robust. Also, a deeper temperature profile recorded in a borehole in South Australia is analysed to estimate deeper water fluxes. The analytical solution is able to match observed thermal gradients, including the change in slope at sediment interfaces. Results indicate that not accounting for layering can yield errors in the magnitude and even direction of the inferred Darcy fluxes. A simple automated spreadsheet tool (Flux-LM) is presented to allow users to input temperature and layer data and solve the inverse problem to estimate groundwater flux rates from shallow (e.g., streambed) to deep regimes.
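
    A single-layer version of this inverse problem (the classic Bredehoeft and Papadopulos steady solution, which the layered solution applied here generalizes) can be sketched as a curve fit; the profile data and thermal parameters below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Homogeneous-medium sketch: fit the Peclet number Pe of the steady
# advection-conduction profile, then convert to a Darcy flux. All values
# are invented; sign convention here takes positive q as downward.
L_len = 2.0                 # m, profile length
k_t = 1.5                   # W/m/K, bulk thermal conductivity (assumed)
rho_c_w = 4.18e6            # J/m^3/K, volumetric heat capacity of water

def t_profile(z, T0, TL, Pe):
    return T0 + (TL - T0) * (np.exp(Pe * z / L_len) - 1) / (np.exp(Pe) - 1)

z = np.linspace(0.1, 2.0, 8)
T_obs = np.array([12.08, 12.35, 12.71, 13.17, 13.78, 14.58, 15.63, 17.00])

(T0_f, TL_f, Pe_f), _ = curve_fit(t_profile, z, T_obs, p0=(12., 17., 1.))
q = Pe_f * k_t / (rho_c_w * L_len)    # Darcy flux, m/s
print(f"Pe = {Pe_f:.2f}, q = {q:.2e} m/s")   # ~2.0 and ~3.6e-7 here
```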

  16. Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2

    Energy Technology Data Exchange (ETDEWEB)

    Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.

    1994-07-01

    of complex issues that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC)* on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The papers provide descriptions of the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects, and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.

  17. Analytical treatment of large leak pressure behavior in LMFBR steam generators

    International Nuclear Information System (INIS)

    Hori, Masao; Miyake, Osamu

    1980-07-01

    Simplified analytical methods applicable to estimating the initial pressure spike in case of a large-leak accident in LMFBR steam generators were devised as follows: (i) estimation of the initial water leak rate by the centered rarefaction wave method; (ii) estimation of the initial pressure spike by the one-dimensional compressible method with either the columnar bubble growth model or the spherical bubble growth model. These methods were compared with relevant experimental data or other more elaborate analyses and validated as usable for simple geometries and limited time spans. The application of these methods to an actual steam generator case is explained and demonstrated. (author)

  18. Analytic Investigation Into Effect of Population Heterogeneity on Parameter Ratio Estimates

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Carlone, Marco; Warkentin, Brad; Fallone, B. Gino

    2007-01-01

    Purpose: A homogeneous tumor control probability (TCP) model has previously been used to estimate the α/β ratio for prostate cancer from clinical dose-response data. For the ratio to be meaningful, it must be assumed that parameter ratios are not sensitive to the type of tumor control model used. We investigated the validity of this assumption by deriving analytic relationships between the α/β estimates from a homogeneous TCP model, ignoring interpatient heterogeneity, and those of the corresponding heterogeneous (population-averaged) model that incorporated heterogeneity. Methods and Materials: The homogeneous and heterogeneous TCP models can both be written in terms of the geometric parameters D50 and γ50. We show that the functional forms of these models are similar. This similarity was used to develop an expression relating the homogeneous and heterogeneous estimates for the α/β ratio. The expression was verified numerically by generating pseudo-data from a TCP curve with known parameters and then using the homogeneous and heterogeneous TCP models to estimate the α/β ratio for the pseudo-data. Results: When the dominant form of interpatient heterogeneity is that of radiosensitivity, the homogeneous and heterogeneous α/β estimates differ. This indicates that the presence of this heterogeneity affects the value of the α/β ratio derived from analysis of TCP curves. Conclusions: The α/β ratio estimated from clinical dose-response data is model dependent: a heterogeneous TCP model that accounts for heterogeneity in radiosensitivity will produce a greater α/β estimate than that resulting from a homogeneous TCP model.
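
    The paper's analytic expressions are not given in the abstract, but the underlying effect is easy to demonstrate numerically: averaging Poisson TCP curves over a population spread in radiosensitivity flattens the dose response relative to any single homogeneous curve, so a homogeneous fit to population-averaged data recovers different effective parameters. All values in this sketch are toy parameters:

```python
import numpy as np

# Toy demonstration: population-averaged Poisson TCP vs a homogeneous curve.
# Parameters (clonogen number, alpha distribution, alpha/beta, dose/fraction)
# are invented for illustration only.
def tcp_homog(D, alpha, n_clonogens=1e7, alpha_beta=3.0, d=2.0):
    beta = alpha / alpha_beta
    return np.exp(-n_clonogens * np.exp(-(alpha + beta * d) * D))

D = np.linspace(40, 100, 7)                                  # total dose, Gy
alphas = np.random.default_rng(1).normal(0.15, 0.03, 2000)   # heterogeneity
tcp_pop = np.mean([tcp_homog(D, a) for a in alphas], axis=0)

print(np.round(tcp_homog(D, 0.15), 3))   # steep homogeneous curve
print(np.round(tcp_pop, 3))              # shallower population average
```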

  19. A simple method to estimate radiation interception by nursery stock conifers: a case study of eastern white cedar

    International Nuclear Information System (INIS)

    Pronk, A.A.; Goudriaan, J.; Stilma, E.; Challa, H.

    2003-01-01

    A simple method was developed to estimate the fraction of radiation intercepted by small eastern white cedar plants (Thuja occidentalis ‘Brabant’). The method, which describes the crop canopy as rows of cuboids, was compared with methods used for estimating radiation interception by crops with homogeneous canopies and by crops grown in rows. The extinction coefficient k was determined for different plant arrangements and an average k-value of 0.48 ± 0.03 (R² = 0.89) was used in the calculations. Effects of changing plant characteristics and inter- and intra-row plant distances were explored. The fraction of radiation intercepted estimated with the rows-of-cuboids method was up to 20% lower, and that estimated with the row-crop method up to 8% lower, than the estimate from the homogeneous-canopy method at low plant densities and a LAI of 1. The fraction of radiation intercepted by small plants of Thuja occidentalis ‘Brabant’ was best estimated by the simple method described in this paper.
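
    A crude stand-in for the geometric idea, with an invented ground-cover fraction: a homogeneous canopy intercepts 1 − exp(−k·LAI), while concentrating the same leaf area into rows of cuboids covering only part of the ground lowers the intercepted fraction at the same LAI:

```python
import numpy as np

# Beer's-law sketch: homogeneous canopy versus leaf area concentrated into
# cuboids covering a fraction of the ground (a crude stand-in for the
# paper's row-of-cuboids geometry; the cover fraction is invented).
k = 0.48                       # extinction coefficient from the study

def f_homogeneous(lai):
    return 1.0 - np.exp(-k * lai)

def f_cuboid_rows(lai, cover_fraction):
    # leaf area index (per ground area) concentrated inside the cuboids
    lai_inside = lai / cover_fraction
    return cover_fraction * (1.0 - np.exp(-k * lai_inside))

print(f_homogeneous(1.0))        # ~0.38
print(f_cuboid_rows(1.0, 0.4))   # ~0.28, lower at the same LAI = 1
```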

  20. A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching

    OpenAIRE

    Mittag, Nikolas

    2016-01-01

    Models with high dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assum...

  1. Analytical Modeling for Underground Risk Assessment in Smart Cities

    Directory of Open Access Journals (Sweden)

    Israr Ullah

    2018-06-01

    In the developed world, underground facilities are increasing day by day, as they are considered an improved utilization of available space in smart cities. Typical facilities include underground railway lines, electricity lines, parking lots, water supply systems, sewerage networks, etc. Besides their utility, these facilities also pose serious threats to citizens and property. To preempt accidental loss of precious human lives and property, a real-time monitoring system is highly desirable for conducting risk assessment on a continuous basis and reporting any abnormality in a timely manner. In this paper, we present an analytical formulation to model system behavior for risk analysis and assessment based on various risk-contributing factors. Based on the proposed analytical model, we have evaluated three approximation techniques for computing the final risk index: (a) simple linear approximation based on multiple linear regression analysis; (b) a hierarchical fuzzy logic based technique in which related risk factors are combined in a tree-like structure; and (c) a hybrid approximation approach which is a combination of (a) and (b). Experimental results show that the simple linear approximation fails to accurately estimate the final risk index, whereas the hierarchical fuzzy logic based system provides an efficient method for monitoring and forecasting critical issues in underground facilities and may assist in maintenance efficiency as well. Estimation results based on the hybrid approach also fail to accurately estimate the final risk index; however, the hybrid scheme reveals some interesting and detailed information by performing automatic clustering based on a location risk index.
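
    A minimal sketch of technique (a), the simple linear approximation: an ordinary least-squares fit of the final risk index to several risk-contributing factors. The factor names and data below are hypothetical, not taken from the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        # hypothetical factors: gas level, vibration, water leakage, temperature
        X = rng.uniform(0.0, 1.0, size=(100, 4))
        w_true = np.array([0.4, 0.3, 0.2, 0.1])
        risk = X @ w_true + rng.normal(0.0, 0.02, 100)   # synthetic risk index

        A = np.column_stack([X, np.ones(100)])           # add intercept column
        coef, *_ = np.linalg.lstsq(A, risk, rcond=None)
        print("weights:", np.round(coef[:-1], 2), "intercept:", round(coef[-1], 3))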

  2. Simple approximation for estimating centerline gamma absorbed dose rates due to a continuous Gaussian plume

    International Nuclear Information System (INIS)

    Overcamp, T.J.; Fjeld, R.A.

    1987-01-01

    A simple approximation for estimating the centerline gamma absorbed dose rates due to a continuous Gaussian plume was developed. To simplify the integration of the dose integral, this approach makes use of the Gaussian cloud concentration distribution. The solution is expressed in terms of the I1 and I2 integrals, which were developed for estimating long-term dose due to a sector-averaged Gaussian plume. Estimates of tissue absorbed dose rates for the new approach and for the uniform cloud model were compared to numerical integration of the dose integral over a Gaussian plume distribution.
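
    For reference, the ground-level centerline concentration of a continuous Gaussian plume, the distribution over which the dose integral is taken, has the standard form sketched below (the I1 and I2 dose integrals themselves are not reproduced; names and values are illustrative):

        import math

        def centerline_concentration(Q, u, sigma_y, sigma_z, H):
            # chi(x, 0, 0) with ground reflection: source strength Q, wind
            # speed u, dispersion parameters sigma_y/sigma_z at the downwind
            # distance of interest, effective release height H
            return (Q / (math.pi * sigma_y * sigma_z * u)) * math.exp(-H**2 / (2.0 * sigma_z**2))

        print(centerline_concentration(Q=1.0, u=5.0, sigma_y=30.0, sigma_z=15.0, H=50.0))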

  3. Equation of state of a hard core fluid with a two-Yukawa tail: toward a simple analytic theory

    International Nuclear Information System (INIS)

    Jedrzejek, C.

    1980-01-01

    Thermodynamic properties of simple fluids are calculated using variational theory for a system with a hard-core potential and a two-Yukawa tail. As in the one-Yukawa-tail case, the working formulas are analytic. The five parameters of the two-Yukawa system are chosen so as to get the best fit to a real argon potential or an "argon-like" Lennard-Jones potential. The results are fairly good in light of the extreme simplicity of the method. The discrepancies result from using the variational method and from the different shape of the Yukawa-type potential in comparison with the real argon and Lennard-Jones potentials. (author)
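
    A common parameterization of a hard-core two-Yukawa potential with five parameters (σ, ε₁, ε₂, z₁, z₂) is, in LaTeX notation, the following; this is a standard form that may differ in detail from the one used in the paper:

        V(r) = \infty, \qquad r < \sigma,
        V(r) = -\frac{\varepsilon_1 \sigma}{r}\, e^{-z_1 (r-\sigma)/\sigma}
               + \frac{\varepsilon_2 \sigma}{r}\, e^{-z_2 (r-\sigma)/\sigma}, \qquad r \ge \sigma.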

  4. Sequential fitting-and-separating reflectance components for analytical bidirectional reflectance distribution function estimation.

    Science.gov (United States)

    Lee, Yu; Yu, Chanki; Lee, Sang Wook

    2018-01-10

    We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.

  5. A simple method to estimate radiation interception by nursery stock conifers: a case study of eastern white cedar

    NARCIS (Netherlands)

    Pronk, A.A.; Goudriaan, J.; Stilma, E.S.C.; Challa, H.

    2003-01-01

    A simple method was developed to estimate the fraction of radiation intercepted by small eastern white cedar plants (Thuja occidentalis 'Brabant'). The method, which describes the crop canopy as rows of cuboids, was compared with methods used for estimating radiation interception by crops with homogeneous canopies and by crops grown in rows.

  6. Temperature based validation of the analytical model for the estimation of the amount of heat generated during friction stir welding

    Directory of Open Access Journals (Sweden)

    Milčić Dragan S.

    2012-01-01

    Friction stir welding is a solid-state welding technique that utilizes the thermomechanical influence of the rotating welding tool on the parent material, resulting in a monolithic joint (weld). At the contact between the welding tool and the parent material, significant stirring and deformation of the parent material appear, and during this process mechanical energy is partially transformed into heat. The generated heat affects the temperature of the welding tool and the parent material, so the proposed analytical model for estimating the amount of generated heat can be verified by temperature: the analytically determined heat is used for numerical estimation of the temperature of the parent material, and this temperature is compared to the experimentally determined temperature. The numerical solution is obtained using the finite difference method (an explicit scheme with adaptive grid), considering the influence of temperature on the material's conductivity, the contact conditions between the welding tool and the parent material, the material flow around the welding tool, etc. The analytical model shows that 60-100% of the mechanical power delivered to the welding tool is transformed into heat, while the comparison of results shows a maximal relative difference between the analytical and experimental temperatures of about 10%.

  7. Parametric validations of analytical lifetime estimates for radiation belt electron diffusion by whistler waves

    Directory of Open Access Journals (Sweden)

    A. V. Artemyev

    2013-04-01

    The lifetimes of electrons trapped in Earth's radiation belts can be calculated from quasi-linear pitch-angle diffusion by whistler-mode waves, provided that their frequency spectrum is broad enough and/or their average amplitude is not too large. Extensive comparisons between improved analytical lifetime estimates and full numerical calculations have been performed in a broad parameter range representative of a large part of the magnetosphere, from L ~ 2 to 6. The effects of observed very oblique whistler waves are taken into account in both numerical and analytical calculations. Analytical lifetimes (and pitch-angle diffusion coefficients) are found to be in good agreement with full numerical calculations based on CRRES and Cluster hiss and lightning-generated wave measurements inside the plasmasphere and on Cluster lower-band chorus wave measurements in the outer belt, for electron energies ranging from 100 keV to 5 MeV. Comparisons with lifetimes recently obtained from electron flux measurements on SAMPEX, SCATHA, SAC-C and DEMETER also show reasonable agreement.

  8. Estimation method of finger tapping dynamics using simple magnetic detection system.

    Science.gov (United States)

    Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo

    2010-05-01

    We have developed a simple estimation method for a finger tapping dynamics model, to investigate muscle resistance and stiffness during tapping movement in normal subjects. We measured the finger tapping movements of 207 normal subjects using a magnetic finger tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate a finger tapping dynamics model. Using the frequency response of the acceleration-to-velocity ratio, we estimated the mechanical impedance parameters: the resistance (friction coefficient) and the compliance (stiffness). We found two dynamics models, one for the maximum open position and one for the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance, and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of extensor and flexor muscle friction resistances and the flexor stiffness, while the flexor friction resistance is the main component in the tap position. It can be concluded that our estimation method makes it possible to understand the tapping dynamics.

  10. Small field depth dose profile of 6 MV photon beam in a simple air-water heterogeneity combination: A comparison between anisotropic analytical algorithm dose estimation with thermoluminescent dosimeter dose measurement.

    Science.gov (United States)

    Mandal, Abhijit; Ram, Chhape; Mourya, Ankur; Singh, Navin

    2017-01-01

    To establish trends in the estimation error of dose calculations by the anisotropic analytical algorithm (AAA) with respect to doses measured by thermoluminescent dosimeters (TLDs) in air-water heterogeneity for small field size photon beams. TLDs were irradiated along the central axis of the photon beam in four different solid water phantom geometries using three small field size single beams. The depth dose profiles were estimated using the AAA calculation model for each field size, and the estimated and measured depth dose profiles were compared. The overestimation (OE) within the air cavity depends on field size (f) and distance (x) from the solid water-air interface and is formulated as OE = -(0.63f + 9.40)x² + (-2.73f + 58.11)x + (0.06f² - 1.42f + 15.67). At the point adjacent to the cavity and at points distal from the interface, the OE depends on field size (f) as OE = 0.42f² - 8.17f + 71.63 and OE = 0.84f² - 1.56f + 17.57, respectively. The trend of the estimation error of the AAA dose calculation algorithm with respect to measured values has been formulated throughout the radiation path length along the central axis of a 6 MV photon beam in an air-water heterogeneity combination for small field size photon beams generated by a 6 MV linear accelerator.
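
    The fitted overestimation trends quoted above can be evaluated directly; in the sketch below, f is the field size and x the distance from the solid water-air interface (units as used by the authors, assumed to be cm, with OE in percent):

        def oe_in_cavity(f, x):
            return (-(0.63 * f + 9.40) * x**2 + (-2.73 * f + 58.11) * x
                    + (0.06 * f**2 - 1.42 * f + 15.67))

        def oe_postcavity_adjacent(f):
            return 0.42 * f**2 - 8.17 * f + 71.63

        def oe_postcavity_distal(f):
            return 0.84 * f**2 - 1.56 * f + 17.57

        print(oe_in_cavity(2.0, 1.0), oe_postcavity_adjacent(2.0), oe_postcavity_distal(2.0))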

  11. Analytical solution to DGLAP integro-differential equation in a simple toy-model with a fixed gauge coupling

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Gustavo [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Concepcion Univ. (Chile). Dept. de Fisica; Cvetic, Gorazd [Univ. Tecnica Federico Santa Maria, Valparaiso (Chile). Dept. de Fisica; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kondrashuk, Igor [Univ. del Bio-Bio, Chillan (Chile). Grupo de Matematica Aplicada; Univ. del Bio-Bio, Chillan (Chile). Grupo de Fisica de Altas Energias; Parra-Ferrada, Ivan [Talca Univ. (Chile). Inst. de Matematica y Fisica

    2016-11-15

    We consider a simple model for QCD dynamics in which the DGLAP integro-differential equation may be solved analytically. This is a gauge model which possesses dominant evolution of the gauge boson (gluon) distribution and in which the gauge coupling does not run. This may be N=4 supersymmetric gauge theory with softly broken supersymmetry, another finite supersymmetric gauge theory with a lower level of supersymmetry, or a topological Chern-Simons field theory. We retain only one term in the splitting function of the unintegrated gluon distribution and solve DGLAP analytically for this simplified splitting function. The solution is found by use of the Cauchy integral formula. The solution restricts the form of the unintegrated gluon distribution as a function of transfer momentum and of Bjorken x. We then consider an almost realistic splitting function of the unintegrated gluon distribution as an input to the DGLAP equation and solve it by the same method which we developed to solve the DGLAP equation for the toy model. We study the result obtained for the realistic gluon distribution and find a singular Bessel-like behaviour in the vicinity of the point x=0 and a smooth behaviour in the vicinity of the point x=1.

  12. eAnalytics: Dynamic Web-based Analytics for the Energy Industry

    Directory of Open Access Journals (Sweden)

    Paul Govan

    2016-11-01

    eAnalytics is a web application built on top of R that provides dynamic data analytics to energy industry stakeholders. The application allows users to dynamically manipulate chart data and style through the Shiny package’s reactive framework. eAnalytics currently supports a number of features, including interactive datatables, dynamic charting capabilities, and the ability to save, download, or export information for further use. Going forward, the goal for this project is to serve as a research hub for discovering new relationships in the data. The application is illustrated with a simple tutorial of the user interface design.

  13. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so that the sample variance of simple random sampling without replacement is used. By means of a mixed random-systematic sample, an unbiased estimator o...

  14. Analytical model for real time, noninvasive estimation of blood glucose level.

    Science.gov (United States)

    Adhyapak, Anoop; Sidley, Matthew; Venkataraman, Jayanti

    2014-01-01

    The paper presents an analytical model to estimate blood glucose level from measurements made non-invasively and in real time by an antenna strapped to a patient's wrist. The RIT ETA Lab research group has shown promising evidence that an antenna's resonant frequency can track, in real time, changes in glucose concentration. Based on an in-vitro study of blood samples from diabetic patients, the paper presents a modified Cole-Cole model that incorporates a factor to represent the change in glucose level. A calibration technique using the input impedance is discussed, and the results show good estimation as compared with glucose meter readings. An alternate calibration methodology has been developed that is based on the shift in the antenna resonant frequency, using an equivalent circuit model containing a shunt capacitor to represent the shift in resonant frequency with changing glucose levels. Work in progress includes optimization of the technique with a larger sample of patients.
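
    For reference, the unmodified Cole-Cole relaxation model on which the paper builds has the standard form below, in LaTeX notation (the glucose-dependent factor the authors introduce is not shown, and σs denotes the static ionic conductivity):

        \varepsilon^*(\omega) = \varepsilon_\infty
            + \frac{\Delta\varepsilon}{1 + (j\omega\tau)^{1-\alpha}}
            + \frac{\sigma_s}{j\omega\varepsilon_0}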

  15. Analytical solutions of the electrostatically actuated curled beam problem

    KAUST Repository

    Younis, Mohammad I.

    2014-07-24

    This work presents analytical expressions for the electrostatically actuated, initially deformed cantilever beam problem. The formulation is based on the continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data and to numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximation. In such cases, multi-mode reduced order models are shown to yield accurate results. © 2014 Springer-Verlag Berlin Heidelberg.

  16. Analytical performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    International Nuclear Information System (INIS)

    Dhole, K.; Ghosh, S.; Datta, A.; Tripathy, M.K.; Bose, H.; Roy, M.; Tyagi, A.K.

    2011-01-01

    The method of refractometry has been investigated for the quantitative estimation of the isotopic concentration of D2O (heavy water) in a simulated water sample. The viability of refractometry as an excellent analytical technique for rapid and non-invasive determination of D2O concentration in water samples has been demonstrated. The temperature of the samples was precisely controlled to eliminate the effect of temperature fluctuation on refractive index measurement. Calibration performance of this technique exhibited reasonable analytical response over a wide range (1-100%) of D2O concentration. (author)

  17. Analytical estimation of effective charges at saturation in Poisson-Boltzmann cell models

    International Nuclear Information System (INIS)

    Trizac, Emmanuel; Aubouy, Miguel; Bocquet, Lyderic

    2003-01-01

    We propose a simple approximation scheme for computing the effective charges of highly charged colloids (spherical, or cylindrical with infinite length). Within non-linear Poisson-Boltzmann theory, we start from an expression for the effective charge in the infinite-dilution limit which is asymptotically valid for large salt concentrations; this result is then extended to finite colloidal concentration, approximating the salt partitioning effect which relates the salt content in the suspension to that of a dialysing reservoir. This leads to an analytical expression for the effective charge as a function of colloid volume fraction and salt concentration. These results compare favourably with the effective charges at saturation (i.e., in the limit of large bare charge) computed numerically following the standard prescription proposed by Alexander et al. within the cell model.

  18. A simple method for estimating the size of nuclei on fractal surfaces

    Science.gov (United States)

    Zeng, Qiang

    2017-10-01

    Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials, and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The established approach is based on the assumptions of contact-area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. It shows three different regimes governing the equations for estimating the nucleation site density. Nuclei large enough in size can eliminate the effect of the fractal structure. Nuclei small enough in size can make the nucleation site density independent of the fractal parameters. Only when the nuclei match the fractal scales is the nucleation site density associated with the fractal parameters and the size of the nuclei in a coupled pattern. The method was validated by experimental data reported in the literature. The method may provide an effective way to estimate the size of nuclei on fractal surfaces, through which a number of promising applications in related fields can be envisioned.

  19. Methods for estimating wake flow and effluent dispersion near simple block-like buildings

    International Nuclear Information System (INIS)

    Hosker, R.P. Jr.

    1981-05-01

    This report is intended as an interim guide for those who routinely face air quality problems associated with near-building exhaust stack placement and height, and the resulting concentration patterns. Available data and methods for estimating wake flow and effluent dispersion near isolated block-like structures are consolidated. The near-building and wake flows are described, and quantitative estimates for frontal eddy size, height and extent of roof and wake cavities, and far-wake behavior are provided. Concentration calculation methods for upwind, near-building, and downwind pollutant sources are given. For an upwind source, it is possible to estimate the required stack height and to place upper limits on the likely near-building concentration. The influences of near-building source location and characteristics relative to the building geometry and orientation are considered. Methods to estimate effective stack height, upper limits for concentration due to flush roof vents, and the effect of changes in rooftop stack height are summarized. Current wake and wake cavity models are presented. Numerous graphs of important expressions have been prepared to facilitate computations and quick estimates of flow patterns and concentration levels for specific simple buildings. Detailed recommendations for additional work are given.

  20. Supersaturation Control using Analytical Crystal Size Distribution Estimator for Temperature Dependent in Nucleation and Crystal Growth Phenomena

    Science.gov (United States)

    Zahari, Zakirah Mohd; Zubaidah Adnan, Siti; Kanthasamy, Ramesh; Saleh, Suriyati; Samad, Noor Asma Fazli Abdul

    2018-03-01

    The specification of the crystal product is usually given in terms of the crystal size distribution (CSD). To this end, an optimal cooling strategy is necessary to achieve the target CSD. Direct design control involving an analytical CSD estimator is one of the approaches that can be used to generate the set-point. However, the effects of temperature on the crystal growth rate are neglected in the estimator; thus, the temperature dependence of the crystal growth rate needs to be considered in order to provide an accurate set-point. The objective of this work is to extend the analytical CSD estimator by employing an Arrhenius expression to cover the effects of temperature on the growth rate. The application of this work is demonstrated through a potassium sulphate crystallisation process. Based on a specified target CSD, the extended estimator is capable of generating the required set-point, and a proposed controller successfully maintained the operation at the set-point to achieve the target CSD. Comparison with other cooling strategies shows that a reduction of up to 18.2% in the total number of undesirable crystals generated from secondary nucleation is achieved relative to a linear cooling strategy.
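
    The abstract gives no equations; a plausible reading of the extension, stated here as an assumption, is that the crystal growth rate acquires its temperature dependence through an Arrhenius factor, in LaTeX notation:

        G(T, \Delta C) = k_g \exp\!\left(-\frac{E_a}{R T}\right) (\Delta C)^{g},

    where k_g is the pre-exponential growth constant, E_a the activation energy, R the gas constant, and ΔC the supersaturation.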

  1. Testing Convergence of Different Free-Energy Methods in a Simple Analytical System with Hidden Barriers

    Directory of Open Access Journals (Sweden)

    S. Alexis Paz

    2018-03-01

    In this work, we study the influence of hidden barriers on the convergence behavior of three free-energy calculation methods: well-tempered metadynamics (WTMD), adaptive-biasing forces (ABF), and on-the-fly parameterization (OTFP). We construct a simple two-dimensional potential-energy surface (PES) that allows for an exact analytical result for the free energy in any one-dimensional order parameter. We then chose different CV definitions and PES parameters to create three different systems with increasing sampling challenges. We find that all three methods are not greatly affected by the hidden barriers in the simplest case considered. The adaptive sampling methods show faster sampling, while the auxiliary high-friction requirement of OTFP makes it slower for this case. However, a slight change in the CV definition has a strong impact on the ABF and WTMD performance, illustrating the importance of choosing suitable collective variables.

  2. Estimating contaminant discharge rates from stabilized uranium tailings embankments

    International Nuclear Information System (INIS)

    Weber, M.F.

    1986-01-01

    Estimates of contaminant discharge rates from stabilized uranium tailings embankments are essential in evaluating the long-term impacts of tailings disposal on groundwater resources. Contaminant discharge rates are a function of the water flux through tailings covers, the mass and distribution of tailings, and the concentrations of contaminants in percolating pore fluids. Simple calculations, laboratory and field testing, and analytical and numerical modeling may be used to estimate the water flux through variably-saturated tailings under steady-state conditions, which develop after consolidation and dewatering have essentially ceased. Contaminant concentrations in water discharging from the tailings depend on tailings composition, leachability and solubility of contaminants, geochemical conditions within the embankment, tailings-water interactions, and the flux of water through the embankment. These concentrations may be estimated based on maximum reported concentrations, pore water concentrations, extrapolations of column leaching data, or geochemical equilibria and reaction pathway modeling. Attempts to estimate contaminant discharge rates should begin with simple, conservative calculations and progress to more complicated approaches as necessary.
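
    A minimal sketch of the simplest conservative calculation the abstract recommends starting with: the contaminant discharge rate as the product of steady-state water flux, pore-fluid concentration, and embankment area (the values below are hypothetical):

        def discharge_rate(water_flux_m_per_yr, concentration_g_per_m3, area_m2):
            # grams of contaminant per year leaving the embankment
            return water_flux_m_per_yr * concentration_g_per_m3 * area_m2

        print(discharge_rate(0.01, 250.0, 5.0e4))  # 125000.0 g/yr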

  3. Development and validation of simple RP-HPLC-PDA analytical protocol for zileuton assisted with Design of Experiments for robustness determination

    OpenAIRE

    Saurabh B. Ganorkar; Dinesh M. Dhumal; Atul A. Shirkhedkar

    2017-01-01

    A simple, rapid, sensitive, robust, stability-indicating RP-HPLC-PDA analytical protocol was developed and validated for the analysis of zileuton racemate in bulk and in tablet formulation. Method development and resolution of degradation products from forced hydrolytic (acidic, basic, neutral), oxidative, photolytic (acidic, basic, neutral, solid state) and thermal (dry heat) degradation were achieved on a LC - GC Qualisil BDS C18 column (250 mm × 4.6 mm × 5 μm) in isocratic mode at ambie...

  4. Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method

    OpenAIRE

    Hadi Farhadian; Homayoon Katibeh

    2015-01-01

    In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method was also applied for qualitative and quantitative classification of the tunnel sections. The results of the above-mentioned methods were compared, and the study shows reasonable agreement among the results of all methods except for two sections of the tunnel. In these t...

  5. A simple high-performance liquid chromatographic method for the estimation of boswellic acids from the market formulations containing Boswellia serrata extract.

    Science.gov (United States)

    Shah, Shailesh A; Rathod, Ishwarsinh S; Suhagia, Bhanubhai N; Pandya, Saurabh S; Parmar, Vijay K

    2008-09-01

    A simple, rapid, and reproducible reverse-phase high-performance liquid chromatographic method is developed for the estimation of boswellic acids, the active constituents of Boswellia serrata oleo-gum resin. The chromatographic separation is performed using a mobile phase consisting of acetonitrile-water (90:10, % v/v) adjusted to pH 4 with glacial acetic acid, on a Kromasil 100 C18 analytical column at a flow rate of 2.0 mL/min and with detection at 260 nm. The elution times are 4.30 and 7.11 min for 11-keto beta-boswellic acid (11-KBA) and 3-acetyl 11-keto beta-boswellic acid (A-11-KBA), respectively. The calibration curves are linear over the 11.66-58.30 µg/mL and 6.50-32.50 µg/mL ranges for 11-KBA and A-11-KBA, respectively. The limits of detection are 2.33 µg/mL and 1.30 µg/mL for 11-KBA and A-11-KBA, respectively. The mean recoveries are 98.24% to 104.17% and 94.12% to 105.92% for 11-KBA and A-11-KBA, respectively. The inter- and intra-day variation coefficients are less than 5%. The present method is successfully applied for the estimation of boswellic acids in market formulations containing Boswellia serrata extract.

  6. Parameters estimation of the single and double diode photovoltaic models using a Gauss–Seidel algorithm and analytical method: A comparative study

    International Nuclear Information System (INIS)

    Et-torabi, K.; Nassar-eddine, I.; Obbadi, A.; Errami, Y.; Rmaily, R.; Sahnoun, S.; El fajri, A.; Agunaou, M.

    2017-01-01

    Highlights: • Comparative study of two methods: a Gauss-Seidel method and an analytical method. • Five models are implemented to estimate the five parameters of the single diode model. • Two models are used to estimate the seven parameters of the double diode model. • The parameters are estimated under changing environmental conditions. • The aim is to choose the method/model combination most adequate for each PV module technology. - Abstract: In the field of photovoltaic (PV) panel modeling, this paper presents a comparative study of two parameter estimation methods: the iterative Gauss-Seidel method, applied to the single diode model, and the analytical method, used on the double diode model. These parameter estimation methods are based on the manufacturers' datasheets. They are also tested on three PV modules of different technologies: multicrystalline (Kyocera KC200GT), monocrystalline (Shell SQ80), and thin film (Shell ST40). For the iterative method, five existing mathematical models, numbered 1 to 5, are used to estimate the parameters of these PV modules under varying environmental conditions; only two of these models are used for the analytical method. Each model is based on the combination of expressions for the photocurrent and the reverse saturation current in terms of temperature and irradiance. In addition, the results of the models' simulations are compared with experimental data obtained from the PV modules' datasheets, in order to evaluate the accuracy of the models. The simulation shows that the obtained I-V characteristics match the experimental data. In order to validate the reliability of the two methods, both the Absolute Error (AE) and the Root Mean Square Error (RMSE) were calculated. The results suggest that the analytical method can be very useful for monocrystalline and multicrystalline modules, but for the thin film module, the iterative method is the most suitable.
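
    A minimal sketch (not the paper's code) of the single diode model solved by a Gauss-Seidel-style fixed-point iteration; the parameter values below are illustrative, not taken from the cited datasheets:

        import math

        def diode_current(V, Iph, I0, Rs, Rsh, n, Vt, iters=200):
            # Single diode model:
            # I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
            I = Iph  # initial guess
            for _ in range(iters):
                I = Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1.0) - (V + I * Rs) / Rsh
            return I

        print(diode_current(V=0.5, Iph=8.2, I0=1e-9, Rs=0.005, Rsh=150.0, n=1.3, Vt=0.0257))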

  7. Analytical Method to Estimate Fatigue Life Time Duration in Service for Runner Blade Mechanism of Kaplan Turbines

    Directory of Open Access Journals (Sweden)

    Ana – Maria Budai

    2010-10-01

    The paper presents an analytical method that can be used to determine the in-service fatigue lifetime of the runner blade mechanism of Kaplan turbines. The study was made for the lever button of the runner blade mechanism, using two analytical relations to calculate the maximum number of stress cycles for which the mechanism operates without damage. To estimate the fatigue lifetime, a formula obtained from one of the most common cumulative-damage methodologies is used, taking into consideration the real exploitation conditions of a specified Kaplan turbine.
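
    The abstract does not name the cumulative-damage methodology; the most common one, and therefore a plausible candidate, is the Palmgren-Miner linear damage rule, in LaTeX notation:

        D = \sum_i \frac{n_i}{N_i}, \qquad \text{failure expected when } D \ge 1,

    where n_i is the number of applied stress cycles at level i and N_i the number of cycles to failure at that level; the service lifetime then follows from the rate at which D accumulates in operation.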

  8. Preliminary Upper Estimate of Peak Currents in Transcranial Magnetic Stimulation at Distant Locations From a TMS Coil.

    Science.gov (United States)

    Makarov, Sergey N; Yanamadala, Janakinadh; Piazza, Matthew W; Helderman, Alex M; Thang, Niang S; Burnham, Edward H; Pascual-Leone, Alvaro

    2016-09-01

    Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of this study is to test and justify a previously known simple analytical model, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100 000 observation points, and two distinct pulse rise times, thus providing a representative number of different datasets for comparison, while also using other numerical data. Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. The simple analytical model tested in this study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women.

  9. A SIMPLE ANALYTICAL METHOD TO DETERMINE SOLAR ENERGETIC PARTICLES' MEAN FREE PATH

    International Nuclear Information System (INIS)

    He, H.-Q.; Qin, G.

    2011-01-01

    To obtain the mean free path of solar energetic particles (SEPs) for a solar event, one usually has to fit the time profiles of both flux and anisotropy from spacecraft observations to numerical simulations of SEP transport processes. This can be called a simulation method. But a reasonably good fit needs a lot of simulations, which demand a large amount of computational resources. Sometimes it is necessary to find an easy way to obtain the mean free path of SEPs quickly, for example in space weather practice. Recently, Shalchi et al. provided an approximate analytical formula for the SEP anisotropy time profile as a function of the particles' mean free path for impulsive events. In this paper, we determine the SEP mean free path by fitting the anisotropy time profiles from Shalchi et al.'s analytical formula to spacecraft observations. This new method can be called an analytical method. In addition, we obtain the SEP mean free path with the traditional simulation methods. Finally, we compare the mean free path obtained with the simulation method to that of the analytical method to show that the analytical method, with some minor modifications, can give us a good, quick approximation of the SEP mean free path for impulsive events.

  10. Analytical Method to Estimate the Complex Permittivity of Oil Samples

    Directory of Open Access Journals (Sweden)

    Lijuan Su

    2018-03-01

    In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated by measuring the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.

  11. A simple approach to estimate soil organic carbon and soil co/sub 2/ emission

    International Nuclear Information System (INIS)

    Abbas, F.

    2013-01-01

    SOC (soil organic carbon) and soil CO2 (carbon dioxide) emission are among the indicators of carbon sequestration and hence of global climate change. Researchers in developed countries benefit from advanced technologies to estimate C (carbon) sequestration. However, access to the latest technologies for conducting such estimates has always been a challenge in developing countries. This paper presents a simple and comprehensive approach for estimating SOC and soil CO2 emission from arable and forest soils. The approach includes various protocols that can be followed in laboratories of research organizations or academic institutions equipped with basic research instruments. The protocols involve soil sampling, sample analysis for selected properties, and the use of the worldwide-tested Rothamsted carbon turnover model. With this approach, it is possible to quantify SOC and soil CO2 emission on short- and long-term bases for global climate change assessment studies. (author)

  12. Theoretical estimates of spherical and chromatic aberration in photoemission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Fitzgerald, J.P.S., E-mail: fit@pdx.edu; Word, R.C.; Könenkamp, R.

    2016-01-15

    We present theoretical estimates of the mean coefficients of spherical and chromatic aberration for low energy photoemission electron microscopy (PEEM). Using simple analytic models, we find that the aberration coefficients depend primarily on the difference between the photon energy and the photoemission threshold, as expected. However, the shape of the photoelectron spectral distribution impacts the coefficients by up to 30%. These estimates should allow more precise correction of aberration in PEEM in experimental situations where the aberration coefficients and precise electron energy distribution cannot be readily measured. - Highlights: • Spherical and chromatic aberration coefficients of the accelerating field in PEEM. • Compact, analytic expressions for coefficients depending on two emission parameters. • Effect of an aperture stop on the distribution is also considered.

  13. NUMERICAL AND ANALYTIC METHODS OF ESTIMATION BRIDGES’ CONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    Y. Y. Luchko

    2010-03-01

    In this article, numerical and analytical methods for calculating the stress-strain state of bridge structures are considered. The task of increasing the reliability and accuracy of the numerical method, and its solution by means of calculations in two bases, is formulated. An analytical solution of the differential equation for the deformation of a reinforced-concrete plate under the action of local loads is also obtained.

  14. Analytical expression for the phantom generated bremsstrahlung background in high energy electron beams

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Hyoedynmaa, S; Brahme, A.

    1995-01-01

    Quantification of the bremsstrahlung photon background generated by an electron beam in a phantom is important for accurate high energy electron beam dosimetry in radiation therapy. An analytical expression has been derived for the background of phantom-generated bremsstrahlung photons in plane-parallel electron beams normally incident on phantoms of any atomic number between 4 and 92 (Be, C, H2O, Al, Cu, Ag, Pb and U). The expression can be used with fairly good accuracy in the energy range between 1 and 50 MeV. The expression is globally based on known scattering power and radiation and collision stopping power data for the phantom material at the mean energy of the incident electrons. The depth dose distribution due to the bremsstrahlung generated in the phantom is derived by folding the bremsstrahlung energy fluence with a simple analytical one-dimensional photon energy deposition kernel. The energy loss of the primary electrons and the generation, attenuation and absorption of bremsstrahlung photons are taken into account in the analytical formula. The photon energy deposition kernel is used to account for the bremsstrahlung produced at one depth that contributes to the downstream dose. A simple analytical expression for the photon energy deposition kernel is consistent with the classical analytical relation describing the photon depth dose distribution. From the surface to the practical range, the photon dose increases almost linearly due to accumulation and buildup of the photons produced at different phantom layers. At depths beyond the practical range, a simple exponential function can be used to describe the bremsstrahlung attenuation in the phantom. For comparison, distributions calculated with the ITS3 Monte Carlo code were used. Good agreement is found between the analytical expression and the Monte Carlo calculations. Deviations of 5% from the Monte Carlo calculated bremsstrahlung background are observed for high atomic number materials. The method can

  15. Analytical estimation of control rod shadowing effect for excess reactivity measurement of HTTR

    International Nuclear Information System (INIS)

    Nakano, Masaaki; Fujimoto, Nozomu; Yamashita, Kiyonobu

    1999-01-01

    The fuel addition method is generally used for the excess reactivity measurement of an initial core. The control rod shadowing effect on the excess reactivity measurement has been estimated analytically for the High Temperature Engineering Test Reactor (HTTR). Three-dimensional whole core analyses were carried out, and the movements of control rods during measurements were simulated in the calculation. It was made clear that the value of excess reactivity strongly depends on the combination of measuring control rods and compensating control rods. The differences in excess reactivity between combinations come from the control rod shadowing effect. The shadowing effect is reduced by the use of a plural number of measuring and compensating control rods to prevent their deep insertion into the core. The measured excess reactivity in the experiments is, however, smaller than the estimated value with the shadowing effect. (author)

  16. An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems

    International Nuclear Information System (INIS)

    Fazli, Roohallah; Nakhkash, Mansor

    2012-01-01

    This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. The method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations; in the next stage, it employs an analytical formula that works as a spatial filter to determine which target locations correspond to actual targets. The ability of the method is examined for both the Born and multiple scattering cases and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions. (paper)

  17. On the distribution of estimators of diffusion constants for Brownian motion

    International Nuclear Information System (INIS)

    Boyer, Denis; Dean, David S

    2011-01-01

    We discuss the distribution of various estimators for extracting the diffusion constant of single Brownian trajectories obtained by fitting the squared displacement of the trajectory. The analysis of the problem can be framed in terms of quadratic functionals of Brownian motion that correspond to the Euclidean path integral for simple harmonic oscillators with time-dependent frequencies. Explicit analytical results are given for the distribution of the diffusion constant estimator in a number of cases, and our results are confirmed by numerical simulations.
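
    A minimal sketch of the simplest estimator of this kind: fit the squared displacement of a single trajectory to ⟨x²(t)⟩ = 2Dt (one dimension) by least squares. The resulting estimate fluctuates strongly from trajectory to trajectory, which is the distribution the paper analyzes:

        import numpy as np

        rng = np.random.default_rng(2)
        D_true, dt, n = 1.0, 0.01, 1000
        steps = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), n)  # 1-D increments
        x = np.cumsum(steps)
        t = dt * np.arange(1, n + 1)

        # least-squares slope of x(t)^2 against 2t for a single trajectory
        D_hat = np.sum(2.0 * t * x**2) / np.sum((2.0 * t) ** 2)
        print("estimated D =", round(D_hat, 3))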

  18. Intermediate algebra & analytic geometry

    CERN Document Server

    Gondin, William R

    1967-01-01

    Intermediate Algebra & Analytic Geometry Made Simple focuses on the principles, processes, calculations, and methodologies involved in intermediate algebra and analytic geometry. The publication first offers information on linear equations in two unknowns and variables, functions, and graphs. Discussions focus on graphic interpretations, explicit and implicit functions, first quadrant graphs, variables and functions, determinate and indeterminate systems, independent and dependent equations, and defective and redundant systems. The text then examines quadratic equations in one variable, system

  19. The Similarity Hypothesis and New Analytical Support on the Estimation of Horizontal Infiltration into Sand

    International Nuclear Information System (INIS)

    Prevedello, C.L.; Loyola, J.M.T.

    2010-01-01

    A method based on a specific power-law relationship between the hydraulic head and the Boltzmann variable, presented using a similarity hypothesis, was recently generalized to a range of powers to satisfy the Bruce and Klute equation exactly. Here, considerations are presented on the proposed similarity assumption, and new analytical support is given for estimating the water flux density into and inside the soil, based on the concept of sorptivity and on the Buckingham-Darcy law. Results show that the new analytical solution satisfies both theories in the calculation of water flux densities and is in agreement with experimental results for water infiltrating horizontally into sand. However, the utility of this analysis still needs to be verified for a variety of differently textured soils having a diverse range of initial soil water contents.
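
    For context, the classical horizontal-infiltration relations that underlie sorptivity-based flux estimates are, in LaTeX notation (standard theory, not the paper's new solution):

        I(t) = S\,t^{1/2}, \qquad i(t) = \frac{dI}{dt} = \frac{S}{2\sqrt{t}},

    where I is the cumulative infiltration, i the infiltration flux at the surface, and S the sorptivity.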

  20. A simple algorithm for estimation of source-to-detector distance in Compton imaging

    International Nuclear Information System (INIS)

    Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.

    2008-01-01

    Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is to estimate the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at Los Alamos National Laboratory and simulated data from the same imager. Results show this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is faster than methods that employ maximum likelihood estimation because it is based on simple back projections of Compton scatter data.

  1. The Simple Analytics of the Environmental Kuznets Curve

    OpenAIRE

    James Andreoni; Arik Levinson

    1998-01-01

    Evidence suggests that some pollutants follow an inverse-U-shaped pattern relative to countries' incomes. This relationship has been called the environmental Kuznets curve. We lay out a simple and straightforward static model of the microfoundations of the pollution-income relationship. We show that the environmental Kuznets curve can be derived directly from the technological link between consumption of a desired good and abatement of its undesirable byproduct. The inverse-U shape does not depend on the dynamics of growth, po...

  2. Exploring Simple Algorithms for Estimating Gross Primary Production in Forested Areas from Satellite Data

    Directory of Open Access Journals (Sweden)

    Ramakrishna R. Nemani

    2012-01-01

    Algorithms that use remotely-sensed vegetation indices to estimate gross primary production (GPP), a key component of the global carbon cycle, have gained a lot of popularity in the past decade. Yet despite the amount of research on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of different vegetation indices from the Moderate Resolution Imaging Spectroradiometer (MODIS) in capturing the seasonal and the annual variability of GPP estimates from an optimal network of 21 FLUXNET forest tower sites. The tested indices include the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and Fraction of Photosynthetically Active Radiation absorbed by plant canopies (FPAR). Our results indicated that single vegetation indices captured 50-80% of the variability of tower-estimated GPP, but no single index performed universally well in all situations. In particular, EVI outperformed the other MODIS products in tracking seasonal variations in tower-estimated GPP, but annual mean MODIS LAI was the best estimator of the spatial distribution of annual flux-tower GPP (GPP = 615 × LAI − 376, where GPP is in g C/m²/year). This simple algorithm rehabilitated earlier approaches linking ground measurements of LAI to flux-tower estimates of GPP and produced annual GPP estimates comparable to the MODIS 17 GPP product. As such, remote sensing-based estimates of GPP continue to offer a useful alternative to estimates from biophysical models, and the choice of the most appropriate approach depends on whether the estimates are required at annual or sub-annual temporal resolution.
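
    A worked instance of the reported annual regression (GPP in g C/m²/year):

        def annual_gpp_from_lai(lai):
            # GPP = 615 x LAI - 376, the regression quoted in the abstract
            return 615.0 * lai - 376.0

        print(annual_gpp_from_lai(3.0))  # 1469.0 g C/m2/year for LAI = 3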

  3. Compact tokamak reactors. Part 1 (analytic results)

    International Nuclear Information System (INIS)

    Wootton, A.J.; Wiley, J.C.; Edmonds, P.H.; Ross, D.W.

    1996-01-01

    We discuss the possible use of tokamaks for thermonuclear power plants, in particular tokamaks with low aspect ratio and copper toroidal field coils. Three approaches are presented. First, we review and summarize the existing literature. Second, using simple analytic estimates, the size of the smallest tokamak to produce an ignited plasma is derived. This steady-state energy balance analysis is then extended to determine the smallest tokamak power plant, by including the power required to drive the toroidal field and considering two extremes of plasma current drive efficiency. The analytic results will be augmented by a numerical calculation which permits arbitrary plasma current drive efficiency; its results will be presented in Part II. Third, a scaling from any given reference reactor design to a copper toroidal field coil device is discussed. Throughout the paper the importance of various restrictions is emphasized, in particular plasma current drive efficiency, plasma confinement, plasma safety factor, plasma elongation, plasma beta, neutron wall loading, blanket availability and recirculating electric power. We conclude that the latest published reactor studies, which show little advantage in using low aspect ratio unless remarkably high efficiency plasma current drive and low safety factor are combined, can be reproduced with the analytic model.

  4. Analytical estimates and proof of the scale-free character of efficiency and improvement in Barabasi-Albert trees

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Bermejo, B. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)], E-mail: benito.hernandez@urjc.es; Marco-Blanco, J. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain); Romance, M. [Departamento de Matematica Aplicada, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)

    2009-02-23

    Estimates for the efficiency of a tree are derived, leading to new analytical expressions for the efficiency of Barabasi-Albert trees. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that preferential attachment leads to an asymptotic conservation of efficiency as the Barabasi-Albert trees grow.

  5. Analytical estimates and proof of the scale-free character of efficiency and improvement in Barabasi-Albert trees

    International Nuclear Information System (INIS)

    Hernandez-Bermejo, B.; Marco-Blanco, J.; Romance, M.

    2009-01-01

    Estimates for the efficiency of a tree are derived, leading to new analytical expressions for the efficiency of Barabasi-Albert trees. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that preferential attachment leads to an asymptotic conservation of efficiency as the Barabasi-Albert trees grow.

  6. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy-to-compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove that the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
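
    A minimal simulation sketch of the special case described (a main-terms Poisson working model in a randomized trial); the data-generating process is hypothetical and deliberately not main-terms, and the statsmodels-based fit is our illustration, not the authors' code:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 5000
        treat = rng.integers(0, 2, n)               # randomized assignment
        baseline = rng.normal(0.0, 1.0, n)          # pre-randomization covariate
        rate = np.exp(0.2 + 0.5 * treat + 0.3 * baseline**2)  # true model not main-terms
        y = rng.poisson(rate)

        X = sm.add_constant(np.column_stack([treat, baseline]))  # misspecified working model
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print("treatment coefficient (targets marginal log rate ratio 0.5):",
              round(fit.params[1], 3))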

  8. An Analytical Method for Jack-Up Riser’s Fatigue Life Estimation

    Directory of Open Access Journals (Sweden)

    Fengde Wang

    2018-01-01

    In order to determine whether a particular sea area and its sea state are suitable for a jack-up riser with surface blowout preventers, an analytical method is presented to estimate the jack-up riser's wave-loading fatigue life. In addition, an approximate formula is derived to compute the random wave force spectrum of small-scale structures. The results show that the response of the jack-up riser is a narrow-band random vibration. The infinite-water-depth dispersion relation between wavenumber and wave frequency can be used to calculate the wave force spectrum of small-scale structures. The riser's response consists mainly of the additional displacement response. The fatigue life obtained by the formula proposed by Steinberg is less than that obtained by the Bendat method.

  9. Estimate of rain evaporation rates from dual-wavelength lidar measurements: comparison against a model analytical solution

    Science.gov (United States)

    Lolli, Simone; Di Girolamo, Paolo; Demoz, Belay; Li, Xiaowen; Welton, Ellsworth J.

    2018-04-01

    Rain evaporation significantly contributes to moisture and heat cloud budgets. In this paper, we illustrate an approach to estimate the median volume raindrop diameter and the rain evaporation rate profiles from dual-wavelength lidar measurements. These observational results are compared with those provided by a model analytical solution. We made use of measurements from the multi-wavelength Raman lidar BASIL.

  10. A simple tool for estimating city-wide annual electrical energy savings from cooler surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Pomerantz, Melvin; Rosado, Pablo J.; Levinson, Ronnen

    2015-12-01

    We present a simple method to estimate the maximum possible electrical energy saving that might be achieved by increasing the albedo of surfaces in a large city. We restrict this to the “indirect effect”, the cooling of outside air that lessens the demand for air conditioning (AC). Given the power demand of the electric utilities and data about the city, we can use a single linear equation to estimate the maximum savings. For example, the result for an albedo change of 0.2 of pavements in a typical warm city in California, such as Sacramento, is that the saving is less than about 2 kWh per m2 per year. This may help decision makers choose which heat island mitigation techniques are economical from an energy-saving perspective.
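
    The headline figure invites a one-line scaling estimate (a sketch; the utility-demand regression behind the paper's linear equation is not reproduced, and the pavement area below is a made-up city value):

      # Upper-bound city-wide saving, scaled linearly from the abstract's
      # figure of less than ~2 kWh per m^2 per year at an albedo increase of 0.2.
      per_m2_per_yr = 2.0      # kWh/m^2/yr, upper bound quoted in the abstract
      d_albedo = 0.2           # albedo increase considered
      pavement_area_m2 = 50e6  # hypothetical pavement area for a mid-size city
      max_saving_gwh = per_m2_per_yr * (d_albedo / 0.2) * pavement_area_m2 / 1e6
      print(max_saving_gwh, "GWh/yr at most")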

  11. Validation of the replica trick for simple models

    Science.gov (United States)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
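
    For reference, the identity at the heart of the replica trick under discussion is the standard one (background, not a result of the paper):

      [\ln Z] = \lim_{n \to 0} \frac{[Z^n] - 1}{n},

    where [\,\cdot\,] denotes the average over disorder: the moments [Z^n] are computed for integer n and then analytically continued to n \to 0, and it is precisely the robustness of this continuation that the paper examines on solvable models.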

  12. Simultaneous pre-concentration and separation on simple paper-based analytical device for protein analysis.

    Science.gov (United States)

    Niu, Ji-Cheng; Zhou, Ting; Niu, Li-Li; Xie, Zhen-Sheng; Fang, Fang; Yang, Fu-Quan; Wu, Zhi-Yong

    2018-02-01

    In this work, fast isoelectric focusing (IEF) was successfully implemented on an open paper fluidic channel for simultaneous concentration and separation of proteins from a complex matrix. With this simple device, IEF can be finished in 10 min with a resolution of 0.03 pH units and a concentration factor of 10, as estimated with colored model proteins using smartphone-based colorimetric detection. Fast detection of albumin from human serum and glycated hemoglobin (HbA1c) from blood cells was demonstrated. In addition, off-line identification of the model proteins from the IEF fractions with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was also shown. This PAD IEF is potentially useful either for point-of-care testing (POCT) or for biomarker analysis as a cost-effective sample pretreatment method.

  13. Simple analytical technique for liquid scintillation counting of environmental carbon-14 using gel suspension method

    International Nuclear Information System (INIS)

    Okai, Tomio; Wakabayashi, Genichiro; Nagao, Kenjiro; Matoba, Masaru; Ohura, Hirotaka; Momoshima, Noriyuki; Kawamura, Hidehisa

    2000-01-01

    A simple analytical technique for liquid scintillation counting of environmental 14C was developed. A commercially available gelling agent, N-lauroyl-L-glutamic-α,γ-dibutylamide, was used for gel formation of the samples (gel suspension method) and for the subsequent liquid scintillation counting of 14C in the form of CaCO3. Our procedure for sample preparation is much simpler than the conventional methods and requires no special equipment. Self-absorption, stability and reproducibility of gel suspension samples were investigated in order to evaluate the characteristics of the gel suspension method for 14C activity measurement. The self-absorption factor is about 70% and decreases slightly as the CaCO3 weight increases. This is considered to be mainly due to the absorption of β-rays and scintillation light by the CaCO3 sample itself. No change in the counting rate of a gel suspension sample was observed for more than 2 years after sample preparation. Four samples were used to check the reproducibility of the sample preparation method; the same counting rates of 14C activity were obtained within the counting error. No change in the counting rate was observed for the 're-gelated' sample. These results show that the gel suspension method is appropriate for 14C activity measurement by liquid scintillation counting and useful for long-term preservation of samples for repeated measurement. The above analytical technique was applied to actual environmental samples in Fukuoka prefecture, Japan. The results obtained were comparable with those of other researchers and appear to be reasonable. Therefore, the newly developed technique is useful for routine monitoring of environmental 14C. (author)

  14. Analytical calculation of magnet interactions in 3D

    OpenAIRE

    Yonnet , Jean-Paul; Allag , Hicham

    2009-01-01

    A synthesis of all the analytical expressions of the interaction energy, force components and torque components is presented. It allows the analytical calculation of all the interactions when the magnetizations are in any direction. The 3D analytical expressions are difficult to obtain, but the torque and force expressions are very simple to use.

  15. The Theory of Ratio Scale Estimation: Saaty's Analytic Hierarchy Process

    OpenAIRE

    Patrick T. Harker; Luis G. Vargas

    1987-01-01

    The Analytic Hierarchy Process developed by Saaty (Saaty, T. L. 1980. The Analytic Hierarchy Process. McGraw-Hill, New York.) has proven to be an extremely useful method for decision making and planning. However, some researchers in these areas have raised concerns over the theoretical basis underlying this process. This paper addresses currently debated issues concerning the theoretical foundations of the Analytic Hierarchy Process. We also illustrate through proof and through examples the v...

  16. Estimating the approximation error when fixing unessential factors in global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sobol' , I.M. [Institute for Mathematical Modelling of the Russian Academy of Sciences, Moscow (Russian Federation); Tarantola, S. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: stefano.tarantola@jrc.it; Gatelli, D. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: debora.gatelli@jrc.it; Kucherenko, S.S. [Imperial College London (United Kingdom); Mauntz, W. [Department of Biochemical and Chemical Engineering, Dortmund University (Germany)

    2007-07-15

    One of the major settings of global sensitivity analysis is that of fixing non-influential factors, in order to reduce the dimensionality of a model. However, this is often done without knowing the magnitude of the approximation error being produced. This paper presents a new theorem for the estimation of the average approximation error generated when fixing a group of non-influential factors. A simple function where analytical solutions are available is used to illustrate the theorem. The numerical estimation of small sensitivity indices is discussed.
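
    The flavor of the theorem can be written compactly (a hedged paraphrase in standard Sobol' notation, not a quotation of the paper). Split the factors as x = (y, z), fix the group z at a random point z_0, and let \delta(z_0) be the mean-square approximation error; averaging over z_0 gives

      E_{z_0}[\delta(z_0)] = \iiint \big( f(y,z) - f(y,z_0) \big)^2 \, dy \, dz \, dz_0 = 2\, D\, S_z^{tot},

    i.e. the average error is twice the total variance attributable to the fixed group (D is the total output variance and S_z^{tot} the total sensitivity index of z), which is why fixing truly non-influential factors is safe.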

  17. The link between employee attitudes and employee effectiveness: Data matrix of meta-analytic estimates based on 1161 unique correlations

    Directory of Open Access Journals (Sweden)

    Michael M. Mackay

    2016-09-01

    This article offers a correlation matrix of meta-analytic estimates between various employee job attitudes (i.e., employee engagement, job satisfaction, job involvement, and organizational commitment) and indicators of employee effectiveness (i.e., focal performance, contextual performance, turnover intention, and absenteeism). The meta-analytic correlations in the matrix are based on over 1100 individual studies representing over 340,000 employees. Data were collected worldwide via employee self-report surveys. Structural path analyses based on the matrix, and the interpretation of the data, can be found in "Investigating the incremental validity of employee engagement in the prediction of employee effectiveness: a meta-analytic path analysis" (Mackay et al., 2016) [1]. Keywords: Meta-analysis, Job attitudes, Job performance, Employee, Engagement, Employee effectiveness

  18. A SIMPLE TOY MODEL OF THE ADVECTIVE-ACOUSTIC INSTABILITY. I. PERTURBATIVE APPROACH

    International Nuclear Information System (INIS)

    Foglizzo, T.

    2009-01-01

    Some general properties of the advective-acoustic instability are described and understood using a toy model, which is simple enough to allow for analytical estimates of the eigenfrequencies. The essential ingredients of this model, in the unperturbed regime, are a stationary shock and a subsonic region of deceleration. For the sake of analytical simplicity, the two-dimensional unperturbed flow is parallel and the deceleration is produced adiabatically by an external potential. The instability mechanism is determined unambiguously as the consequence of a cycle between advected and acoustic perturbations. The purely acoustic cycle, considered alone, is proven to be stable in this flow. Its contribution to the instability can be either constructive or destructive. A frequency cutoff is associated with the advection time through the region of deceleration. This cutoff frequency explains why the instability favors eigenmodes with a low frequency and a large horizontal wavelength. The relation between the instability occurring in this highly simplified toy model and the properties of standing accretion shock instability observed in the numerical simulations of stellar core collapse is discussed. This simple setup is proposed as a benchmark test to evaluate the accuracy, in the linear regime, of numerical simulations involving this instability. We illustrate such benchmark simulations in a companion paper.

  19. Analytical calculation of dE/dx cluster-charge loss due to threshold effects

    International Nuclear Information System (INIS)

    Brady, F.P.; Dunn, J.

    1997-01-01

    This letter presents a simple analytical approximation which allows one to estimate the effect of ADC threshold on the measured cluster-charge size as used for dE/dx determinations. The idea is to gain some intuitive understanding of the cluster-charge loss and not to replace more accurate simulations. The method is applied to the multiple sampling measurements of energy loss in the main time projection chambers (TPCs) of the NA49 experiment at CERN SPS. The calculations are in reasonable agreement with data. (orig.)

  20. A family of analytical solutions of a nonlinear diffusion-convection equation

    Science.gov (United States)

    Hayek, Mohamed

    2018-01-01

    Despite its popularity in many engineering fields, the nonlinear diffusion-convection equation has no general analytical solution. This work presents a family of closed-form analytical traveling wave solutions for the nonlinear diffusion-convection equation with power-law nonlinearities. This kind of equation typically appears in nonlinear problems of flow and transport in porous media. The solutions that are addressed are simple and fully analytical. Three classes of analytical solutions are presented depending on the type of the nonlinear diffusion coefficient (increasing, decreasing or constant). It is shown that the structure of the traveling wave solution is strongly related to the diffusion term. The main advantage of the proposed solutions is that they are presented in a unified form, contrary to existing solutions in the literature where the derivation of each solution depends on the specific values of the diffusion and convection parameters. The proposed closed-form solutions are simple to use, do not require any numerical implementation, and may be implemented in a simple spreadsheet. The analytical expressions are also useful for mathematically analyzing the structure and properties of the solutions.

  1. Estimates of the Damage Costs of Climate Change. Part 1. Benchmark Estimates

    International Nuclear Information System (INIS)

    Tol, R.S.J.

    2002-01-01

    A selection of the potential impacts of climate change - on agriculture, forestry, unmanaged ecosystems, sea level rise, human mortality, energy consumption, and water resources - are estimated and valued in monetary terms. Estimates are derived from globally comprehensive, internally consistent studies using GCM-based scenarios. An underestimate of the uncertainty is given. New impact studies can be included following the meta-analytical methods described here. A 1°C increase in the global mean surface air temperature would have, on balance, a positive effect on the OECD, China, and the Middle East, and a negative effect on other countries. Confidence intervals of regionally aggregated impacts, however, include both positive and negative impacts for all regions. Global estimates depend on the aggregation rule. Using a simple sum, the world impact of a 1°C warming would be a positive 2% of GDP, with a standard deviation of 1%. Using globally averaged values, the world impact would be a negative 3% (standard deviation: 1%). Using equity weighting, the world impact would amount to 0% (standard deviation: 1%).

  2. Albedo analytical method for multi-scattered neutron flux calculation in cavity

    International Nuclear Information System (INIS)

    Shin, Kazuo; Selvi, S.; Hyodo, Tomonori

    1986-01-01

    A simple formula which describes the multi-scattered neutron flux in a spherical cavity was derived based on the albedo concept. The formula treats a neutron source which has an arbitrary energy-angle distribution and is placed at any point in the cavity. The derived formula was applied to the estimation of neutron fluxes in two cavities, i.e. a spherical concrete cell with a 14-MeV neutron source at the center and the "YAYOI" reactor cavity with a pencil beam of reactor neutrons. The results of the analytical formula agreed very well with the reference data in both problems. It was concluded that the formula is applicable for estimating the neutron fluxes in a spherical cell, except for special cases in which tangential source neutrons are incident on the cavity wall. (author)

  3. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    Energy Technology Data Exchange (ETDEWEB)

    Andrade-Ines, Eduardo [Institute de Mécanique Céleste et des Calcul des Éphémérides—Observatoire de Paris, 77 Avenue Denfert Rochereau, F-75014 Paris (France); Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, 91109 Pasadena, CA (United States)

    2017-04-01

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order estimates of the forced eccentricity and secular frequency. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of applicability are given.
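
    For orientation, the first-order solution being corrected has a classical closed form (quoted from the standard secular-perturbation literature, not from this paper, so treat it as background): the forced eccentricity of the planetary orbit is

      e_F \simeq \frac{5}{4} \, \frac{a}{a_B} \, \frac{e_B}{1 - e_B^2},

    with a and a_B the semimajor axes of the planet and the binary and e_B the binary eccentricity, while the secular frequency scales as g \propto n \, (m_B/m_A) \, (a/a_B)^3 \, (1 - e_B^2)^{-3/2}. The polynomial corrective factors described in the abstract multiply these first-order estimates.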

  4. Exploration of Simple Analytical Approaches for Rapid Detection of Pathogenic Bacteria

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Salma [Iowa State Univ., Ames, IA (United States)

    2005-01-01

    Many of the current methods for pathogenic bacterial detection require long sample-preparation and analysis times, as well as complex instrumentation. This dissertation explores simple analytical approaches (e.g., flow cytometry and diffuse reflectance spectroscopy) that may be applied towards the ideal requirements of a microbial detection system, through method and instrumentation development, and by the creation and characterization of immunosensing platforms. This dissertation is organized into six sections. In the general Introduction section a literature review on several of the key aspects of this work is presented. First, different approaches for detection of pathogenic bacteria are reviewed, with a comparison of the relative strengths and weaknesses of each approach. A general overview regarding diffuse reflectance spectroscopy is then presented. Next, the structure and function of self-assembled monolayers (SAMs) formed from organosulfur molecules at gold, and micrometer and sub-micrometer patterning of biomolecules using SAMs, are discussed. This section is followed by four research chapters, presented as separate manuscripts. Chapter 1 describes the efforts and challenges towards the creation of immunosensing platforms that exploit the flexibility and structural stability of SAMs of thiols at gold. 1H,1H,2H,2H-perfluorodecyl-1-thiol (PFDT) and dithio-bis(succinimidyl propionate)-(DSP)-derived SAMs were used to construct the platform. Chapter 2 describes the characterization of the PFDT- and DSP-derived SAMs, and the architectures formed when they are coupled to antibodies as well as target bacteria. These studies used infrared reflection spectroscopy (IRS), X-ray photoelectron spectroscopy (XPS), and the electrochemical quartz crystal microbalance (EQCM). Chapter 3 presents a new, sensitive, and portable diffuse-reflection-based technique for the rapid identification and quantification of pathogenic bacteria. Chapter 4 reports research efforts in the

  5. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    Science.gov (United States)

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. By comparing the accuracy of the TSA, FSA, and UMSA algorithms, it was found that the UMSA algorithm performed better than the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary decreased the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm, and by 29.66% compared with the TSA algorithm. These are significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be expected if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
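
    The three-band family referred to here has a standard form in the literature (a generic statement with red/red-edge/NIR band choices; the calibrated coefficients and optimal wavelengths of this study are not reproduced):

      Chla \propto \left[ R_{rs}^{-1}(\lambda_1) - R_{rs}^{-1}(\lambda_2) \right] R_{rs}(\lambda_3),

    and the four-band and unified (UMSA) variants add bands and tunable weights so that, as the abstract notes, the TSA and FSA fall out as special cases.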

  6. A simple model for calculating air pollution within street canyons

    Science.gov (United States)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters, whose functional forms have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows good agreement between estimated and observed hourly concentrations (e.g. fractional biases of -10.3% for NOx and +7.8% for NO2). The agreement between estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model performs better for wind speeds above 2 m/s than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
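
    From the scaling described in the abstract, the core of such a model can be written down directly (a sketch of the general form only; the constants a and b below are placeholders, not the calibrated SEUS parameters):

      import math

      def street_canyon_conc(emission, width, u_wind, n_veh, c_bg,
                             a=0.1, b=0.3):
          """Scaled street-canyon concentration: C = E / (W * u_d) + C_bg.

          u_d combines wind- and traffic-produced turbulence; a and b are
          assumed empirical constants standing in for the ones SEUS fits
          to full-scale data from four European cities.
          """
          u_traffic = b * math.sqrt(n_veh)            # assumed traffic term
          u_d = math.sqrt((a * u_wind) ** 2 + u_traffic ** 2)
          return emission / (width * u_d) + c_bg

      print(street_canyon_conc(emission=50.0, width=20.0, u_wind=3.0,
                               n_veh=100, c_bg=30.0))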

  7. Simple estimating method of damages of concrete gravity dam based on linear dynamic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, T.; Kanenawa, K.; Yamaguchi, Y. [Public Works Research Institute, Tsukuba, Ibaraki (Japan). Hydraulic Engineering Research Group

    2004-07-01

    Due to the occurrence of large earthquakes like the Kobe Earthquake in 1995, there is a strong need to verify the seismic resistance of dams against much larger earthquake motions than those considered in the present design standard in Japan. Problems exist in using nonlinear analysis to evaluate the safety of dams: the assumed material properties strongly influence the results, and the results differ greatly according to the damage estimation model or analysis program. This paper reports evaluation indices based on a linear dynamic analysis method and the characteristics of crack progression in concrete gravity dams with different shapes, obtained using a nonlinear dynamic analysis method. The study concludes that if simple linear dynamic analysis is appropriately conducted to estimate tensile stress at potential crack-initiation locations, the crack damage can be predicted approximately. 4 refs., 1 tab., 13 figs.

  8. Development and validation of a simple high-performance liquid chromatography analytical method for simultaneous determination of phytosterols, cholesterol and squalene in parenteral lipid emulsions.

    Science.gov (United States)

    Novak, Ana; Gutiérrez-Zamora, Mercè; Domenech, Lluís; Suñé-Negre, Josep M; Miñarro, Montserrat; García-Montoya, Encarna; Llop, Josep M; Ticó, Josep R; Pérez-Lozano, Pilar

    2018-02-01

    A simple analytical method for simultaneous determination of phytosterols, cholesterol and squalene in lipid emulsions was developed owing to increased interest in their clinical effects. Method development was based on commonly used stationary (C18, C8 and phenyl) and mobile phases (mixtures of acetonitrile, methanol and water) under isocratic conditions. Differences in stationary phases resulted in peak overlapping or coelution of different peaks. The best separation of all analyzed compounds was achieved on Zorbax Eclipse XDB C8 (150 × 4.6 mm, 5 μm; Agilent) with ACN-H2O-MeOH, 80:19.5:0.5 (v/v/v). In order to achieve a shorter analysis time, the method was further optimized and a gradient separation was established. The optimized analytical method was validated and tested for routine use in lipid emulsion analyses. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Analytic Models of Brown Dwarfs and the Substellar Mass Limit

    Directory of Open Access Journals (Sweden)

    Sayantan Auddy

    2016-01-01

    We present the analytic theory of brown dwarf evolution and the lower mass limit of the hydrogen-burning main-sequence stars, and introduce some modifications to the existing models. We give an exact expression for the pressure of an ideal nonrelativistic Fermi gas at a finite temperature, therefore allowing for nonzero values of the degeneracy parameter. We review the derivation of surface luminosity using an entropy matching condition and the first-order phase transition between the molecular hydrogen in the outer envelope and the partially ionized hydrogen in the inner region. We also discuss the results of modern simulations of the plasma phase transition, which illustrate the uncertainties in determining its critical temperature. Based on the existing models and with some simple modifications, we find the maximum mass for a brown dwarf to be in the range 0.064 M⊙–0.087 M⊙. An analytic formula for the luminosity evolution allows us to estimate the time period of the nonsteady-state (i.e., non-main-sequence) nuclear burning for substellar objects. We also calculate the evolution of very low mass stars. We estimate that ≃11% of stars take longer than 10^7 yr to reach the main sequence, and ≃5% of stars take longer than 10^8 yr.

  10. Inter-particle gap distribution and spectral rigidity of the totally asymmetric simple exclusion process with open boundaries

    International Nuclear Information System (INIS)

    Krbalek, Milan; Hrabak, Pavel

    2011-01-01

    We consider the one-dimensional totally asymmetric simple exclusion process (TASEP model) with open boundary conditions and present the analytical computations leading to the exact formula for distance clearance distribution, i.e. probability density for a clear distance between subsequent particles of the model. The general relation is rapidly simplified for the middle part of the one-dimensional lattice. Both the analytical formulas and their approximations are compared with the numerical representation of the TASEP model. Such a comparison is presented for particles occurring in the internal part as well as in the boundary part of the lattice. Furthermore, we introduce the pertinent estimation for the so-called spectral rigidity of the model. The results obtained are sequentially discussed within the scope of vehicular traffic theory.
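
    A direct way to generate the gap (clearance) statistics the paper computes analytically is a random-sequential-update simulation of the open-boundary TASEP (a sketch; the rates alpha and beta and the update rule are textbook choices, not taken from the paper):

      import numpy as np

      rng = np.random.default_rng(1)
      L, alpha, beta, steps = 200, 0.6, 0.6, 200000
      tau = np.zeros(L, dtype=int)               # occupation numbers

      for _ in range(steps):
          i = rng.integers(-1, L)                # -1 selects the entry move
          if i == -1:                            # injection at the left boundary
              if tau[0] == 0 and rng.random() < alpha:
                  tau[0] = 1
          elif i == L - 1:                       # extraction at the right boundary
              if tau[-1] == 1 and rng.random() < beta:
                  tau[-1] = 0
          elif tau[i] == 1 and tau[i + 1] == 0:  # bulk hop to the right
              tau[i], tau[i + 1] = 0, 1

      occupied = np.flatnonzero(tau)
      gaps = np.diff(occupied) - 1               # clear distances between particles
      # Single late-time snapshot; average over many snapshots for smooth statistics.
      print(np.bincount(gaps))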

  11. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

    Several methods exist to estimate E and T. The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation-transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit schemes is -0.03 mm/day, and a paired t-test of the mean difference (p-value = 0.042, t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R2 value of 0.82.

  12. A simple and efficient algorithm to estimate daily global solar radiation from geostationary satellite data

    International Nuclear Information System (INIS)

    Lu, Ning; Qin, Jun; Yang, Kun; Sun, Jiulin

    2011-01-01

    Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.

  13. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  14. Superhydrophobic analyte concentration utilizing colloid-pillar array SERS substrates.

    Science.gov (United States)

    Wallace, Ryan A; Charlton, Jennifer J; Kirchner, Teresa B; Lavrik, Nickolay V; Datskos, Panos G; Sepaniak, Michael J

    2014-12-02

    The ability to detect a few molecules present in a large sample is of great interest for the detection of trace components in both medicinal and environmental samples. Surface-enhanced Raman spectroscopy (SERS) is a technique that can be utilized to detect molecules at very low absolute numbers. However, detection at trace concentration levels in real samples requires properly designed delivery and detection systems. The following work involves superhydrophobic surfaces whose framework is deterministic or stochastic silicon pillar arrays, formed by lithographic or metal-dewetting protocols, respectively. In order to generate the necessary plasmonic substrate for SERS detection, a simple and flow-stable Ag colloid was added to the functionalized pillar array system via soaking. Native pillars and pillars with hydrophobic modification are used. The pillars provide a means to concentrate analyte via superhydrophobic droplet evaporation effects. A ≥ 100-fold concentration of analyte was estimated, with a limit of detection of 2.9 × 10^-12 M for mitoxantrone dihydrochloride. Additionally, analytes were delivered to the surface via a multiplex approach in order to demonstrate the ability to control droplet size and placement for scaled-up use in real-world applications. Finally, a concentration process involving transport and sequestration based on surface-treatment-selective wicking is demonstrated.

  15. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS), with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV ≤ 20%) across a wide linear concentration range from 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 times larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrates the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95% uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
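
    The variance budget follows from quadrature addition of independent components, and the reported figures can be checked directly:

      CV_T = \sqrt{CV_{pre}^2 + CV_A^2}, \qquad \sqrt{26^2 + 13^2} \approx 29\%, \qquad \sqrt{69^2 + 7^2} \approx 69\%,

    reproducing the reported total variation of 29-70% and making clear that the pre-analytical sampling term dominates.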

  16. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.

  17. A simple localized-itinerant model for PrAl3: crystal field and exchange effects

    International Nuclear Information System (INIS)

    Ranke, P.J. von; Palermo, L.

    1990-01-01

    We present a simple magnetic model for PrAl3. The effects of the crystal field are treated using a reduced set of levels, with the corresponding wave functions extracted from the actual crystal field levels of Pr3+ in hexagonal symmetry. The exchange between the 4f and conduction electrons is treated within a molecular field approximation. An analytical magnetic equation of state is derived and the magnetic behaviour discussed. The parameters of the model are estimated from a fit to the inverse susceptibility of PrAl3 given in the literature. (author)

  18. Analytical model of diffuse reflectance spectrum of skin tissue

    Energy Technology Data Exchange (ETDEWEB)

    Lisenko, S A; Kugeiko, M M; Firago, V A [Belarusian State University, Minsk (Belarus); Sobchuk, A N [B.I. Stepanov Institute of Physics, National Academy of Sciences of Belarus, Minsk (Belarus)

    2014-01-31

    We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions. (biophotonics)

  19. A pilot study of a simple screening technique for estimation of salivary flow.

    Science.gov (United States)

    Kanehira, Takashi; Yamaguchi, Tomotaka; Takehara, Junji; Kashiwazaki, Haruhiko; Abe, Takae; Morita, Manabu; Asano, Kouzo; Fujii, Yoshinori; Sakamoto, Wataru

    2009-09-01

    The purpose of this study was to develop a simple screening technique for estimation of salivary flow and to test the usefulness of the method for determining decreased salivary flow. A novel assay system comprising 3 spots containing 30 microg starch and 49.6 microg potassium iodide per spot on filter paper and a coloring reagent, based on the color reaction of iodine-starch and the theory of paper chromatography, was designed. We investigated the relationship between resting whole salivary flow rates and the number of colored spots on the filter produced by 41 hospitalized subjects. A significant negative correlation was observed between the number of colored spots and the resting salivary flow rate (n = 41; r = -0.803), suggesting that this simple technique may be useful for screening for decreased salivary flow, including in bedridden and disabled elderly people.

  20. A novel analytical solution for estimating aquifer properties within a horizontally anisotropic aquifer bounded by a stream

    Science.gov (United States)

    Huang, Yibin; Zhan, Hongbin; Knappett, Peter S. K.

    2018-04-01

    Past studies modeling stream-aquifer interaction commonly account for vertical anisotropy in hydraulic conductivity, but rarely address horizontal anisotropy, which may exist in certain sedimentary environments. If present, horizontal anisotropy will greatly impact stream depletion and the amount of recharge a pumped aquifer captures from the river. This scenario requires a different and somewhat more sophisticated mathematical approach to model and interpret pumping test results than previous models used to describe captured recharge from rivers. In this study, a new mathematical model is developed to describe the spatiotemporal distribution of drawdown from stream-bank pumping with a well screened across a horizontally anisotropic, confined aquifer, laterally bounded by a river. This new model is used to estimate four aquifer parameters, including the magnitude and directions of the major and minor principal transmissivities and the storativity, based on observed drawdown-time curves from a minimum of three non-collinear observation wells. In order to demonstrate the efficacy of the new model, a MATLAB script file was programmed to conduct a four-parameter inversion. By comparing the results of analytical and numerical inversions, the accuracy of the estimates from both inversions is acceptable, but the MATLAB program sometimes becomes problematic because of the difficulty of separating local minima from the global minimum. The new analytical model appears applicable and robust in estimating parameter values for a horizontally anisotropic aquifer laterally bounded by a stream. In addition, the new model calculates the stream depletion rate as a function of stream-bank pumping. Unique to horizontally anisotropic and homogeneous aquifers, the stream depletion rate at any given pumping rate depends closely on the horizontal anisotropy ratio and the direction of the principal transmissivities relative to

  1. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    Science.gov (United States)

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H0 (when their distribution is uniform) or from the alternative hypothesis H1 (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H0, but it also estimates the probability that each specific p value originates from H0. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H0 than from H1. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Improved hybridization of Fuzzy Analytic Hierarchy Process (FAHP) algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW)

    Science.gov (United States)

    Zaiwani, B. E.; Zarlis, M.; Efendi, S.

    2018-03-01

    This research improves on the hybridization of the Fuzzy Analytic Hierarchy Process (FAHP) algorithm with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS), previously used to select the best bank chief inspector based on several qualitative and quantitative criteria with various priorities. To improve that earlier performance, a hybridization of the FAHP algorithm with the Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW) algorithm was adopted, applying FAHP to the weighting process and SAW to the ranking process to determine the promotion of employees at a government institution. The improved average Efficiency Rate (ER) is 85.24%, which means that this research succeeded in improving on the previous result of 77.82%. Keywords: Ranking and Selection, Fuzzy AHP, Fuzzy TOPSIS, FMADM-SAW.
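
    The SAW ranking step is short enough to sketch in full (generic SAW; the decision matrix and the weights that FAHP would supply are invented for illustration):

      import numpy as np

      # Decision matrix: rows = candidate employees, cols = criteria.
      scores = np.array([[70.0, 80.0, 4.0],
                         [90.0, 60.0, 2.0],
                         [80.0, 75.0, 3.0]])
      weights = np.array([0.5, 0.3, 0.2])      # would come from FAHP in the paper
      benefit = np.array([True, True, False])  # third criterion treated as a cost

      # SAW: normalize benefit criteria by column max, cost criteria by min/x,
      # then rank candidates by the weighted sum of normalized scores.
      norm = np.where(benefit, scores / scores.max(axis=0),
                      scores.min(axis=0) / scores)
      ranking = norm @ weights
      print(ranking, ranking.argsort()[::-1])  # SAW scores and rank order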

  3. Is simple nephrectomy truly simple? Comparison with the radical alternative.

    Science.gov (United States)

    Connolly, S S; O'Brien, M Frank; Kunni, I M; Phelan, E; Conroy, R; Thornhill, J A; Grainger, R

    2011-03-01

    The Oxford English Dictionary defines the term "simple" as "easily done" and "uncomplicated". We tested the validity of this terminology in relation to open nephrectomy surgery. We retrospectively reviewed 215 patients undergoing open simple (n = 89) or radical (n = 126) nephrectomy in a single university-affiliated institution between 1998 and 2002. Operative time (OT), estimated blood loss (EBL), operative complications (OC) and length of stay in hospital (LOS) were analysed. Statistical analysis employed Fisher's exact test and Stata Release 8.2. Simple nephrectomy was associated with shorter OT (mean 126 vs. 144 min; p = 0.002), reduced EBL (mean 729 vs. 859 cc; p = 0.472), lower OC (9 vs. 17%; p = 0.087), and shorter LOS (mean 6 vs. 8 days; p < 0.001). All parameters suggest a favourable outcome for the simple nephrectomy group, supporting the use of this terminology. This implies "simple" nephrectomies are truly easier to perform, with fewer complications, than their radical counterparts.

  4. Inclusion of Topological Measurements into Analytic Estimates of Effective Permeability in Fractured Media

    Science.gov (United States)

    Sævik, P. N.; Nixon, C. W.

    2017-11-01

    We demonstrate how topology-based measures of connectivity can be used to improve analytical estimates of effective permeability in 2-D fracture networks, which is one of the key parameters necessary for fluid flow simulations at the reservoir scale. Existing methods in this field usually compute fracture connectivity using the average fracture length. This approach is valid for ideally shaped, randomly distributed fractures, but is not immediately applicable to natural fracture networks. In particular, natural networks tend to be more connected than randomly positioned fractures of comparable lengths, since natural fractures often terminate in each other. The proposed topological connectivity measure is based on the number of intersections and fracture terminations per sampling area, which for statistically stationary networks can be obtained directly from limited outcrop exposures. To evaluate the method, numerical permeability upscaling was performed on a large number of synthetic and natural fracture networks, with varying topology and geometry. The proposed method was seen to provide much more reliable permeability estimates than the length-based approach, across a wide range of fracture patterns. We summarize our results in a single, explicit formula for the effective permeability.

  5. Recent analytical applications of magnetic nanoparticles

    Directory of Open Access Journals (Sweden)

    Mohammad Faraji

    2016-07-01

    Analytical chemistry has experienced, as well as other areas of science, a big change due to the needs and opportunities provided by analytical nanoscience and nanotechnology. Nanotechnology is increasingly proving to be a powerful ally of analytical chemistry in achieving its objectives and simplifying analytical processes. Moreover, the information needs arising from growing nanotechnological activity are opening an exciting new field of action for analytical chemists. Magnetic nanoparticles have been used in various fields owing to their unique properties, including large specific surface area and simple separation with magnetic fields. For analytical applications, they have been used mainly in sample preparation techniques (magnetic solid phase extraction with different advanced functional groups - layered double hydroxide, β-cyclodextrin, carbon nanotubes, graphene, polymers, octadecylsilane - and its automation), microextraction techniques, enantioseparation and chemosensors. This review summarizes the basic principles and achievements of magnetic nanoparticles in sample preparation techniques, enantioseparation and chemosensors. Some selected articles recently published (2010-2016) are also reviewed and discussed.

  6. Bioanalytical HPTLC Method for Estimation of Zolpidem Tartrate from Human Plasma

    OpenAIRE

    Abhay R. Shirode; Bharti G. Jadhav; Vilasrao J. Kadam

    2016-01-01

    A simple and selective high-performance thin-layer chromatographic (HPTLC) method was developed and validated for the estimation of zolpidem tartrate from human plasma, using eperisone hydrochloride as an internal standard (IS). The analyte and IS were extracted from human plasma by a liquid-liquid extraction (LLE) technique. The Camag HPTLC system, operated with the winCATS software (ver. 1.4.1.8), was used for the proposed bioanalytical work. Planar chromatographic development was carried out with the h...

  7. Experimental investigation and numerical simulation of 3He gas diffusion in simple geometries: implications for analytical models of 3He MR lung morphometry.

    Science.gov (United States)

    Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M

    2010-06-01

    Models of lung acinar geometry have been proposed to analytically describe the diffusion of 3He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the 3He ADC. In this work, major limitations in these analytical models are highlighted in simple diffusion-weighted experiments with 3He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using a finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced, and the basic reliance of the cylinder model and other ADC-based approaches on Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that the physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for 3He ADC measurements in human lungs. (c) 2010 Elsevier Inc. All rights reserved.

  8. Reliability of stellar inclination estimated from asteroseismology: analytical criteria, mock simulations and Kepler data analysis

    Science.gov (United States)

    Kamiaka, Shoya; Benomar, Othman; Suto, Yasushi

    2018-05-01

    Advances in asteroseismology of solar-like stars now provide a unique method to estimate the stellar inclination i⋆. This makes it possible to evaluate the spin-orbit angle of transiting planetary systems, in a complementary fashion to the Rossiter-McLaughlin effect, a well-established method to estimate the projected spin-orbit angle λ. Although the asteroseismic method has been broadly applied to the Kepler data, its reliability has yet to be assessed intensively. In this work, we evaluate the accuracy of i⋆ from asteroseismology of solar-like stars using 3000 simulated power spectra. We find that the low signal-to-noise ratio of the power spectra induces a systematic underestimate (overestimate) bias for stars with high (low) inclinations. We derive analytical criteria for a reliable asteroseismic estimate, which indicate that reliable measurements are possible in the range 20° ≲ i⋆ ≲ 80° only for stars with high signal-to-noise ratio. We also analyse and measure the stellar inclination of 94 Kepler main-sequence solar-like stars, among which 33 are planetary hosts. According to our reliability criteria, a third of them (9 with planets, 22 without) have accurate stellar inclinations. Comparison of our asteroseismic estimates of v sin i⋆ against spectroscopic measurements indicates that the latter suffer from a large uncertainty, possibly due to the modeling of macro-turbulence, especially for stars with projected rotation speed v sin i⋆ ≲ 5 km/s. This reinforces earlier claims, and the stellar inclination estimated from the combination of spectroscopic measurements and photometric variation for slowly rotating stars needs to be interpreted with caution.

  9. Linear Calibration – Is It so Simple?

    International Nuclear Information System (INIS)

    Arsova, Diana; Babanova, Sofia; Mandjukov, Petko

    2009-01-01

    The calibration procedure is an important part of instrumental analysis. Usually it is not the major uncertainty source in the whole analytical procedure; however, improper calibration can cause a significant bias of the analytical results from the real (certified) value. Standard Gaussian linear regression is the most frequently used mathematical approach for the estimation of calibration function parameters. The present article discusses some less popular methods of parameter estimation that are highly recommended in certain cases, such as weighted regression, orthogonal regression, robust regression and bracketing calibration, and also presents some useful approximations. Special attention is paid to the statistical criteria to be used for selecting a proper calibration model. A standard UV-VIS spectrometric procedure for the determination of phosphates in water is used as a practical example. Several different approaches for estimating the contribution of calibration to the overall uncertainty of the analytical result are presented and compared
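
    The record names weighted regression among the recommended alternatives to ordinary least squares. A minimal sketch of a weighted linear calibration follows, assuming hypothetical phosphate-standard data whose measurement error grows with signal (the usual motivation for weighting); the numbers are illustrative, not from the article.

        import numpy as np

        # Hypothetical calibration data: phosphate standards (mg/L) vs. absorbance.
        conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
        absb = np.array([0.002, 0.051, 0.103, 0.198, 0.409, 0.801])
        sd = 0.002 + 0.01 * absb                  # assumed heteroscedastic error

        # Weighted least squares: minimise sum w_i*(y_i - a - b*x_i)^2, w = 1/sd^2.
        w = 1.0 / sd**2
        X = np.column_stack([np.ones_like(conc), conc])
        lhs = X.T @ (w[:, None] * X)
        rhs = X.T @ (w * absb)
        intercept, slope = np.linalg.solve(lhs, rhs)
        print(f"absorbance = {intercept:.4f} + {slope:.4f} * conc")

        # Inverse prediction: concentration of an unknown from its absorbance.
        print("unknown sample:", (0.250 - intercept) / slope, "mg/L")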

  10. 3-D discrete analytical ridgelet transform.

    Science.gov (United States)

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform within the discrete analytical geometry theory, by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.

  11. Median of patient results as a tool for assessment of analytical stability.

    Science.gov (United States)

    Jørgensen, Lars Mønster; Hansen, Steen Ingemann; Petersen, Per Hyltoft; Sölétormos, György

    2015-06-15

    In spite of the well-established external quality assessment and proficiency testing surveys of analytical quality performance in laboratory medicine, a simple tool to monitor the long-term analytical stability as a supplement to the internal control procedures is often needed. Patient data from daily internal control schemes were used for monthly appraisal of the analytical stability. This was accomplished by using the monthly medians of patient results to disclose deviations from analytical stability, and by comparing divergences with the quality specifications for allowable analytical bias based on biological variation. Seventy-five percent of the twenty analytes assayed on two COBAS INTEGRA 800 instruments performed in accordance with the optimum and the desirable specifications for bias. Patient results applied in analytical quality performance control procedures are the most reliable source of material, as they represent the genuine substance of the measurements and therefore circumvent the problems associated with non-commutable materials in external assessment. Patient medians in the monthly monitoring of analytical stability in laboratory medicine are an inexpensive, simple and reliable tool to monitor the steadiness of the analytical practice. Copyright © 2015 Elsevier B.V. All rights reserved.
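
    A minimal sketch of the median-monitoring idea, assuming a hypothetical stream of daily patient results and an allowable-bias specification derived from biological variation; the analyte, numbers and limits are illustrative, not from the study.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)

        # Hypothetical daily patient results for one analyte over a year,
        # with a small analytical drift introduced in the final quarter.
        days = pd.date_range("2014-01-01", periods=365, freq="D")
        results = rng.normal(loc=140.0, scale=4.0, size=365)   # e.g. sodium, mmol/L
        results[270:] += 1.5                                   # simulated drift

        target = 140.0       # long-term median of the patient population
        allowable = 0.5      # desirable bias specification (%), biological variation

        monthly = pd.Series(results, index=days).groupby(days.to_period("M")).median()
        for month, med in monthly.items():
            bias = 100.0 * (med - target) / target
            print(f"{month}: median {med:6.1f}, bias {bias:+.2f}% "
                  f"{'OUT' if abs(bias) > allowable else 'ok'}")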

  12. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

    The present paper aims to investigate the relationship between the bond strength and the reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. The analytical model proposed by Cairns and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement - a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure, and later modified by Coronelli [Coronelli, D., 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to consider corroded bars, has been adopted. Estimation of the various parameters in the earlier analytical model has been proposed by the present authors. These parameters include the corrosion pressure due to the expansive action of corrosion products, the modeling of the tensile behaviour of cracked concrete, and the adhesion and friction coefficient between the corroded bar and cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens. These empirical models are proposed by considering a wide range of published experimental investigations related to bond degradation in RC specimens due to reinforcement corrosion. It has been found that the proposed analytical and empirical bond models are capable of providing estimates of the predicted bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with other reported analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. It has also been found that the analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are in agreement with the experimentally observed values

  13. A Monte Carlo evaluation of analytical multiple scattering corrections for unpolarised neutron scattering and polarisation analysis data

    International Nuclear Information System (INIS)

    Mayers, J.; Cywinski, R.

    1985-03-01

    Some of the approximations commonly used for the analytical estimation of multiple scattering corrections to thermal neutron elastic scattering data from cylindrical and plane slab samples have been tested using a Monte Carlo program. It is shown that the approximations are accurate for a wide range of sample geometries and scattering cross-sections. Neutron polarisation analysis provides the most stringent test of multiple scattering calculations, as multiply scattered neutrons may be redistributed not only geometrically but also between the spin-flip and non-spin-flip scattering channels. A very simple analytical technique for correcting for multiple scattering in neutron polarisation analysis has been tested using the Monte Carlo program and has been shown to work remarkably well in most circumstances. (author)

  14. A simple numerical model to estimate the effect of coal selection on pulverized fuel burnout

    Energy Technology Data Exchange (ETDEWEB)

    Sun, J.K.; Hurt, R.H.; Niksa, S.; Muzio, L.; Mehta, A.; Stallings, J. [Brown University, Providence, RI (USA). Division Engineering

    2003-06-01

    The amount of unburned carbon in ash is an important performance characteristic in commercial boilers fired with pulverized coal. Unburned carbon levels are known to be sensitive to fuel selection, and there is great interest in methods of estimating the burnout propensity of coals based on proximate and ultimate analysis - the only fuel properties readily available to utility practitioners. A simple numerical model is described that is specifically designed to estimate the effects of coal selection on burnout in a way that is useful for commercial coal screening. The model is based on a highly idealized description of the combustion chamber but employs detailed descriptions of the fundamental fuel transformations. The model is validated against data from laboratory and pilot-scale combustors burning a range of international coals, and then against data obtained from full-scale units during periods of coal switching. The validated model form is then used in a series of sensitivity studies to explore the role of various individual fuel properties that influence burnout.

  15. Analytic representation for first-principles pseudopotentials

    International Nuclear Information System (INIS)

    Lam, P.K.; Cohen, M.L.; Zunger, A.

    1980-01-01

    The first-principles pseudopotentials developed by Zunger and Cohen are fit with a simple analytic form chosen to model the main physical properties of the potentials. The fitting parameters for the first three rows of the Periodic Table are presented, and the quality of the fit is discussed. The parameters reflect chemical trends of the elements. We find that a minimum of three parameters is required to reproduce the regularities of the Periodic Table. Application of these analytic potentials is also discussed

  16. Effective charge versus bare charge: an analytical estimate for colloids in the infinite dilution limit

    International Nuclear Information System (INIS)

    Aubouy, Miguel; Trizac, Emmanuel; Bocquet, Lyderic

    2003-01-01

    We propose an analytical approximation for the dependence of the effective charge on the bare charge for spherical and cylindrical macro-ions as a function of the size of the colloid and salt content, for the situation of a unique colloid immersed in a sea of electrolyte (where the definition of an effective charge is non-ambiguous). Our approach is based on the Poisson-Boltzmann (PB) mean-field theory. Mathematically speaking, our estimate is asymptotically exact in the limit κa >> 1, where a is the radius of the colloid and κ is the inverse screening length. In practice, a careful comparison with effective charge parameters, obtained by numerically solving the full nonlinear PB theory, proves that our estimate is good down to κa ∼ 1. This is precisely the limit appropriate to treat colloidal suspensions. A particular emphasis is put on the range of parameters suitable to describe both single and double strand DNA molecules under physiological conditions

  17. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    International Nuclear Information System (INIS)

    Brown, Forrest B.

    2016-01-01

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe 2 new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.

  18. The generation of simple compliance boundaries for mobile communication base station antennas using formulae for SAR estimation.

    Science.gov (United States)

    Thors, B; Hansson, B; Törnevik, C

    2009-07-07

    In this paper, a procedure is proposed for generating simple and practical compliance boundaries for mobile communication base station antennas. The procedure is based on a set of formulae for estimating the specific absorption rate (SAR) in certain directions around a class of common base station antennas. The formulae, given for both whole-body and localized SAR, require as input the frequency, the transmitted power and knowledge of antenna-related parameters such as dimensions, directivity and half-power beamwidths. With knowledge of the SAR in three key directions it is demonstrated how simple and practical compliance boundaries can be generated outside of which the exposure levels do not exceed certain limit values. The conservativeness of the proposed procedure is discussed based on results from numerical radio frequency (RF) exposure simulations with human body phantoms from the recently developed Virtual Family.

  19. Total decay heat estimates in a proto-type fast reactor

    International Nuclear Information System (INIS)

    Sridharan, M.S.

    2003-01-01

    Full text: In this paper, total decay heat values generated in a proto-type fast reactor are estimated. These values are compared with those of certain other fast reactors. Simple analytical fits are also obtained for these values, which can serve as a handy and convenient tool in engineering design studies. These decay heat values, expressed as a ratio to the nominal operating power, are in general applicable to any typical plutonium-based fast reactor and are useful inputs to the design of decay-heat removal systems
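
    The record's own analytical fits are not given; as an illustration of the kind of handy engineering fit described, the sketch below uses the classic Way-Wigner approximation for fission-product decay heat as a fraction of nominal operating power (an assumed stand-in, not the proto-type fast reactor fit itself).

        # Way-Wigner approximation: P(t)/P0 = 0.0622 * (t**-0.2 - (t + T)**-0.2),
        # with t the time after shutdown (s) and T the prior operating time (s).
        def decay_heat_fraction(t, T):
            return 0.0622 * (t**-0.2 - (t + T)**-0.2)

        T = 2 * 365 * 24 * 3600.0    # two years at nominal power
        for t in (1.0, 60.0, 3600.0, 86400.0, 30 * 86400.0):
            print(f"t = {t:>9.0f} s after shutdown: "
                  f"P/P0 = {decay_heat_fraction(t, T):.4f}")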

  20. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A. [Physics and Astronomy Department, University of California, Los Angeles, California 90095 (United States); Bobrek, M. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6006 (United States)

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
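
    The paper's three-frequency formula is not reproduced in the record. The sketch below illustrates the underlying idea under strongly simplified assumptions (O-mode propagation, a purely linear density ramp starting at the antenna, zero scrape-off-layer density): for such a profile the group delay is τ(f) = 4·x_c(f)/c, so fitting τ against f² yields the gradient analytically.

        import numpy as np

        C = 1.24e-2       # O-mode cutoff density n_c = C * f**2 (m^-3, f in Hz)
        c = 2.998e8       # speed of light (m/s)

        # Linear profile n(x) = g*x from the antenna: the group delay to the
        # cutoff layer at x_c = n_c(f)/g works out to tau(f) = 4*C*f**2/(c*g).
        def group_delay(f, g):
            return 4.0 * C * f**2 / (c * g)

        g_true = 5e20                          # assumed gradient (m^-3 per m)
        freqs = np.array([40e9, 41e9, 42e9])   # three adjacent probe frequencies
        taus = group_delay(freqs, g_true)      # stand-in "measured" delays

        # Invert: the slope of tau versus f**2 is 4*C/(c*g), hence g follows.
        slope = np.polyfit(freqs**2, taus, 1)[0]
        print(f"true g = {g_true:.2e}, estimated g = {4.0 * C / (c * slope):.2e}")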

  1. Analytical model of impedance in elliptical beam pipes

    CERN Document Server

    Pesah, Arthur Chalom

    2017-01-01

    Beam instabilities are among the main limitations in building higher-intensity accelerators. Having a good impedance model for every accelerator is necessary in order to build components that minimize the probability of instabilities caused by the beam-environment interaction, and to understand which component to change when the intensity is increased. Most accelerator components have their impedance simulated with the finite element method (using software such as CST Studio), but simple components such as circular or flat pipes are modeled analytically, with lower computation time and higher precision than their simulated counterparts. Elliptical beam pipes, while being a simple component present in some accelerators, still lack a good analytical model working for the whole range of velocities and frequencies. In this report, we present a general framework to study the impedance of elliptical pipes analytically. We developed a model for both longitudinal and transverse impedance, first in the case of...

  2. Simple models of the hydrofracture process

    KAUST Repository

    Marder, M.

    2015-12-29

    Hydrofracturing to recover natural gas and oil relies on the creation of a fracture network with pressurized water. We analyze the creation of the network in two ways. First, we assemble a collection of analytical estimates for pressure-driven crack motion in simple geometries, including crack speed as a function of length, energy dissipated by fluid viscosity and used to break rock, and the conditions under which a second crack will initiate while a first is running. We develop a pseudo-three-dimensional numerical model that couples fluid motion with solid mechanics and can generate branching crack structures not specified in advance. One of our main conclusions is that the typical spacing between fractures must be on the order of a meter, and this conclusion arises in two separate ways. First, it arises from analysis of gas production rates, given the diffusion constants for gas in the rock. Second, it arises from the number of fractures that should be generated given the scale of the affected region and the amounts of water pumped into the rock.
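
    A back-of-the-envelope sketch of the first argument, with assumed (illustrative) numbers: gas can only drain from within a diffusion length of a fracture over the life of the well, so economic recovery requires fracture spacing of order twice that length.

        import math

        D = 1e-8                      # assumed gas diffusivity in the matrix (m^2/s)
        t = 10.0 * 365 * 24 * 3600.0  # assumed production timescale: ten years (s)

        L = math.sqrt(D * t)          # diffusion length over the production time
        print(f"diffusion length ~ {L:.1f} m -> "
              f"fracture spacing of order {2 * L:.1f} m")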

  3. Simple models of the hydrofracture process

    KAUST Repository

    Marder, M.; Chen, Chih-Hung; Patzek, Tadeusz

    2015-01-01

    Hydrofracturing to recover natural gas and oil relies on the creation of a fracture network with pressurized water. We analyze the creation of the network in two ways. First, we assemble a collection of analytical estimates for pressure-driven crack motion in simple geometries, including crack speed as a function of length, energy dissipated by fluid viscosity and used to break rock, and the conditions under which a second crack will initiate while a first is running. We develop a pseudo-three-dimensional numerical model that couples fluid motion with solid mechanics and can generate branching crack structures not specified in advance. One of our main conclusions is that the typical spacing between fractures must be on the order of a meter, and this conclusion arises in two separate ways. First, it arises from analysis of gas production rates, given the diffusion constants for gas in the rock. Second, it arises from the number of fractures that should be generated given the scale of the affected region and the amounts of water pumped into the rock.

  4. Analytic evaluation of LAMPF II Booster Cavity design

    International Nuclear Information System (INIS)

    Friedrichs, C.C.

    1985-01-01

    Through the past few decades, a great deal of sophistication has evolved in the numeric codes used to evaluate electromagnetically resonant structures. The numeric methods are extremely precise, even for complicated geometries, whereas analytic methods require a simple uniform geometry and a simple, known mode configuration if the same precision is to be obtained. The code SUPERFISH, which is near the present state of the art of numeric methods, does have the following limitations: no circumferential geometry variations are permissible; there are no provisions for magnetic or dielectric losses; and finally, it is impractical (because of the complexity of the code) to modify it to extract particular bits of data one might want that are not provided by the code as written. This paper describes how SUPERFISH was used as an aid in deriving an analytic model of the LAMPF II Booster Cavity. Once a satisfactory model was derived, simple FORTRAN codes were generated to provide whatever data were required. The analytic model is made up of TEM- and radial-mode transmission-line sections, as well as lumped elements where appropriate. Radial transmission-line equations which include losses were not found in the literature, and the extension of the lossless equations to include magnetic and dielectric losses is included in this paper

  5. A simple model for the estimation of rain-induced attenuation along earth-space paths at millimeter wavelengths

    Science.gov (United States)

    Stutzman, W. L.; Dishman, W. K.

    1982-01-01

    A simple attenuation model (SAM) is presented for estimating rain-induced attenuation along an earth-space path. The rain model uses an effective spatial rain distribution which is uniform for low rain rates and which has an exponentially shaped horizontal rain profile for high rain rates. When compared to other models, the SAM performed well in the important region of low percentages of time, and had the lowest percent standard deviation of all percent time values tested.

  6. A simple, reproducible and sensitive spectrophotometric method to estimate microalgal lipids

    Energy Technology Data Exchange (ETDEWEB)

    Chen Yimin [ChELSI Institute, Department of Chemical and Biological Engineering, University of Sheffield, Sheffield S1 3JD (United Kingdom); Vaidyanathan, Seetharaman, E-mail: s.vaidyanathan@sheffield.ac.uk [ChELSI Institute, Department of Chemical and Biological Engineering, University of Sheffield, Sheffield S1 3JD (United Kingdom)

    2012-04-29

    Highlights: ► FAs released from lipids form complex with Cu-TEA in chloroform. ► The FA-Cu-TEA complex gives strong absorbance at 260 nm. ► The absorbance is sensitive and independent of C-atom number in the FAs (10-18). ► Microalgal lipid extract and pure FA (such as C16) can both be used as standards. - Abstract: Quantification of total lipids is a necessity for any study of lipid production by microalgae, especially given the current interest in microalgal carbon capture and biofuels. In this study, we employed a simple yet sensitive method to indirectly measure the lipids in microalgae by measuring the fatty acids (FA) after saponification. The fatty acids were reacted with triethanolamine-copper salts (TEA-Cu) and the ternary TEA-Cu-FA complex was detected at 260 nm using a UV-visible spectrometer without any colour developer. The results showed that this method could be used to analyse low levels of lipids in the range of nano-moles from as little as 1 mL of microalgal culture. Furthermore, the structure of the TEA-Cu-FA complex and related reaction process are proposed to better understand this assay. There is no special instrument required and the method is very reproducible. To the best of our knowledge, this is the first report of the use of UV absorbance of copper salts with FA as a method to estimate lipids in algal cultures. It will pave the way for a more convenient assay of lipids in microalgae and can readily be expanded for estimating lipids in other biological systems.

  7. A simple, reproducible and sensitive spectrophotometric method to estimate microalgal lipids

    International Nuclear Information System (INIS)

    Chen Yimin; Vaidyanathan, Seetharaman

    2012-01-01

    Highlights: ► FAs released from lipids form complex with Cu–TEA in chloroform. ► The FA–Cu–TEA complex gives strong absorbance at 260 nm. ► The absorbance is sensitive and independent of C-atom number in the FAs (10–18). ► Microalgal lipid extract and pure FA (such as C16) can both be used as standards. - Abstract: Quantification of total lipids is a necessity for any study of lipid production by microalgae, especially given the current interest in microalgal carbon capture and biofuels. In this study, we employed a simple yet sensitive method to indirectly measure the lipids in microalgae by measuring the fatty acids (FA) after saponification. The fatty acids were reacted with triethanolamine–copper salts (TEA–Cu) and the ternary TEA–Cu–FA complex was detected at 260 nm using a UV–visible spectrometer without any colour developer. The results showed that this method could be used to analyse low levels of lipids in the range of nano-moles from as little as 1 mL of microalgal culture. Furthermore, the structure of the TEA–Cu–FA complex and related reaction process are proposed to better understand this assay. There is no special instrument required and the method is very reproducible. To the best of our knowledge, this is the first report of the use of UV absorbance of copper salts with FA as a method to estimate lipids in algal cultures. It will pave the way for a more convenient assay of lipids in microalgae and can readily be expanded for estimating lipids in other biological systems.

  8. Retention of ionisable compounds on high-performance liquid chromatography XVI. Estimation of retention with acetonitrile/water mobile phases from aqueous buffer pH and analyte pKa.

    Science.gov (United States)

    Subirats, Xavier; Bosch, Elisabeth; Rosés, Martí

    2006-07-21

    In agreement with our previous studies and those of other authors, it is shown that much better fits of retention time as a function of pH are obtained for acid-base analytes when the pH is measured in the mobile phase than when it is measured in the aqueous buffer, particularly when buffers of different nature are used. However, in some instances it may be more practical to measure the pH in the aqueous buffer before addition of the organic modifier. Thus, an open methodology is presented that allows prediction of the chromatographic retention of acid-base analytes from the pH measured in the aqueous buffer. The model presented estimates the pH of the buffer and the pKa of the analyte in a particular acetonitrile/water mobile phase from the pH and pKa values in water. The retention of the analyte can be easily estimated, at a buffer pH close to the solute pKa, from these values and from the retentions of the pure acidic and basic forms of the analyte. Since in many instances the analyte pKa values in water are not known, the methodology has also been tested using Internet software, within reach of many chemists, which calculates analyte pKa values from chemical structure. The approach is successfully tested for some pharmaceutical drugs.
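
    The working relation behind such predictions is the sigmoidal dependence of retention on pH for an ionisable analyte: the observed retention factor is the average of the retention factors of the acidic and basic forms, weighted by the Henderson-Hasselbalch fractions. A sketch with hypothetical values, taking the pKa as already corrected to the mobile-phase composition:

        import numpy as np

        def retention(pH, pKa, k_HA, k_A):
            """Retention factor of a monoprotic acid vs. mobile-phase pH.

            k_HA and k_A are the retention factors of the neutral (acidic) and
            ionised (basic) forms; the observed k is their average weighted by
            the ionised fraction from the Henderson-Hasselbalch equilibrium.
            """
            frac_A = 1.0 / (1.0 + 10.0**(pKa - pH))    # fraction ionised
            return k_HA * (1.0 - frac_A) + k_A * frac_A

        # Hypothetical analyte: pKa 4.8 in the acetonitrile/water mobile phase,
        # neutral form well retained, anion weakly retained.
        for pH in np.arange(3.0, 7.5, 0.5):
            print(f"pH {pH:.1f}: k = {retention(pH, 4.8, 8.0, 0.6):.2f}")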

  9. A simple method to estimate restoration volume as a possible predictor for tooth fracture.

    Science.gov (United States)

    Sturdevant, J R; Bader, J D; Shugars, D A; Steet, T C

    2003-08-01

    Many dentists cite the fracture risk posed by a large existing restoration as a primary reason for their decision to place a full-coverage restoration. However, there is poor agreement among dentists as to when restoration placement is necessary because of the inability to make objective measurements of restoration size. The purpose of this study was to compare a new method to estimate restoration volumes in posterior teeth with analytically determined volumes. True restoration volume proportion (RVP) was determined for 96 melamine typodont teeth: 24 each of maxillary second premolar, mandibular second premolar, maxillary first molar, and mandibular first molar. Each group of 24 was subdivided into 3 groups to receive an O, MO, or MOD amalgam preparation design. Each preparation design was further subdivided into 4 groups of increasingly larger size. The density of amalgam used was calculated according to ANSI/ADA Specification 1. The teeth were weighed before and after restoration with amalgam. Restoration weight was calculated, and the density of amalgam was used to calculate restoration volume. A liquid pycnometer was used to calculate coronal volume after sectioning the anatomic crown from the root horizontally at the cementoenamel junction. True RVP was calculated by dividing restoration volume by coronal volume. An occlusal photograph and a bitewing radiograph were made of each restored tooth to provide 2 perpendicular views. Each image was digitized, and software was used to measure the percentage of the anatomic crown restored with amalgam. Estimated RVP was calculated by multiplying the percentages of the anatomic crown restored in the 2 views together. Pearson correlation coefficients were used to compare estimated RVP with true RVP. The Pearson correlation coefficient of true RVP with estimated RVP was 0.97 overall, indicating that the method provides a good estimate of the volume of restorative material in coronal tooth structure. The fact that it can be done in a nondestructive manner makes it attractive for clinical use.
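
    The estimate itself is simple arithmetic: the fraction of the crown occupied by the restoration in one view times the fraction in the perpendicular view approximates the volume proportion. A sketch with hypothetical image measurements:

        # Fractions of the anatomic crown occupied by amalgam, measured from two
        # perpendicular digitized views (hypothetical values).
        frac_occlusal = 0.42      # from the occlusal photograph
        frac_bitewing = 0.55      # from the bitewing radiograph

        print(f"estimated RVP: {frac_occlusal * frac_bitewing:.2f}")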

  10. Medical Data Analytics Is Not a Simple Task.

    Science.gov (United States)

    Babič, František; Vadovský, Michal; Paralič, Ján

    2018-01-01

    Data analytics represents a new chance for medical diagnosis and treatment to make them more effective and successful. This expectation is not as easy to achieve as it may look at first glance. The medical experts, doctors or general practitioners have their own vocabulary; they use specific terms and ways of speaking. On the other side, data analysts have to understand the task and select the right algorithms. The applicability of the results depends on the effectiveness of the interactions between those two worlds. This paper presents our experiences with various medical data samples in the form of a SWOT analysis. We identified the most important input attributes for the target diagnosis, extracted decision rules and analysed their interestingness with the cooperating doctors, searched for promising new cut-off values, and investigated potentially important relations hidden in the data samples. In general, this type of knowledge can be used for clinical decision support, but it has to be evaluated on different samples, under different conditions, and ideally in long-term studies. Sometimes the interaction needed much more time than we expected at the beginning, but our experiences are mostly positive.

  11. Writing analytic element programs in Python.

    Science.gov (United States)

    Bakker, Mark; Kelson, Victor A

    2009-01-01

    The analytic element method is a mesh-free approach for modeling ground water flow at both the local and the regional scale. With the advent of the Python object-oriented programming language, it has become relatively easy to write analytic element programs. In this article, an introduction is given of the basic principles of the analytic element method and of the Python programming language. A simple, yet flexible, object-oriented design is presented for analytic element codes using multiple inheritance. New types of analytic elements may be added without the need for any changes in the existing part of the code. The presented code may be used to model flow to wells (with either a specified discharge or drawdown) and streams (with a specified head). The code may be extended by any hydrogeologist with a healthy appetite for writing computer code to solve more complicated ground water flow problems. Copyright © 2009 The Author(s). Journal Compilation © 2009 National Ground Water Association.
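
    In the spirit of the article, a minimal object-oriented sketch (not the authors' published code): each element contributes a discharge potential, and heads follow by superposition. Only a specified-discharge well (the Thiem solution) and uniform flow are shown; new element types would subclass Element in the same way.

        import numpy as np

        class Element:
            """Base class: each analytic element adds to the discharge potential."""
            def potential(self, x, y):
                raise NotImplementedError

        class Well(Element):
            def __init__(self, xw, yw, Q):
                self.xw, self.yw, self.Q = xw, yw, Q
            def potential(self, x, y):
                r = np.hypot(x - self.xw, y - self.yw)
                return self.Q / (2 * np.pi) * np.log(r)   # Thiem solution

        class UniformFlow(Element):
            def __init__(self, qx):
                self.qx = qx
            def potential(self, x, y):
                return -self.qx * x

        class Model:
            def __init__(self, T, elements):
                self.T = T                  # aquifer transmissivity (m^2/d)
                self.elements = elements
            def head(self, x, y):
                # Superposition: total potential is the sum over all elements,
                # and for confined flow the head is potential / T.
                return sum(e.potential(x, y) for e in self.elements) / self.T

        m = Model(T=100.0, elements=[Well(0.0, 0.0, Q=500.0), UniformFlow(qx=1.0)])
        print(f"head difference between (50,0) and (200,0): "
              f"{m.head(50.0, 0.0) - m.head(200.0, 0.0):.3f} m")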

  12. Use of probabilistic methods for estimating failure probabilities and directing ISI-efforts

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, F; Brickstad, B [University of Uppsala (Sweden)]

    1988-12-31

    Some general aspects of the role of Non-Destructive Testing (NDT) efforts on the resulting probability of core damage are discussed. A simple model for the estimation of the pipe break probability due to IGSCC is discussed. It is partly based on analytical procedures, partly on service experience from the Swedish BWR program. Estimates of the break probabilities indicate that further studies are urgently needed. It is found that the uncertainties about the initial crack configuration are large contributors to the total uncertainty. Some effects of the in-service inspection are studied and it is found that the detection probabilities influence the failure probabilities. (authors).

  13. A simple model to estimate the impact of sea-level rise on platform beaches

    Science.gov (United States)

    Taborda, Rui; Ribeiro, Mónica Afonso

    2015-04-01

    Estimates of future beach evolution in response to sea-level rise are needed to assess coastal vulnerability. A research gap is identified in providing adequate predictive methods to use for platform beaches. This work describes a simple model to evaluate the effects of sea-level rise on platform beaches that relies on the conservation of beach sand volume and assumes an invariant beach profile shape. In closed systems, when compared with the Inundation Model, results show larger retreats; the differences are higher for beaches with wide berms and when the shore platform develops at shallow depths. The application of the proposed model to Cascais (Portugal) beaches, using 21st century sea-level rise scenarios, shows that there will be a significant reduction in beach width.
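
    The record does not reproduce the model's equations. As an indication of the volume-conservation reasoning it builds on, the classic Bruun relation for profile translation under a sea-level rise S is sketched below; the Bruun rule is the archetype of such models, not the authors' platform-beach model, which additionally accounts for the shore platform.

        # Bruun-type estimate: shoreline retreat R for sea-level rise S, with an
        # active profile of cross-shore length L between berm height B and
        # closure depth h (illustrative values only).
        def bruun_retreat(S, L, B, h):
            return S * L / (B + h)

        S = 0.5      # m of sea-level rise (a 21st-century scenario)
        L = 500.0    # m, active profile width
        B = 2.0      # m, berm height
        h = 8.0      # m, depth of closure
        print(f"retreat ~ {bruun_retreat(S, L, B, h):.0f} m")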

  14. Multi-analytical assessment of iron and steel slag characteristics to estimate the removal of metalloids from contaminated water.

    Science.gov (United States)

    Mercado-Borrayo, B M; Schouwenaars, R; González-Chávez, J L; Ramírez-Zamora, R M

    2013-01-01

    A multi-analytical approach was used to develop a mathematical regression model to calculate the residual concentration of borate ions in water present at high initial content, as a function of the main physicochemical, mineralogical and electrokinetic characteristics after adsorption on five different types of iron and steel slag. The analytical techniques applied and slag properties obtained in this work were: X-ray Fluorescence for the identification of the main chemical compounds, X-ray Diffraction to determine crystalline phases, physical adsorption of nitrogen for the quantification of textural properties and zeta-potential for electrokinetic measurements of slag particles. Adsorption tests were carried out using the bottle-point technique and a highly concentrated borate solution (700 mg B/L) at pH 10, with a slag dose of 10 g/L. An excellent correlation between the residual concentration of boron and three independent variables (content of magnesium oxide, zeta potential and specific surface area) was established for the five types of slag tested in this work. This shows that the methodology based on a multi-analytical approach is a very strong and useful tool to estimate the performance of iron and steel slag as adsorbent of metalloids.

  15. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    Science.gov (United States)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.

  16. Automatic estimation of aquifer parameters using long-term water supply pumping and injection records

    Science.gov (United States)

    Luo, Ning; Illman, Walter A.

    2016-09-01

    Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
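
    A minimal sketch of the forward model behind the fingerprinting: the Theis solution for drawdown at distance r from a pumping well, with variable pumping handled by superposition of rate changes (parameter values are illustrative, not the Waterloo estimates).

        import numpy as np
        from scipy.special import exp1   # Theis well function W(u) = E1(u)

        def theis_drawdown(r, t, Q, T, S):
            """Drawdown (m) at radius r (m), time t (d), pumping rate Q (m^3/d)."""
            u = r**2 * S / (4.0 * T * t)
            return Q / (4.0 * np.pi * T) * exp1(u)

        # Variable pumping by superposition of rate changes dQ_i at times t_i.
        def drawdown_variable(r, t, schedule, T, S):
            return sum(theis_drawdown(r, t - ti, dQ, T, S)
                       for ti, dQ in schedule if t > ti)

        T, S = 300.0, 2e-4                         # transmissivity, storativity
        schedule = [(0.0, 2000.0), (5.0, -1500.0)] # pump on, then throttle back
        for t in (1.0, 5.5, 10.0):
            s = drawdown_variable(100.0, t, schedule, T, S)
            print(f"t = {t:4.1f} d: drawdown = {s:.3f} m")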

  17. MÉTODO SIMPLES PARA ESTIMAR ENCURTAMENTO PELO FRIO EM CARNE BOVINA / A SIMPLE METHOD TO ESTIMATE COLD SHORTENING IN BEEF

    Directory of Open Access Journals (Sweden)

    Riana Jordão Barrozo Heinemann

    2002-04-01

    Full Text Available The negative influence of cold shortening on meat texture is well known. Because of that, the determination of the extent of muscle contraction represents an important analytical tool for the optimization of industrial procedures. In this work, two microscopy methodologies for evaluating cold shortening were compared. Biceps femoris, Longissimus dorsi and Semimembranosus muscles from nine cattle carcasses with three different fat thickness grades were paired-analyzed by both methodologies. The Longissimus dorsi muscle showed the shortest sarcomere length while the Semimembranosus m. showed the longest one (p < 0.05). Sarcomere lengths measured by the two methods did not differ significantly (p > 0.05), which suggests the possibility of using the simpler method for cold shortening evaluation.

  18. A simple score for estimating the long-term risk of fracture in patients with multiple sclerosis

    DEFF Research Database (Denmark)

    Bazelier, M. T.; van Staa, T. P.; Uitdehaag, B. M. J.

    2012-01-01

    Objective: To derive a simple score for estimating the long-term risk of osteoporotic and hip fracture in individual patients with MS. Methods: Using the UK General Practice Research Database linked to the National Hospital Registry (1997-2008), we identified patients with incident MS (n = 5,494). They were matched 1:6 by year of birth, sex, and practice with patients without MS (control subjects). Cox proportional hazards models were used to calculate the long-term risk of osteoporotic and hip fracture. We fitted the regression model with general and specific risk factors, and the final Cox model was converted into integer risk scores. Results: In comparison with the FRAX calculator, our risk score contains several new risk factors that have been linked with fracture, which include MS, use of antidepressants, use of anticonvulsants, history of falling, and history of fatigue. We estimated the 5- and 10-year risks of osteoporotic and hip fracture.

  19. A Simple Estimation of Coupling Loss Factors for Two Flexible Subsystems Connected via Discrete Interfaces

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2016-01-01

    Full Text Available A simple formula is proposed to estimate the Statistical Energy Analysis (SEA) coupling loss factors (CLFs) for two flexible subsystems connected via discrete interfaces. First, the dynamic interactions between the two discretely connected subsystems are described as a set of intermodal coupling stiffness terms. It is then found that if both subsystems are of high modal density and the interface points all act independently, the intermodal dynamic couplings become dominated by only those between different subsystem mode sets. If ensemble- and frequency-averaged, the intermodal coupling stiffness terms simply reduce to a function of the characteristic dynamic properties of each subsystem and the subsystem mass, as well as the number of interface points. The results can thus be accommodated within the theoretical frame of conventional SEA theory to yield a simple CLF formula. Meanwhile, the approach allows the weak-coupling region between the two SEA subsystems to be distinguished simply and explicitly. The consistency of the present technique with, and its differences from, the traditional wave-based SEA solutions are discussed. Finally, numerical examples are given to illustrate the good performance of the present technique.

  20. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Organ dose correlation factors (CF{sub SSDE}{sup organ}) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF{sub SSDE}{sup organ} were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg), and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, thus showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF{sub SSDE}{sup organ} were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organ/tissue that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1–0.4) for both the chest and abdominopelvic regions, respectively. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF{sub SSDE}{sup organ}, was compared to
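
    The resulting estimate is a multiplication: patient organ dose ≈ CF(organ, weight group) × patient SSDE. A sketch with hypothetical correlation factors of the magnitude reported (near unity for organs inside the scan volume, low for tissue extending beyond it):

        # Hypothetical chest-region correlation factors for one weight
        # subcategory; the study reports values averaging about 1.1 for fully
        # covered organs and about 0.3 for tissue beyond the scan volume.
        CF_CHEST = {"lung": 1.1, "heart": 1.2, "thyroid": 0.9, "bone_marrow": 0.3}

        def organ_dose(ssde_mGy, organ):
            return CF_CHEST[organ] * ssde_mGy

        ssde = 4.2   # mGy, patient-specific size-specific dose estimate
        for organ in CF_CHEST:
            print(f"{organ:12s}: {organ_dose(ssde, organ):.1f} mGy")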

  1. A simple analytic treatment of rescattering effects in the Deck model

    International Nuclear Information System (INIS)

    Bowler, M.G.

    1979-01-01

    A simple application of old-fashioned final-state interaction theory is shown to give the result that rescattering in the Deck model of diffraction dissociation is well represented by multiplying the bare amplitude by e^(iδ) cos δ. The physical reasons for this result emerge particularly clearly in this formulation. (author)

  2. Competing on talent analytics.

    Science.gov (United States)

    Davenport, Thomas H; Harris, Jeanne; Shapiro, Jeremy

    2010-10-01

    Do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel? Leading-edge companies such as Google, Best Buy, Procter & Gamble, and Sysco use sophisticated data-collection technology and analysis to answer these questions, leveraging a range of analytics to improve the way they attract and retain talent, connect their employee data to business performance, differentiate themselves from competitors, and more. The authors present the six key ways in which companies track, analyze, and use data about their people-ranging from a simple baseline of metrics to monitor the organization's overall health to custom modeling for predicting future head count depending on various "what if" scenarios. They go on to show that companies competing on talent analytics manage data and technology at an enterprise level, support what analytical leaders do, choose realistic targets for analysis, and hire analysts with strong interpersonal skills as well as broad expertise.

  3. Approximate effect of parameter pseudonoise intensity on rate of convergence for EKF parameter estimators. [Extended Kalman Filter

    Science.gov (United States)

    Hill, Bryon K.; Walker, Bruce K.

    1991-01-01

    When using parameter estimation methods based on extended Kalman filter (EKF) theory, it is common practice to assume that the unknown parameter values behave like a random process, such as a random walk, in order to guarantee their identifiability by the filter. The present work is the result of an ongoing effort to quantitatively describe the effect that the assumption of a fictitious noise (called pseudonoise) driving the unknown parameter values has on the parameter estimate convergence rate in filter-based parameter estimators. The initial approach is to examine a first-order system described by one state variable with one parameter to be estimated. The intent is to derive analytical results for this simple system that might offer insight into the effect of the pseudonoise assumption for more complex systems. Such results would make it possible to predict the estimator error convergence behavior as a function of the assumed pseudonoise intensity, and this leads to the natural application of the results to the design of filter-based parameter estimators. The results obtained show that the analytical description of the convergence behavior is very difficult.
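
    A minimal simulation sketch of the setup studied, assuming a scalar system with one unknown parameter estimated by an augmented-state EKF; the pseudonoise intensity q_a below is the knob whose effect on convergence the record analyses (the system and all values are illustrative).

        import numpy as np

        rng = np.random.default_rng(42)

        # First-order system x_{k+1} = a*x_k + u_k + w_k with unknown a.
        a_true, q_w, r_v = 0.9, 0.01, 0.04
        N = 500
        u = np.sin(0.1 * np.arange(N))               # persistent excitation
        x = np.zeros(N); y = np.zeros(N)
        for k in range(N - 1):
            x[k + 1] = a_true * x[k] + u[k] + rng.normal(0, np.sqrt(q_w))
            y[k + 1] = x[k + 1] + rng.normal(0, np.sqrt(r_v))

        # Augmented-state EKF: z = [x, a], with a modeled as a random walk
        # driven by pseudonoise of intensity q_a.
        def ekf_param(q_a):
            z = np.array([0.0, 0.5])                 # initial guesses
            P = np.diag([1.0, 1.0])
            Q = np.diag([q_w, q_a]); H = np.array([[1.0, 0.0]])
            for k in range(N - 1):
                F = np.array([[z[1], z[0]],          # Jacobian of the dynamics
                              [0.0,  1.0]])
                z = np.array([z[1] * z[0] + u[k], z[1]])   # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + r_v                # update with y[k+1]
                K = (P @ H.T) / S
                z = z + (K * (y[k + 1] - z[0])).ravel()
                P = (np.eye(2) - K @ H) @ P
            return z[1]

        for q_a in (1e-6, 1e-4, 1e-2):
            print(f"q_a = {q_a:.0e}: final a_hat = {ekf_param(q_a):.3f}")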

  4. Contaminant ingress into multizone buildings: An analytical state-space approach

    KAUST Repository

    Parker, Simon

    2013-08-13

    The ingress of exterior contaminants into buildings is often assessed by treating the building interior as a single well-mixed space. Multizone modelling provides an alternative way of representing buildings that can estimate concentration time series in different internal locations. A state-space approach is adopted to represent the concentration dynamics within multizone buildings. Analysis based on this approach is used to demonstrate that the exposure in every interior location is limited to the exterior exposure in the absence of removal mechanisms. Estimates are also developed for the short term maximum concentration and exposure in a multizone building in response to a step-change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models of residential buildings. Quantitative measures are provided of the standard deviation of concentration and exposure within a range of residential multizone buildings. Ratios of the maximum short term concentrations and exposures to single zone building estimates are also provided for the same buildings. © 2013 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
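
    A minimal sketch of the state-space representation for a two-zone building with an inner zone, assuming illustrative volumes and airflows: concentrations obey dC/dt = A C + B c_ext, and the step response follows from the matrix exponential.

        import numpy as np
        from scipy.linalg import expm

        # Two-zone building: outer zone 1 exchanges with the exterior, inner
        # zone 2 only with zone 1 (illustrative volumes m^3, airflows m^3/h).
        V1, V2 = 200.0, 50.0
        q_e, q_12 = 100.0, 20.0

        A = np.array([[-(q_e + q_12) / V1, q_12 / V1],
                      [ q_12 / V2,        -q_12 / V2]])
        B = np.array([q_e / V1, 0.0])

        # Step change in exterior concentration c_ext = 1 (arbitrary units):
        # C(t) = C_inf + expm(A t) (C0 - C_inf), with C_inf = -A^{-1} B c_ext.
        c_ext = 1.0
        C_inf = -np.linalg.solve(A, B * c_ext)   # steady state (both zones -> 1)
        C0 = np.zeros(2)
        for t in (0.25, 0.5, 1.0, 2.0, 4.0):     # hours
            C = C_inf + expm(A * t) @ (C0 - C_inf)
            print(f"t = {t:4.2f} h: zone1 = {C[0]:.3f}, zone2 = {C[1]:.3f}")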

  5. Ultra trace analysis of PAHs by designing simple injection of large amounts of analytes through the sample reconcentration on SPME fiber after magnetic solid phase extraction.

    Science.gov (United States)

    Khodaee, Nader; Mehdinia, Ali; Esfandiarnejad, Reyhaneh; Jabbari, Ali

    2016-01-15

    A simple solventless injection method was introduced based on the use of a solid-phase microextraction (SPME) fiber for the injection of large amounts of analytes extracted by a magnetic solid phase extraction (MSPE) procedure. The extract resulting from the MSPE procedure was loaded on a G-coated SPME fiber, and the fiber was then injected into the gas chromatography (GC) injection port. This method combines the advantages of the exhaustive extraction property of MSPE and the solventless injection of SPME to improve the sensitivity of the analysis. In addition, the analytes were re-concentrated prior to injection into the gas chromatography (GC) inlet because the organic solvent is removed from the remaining extract in the MSPE technique. Injection of large amounts of analytes was made possible by using the introduced procedure. Fourteen polycyclic aromatic hydrocarbons (PAHs) with different volatility were used as model compounds to investigate the method performance for volatile and semi-volatile compounds. The introduced method resulted in higher enhancement factors (5097-59376), lower detection limits (0.29-3.3 pg mL(-1)), and higher sensitivity for the semi-volatile compounds compared with the conventional direct injection method. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Analytical approximations for wide and narrow resonances

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2005-01-01

    This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U 238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)

  7. Analytical approximations for wide and narrow resonances

    Energy Technology Data Exchange (ETDEWEB)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br

    2005-07-01

    This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U{sup 238} were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)

  8. The simple solutions concept: a useful approach to estimate deviation from ideality in solvent extraction

    International Nuclear Information System (INIS)

    Sorel, C.; Pacary, V.

    2010-01-01

    The solvent extraction systems devoted to uranium purification, from crude ore to spent fuel, involve concentrated solutions in which deviation from ideality cannot be neglected. The Simple Solution Concept, based on the behaviour of isopiestic solutions, has been applied to quantify the activity coefficients of metals and acids in the aqueous phase in equilibrium with the organic phase. This approach has been validated on various solvent extraction systems such as trialkylphosphates, malonamides or acidic extracting agents, both in batch experiments and in counter-current tests. Moreover, this concept has been successfully used to estimate the aqueous density, which is useful to quantify the variation of volume and to assess critical parameters such as the number density of nuclides. (author)

  9. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
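
    A minimal sketch of the ML estimator for photon-counting arrivals, assuming a hypothetical Gaussian pulse on a constant background. When the pulse lies fully inside the observation window, the rate integral does not depend on the delay, so the ML estimate simply maximises the summed log-intensity at the observed arrival times.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical received intensity (counts/ns): Gaussian pulse + background.
        def intensity(t, tau, signal=50.0, width=2.0, background=1.0):
            pulse = np.exp(-0.5 * ((t - tau) / width)**2) / (np.sqrt(2*np.pi) * width)
            return background + signal * pulse

        # Simulate Poisson photon arrivals on a fine time grid (binned model).
        T, dt = 100.0, 0.01
        t_grid = np.arange(0.0, T, dt)
        true_tau = 42.0
        counts = rng.poisson(intensity(t_grid, true_tau) * dt)
        arrivals = np.repeat(t_grid, counts)

        # ML estimate: maximise sum_i log(lambda(t_i; tau)) over a delay grid.
        taus = np.arange(5.0, 95.0, 0.05)
        loglik = [np.sum(np.log(intensity(arrivals, tau))) for tau in taus]
        print(f"true tau = {true_tau}, ML estimate = {taus[np.argmax(loglik)]:.2f}")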

  10. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    Science.gov (United States)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
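
    A minimal sketch of the smoothing operation itself: the periodogram of a synthetic signal averaged over a window of m contiguous frequencies, which reduces the random error of each estimate by roughly 1/sqrt(m) at the cost of frequency resolution (the paper's bias corrections and ogives are not reproduced here).

        import numpy as np

        rng = np.random.default_rng(7)
        n, dt = 4096, 0.05
        t = np.arange(n) * dt
        x = np.sin(2 * np.pi * 1.0 * t) + rng.normal(0, 1.0, n)   # tone + noise

        # Raw periodogram at the positive Fourier frequencies (DC dropped).
        freqs = np.fft.rfftfreq(n, dt)[1:]
        pgram = (np.abs(np.fft.rfft(x))**2 * dt / n)[1:]

        # Smoothed spectrum: average m contiguous frequencies per estimate.
        m = 16
        k = len(pgram) // m
        f_s = freqs[:k * m].reshape(k, m).mean(axis=1)
        p_s = pgram[:k * m].reshape(k, m).mean(axis=1)
        print("spectral peak near 1 Hz:", f_s[np.argmax(p_s)])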

  11. A simple daily soil-water balance model for estimating the spatial and temporal distribution of groundwater recharge in temperate humid areas

    Science.gov (United States)

    Dripps, W.R.; Bradbury, K.R.

    2007-01-01

    Quantifying the spatial and temporal distribution of natural groundwater recharge is usually a prerequisite for effective groundwater modeling and management. As flow models become increasingly utilized for management decisions, there is an increased need for simple, practical methods to delineate recharge zones and quantify recharge rates. Existing models for estimating recharge distributions are data intensive, require extensive parameterization, and demand a significant investment of time to set up. The Wisconsin Geological and Natural History Survey (WGNHS) has developed a simple daily soil-water balance (SWB) model that uses readily available soil, land cover, topographic, and climatic data in conjunction with a geographic information system (GIS) to estimate the temporal and spatial distribution of groundwater recharge at the watershed scale in temperate humid areas. To demonstrate the methodology and the applicability and performance of the model, two case studies are presented: one for the forested Trout Lake watershed of north central Wisconsin, USA, and the other for the urban-agricultural Pheasant Branch Creek watershed of south central Wisconsin, USA. Overall, the SWB model performs well and presents modelers and planners with a practical tool for providing recharge estimates for modeling and water resource planning purposes in humid areas. © Springer-Verlag 2007.
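
    The daily bookkeeping at the heart of such a model can be written as a one-line bucket update. The sketch below is our own simplification; the real SWB model derives runoff and evapotranspiration from more detailed submodels and evaluates the balance on a GIS grid, cell by cell:

        # Hypothetical daily soil-water-balance (bucket) step; units are mm of water.
        def swb_step(soil_moisture, precip, runoff, et, capacity):
            """Return (new_soil_moisture, recharge)."""
            interim = max(soil_moisture + precip - runoff - et, 0.0)
            recharge = max(interim - capacity, 0.0)    # surplus percolates downward
            return min(interim, capacity), recharge

        sm, total = 50.0, 0.0
        for p, ro, et in [(12.0, 1.5, 3.0), (0.0, 0.0, 4.0), (30.0, 4.0, 2.5)]:
            sm, r = swb_step(sm, p, ro, et, capacity=80.0)
            total += r
        print(f"recharge over 3 days: {total:.1f} mm")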

  12. Fast and Simple Analytical Method for Direct Determination of Total Chlorine Content in Polyglycerol by ICP-MS.

    Science.gov (United States)

    Jakóbik-Kolon, Agata; Milewski, Andrzej; Dydo, Piotr; Witczak, Magdalena; Bok-Badura, Joanna

    2018-02-23

    A fast and simple method for total chlorine determination in polyglycerols by low-resolution inductively coupled plasma mass spectrometry (ICP-MS), without the need for additional equipment or time-consuming sample decomposition, was evaluated. A linear calibration curve for the 35Cl isotope was observed over the concentration range 20-800 µg/L. The limits of detection and quantification were 15 µg/L and 44 µg/L, respectively. This corresponds to the ability to detect 3 µg/g and determine 9 µg/g of chlorine in polyglycerol under the studied conditions (0.5% matrix: polyglycerol samples diluted or dissolved with water to an overall concentration of 0.5%). Matrix effects as well as the effect of chlorine origin were evaluated. The presence of 0.5% (m/m) of matrix species similar to polyglycerol (polyethylene glycol, PEG) did not influence the chlorine determination for PEGs with average molecular weights (MW) up to 2000 Da. Good precision and accuracy of the chlorine content determination were achieved regardless of its origin (inorganic/organic). A high analyte recovery level and low relative standard deviation values were observed for real polyglycerol samples spiked with chloride. Additionally, a combustion ion chromatography system was used as a reference method. The results confirmed the high accuracy and precision of the tested method.

  13. Modulational estimate for the maximal Lyapunov exponent in Fermi-Pasta-Ulam chains

    Science.gov (United States)

    Dauxois, Thierry; Ruffo, Stefano; Torcini, Alessandro

    1997-12-01

    In the framework of the Fermi-Pasta-Ulam (FPU) model, we show a simple method for obtaining an accurate analytical estimate of the maximal Lyapunov exponent at high energy density. The method is based on computing the mean value of the modulational instability growth rates associated with the unstable modes. Moreover, we show that the strong stochasticity threshold found in the β-FPU system is closely related to a transition in tangent space, the Lyapunov eigenvector being more localized in space at high energy.

  14. Analytic model of the radiation-dominated decay of a compact toroid

    International Nuclear Information System (INIS)

    Auerbach, S.P.

    1981-01-01

    The coaxial-gun, compact-torus experiments at LLNL and Los Alamos are believed to be radiation-dominated, in the sense that most or all of the input energy is lost by impurity radiation. This paper presents a simple analytic model of the radiation-dominated decay of a compact torus, and demonstrates that several striking features of the experiment (finite lifetime, linear current decay, insensitivity of the lifetime to density or stored magnetic energy) may also be explained by the hypothesis that impurity radiation dominates the energy loss. The model incorporates the essential features of the more elaborate 1 1/2-D simulations of Shumaker et al., yet is simple enough to be solved exactly. Based on the analytic results, a simple criterion is given for the maximum tolerable impurity density

  15. A simple formula for estimating global solar radiation in central arid deserts of Iran

    International Nuclear Information System (INIS)

    Sabziparvar, Ali A.

    2008-01-01

    Over the last two decades, the use of simple radiation models to estimate daily solar radiation has attracted interest for arid and semi-arid deserts such as those in Iran, where solar observation sites are sparse. In Iran, most of the models used so far have been validated for a few specific locations based on short-term solar observations. In this work, three different radiation models (Sabbagh, Paltridge, and Daneshyar) have been revised to predict the climatology of monthly average daily solar radiation on horizontal surfaces in various cities in the central arid deserts of Iran. The modifications are made by including altitude, the monthly total number of dusty days, and the seasonal variation of the Sun-Earth distance. A new height-dependent formula is proposed based on MBE, MABE, MPE and RMSE statistical analysis. It is shown that the revised Sabbagh method can be a good estimator for predicting global solar radiation in arid and semi-arid deserts, with an average error of less than 2%, yielding more accurate predictions than previous studies. The required data for the suggested method are usually available at most meteorological sites. For locations where some of the input data are not reported, an alternative approach is presented. (author)
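
    The validation statistics named above have standard definitions; the sketch below uses those usual forms (an assumption, since the paper's exact normalizations are not reproduced here):

        import numpy as np

        def radiation_errors(estimated, measured):
            e, m = np.asarray(estimated, float), np.asarray(measured, float)
            d = e - m
            return {"MBE":  d.mean(),                  # mean bias error
                    "MABE": np.abs(d).mean(),          # mean absolute bias error
                    "MPE":  (100.0 * d / m).mean(),    # mean percentage error
                    "RMSE": np.sqrt((d ** 2).mean())}  # root-mean-square error

        # Illustrative daily radiation values in MJ/m^2.
        print(radiation_errors([21.0, 18.5, 24.2], [20.3, 19.1, 23.8]))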

  16. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty leads to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. A Simple Analytical Model for Predicting the Detectable Ion Current in Ion Mobility Spectrometry Using Corona Discharge Ionization Sources

    Science.gov (United States)

    Kirk, Ansgar Thomas; Kobelt, Tim; Spehlbrink, Hauke; Zimmermann, Stefan

    2018-05-01

    Corona discharge ionization sources are often used in ion mobility spectrometers (IMS) when a non-radioactive ion source with high ion currents is required. Typically, the corona discharge is followed by a reaction region where analyte ions are formed from the reactant ions. In this work, we present a simple yet sufficiently accurate model for predicting the ion current available at the end of this reaction region when operating at reduced pressure, as in High Kinetic Energy Ion Mobility Spectrometers (HiKE-IMS) or most IMS-MS instruments. It yields excellent qualitative agreement with measurement results and can even calculate the ion current to within an error of 15%. Two additional findings of the model are that the ion current at the end of the reaction region is independent of the ion current generated by the corona discharge, and that the ion current in HiKE-IMS grows quadratically when the length of the reaction region is scaled down.

  18. A test on analytic continuation of thermal imaginary-time data

    International Nuclear Information System (INIS)

    Burnier, Y.; Laine, M.; Mether, L.

    2011-01-01

    Some time ago, Cuniberti et al. have proposed a novel method for analytically continuing thermal imaginary-time correlators to real time, which requires no model input and should be applicable with finite-precision data as well. Given that these assertions go against common wisdom, we report on a naive test of the method with an idealized example. We do encounter two problems, which we spell out in detail; this implies that systematic errors are difficult to quantify. On a more positive note, the method is simple to implement and allows for an empirical recipe by which a reasonable qualitative estimate for some transport coefficient may be obtained, if statistical errors of an ultraviolet-subtracted imaginary-time measurement can be reduced to roughly below the per mille level. (orig.)

  19. Fall in hematocrit per 1000 parasites cleared from peripheral blood: a simple method for estimating drug-related fall in hematocrit after treatment of malaria infections.

    Science.gov (United States)

    Gbotosho, Grace Olusola; Okuboyejo, Titilope; Happi, Christian Tientcha; Sowunmi, Akintunde

    2014-01-01

    A simple method to estimate the antimalarial drug-related fall in hematocrit (FIH) after treatment of Plasmodium falciparum infections in the field is described. The method involves numeric estimation of the relative difference in hematocrit at baseline (pretreatment) and in the first 1 or 2 days after treatment began as the numerator, and the corresponding relative difference in parasitemia as the denominator, expressed per 1000 parasites cleared from peripheral blood. Using the method showed that FIH/1000 parasites cleared from peripheral blood (cpb) at 24 or 48 hours was similar in artemether-lumefantrine- and artesunate-amodiaquine-treated children (0.09; 95% confidence interval, 0.052-0.138 vs 0.10; 95% confidence interval, 0.069-0.139%; P = 0.75). FIH/1000 parasites cpb differed significantly in patients with higher parasitemias, while FIH/1000 parasites cpb was similar in anemic and nonanemic children. Estimation of FIH/1000 parasites cpb is simple, allows estimation of relatively conserved hematocrit during treatment, and can be used in both observational studies and clinical trials involving antimalarial drugs.
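
    One plausible reading of that arithmetic is sketched below (variable names and example values are ours, and the exact per-1000 normalization and units quoted in the paper should be checked against the original):

        # Relative fall in hematocrit over the first 1-2 days divided by the
        # corresponding relative fall in parasitemia (a dimensionless ratio).
        def fih_ratio(hct0, hct1, para0, para1):
            rel_hct_fall = (hct0 - hct1) / hct0
            rel_para_fall = (para0 - para1) / para0
            return rel_hct_fall / rel_para_fall

        # Hypothetical child: hematocrit 33 -> 31 (%), parasitemia 80000 -> 800 per uL.
        print(f"FIH ratio: {fih_ratio(33.0, 31.0, 80000.0, 800.0):.3f}")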

  20. Better Fire Emissions Estimates for Tricky Species Illustrated with a Simple Empirical Burn-to-Sample Plume Mode

    Science.gov (United States)

    Chatfield, R. B.; Andreae, M. O.; Lareau, N.

    2017-12-01

    Methodologies for estimating emission factors (EFs) and broader emission relationships (ERs) (e.g., for O3 production or aerosol absorption) have been difficult to make accurate and convincing; this is largely due to non-fire effects on both CO2 and fire-emitted trace species. We present a new view of these multiple effects as they affect downwind tracer samples observed by aircraft in NASA's ARCTAS and SEAC4RS airborne missions. This view leads to our method for estimating ERs and EFs that allows spatially detailed views focusing on individual samples, a Mixed Effects Emission Ratio Technique (MERET). We concentrate on presenting a generalized viewpoint: a simple idealized model of a fire plume entraining air from near-flames upward and then outward to a sampling point, a view based on observations of typical situations. The actual evolution of a plume can depend intricately on the full history of entrainment, the entrained concentration levels of CO2 and tracer species, and mixing. Observations suggest that our simple plume model, with just two (analyzed) values for entrained CO2 and one or potentially two values for environmental concentrations of each tracer, can serve surprisingly well for mixed-effects regression estimates. Such detail appears imperative for long-lived gases like CH4, CO, and N2O. In particular, it is difficult to distinguish fire-sourced emissions from air entrained near the flames, entrained in a way proportional to fire intensity. These entrained concentrations may vary significantly from those later in the plume's evolution. In addition, such detail also highlights the behavior of emissions that react on the path to sampling, e.g. fire-sourced or entrained urban NOx. Some caveats regarding poor sampling situations, and some warning signs, based on this empirical plume description and on MERET analyses, are demonstrated. Some information is available when multiple tracers are analyzed. MERET estimates for ERs of short and these long-lived species are

  1. Liquid-liquid critical point in a simple analytical model of water

    Science.gov (United States)

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. At a simple level, this model describes the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: one part is the gas region, the second a high-density liquid, and the third a low-density liquid.

  2. A simple, analytical model of collisionless magnetic reconnection in a pair plasma

    International Nuclear Information System (INIS)

    Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha; Klimas, Alex

    2009-01-01

    A set of conservation equations is utilized to derive balance equations in the reconnection diffusion region of a symmetric pair plasma. The reconnection electric field is assumed to have the function to maintain the current density in the diffusion region and to impart thermal energy to the plasma by means of quasiviscous dissipation. Using these assumptions it is possible to derive a simple set of equations for diffusion region parameters in dependence on inflow conditions and on plasma compressibility. These equations are solved by means of a simple, iterative procedure. The solutions show expected features such as dominance of enthalpy flux in the reconnection outflow, as well as combination of adiabatic and quasiviscous heating. Furthermore, the model predicts a maximum reconnection electric field of E* = 0.4, normalized to the parameters at the inflow edge of the diffusion region.

  3. A Simple, Analytical Model of Collisionless Magnetic Reconnection in a Pair Plasma

    Science.gov (United States)

    Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha; Klimas, Alex

    2011-01-01

    A set of conservation equations is utilized to derive balance equations in the reconnection diffusion region of a symmetric pair plasma. The reconnection electric field is assumed to have the function to maintain the current density in the diffusion region, and to impart thermal energy to the plasma by means of quasi-viscous dissipation. Using these assumptions it is possible to derive a simple set of equations for diffusion region parameters in dependence on inflow conditions and on plasma compressibility. These equations are solved by means of a simple, iterative procedure. The solutions show expected features such as dominance of enthalpy flux in the reconnection outflow, as well as combination of adiabatic and quasi-viscous heating. Furthermore, the model predicts a maximum reconnection electric field of E* = 0.4, normalized to the parameters at the inflow edge of the diffusion region.

  4. A simple electron plasma wave

    International Nuclear Information System (INIS)

    Brodin, G.; Stenflo, L.

    2017-01-01

    Considering a class of solutions where the density perturbations are functions of time, but not of space, we derive a new exact large amplitude wave solution for a cold uniform electron plasma. This result illustrates that most simple analytical solutions can appear even if the density perturbations are large. - Highlights: • The influence of large amplitude electromagnetic waves on electrostatic oscillations is found. • A generalized Mathieu equation is derived. • Anharmonic wave profiles are computed numerically.

  5. A simple electron plasma wave

    Energy Technology Data Exchange (ETDEWEB)

    Brodin, G., E-mail: gert.brodin@physics.umu.se [Department of Physics, Umeå University, SE-901 87 Umeå (Sweden); Stenflo, L. [Department of Physics, Linköping University, SE-581 83 Linköping (Sweden)

    2017-03-18

    Considering a class of solutions where the density perturbations are functions of time, but not of space, we derive a new exact large amplitude wave solution for a cold uniform electron plasma. This result illustrates that most simple analytical solutions can appear even if the density perturbations are large. - Highlights: • The influence of large amplitude electromagnetic waves on electrostatic oscillations is found. • A generalized Mathieu equation is derived. • Anharmonic wave profiles are computed numerically.

  6. MASCOTTE: analytical model of eddy current signals

    International Nuclear Information System (INIS)

    Delsarte, G.; Levy, R.

    1992-01-01

    Tube examination is a major application of the eddy current technique in the nuclear and petrochemical industries. Because such examination configurations are especially well suited to analytical modeling, a physical model has been developed for portable computers. It includes simple approximations made possible by the actual conditions of the examinations. The eddy current signal is described by an analytical formulation that takes into account the tube dimensions, the sensor design, the physical characteristics of the defect, and the examination parameters. Moreover, the model makes it possible to compare real signals with simulated signals

  7. Comparison between the SIMPLE and ENERGY mixing models

    International Nuclear Information System (INIS)

    Burns, K.J.; Todreas, N.E.

    1980-07-01

    The SIMPLE and ENERGY mixing models were compared in order to investigate the limitations of SIMPLE's analytically formulated mixing parameter, relative to the experimentally calibrated ENERGY mixing parameters. For interior subchannels, it was shown that when the SIMPLE and ENERGY parameters are reduced to a common form, there is good agreement between the two models for a typical fuel geometry. However, large discrepancies exist for typical blanket (lower P/D) geometries. Furthermore, the discrepancies between the mixing parameters result in significant differences in terms of the temperature profiles generated by the ENERGY code utilizing these mixing parameters as input. For edge subchannels, the assumptions made in the development of the SIMPLE model were extended to the rectangular edge subchannel geometry used in ENERGY. The resulting effective eddy diffusivities (used by the ENERGY code) associated with the SIMPLE model are again closest to those of the ENERGY model for the fuel assembly geometry. Finally, the SIMPLE model's neglect of a net swirl effect in the edge region is most limiting for assemblies exhibiting relatively large radial power skews

  8. Analytical solutions for one-dimensional advection–dispersion ...

    Indian Academy of Sciences (India)

    We present simple analytical solutions for the unsteady advection–dispersion equations describing the pollutant concentration C(x, t) in one dimension. The solutions are obtained by using the Laplace transformation technique. In this study we divide the river into two regions, x ≤ 0 and x ≥ 0, with the origin at x = 0.
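
    The record does not reproduce the solutions themselves; as a concrete stand-in, the classic Laplace-transform solution of the one-dimensional advection-dispersion equation with a constant-concentration inlet at x = 0 (the Ogata-Banks solution) can be evaluated directly (velocity and dispersion values below are illustrative):

        import numpy as np
        from scipy.special import erfc

        def ogata_banks(x, t, v, D, c0=1.0):
            # C(x, t)/c0 for the boundary/initial data C(0, t) = c0, C(x, 0) = 0, x >= 0.
            a = 2.0 * np.sqrt(D * t)
            return 0.5 * c0 * (erfc((x - v * t) / a)
                               + np.exp(v * x / D) * erfc((x + v * t) / a))

        x = np.array([10.0, 50.0, 100.0])                # m downstream
        print(ogata_banks(x, t=3600.0, v=0.02, D=0.5))   # t in s, v in m/s, D in m^2/s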

  9. Analytical Model for Estimating Terrestrial Cosmic Ray Fluxes Nearly Anytime and Anywhere in the World: Extension of PARMA/EXPACS.

    Directory of Open Access Journals (Sweden)

    Tatsuhiko Sato

    By extending our previously established model, here we present a new model called "PHITS-based Analytical Radiation Model in the Atmosphere (PARMA), version 3.0," which can instantaneously estimate terrestrial cosmic ray fluxes of neutrons, protons, ions with charge up to 28 (Ni), muons, electrons, positrons, and photons nearly anytime and anywhere in the Earth's atmosphere. The model comprises numerous analytical functions with parameters whose numerical values were fitted to reproduce the results of the extensive air shower (EAS) simulation performed by the Particle and Heavy Ion Transport code System (PHITS). The accuracy of the EAS simulation was well verified using various experimental data, while that of PARMA3.0 was confirmed by the high R2 values of the fit. The models to be used for estimating radiation doses due to cosmic ray exposure, cosmic ray induced ionization rates, and count rates of neutron monitors were validated by investigating their capability to reproduce those quantities measured under various conditions. PARMA3.0 is available freely and is easy to use, as implemented in the open-access software program EXcel-based Program for Calculating Atmospheric Cosmic ray Spectrum (EXPACS). Because of these features, the new version of PARMA/EXPACS can be an important tool in various research fields such as geosciences, cosmic ray physics, and radiation research.

  10. Computer controlled quality of analytical measurements

    International Nuclear Information System (INIS)

    Clark, J.P.; Huff, G.A.

    1979-01-01

    A PDP 11/35 computer system is used in evaluating analytical chemistry measurement quality control data at the Barnwell Nuclear Fuel Plant. This computerized measurement quality control system has several features which are not available in manual systems, such as real-time measurement control, computer-calculated bias corrections and standard deviation estimates, surveillance applications, evaluation of measurement system variables, records storage, immediate analyst recertification, and the elimination of routine analysis of known bench standards. The effectiveness of the Barnwell computer system has been demonstrated in gathering and assimilating the measurements of over 1100 quality control samples obtained during a recent plant demonstration run. These data were used to determine equations for predicting measurement reliability estimates (bias and precision); to evaluate the measurement system; and to provide direction for modification of chemistry methods. The analytical chemistry measurement quality control activities represented 10% of the total analytical chemistry effort

  11. Identification of clinical biomarkers for pre-analytical quality control of blood samples.

    Science.gov (United States)

    Kang, Hyun Ju; Jeon, Soon Young; Park, Jae-Sun; Yun, Ji Young; Kil, Han Na; Hong, Won Kyung; Lee, Mee-Hee; Kim, Jun-Woo; Jeon, Jae-Pil; Han, Bok Ghee

    2013-04-01

    Pre-analytical conditions are key factors in maintaining the high quality of biospecimens. They are necessary for accurate reproducibility of experiments in the field of biomarker discovery as well as for achieving optimal specificity of laboratory tests for clinical diagnosis. In research at the National Biobank of Korea, we evaluated the impact of pre-analytical conditions on the stability of biobanked blood samples by measuring biochemical analytes commonly used in clinical laboratory tests. We measured 10 routine laboratory analytes in serum and plasma samples from healthy donors (n = 50) with a chemistry autoanalyzer (Hitachi 7600-110). The analyte measurements were made at different time courses based on delay of blood fractionation, freezing delay of fractionated serum and plasma samples, and at different cycles (0, 1, 3, 6, 9) of freeze-thawing. Statistically significant changes from the reference sample mean were determined using repeated-measures ANOVA and the significant change limit (SCL). The serum levels of GGT and LDH changed significantly depending on both the time interval between blood collection and fractionation and the time interval between fractionation and freezing of serum and plasma samples. The glucose level was most sensitive only to the elapsed time between blood collection and centrifugation for blood fractionation. Based on these findings, a simple formula (glucose decrease of 1.387 mg/dL per hour) was derived to estimate the length of the time delay after blood collection. In addition, AST, BUN, GGT, and LDH showed sensitive responses to repeated freeze-thaw cycles of serum and plasma samples. These results suggest that GGT and LDH measurements can be used as quality control markers for certain pre-analytical conditions (e.g., delayed processing or repeated freeze-thawing) of blood samples which are either directly used in laboratory tests or stored for future research in the biobank.
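
    The quoted rate turns directly into a delay estimate; the sketch below is our reading of it (the reference value expected_glucose would in practice come from a matched measurement, so the numbers are purely illustrative):

        # Glucose falls by ~1.387 mg/dL per hour of pre-centrifugation delay.
        GLUCOSE_DECAY_MG_DL_PER_H = 1.387

        def estimated_delay_hours(expected_glucose, measured_glucose):
            drop = max(expected_glucose - measured_glucose, 0.0)
            return drop / GLUCOSE_DECAY_MG_DL_PER_H

        print(f"~{estimated_delay_hours(92.0, 87.8):.1f} h between collection and spin")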

  12. The Role of Nanoparticle Design in Determining Analytical Performance of Lateral Flow Immunoassays.

    Science.gov (United States)

    Zhan, Li; Guo, Shuang-Zhuang; Song, Fayi; Gong, Yan; Xu, Feng; Boulware, David R; McAlpine, Michael C; Chan, Warren C W; Bischof, John C

    2017-12-13

    Rapid, simple, and cost-effective diagnostics are needed to improve healthcare at the point of care (POC). However, the most widely used POC diagnostic, the lateral flow immunoassay (LFA), is ~1000 times less sensitive and has a smaller analytical range than laboratory tests, requiring a confirmatory test to establish truly negative results. Here, a rational and systematic strategy is used to design the LFA contrast label (i.e., gold nanoparticles) to improve the analytical sensitivity, analytical detection range, and antigen quantification of LFAs. Specifically, we discovered that the size (30, 60, or 100 nm) of the gold nanoparticles is a main contributor to the LFA analytical performance through both the degree of receptor interaction and the ultimate visual or thermal contrast signals. Using the optimal LFA design, we demonstrated the ability to improve the analytical sensitivity by 256-fold and expand the analytical detection range from 3 log10 to 6 log10 for diagnosing patients with inflammatory conditions by measuring C-reactive protein. This work demonstrates that, with appropriate design of the contrast label, a simple and commonly used diagnostic technology can compete with more expensive state-of-the-art laboratory tests.

  13. Estimation of Lifetime Duration for a Lever Pin of Runner Blade Operating Mechanism using a Graphic – analytic Method

    Directory of Open Access Journals (Sweden)

    Ana-Maria Budai

    2015-09-01

    This paper presents a graphic-analytic method that can be used to estimate the fatigue lifetime of the runner blade operating mechanism lever pin of a Kaplan turbine. The calculation algorithm is adapted from the one used by Fuji Electric for the strength calculations made to refurbish a Romanian hydropower plant equipped with a Kaplan turbine. The graphic part includes a 3D fatigue diagram for rotating bending stress designed by Fuji Electric specialists.

  14. Application of a simple parameter estimation method to predict effluent transport in the Savannah River

    International Nuclear Information System (INIS)

    Hensel, S.J.; Hayes, D.W.

    1993-01-01

    A simple parameter estimation method has been developed to determine the dispersion and velocity parameters associated with stream/river transport. The unsteady one dimensional Burgers' equation was chosen as the model equation, and the method has been applied to recent Savannah River dye tracer studies. The computed Savannah River transport coefficients compare favorably with documented values, and the time/concentration curves calculated from these coefficients compare well with the actual tracer data. The coefficients were used as a predictive capability and applied to Savannah River tritium concentration data obtained during the December 1991 accidental tritium discharge from the Savannah River Site. The peak tritium concentration at the intersection of Highway 301 and the Savannah River was underpredicted by only 5% using the coefficients computed from the dye data
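
    A minimal sketch of this kind of parameter estimation (using the linear advection-dispersion limit of Burgers' equation and synthetic data, not the Savannah River measurements): fit the velocity v and dispersion coefficient D to a time-concentration curve recorded at a fixed downstream station.

        import numpy as np
        from scipy.optimize import curve_fit

        x0 = 5000.0                                    # m downstream of the release

        def conc(t, v, D, M=1.0):
            # Instantaneous-release solution of the advection-dispersion equation.
            return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(x0 - v * t) ** 2 / (4 * D * t))

        t = np.linspace(1e3, 4e4, 200)
        rng = np.random.default_rng(3)
        observed = conc(t, v=0.4, D=30.0) + 1e-6 * rng.standard_normal(t.size)

        (v_est, D_est), _ = curve_fit(conc, t, observed, p0=[0.3, 10.0])
        print(f"v ~ {v_est:.3f} m/s, D ~ {D_est:.1f} m^2/s")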

  15. Simple waves in a two-component Bose-Einstein condensate

    Science.gov (United States)

    Ivanov, S. K.; Kamchatnov, A. M.

    2018-04-01

    We study the dynamics of so-called simple waves in a two-component Bose-Einstein condensate. The evolution of the condensate is described by Gross-Pitaevskii equations which can be reduced for these simple wave solutions to a system of ordinary differential equations which coincide with those derived by Ovsyannikov for the two-layer fluid dynamics. We solve the Ovsyannikov system for two typical situations of large and small difference between interspecies and intraspecies nonlinear interaction constants. Our analytic results are confirmed by numerical simulations.

  16. A simple model for low energy ion-solid interactions

    International Nuclear Information System (INIS)

    Mohajerzadeh, S.; Selvakumar, C.R.

    1997-01-01

    A simple analytical model for ion-solid interactions, suitable for low energy beam depositions, is reported. An approximation for the nuclear stopping power is used to obtain the analytic solution for the deposited energy in the solid. The ratio of the deposited energy in the bulk to the energy deposited in the surface yields a ceiling for the beam energy above which more defects are generated in the bulk resulting in defective films. The numerical evaluations agree with the existing results in the literature. copyright 1997 American Institute of Physics

  17. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    Science.gov (United States)

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used to define analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, which is the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results, with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.

  18. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    Science.gov (United States)

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
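
    A toy version of the core idea (our construction, far simpler than the graph-based algorithm described above): solving the steady-state condition for a kinetic parameter instead of for the state variable yields a closed-form, non-negative constraint.

        import sympy as sp

        # Production + linear decay: dx/dt = k1*u - k2*x.
        x, u, k1, k2 = sp.symbols("x u k1 k2", positive=True)
        steady_state = sp.Eq(k1 * u - k2 * x, 0)

        # Solving for the state x can create multiple or awkward roots in larger
        # models; solving for the kinetic parameter k2 stays linear and non-negative.
        print(sp.solve(steady_state, k2)[0])   # -> k1*u/x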

  19. Universal behaviour of interoccurrence times between losses in financial markets: An analytical description

    Science.gov (United States)

    Ludescher, J.; Tsallis, C.; Bunde, A.

    2011-09-01

    We consider 16 representative financial records (stocks, indices, commodities, and exchange rates) and study the distribution P_Q(r) of the interoccurrence times r between daily losses below negative thresholds -Q, for fixed mean interoccurrence time R_Q. We find that in all cases P_Q(r) follows the form P_Q(r) ~ 1/[1 + (q-1)βr]^(1/(q-1)), where β and q are universal constants that depend only on R_Q, but not on the specific asset. While β depends only slightly on R_Q, the q-value increases logarithmically with R_Q, q = 1 + q0 ln(R_Q/2), such that for R_Q → 2, P_Q(r) approaches a simple exponential, P_Q(r) ≅ 2^(-r). The fact that P_Q does not scale with R_Q is due to the multifractality of the financial markets. The analytic form of P_Q also allows one to estimate both the risk function and the Value-at-Risk, and thus to improve the estimation of the financial risk.
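
    The quoted form is easy to evaluate; in the sketch below, beta and q0 are placeholders (only their qualitative dependence on R_Q is stated above), and the distribution is left unnormalized:

        import numpy as np

        def p_q(r, R_Q, beta=0.5, q0=0.15):
            q = 1.0 + q0 * np.log(R_Q / 2.0)   # q grows logarithmically with R_Q
            return 1.0 / (1.0 + (q - 1.0) * beta * r) ** (1.0 / (q - 1.0))

        print(p_q(np.arange(1, 6), R_Q=10.0))
        # As R_Q -> 2, q -> 1 and the form tends to a simple exponential exp(-beta*r),
        # consistent with P_Q(r) ~ 2**(-r) for beta = ln 2.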

  20. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: a depressurization phase and a pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase, the important phenomena influencing the critical parameters are identified, and the scaling parameters governing these phenomena are generated by the present method. To validate the models used, the Marviken CFT and a 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but show at least qualitative agreement with the experimental results. To verify that the scaled-down model represents the important phenomena well, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and for the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR

  1. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand the characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data from a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result provides confirmation that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.

  2. Estimation of Ship Long-term Wave-induced Bending Moment using Closed-Form Expressions

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher; Mansour, A. E.

    2002-01-01

    A semi-analytical approach is used to derive frequency response functions and standard deviations for the wave-induced bending moment amidships for mono-hull ships. The results are given as closed-form expressions and the required input information for the procedure is restricted to the main ship dimensions ... a semi-empirical closed-form expression for the skewness. The effect of whipping is included by assuming that whipping and wave-induced responses are conditionally independent given Hs. The procedure is simple and can be used to make quick estimates of the design wave bending moment at the conceptual design phase...

  3. Performance Analysis of Blind Subspace-Based Signature Estimation Algorithms for DS-CDMA Systems with Unknown Correlated Noise

    Science.gov (United States)

    Zarifi, Keyvan; Gershman, Alex B.

    2006-12-01

    We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.

  4. Noninvasive and simple method for the estimation of myocardial metabolic rate of glucose by PET and 18F-FDG

    International Nuclear Information System (INIS)

    Takahashi, Norio; Tamaki, Nagara; Kawamoto, Masahide

    1994-01-01

    To estimate the regional myocardial metabolic rate of glucose (rMRGlu) with positron emission tomography (PET) and 2-[18F]fluoro-2-deoxy-D-glucose (FDG), a noninvasive, simple method has been investigated using dynamic PET imaging in 14 patients with ischemic heart disease. This imaging approach uses a blood time-activity curve (TAC) derived from a region of interest (ROI) drawn over dynamic PET images of the left ventricle (LV), left atrium (LA), and aorta. Patlak graphic analysis was used to estimate k1k3/(k2+k3) from serial plasma and myocardial radioactivities. The FDG count ratio between whole blood and plasma was relatively constant (0.91±0.02), both over time and among different patients. Although TACs derived from dynamic PET images gradually increased in the later phase due to spillover from the myocardium into the cavity, there was good agreement between the estimated K complex values obtained from arterial blood sampling and from dynamic PET imaging (LV r=0.95, LA r=0.96, aorta r=0.98). These results demonstrate the practical usefulness of a simplified and noninvasive method for the estimation of rMRGlu in humans by PET. (author)
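
    A sketch of the Patlak linearization used above (with synthetic curves generated from an irreversible-uptake model, not patient data): after equilibration, C_tissue/C_plasma plotted against the normalized integral of C_plasma becomes linear, with slope K = k1k3/(k2+k3).

        import numpy as np

        t = np.linspace(0.5, 60.0, 120)                 # minutes
        cp = 100.0 * np.exp(-0.08 * t) + 5.0            # synthetic plasma input
        K_true, V0 = 0.02, 0.4
        int_cp = np.cumsum(cp) * (t[1] - t[0])          # crude running integral
        ct = K_true * int_cp + V0 * cp                  # irreversible-uptake tissue curve

        x, y = int_cp / cp, ct / cp                     # Patlak coordinates
        late = t > 20.0                                 # fit only late-time points
        K_est, intercept = np.polyfit(x[late], y[late], 1)
        print(f"true K = {K_true}, Patlak slope = {K_est:.4f}")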

  5. Pre-analytical and analytical validations and clinical applications of a miniaturized, simple and cost-effective solid phase extraction combined with LC-MS/MS for the simultaneous determination of catecholamines and metanephrines in spot urine samples.

    Science.gov (United States)

    Li, Xiaoguang Sunny; Li, Shu; Kellermann, Gottfried

    2016-10-01

    It remains a challenge to simultaneously quantify catecholamines and metanephrines in a simple, sensitive and cost-effective manner due to pre-analytical and analytical constraints. Herein, we describe such a method consisting of a miniaturized sample preparation and selective LC-MS/MS detection by the use of second morning spot urine samples. Ten microliters of second morning urine sample were subjected to solid phase extraction on an Oasis HLB microplate upon complexation with phenylboronic acid. The analytes were well-resolved on a Luna PFP column followed by tandem mass spectrometric detection. Full validation and suitability of spot urine sampling and biological variation were investigated. The extraction recovery and matrix effect are 74.1-97.3% and 84.1-119.0%, respectively. The linearity range is 2.5-500, 0.5-500, 2.5-1250, 2.5-1250 and 0.5-1250ng/mL for norepinephrine, epinephrine, dopamine, normetanephrine and metanephrine, respectively. The intra- and inter-assay imprecisions are ≤9.4% for spiked quality control samples, and the respective recoveries are 97.2-112.5% and 95.9-104.0%. The Deming regression slope is 0.90-1.08, and the mean Bland-Altman percentage difference is from -3.29 to 11.85 between a published and proposed method (n=50). A correlation observed for the spot and 24h urine collections is significant (n=20, p<0.0001, r: 0.84-0.95, slope: 0.61-0.98). No statistical differences are found in day-to-day biological variability (n=20). Reference intervals are established for an apparently healthy population (n=88). The developed method, being practical, sensitive, reliable and cost-effective, is expected to set a new stage for routine testing, basic research and clinical applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Pencil graphite leads as simple amperometric sensors for microchip electrophoresis.

    Science.gov (United States)

    Natiele Tiago da Silva, Eiva; Marques Petroni, Jacqueline; Gabriel Lucca, Bruno; Souza Ferreira, Valdir

    2017-11-01

    In this work we demonstrate, for the first time, the use of inexpensive commercial pencil graphite leads as simple amperometric sensors for microchip electrophoresis. A PDMS support containing one channel was fabricated through soft lithography, and sanded pencil graphite leads were inserted into this channel for use as working electrodes. The electrochemical and morphological characterization of the sensor was carried out. The graphite electrode was coupled to PDMS microchips in an end-channel configuration, and electrophoretic experiments were performed using nitrite and ascorbate as probe analytes. The analytes were successfully separated and detected in well-defined peaks with satisfactory resolution using the proposed microfluidic platform. The repeatability of the pencil graphite electrode was satisfactory (RSD values of 1.6% for nitrite and 12.3% for ascorbate, regarding the peak currents), and its lifetime was estimated to be ca. 700 electrophoretic runs at a cost of ca. $0.05 per electrode. The limits of detection achieved with this system were 2.8 μM for nitrite and 5.7 μM for ascorbate. As a proof of principle, the pencil graphite electrode was employed for the analysis of real well water samples, and nitrite was successfully quantified at levels below its maximum contaminant level established in Brazil and the US. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. On the accuracy of the simple ocean data assimilation analysis for estimating heat Budgets of the Near-Surface Arabian Sea and Bay of Bengal

    Digital Repository Service at National Institute of Oceanography (India)

    Shenoi, S.S.C.; Shankar, D.; Shetye, S.R.

    The accuracy of data from the Simple Ocean Data Assimilation (SODA) model for estimating the heat budget of the upper ocean is tested in the Arabian Sea and the Bay of Bengal. SODA is able to reproduce the changes in heat content when...

  8. Development of analytical method used for the estimation of potassium amide in liquid ammonia at HWP (Tuticorin)

    International Nuclear Information System (INIS)

    Ramanathan, A.V.

    2007-01-01

    Potassium amide in liquid ammonia is used as a homogeneous catalyst in mono-thermal ammonia-hydrogen isotopic chemical exchange process employed for the manufacture of heavy water. Estimation of concentration of potassium amide in liquid ammonia is vital for checking whether it is sufficient for catalysis in isotopic exchange towers or for purification in purifiers in the Heavy Water Plants. This estimation was carried out earlier by the conventional method involving evaporation of ammonia, decomposition of potassium amide with water and titration of liberated ammonia with sulphuric acid. This method has been replaced by a newly developed method involving direct titration of potassium amide in ammonia with ammonium bromide. This new method is based on the principle that ammonium bromide and potassium amide act as acid and base respectively in the non-aqueous solvent medium, liquid ammonia. This method has not only proved to be an alternative method of estimation of potassium amide in liquid ammonia but also has been serving as a developed analytical method, because it is faster (with fewer steps), more accurate, safer (as it excludes the use of corrosive sulphuric acid needed for the conventional method) and more convenient (as it doesn't need specially designed apparatus and inert gas like dry nitrogen used in the conventional method). (author)

  9. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  10. Prognostic risk estimates of patients with multiple sclerosis and their physicians: comparison to an online analytical risk counseling tool.

    Directory of Open Access Journals (Sweden)

    Christoph Heesen

    BACKGROUND: Prognostic counseling in multiple sclerosis (MS) is difficult because of the high variability of disease progression. Simultaneously, patients and physicians are increasingly confronted with making treatment decisions at an early stage, which requires taking individual prognoses into account to strike a good balance between benefits and harms of treatments. It is therefore important to understand how patients and physicians estimate prognostic risk, and whether and how these estimates can be improved. An online analytical processing (OLAP) tool based on pooled data from placebo cohorts of clinical trials offers short-term prognostic estimates that can be used for individual risk counseling. OBJECTIVE: The aim of this study was to clarify if personalized prognostic information as presented by the OLAP tool is considered useful and meaningful by patients. Furthermore, we used the OLAP tool to evaluate patients' and physicians' risk estimates. Within this evaluation process we assessed short-time prognostic risk estimates of patients with MS (final n = 110) and their physicians (n = 6) and compared them with the estimates of OLAP. RESULTS: Patients rated the OLAP tool as understandable and acceptable, but to be only of moderate interest. It turned out that patients, physicians, and the OLAP tool ranked patients similarly regarding their risk of disease progression. Both patients' and physicians' estimates correlated most strongly with those disease covariates that the OLAP tool's estimates also correlated with most strongly. Exposure to the OLAP tool did not change patients' risk estimates. CONCLUSION: While the OLAP tool was rated understandable and acceptable, it was only of modest interest and did not change patients' prognostic estimates. The results suggest, however, that patients had some idea regarding their prognosis and which factors were most important in this regard. Future work with OLAP should assess long-term prognostic

  11. Prognostic risk estimates of patients with multiple sclerosis and their physicians: comparison to an online analytical risk counseling tool.

    Science.gov (United States)

    Heesen, Christoph; Gaissmaier, Wolfgang; Nguyen, Franziska; Stellmann, Jan-Patrick; Kasper, Jürgen; Köpke, Sascha; Lederer, Christian; Neuhaus, Anneke; Daumer, Martin

    2013-01-01

    Prognostic counseling in multiple sclerosis (MS) is difficult because of the high variability of disease progression. Simultaneously, patients and physicians are increasingly confronted with making treatment decisions at an early stage, which requires taking individual prognoses into account to strike a good balance between benefits and harms of treatments. It is therefore important to understand how patients and physicians estimate prognostic risk, and whether and how these estimates can be improved. An online analytical processing (OLAP) tool based on pooled data from placebo cohorts of clinical trials offers short-term prognostic estimates that can be used for individual risk counseling. The aim of this study was to clarify if personalized prognostic information as presented by the OLAP tool is considered useful and meaningful by patients. Furthermore, we used the OLAP tool to evaluate patients' and physicians' risk estimates. Within this evaluation process we assessed short-time prognostic risk estimates of patients with MS (final n = 110) and their physicians (n = 6) and compared them with the estimates of OLAP. Patients rated the OLAP tool as understandable and acceptable, but to be only of moderate interest. It turned out that patients, physicians, and the OLAP tool ranked patients similarly regarding their risk of disease progression. Both patients' and physicians' estimates correlated most strongly with those disease covariates that the OLAP tool's estimates also correlated with most strongly. Exposure to the OLAP tool did not change patients' risk estimates. While the OLAP tool was rated understandable and acceptable, it was only of modest interest and did not change patients' prognostic estimates. The results suggest, however, that patients had some idea regarding their prognosis and which factors were most important in this regard. Future work with OLAP should assess long-term prognostic estimates and clarify its usefulness for patients and physicians

  12. Characterization of dilation analytic integral kernels

    Energy Technology Data Exchange (ETDEWEB)

    Vici, A D [Rome Univ. (Italy). Ist. di Matematica

    1979-11-01

    The author characterises integral operators belonging to B(L²(R³)) which are dilatation analytic in the Cartesian product of two sectors S_a ⊂ C, as analytic functions from S_a × S_a into B(L²(Ω)), the space of bounded operators on square-integrable functions on the unit sphere Ω, which satisfy certain norm estimates uniformly on every subsector.

  13. Optimization of solar assisted heat pump systems via a simple analytic approach

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J W

    1980-01-01

    An analytic method for calculating the optimum operating temperature of the collector/storage subsystem in a solar assisted heat pump is presented. A tradeoff exists between rising heat pump coefficient of performance and falling collector efficiency as this temperature is increased, resulting in an optimum temperature whose value increases with increasing efficiency of the auxiliary energy source. Electric resistance is shown to be a poor backup to such systems. A number of options for thermally coupling the system to the ground are analyzed and compared.

  14. Median of patient results as a tool for assessment of analytical stability

    DEFF Research Database (Denmark)

    Jørgensen, Lars Mønster; Hansen, Steen Ingemann; Petersen, Per Hyltoft

    2015-01-01

    BACKGROUND: In spite of the well-established external quality assessment and proficiency testing surveys of analytical quality performance in laboratory medicine, a simple tool to monitor the long-term analytical stability as a supplement to the internal control procedures is often needed. METHODS: Patient data from daily internal control schemes were used for monthly appraisal of the analytical stability. This was accomplished by using the monthly medians of patient results to disclose deviations from analytical stability, and by comparing divergences with the quality specifications for allowable analytical bias based on biological variation. RESULTS: Seventy-five percent of the twenty analytes run on two COBAS INTEGRA 800 instruments performed in accordance with the optimum and with the desirable specifications for bias. DISCUSSION: Patient results applied in analytical quality performance

  15. A new way of obtaining analytic approximations of Chandrasekhar's H function

    International Nuclear Information System (INIS)

    Vukanic, J.; Arsenovic, D.; Davidovic, D.

    2007-01-01

    Applying the mean value theorem for definite integrals to the non-linear integral equation for Chandrasekhar's H function describing conservative isotropic scattering, we have derived a new, simple analytic approximation for it, with a maximal relative error below 2.5%. With this new function as a starting point, after a single iteration in the corresponding integral equation, we have obtained a new, highly accurate analytic approximation for the H function. As its maximal relative error is below 0.07%, it significantly surpasses the accuracy of other analytic approximations.
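
    The record does not reproduce the paper's closed-form approximations, but the fixed-point structure behind them is easy to illustrate. A sketch iterating a standard alternative form of the H-equation for conservative isotropic scattering (characteristic function ψ = 1/2), for which the tabulated value is H(1) ≈ 2.9078:

        import numpy as np

        # Gauss-Legendre quadrature on (0, 1)
        xg, wg = np.polynomial.legendre.leggauss(64)
        mu_q, w_q = 0.5 * (xg + 1.0), 0.5 * wg

        def inv_H(H, mu):
            # exact alternative form for conservative isotropic scattering
            # (psi = 1/2):  1/H(mu) = int_0^1 mu'/(mu + mu') * H(mu')/2 dmu'
            return np.array([np.sum(w_q * 0.5 * H * mu_q / (m + mu_q))
                             for m in np.atleast_1d(mu)])

        H = np.ones_like(mu_q)
        for _ in range(500):                    # damped fixed-point iteration;
            H = 0.5 * H + 0.5 / inv_H(H, mu_q)  # convergence is slow in the
                                                # conservative limit
        print(1.0 / inv_H(H, 1.0))              # tabulated: H(1) ~ 2.9078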

  16. An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.

    Science.gov (United States)

    Bradley, Stuart

    2015-11-20

    Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems have common features, which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models which can give insight into, and guidance on optimization of, new robotic systems. The present paper develops a simple model for estimating the success rate for hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter, and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space having a specified high success rate are described by simple equations, providing guidance on design. The role of a "cost function" is introduced which, when minimized, provides optimization of design, operating, and risk mitigation costs.
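
    A Monte Carlo sketch of the stated two-parameter geometry, assuming an idealized rectangular grid of firing points and circular targets; a closed-form coverage calculation would do equally well here:

        import numpy as np

        rng = np.random.default_rng(0)

        def success_rate(s_over_D, m_over_D, n=200_000):
            """Fraction of randomly placed targets (unit diameter) hit by a
            grid of firing points: across-track actuator spacing s, along-track
            platform advance m between firings (both in units of the target
            diameter D). A hit means some firing point falls within D/2 of the
            target centre; geometry idealized, for illustration only."""
            s, m = s_over_D, m_over_D
            # By periodicity, drop target centres uniformly in one grid cell.
            cx, cy = rng.uniform(0, s, n), rng.uniform(0, m, n)
            dx, dy = np.minimum(cx, s - cx), np.minimum(cy, m - cy)
            return np.mean(dx**2 + dy**2 <= 0.25)

        for s, m in [(0.5, 0.5), (1.0, 1.0), (1.5, 1.0)]:
            print(s, m, success_rate(s, m))   # (0.5, 0.5) -> 1.0: every target hit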

  17. SU-E-T-631: Preliminary Results for Analytical Investigation Into Effects of ArcCHECK Setup Errors

    International Nuclear Information System (INIS)

    Kar, S; Tien, C

    2015-01-01

    Purpose: As three-dimensional diode arrays increase in popularity for patient-specific quality assurance of intensity-modulated radiation therapy (IMRT), it is important to evaluate an array's susceptibility to setup errors. The ArcCHECK phantom is set up by manually aligning its outside marks with the linear accelerator's lasers and light-field. If done correctly, this aligns the ArcCHECK cylinder's central axis (CAX) with the linear accelerator's axis of rotation. However, this process is prone to error. This project has developed an analytical expression including a perturbation factor to quantify the effect of shifts. Methods: The ArcCHECK is set up by aligning its machine marks with either the sagittal room lasers or the light-field of the linear accelerator at gantry zero (IEC). ArcCHECK has sixty-six evenly-spaced SunPoint diodes aligned radially in a ring 14.4 cm from the CAX. The detector response function (DRF) was measured and combined with an inverse-square correction to develop an analytical expression for the output. The output was calculated using shifts of 0 (perfect alignment), ±1, ±2 and ±5 mm. The effect on a series of simple inputs was determined: unity, 1-D ramp, steps, and hat-function, representing a uniform field, wedge, evenly-spaced modulation, and single sharp modulation, respectively. Results: Geometric expressions were developed with the perturbation factor included to represent shifts. The DRF was modeled using sixth-degree polynomials with correlation coefficient 0.9997. The output was calculated for the simple inputs (unity, 1-D ramp, steps, and hat-function) with perturbation factors of 0, ±1, ±2 and ±5 mm. Discrepancies were observed, but large fluctuations were somewhat mitigated by aliasing arising from the discrete diode placement. Conclusion: An analytical expression with perturbation factors was developed to estimate the impact of setup errors on an ArcCHECK phantom. Presently, this has been applied to …

  18. Analytical models of optical refraction in the troposphere.

    Science.gov (United States)

    Nener, Brett D; Fowkes, Neville; Borredon, Laurent

    2003-05-01

    An extremely accurate but simple asymptotic description (with known error) is obtained for the path of a ray propagating over a curved Earth with radial variations in refractive index. The result is sufficiently simple that analytic solutions for the path can be obtained for linear and quadratic index profiles. As well as rendering the inverse problem trivial for these profiles, this formulation shows that images are uniformly magnified in the vertical direction when viewed through a quadratic refractive-index profile. Nonuniform vertical distortions occur for higher-order refractive-index profiles.
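
    A sketch of the underlying ray geometry, assuming a linear index profile n(h) = n0 + k·h and the standard flat-Earth trick of adding 1/R_E to the ray curvature; the gradient value below is the textbook "4/3 Earth" figure, not taken from the paper:

        import numpy as np

        # Paraxial ray over a curved Earth with a linear index profile
        # n(h) = n0 + k*h; relative to the spherical surface the ray curves
        # upward at 1/R_E + k/n0 (the "modified refractivity" picture).
        # k = -39e-9 /m is the standard-atmosphere gradient (assumed).
        R_E = 6.371e6
        n0, k = 1.000300, -39e-9

        dx, steps = 100.0, 2000        # 100 m steps over a 200 km path
        h, slope = 50.0, 0.0           # launch height [m] and initial slope
        for _ in range(steps):
            slope += (1.0 / R_E + k / n0) * dx
            h += slope * dx
        print(f"height above the surface after 200 km: {h:.0f} m")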

  19. Complexity is simple!

    Science.gov (United States)

    Cottrell, William; Montero, Miguel

    2018-02-01

    In this note we investigate the role of Lloyd's computational bound in holographic complexity. Our goal is to translate the assumptions behind Lloyd's proof into the bulk language. In particular, we discuss the distinction between orthogonalizing and `simple' gates and argue that these notions are useful for diagnosing holographic complexity. We show that large black holes constructed from series circuits necessarily employ simple gates, and thus do not satisfy Lloyd's assumptions. We also estimate the degree of parallel processing required in this case for elementary gates to orthogonalize. Finally, we show that for small black holes at fixed chemical potential, the orthogonalization condition is satisfied near the phase transition, supporting a possible argument for the Weak Gravity Conjecture first advocated in [1].
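
    For reference, the bound in question is commonly stated as

        \frac{dN_\perp}{dt} \;\le\; \frac{2E}{\pi\hbar},

    i.e. a system with average energy E above its ground state performs at most 2E/πħ orthogonalizing operations per unit time; "simple" gates, which do not map a state to an orthogonal one, are not counted by this bound.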

  20. Application of covariance clouds for estimating the anisotropy ellipsoid eigenvectors, with case study in uranium deposit

    International Nuclear Information System (INIS)

    Jamali Esfahlan, D.; Madani, H.; Tahmaseb Nazemi, M. T.; Mahdavi, F.; Ghaderi, M. R.; Najafi, M.

    2010-01-01

    Kriging and other (including nonlinear) geostatistical methods are accepted for resource and reserve estimation because they minimize the estimation variance, and they yield accurate results within acceptable confidence ranges provided that the parameters required for the estimation are determined accurately. If these parameters are not sufficiently accurate, 3-D geostatistical estimations are no longer reliable, and all the quantitative parameters of the mineral deposit (e.g. grade-tonnage variations) will be misinterpreted. One of the most significant parameters for 3-D geostatistical estimation is the anisotropy ellipsoid, which determines the samples in different directions required for accomplishing the estimation. The aim of this paper is to illustrate a simpler and time-saving analytical method that can use geophysical or geochemical analysis data from the core length of boreholes for modeling the anisotropy ellipsoid. With this method, which is based on the distribution of covariance clouds in the 3-D sampling space of a deposit, the magnitudes, ratios, azimuths and plunges of the major, semi-major and minor axes that determine the ore-grade continuity within the deposit are obtained, and finally the anisotropy ellipsoid of the deposit is constructed. A case study of a uranium deposit is also discussed analytically to illustrate the application of this method.

  1. A simple analytical model for electronic conductance in a one dimensional atomic chain across a defect

    International Nuclear Information System (INIS)

    Khater, Antoine; Szczesniak, Dominik

    2011-01-01

    An analytical model is presented for the electronic conductance in a one-dimensional atomic chain across an isolated defect. The model system consists of two semi-infinite lead atomic chains with the defect atom making the junction between the two leads. The calculation is based on a linear combination of atomic orbitals in the tight-binding approximation, with a single s-like atomic orbital chosen in the present case. The matching method is used to derive analytical expressions for the scattering cross sections for the reflection and transmission processes across the defect, in the Landauer-Büttiker representation. These analytical results reproduce the known limits for an infinite atomic chain with no defects. The model can be applied numerically to one-dimensional atomic systems supported by appropriate templates. It is also of interest since it would help establish efficient procedures for ensemble averages over a field of impurity configurations in real physical systems.
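
    For orientation, a single substitutional defect with on-site energy ε_d between two identical single-orbital leads has a well-known closed form for the transmission, quoted here from standard Green's-function theory; the paper's matching-method expressions for a defect atom joining the two leads may differ in detail:

        import numpy as np

        def transmission(E, eps_d, t=1.0):
            """Transmission through a single substitutional defect (on-site
            energy eps_d) coupled to two identical semi-infinite 1D
            tight-binding leads with hopping t; the band runs over |E| < 2t.
            T = Gamma^2 / (eps_d^2 + Gamma^2), Gamma = sqrt(4 t^2 - E^2)."""
            gamma2 = np.clip(4.0 * t**2 - E**2, 0.0, None)
            return gamma2 / (eps_d**2 + gamma2)

        E = np.linspace(-1.9, 1.9, 5)
        print(transmission(E, eps_d=0.0))   # perfect chain: T = 1 in the band
        print(transmission(E, eps_d=0.5))   # defect suppresses T near band edges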

  2. Analytic formulation of neutrino oscillation probability in constant matter

    International Nuclear Information System (INIS)

    Kimura, Keiichi; Takamura, Akira; Yokomakura, Hidekazu

    2003-01-01

    In this paper, based on earlier work (Kimura K et al 2002 Phys. Lett. B 537 86), we present a simple derivation of an exact and analytic formula for the neutrino oscillation probability. We consider three-flavour neutrino oscillations in matter with constant density.
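
    The exact three-flavour formula is lengthy; as a hedged illustration of constant-matter oscillations, a two-flavour sketch with the standard matter-modified mixing angle and splitting (all parameter values are typical, not from the paper):

        import numpy as np

        def p_mue_matter(E_gev, L_km, rho=2.8, Ye=0.5,
                         dm2=2.5e-3, sin2_2th=0.09):
            """Two-flavour nu_mu -> nu_e probability in constant-density
            matter (two-flavour limit for illustration; the paper derives the
            full three-flavour formula). The matter term A = 2 sqrt(2) G_F
            N_e E ~ 1.52e-4 eV^2 * Ye * rho[g/cm3] * E[GeV] uses the usual
            approximate conversion."""
            A = 1.52e-4 * Ye * rho * E_gev
            cos2th = np.sqrt(1.0 - sin2_2th)          # first-octant assumption
            denom = (cos2th - A / dm2) ** 2 + sin2_2th
            sin2_2thm = sin2_2th / denom              # matter-modified mixing
            dm2_m = dm2 * np.sqrt(denom)              # matter-modified splitting
            return sin2_2thm * np.sin(1.267 * dm2_m * L_km / E_gev) ** 2

        print(p_mue_matter(1.0, 1300.0))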

  3. Estimation of cloud optical thickness by processing SEVIRI images and implementing a semi analytical cloud property retrieval algorithm

    Science.gov (United States)

    Pandey, P.; De Ridder, K.; van Lipzig, N.

    2009-04-01

    Clouds play a very important role in the Earth's climate system, as they form an intermediate layer between the Sun and the Earth. Satellite remote sensing systems are the only means to provide information about clouds on large scales. The geostationary satellite Meteosat Second Generation (MSG) has onboard an imaging radiometer, the Spinning Enhanced Visible and Infrared Imager (SEVIRI). SEVIRI is a 12-channel imager, with 11 channels observing the Earth's full disk with a temporal resolution of 15 min and a spatial resolution of 3 km at nadir, and a high-resolution visible (HRV) channel. The visible channels (0.6 µm and 0.81 µm) and the near-infrared channel (1.6 µm) of SEVIRI are used to retrieve the cloud optical thickness (COT). The study domain is over Europe, covering the region between 35°N-70°N and 10°W-30°E. SEVIRI level 1.5 images over this domain are acquired from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) archive. The processing of this imagery involves a number of steps before estimating the COT. The pre-processing steps are as follows. First, the digital count number is acquired from the imagery. Image geo-coding is performed in order to relate the pixel positions to the corresponding longitude and latitude. The solar zenith angle is determined as a function of latitude and time. The radiometric conversion is done using the values of the offsets and slopes of each band. The radiance values obtained are then used to calculate the reflectance for channels in the visible spectrum using the solar zenith angle. The COT is then estimated from the observed radiances. A semi-analytical algorithm [Kokhanovsky et al., 2003] is implemented for the estimation of cloud optical thickness from the visible-spectrum intensity of light reflected from clouds. The asymptotic solution of the radiative transfer equation, for clouds with large optical thickness, is the basis of …
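
    The count-to-reflectance chain described above fits in a few lines; slope, offset and the band solar irradiance are per-channel calibration constants from the level 1.5 header, and the values in the example call are placeholders:

        import numpy as np

        def counts_to_reflectance(count, slope, offset, esun, sza_deg, d_au=1.0):
            """Level 1.5 pre-processing sketch: digital count -> radiance
            (radiance = offset + slope * count) -> bidirectional reflectance
            factor. slope/offset and the band solar irradiance esun are
            per-channel calibration constants; the example values below are
            placeholders, not real SEVIRI coefficients."""
            radiance = offset + slope * count
            mu0 = np.cos(np.radians(sza_deg))
            return np.pi * radiance * d_au**2 / (esun * mu0)

        print(counts_to_reflectance(count=512, slope=0.02, offset=-1.0,
                                    esun=65.2, sza_deg=40.0))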

  4. A recommended procedure for estimating the cosmic-ray spectral parameter of a simple power law

    CERN Document Server

    Howell, L W

    2002-01-01

    A simple power-law model with a single spectral index α₁ is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10¹³ eV. Two procedures for estimating α₁, the method of moments and maximum likelihood (ML), are developed and their statistical performance compared. The ML procedure is shown to be the superior approach and is then generalized for application to real cosmic-ray data sets. Several other important results, such as the relationship between collecting power and detector energy resolution and the inclusion of a non-Gaussian detector response function, are presented. These results have many practical benefits in the design phase of a cosmic-ray detector, as they permit instrument developers to make important trade studies in design parameters as a function of one of the science objectives.
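
    The ML estimator for a pure power law has a closed form: for n events E_i ≥ E_min drawn from f(E) ∝ E^(−α₁), the estimate is α̂₁ = 1 + n / Σ ln(E_i/E_min), with approximate 1σ error (α̂₁ − 1)/√n. A sketch (detector response effects, which the report treats in detail, are ignored here):

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_power_law(alpha, e_min, n):
            """Inverse-CDF sampling of f(E) ~ E^-alpha, E >= e_min, alpha > 1."""
            u = rng.uniform(size=n)
            return e_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

        def ml_index(E, e_min):
            """Closed-form MLE of the spectral index and its ~1 sigma error."""
            n = len(E)
            alpha = 1.0 + n / np.sum(np.log(E / e_min))
            return alpha, (alpha - 1.0) / np.sqrt(n)

        E = sample_power_law(2.7, 1.0, 50_000)
        print(ml_index(E, 1.0))   # ~(2.70, 0.008)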

  5. Simple Synthesis Method for Alumina Nanoparticle

    Directory of Open Access Journals (Sweden)

    Daniel Damian

    2017-11-01

    Full Text Available Globally, the steady increase of the human population, the expansion of urban areas, and excessive industrialization, including in agriculture, have caused not only the depletion of non-renewable resources but also a rapid deterioration of the environment, with negative impacts on water quality, soil productivity and, of course, quality of life in general. This paper aims to prepare size-controlled nanoparticles of aluminum oxide using a simple synthesis method. The morphology and dimensions of the nanomaterial were investigated using modern analytical techniques: SEM/EDAX and XRD.

  6. A simple estimation of the renal plasma flow

    International Nuclear Information System (INIS)

    Shinpo, Takako

    1987-01-01

    The renal plasma flow is conventionally determined from the ratio excreted into urine using a ¹³¹I-Hippuran renogram. In this report, we proposed the renal clearance, the product of the disappearance rate coefficient and the maximum counts of the bladder, as a simple quantitative measure of renal plasma flow. The disappearance rate coefficient was calculated by fitting an exponential function to the initial slope of the disappearance curve of the heart. The renal clearance was compared with the renal plasma flow calculated by the conventional method. The results gave a high correlation coefficient of r = 0.91. The renal clearance can be calculated easily and offers useful renogram information. (author)

  7. Nonlinear ordinary differential equations analytical approximation and numerical methods

    CERN Document Server

    Hermann, Martin

    2016-01-01

    The book discusses the solutions to nonlinear ordinary differential equations (ODEs) using analytical and numerical approximation methods. Recently, analytical approximation methods have been largely used in solving linear and nonlinear lower-order ODEs. It also discusses using these methods to solve some strong nonlinear ODEs. There are two chapters devoted to solving nonlinear ODEs using numerical methods, as in practice high-dimensional systems of nonlinear ODEs that cannot be solved by analytical approximate methods are common. Moreover, it studies analytical and numerical techniques for the treatment of parameter-depending ODEs. The book explains various methods for solving nonlinear-oscillator and structural-system problems, including the energy balance method, harmonic balance method, amplitude frequency formulation, variational iteration method, homotopy perturbation method, iteration perturbation method, homotopy analysis method, simple and multiple shooting method, and the nonlinear stabilized march...

  8. Sample diagnosis using indicator elements and non-analyte signals for inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Antler, Margaret; Ying Hai; Burns, David H.; Salin, Eric D.

    2003-01-01

    A sample diagnosis procedure that uses both non-analyte and analyte signals to estimate matrix effects in inductively coupled plasma-mass spectrometry is presented. Non-analyte signals are those of background species in the plasma (e.g. N⁺, ArO⁺), and changes in these signals can indicate changes in plasma conditions. Matrix effects of Al, Ba, Cs, K and Na on 19 non-analyte signals and 15 element signals were monitored. Multiple linear regression was used to build the prediction models, using a genetic algorithm for objective feature selection. Analyte elemental signals and non-analyte signals were compared for diagnosing matrix effects, and both were found to be suitable for estimating matrix effects. Individual analyte matrix effect estimation was compared with the overall matrix effect prediction, and models used to diagnose overall matrix effects were more accurate than individual analyte models. In previous work [Spectrochim. Acta Part B 57 (2002) 277], we tested models for analytical decision making. The current models were tested in the same way, and were able to successfully diagnose matrix effects with at least an 80% success rate.

  9. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry

    DEFF Research Database (Denmark)

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two...

  10. The analytic renormalization group

    Directory of Open Access Journals (Sweden)

    Frank Ferrari

    2016-08-01

    Full Text Available Finite temperature Euclidean two-point functions in quantum mechanics or quantum field theory are characterized by a discrete set of Fourier coefficients G_k, k ∈ Z, associated with the Matsubara frequencies ν_k = 2πk/β. We show that analyticity implies that the coefficients G_k must satisfy an infinite number of model-independent linear equations that we write down explicitly. In particular, we construct “Analytic Renormalization Group” linear maps A_μ which, for any choice of cut-off μ, allow one to express the low energy Fourier coefficients for |ν_k| < μ (with the possible exception of the zero mode G_0), together with the real-time correlators and spectral functions, in terms of the high energy Fourier coefficients for |ν_k| ≥ μ. Operating a simple numerical algorithm, we show that the exact universal linear constraints on G_k can be used to systematically improve any random approximate data set obtained, for example, from Monte Carlo simulations. Our results are illustrated on several explicit examples.

  11. Analytic confidence level calculations using the likelihood ratio and Fourier transform

    International Nuclear Information System (INIS)

    Hu Hongbo; Nielsen, J.

    2000-01-01

    The interpretation of new particle search results involves a confidence level calculation on either the discovery hypothesis or the background-only ('null') hypothesis. A typical approach uses toy Monte Carlo experiments to build an expected experiment estimator distribution against which an observed experiment's estimator may be compared. In this note, a new approach is presented which calculates analytically the experiment estimator distribution via a Fourier transform, using the likelihood ratio as an ordering estimator. The analytic approach enjoys an enormous speed advantage over the toy Monte Carlo method, making it possible to quickly and precisely calculate confidence level results
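
    The core of the Fourier approach: when the estimator is a sum of independent per-event contributions and the event count is Poisson distributed, the estimator's characteristic function is exp(μ(Φ₁ − 1)), so the full distribution follows from one forward and one inverse FFT instead of many toy experiments. A sketch with an arbitrary stand-in for the single-event density:

        import numpy as np

        # Grid for the estimator (e.g. a summed per-event log-likelihood
        # ratio); rho1 below is an arbitrary stand-in for the single-event
        # estimator density.
        n, dx = 4096, 0.01
        x = np.arange(n) * dx
        rho1 = np.exp(-0.5 * (x - 2.0) ** 2 / 0.3**2)
        rho1 /= rho1.sum() * dx

        mu = 5.0                            # expected number of events
        phi1 = np.fft.rfft(rho1) * dx       # single-event characteristic fn
        phi = np.exp(mu * (phi1 - 1.0))     # compound Poisson: exp(mu*(phi1 - 1))
        rho = np.fft.irfft(phi, n) / dx     # full estimator density; the
                                            # exp(-mu) zero-event spike lands
                                            # in the first bin

        x_obs = 12.0                        # "observed" estimator value
        print(rho[x >= x_obs].sum() * dx)   # tail probability (confidence level)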

  12. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Science.gov (United States)

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
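
    One plausible reading of the estimator (our reconstruction; consult the paper for the exact definition of the coefficient): a sphere occupies 2/3 of its circumscribed cylinder, so biovolume ≈ (2/3) × projected area × width, scaled by the unellipticity coefficient.

        import numpy as np

        def biovolume(area, width, unellipticity=1.0):
            """Volume of a roughly convex particle from its 2D projection:
            V ~ (2/3) * projected_area * width * unellipticity, motivated by
            V_sphere = (2/3) * V_circumscribed_cylinder. The definition of
            the coefficient follows our reading of the abstract."""
            return (2.0 / 3.0) * area * width * unellipticity

        r = 5.0   # sanity check against a sphere of radius 5 um
        print(biovolume(np.pi * r**2, 2 * r), 4.0 / 3.0 * np.pi * r**3)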

  14. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields a statistically valid sample size for an anticipated sensitivity or specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision and a 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by just moving the ruler and can be used repeatedly without redoing the calculations; it can also be applied in reverse. This nomogram is not applicable to hypothesis-testing designs and applies only when both the diagnostic test and the gold standard yield dichotomous results.
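
    The formula behind such nomograms is the standard binomial-precision sample size inflated by prevalence (for sensitivity) or its complement (for specificity), often attributed to Buderer; a sketch (the nomogram's exact formula may differ in detail):

        from math import ceil

        def n_sensitivity(se, precision, prevalence, z=1.96):
            """Total subjects needed to estimate sensitivity within
            +/- precision at ~95% confidence (Buderer-type formula)."""
            return ceil(z**2 * se * (1 - se) / (precision**2 * prevalence))

        def n_specificity(sp, precision, prevalence, z=1.96):
            return ceil(z**2 * sp * (1 - sp) / (precision**2 * (1 - prevalence)))

        # anticipated sensitivity 0.90, +/-0.05 precision, 20% prevalence:
        print(n_sensitivity(0.90, 0.05, 0.20))   # -> 692

    The 0.70 and 1.75 multipliers quoted in the abstract are simply (1.645/1.96)² and (2.576/1.96)², i.e. the effect of swapping the z value.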

  15. Simple Parametric Model for Airfoil Shape Description

    Science.gov (United States)

    Ziemkiewicz, David

    2017-12-01

    We show a simple, analytic equation describing a class of two-dimensional shapes well suited for representation of aircraft airfoil profiles. Our goal was to create a description characterized by a small number of parameters with easily understandable meaning, providing a tool to alter the shape with optimization procedures as well as manual tweaks by the designer. The generated shapes are well suited for numerical analysis with 2D flow solving software such as XFOIL.
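
    The record does not reproduce the author's equation, so as a generic illustration of a low-parameter analytic airfoil description here is the classic NACA 4-digit thickness distribution (explicitly not the paper's formula):

        import numpy as np

        def naca4_half_thickness(x, t=0.12):
            """Classic NACA 4-digit thickness distribution (chord x in [0, 1]);
            a familiar example of an analytic airfoil description, not the
            paper's equation."""
            return 5 * t * (0.2969 * np.sqrt(x) - 0.1260 * x
                            - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)

        x = np.linspace(0.0, 1.0, 101)
        y = naca4_half_thickness(x)       # upper surface; lower surface is -y
        print(y.max())                    # ~0.06 = t/2, reached near 30% chord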

  16. A simple analytical model for dynamics of time-varying target leverage ratios

    Science.gov (United States)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
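
    The record does not give the model's exact SDE, so as a hedged sketch of the stated ingredients (mean reversion toward a time-varying target plus diffusive noise), an Euler-Maruyama simulation of dL = κ(θ(t) − L) dt + σ dW with illustrative parameters:

        import numpy as np

        rng = np.random.default_rng(42)

        # Illustrative mean-reverting dynamics (not the paper's exact SDE):
        # dL = kappa * (theta(t) - L) dt + sigma dW
        kappa, sigma = 0.5, 0.05                   # adjustment pace, volatility
        theta = lambda t: 0.4 + 0.1 * np.exp(-t)   # time-varying target ratio
        T, n = 10.0, 1000
        dt = T / n

        L = np.empty(n + 1)
        L[0] = 0.6                                 # current leverage ratio
        for i in range(n):
            dW = rng.normal(0.0, np.sqrt(dt))
            L[i + 1] = L[i] + kappa * (theta(i * dt) - L[i]) * dt + sigma * dW
        print(L[-1])                               # ends near the long-run target 0.4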

  17. On the plurality of (methodological) worlds: Estimating the analytic flexibility of fMRI experiments.

    Directory of Open Access Journals (Sweden)

    Joshua Carp

    2012-10-01

    Full Text Available How likely are published findings in the functional neuroimaging literature to be false? According to a recent mathematical model, the potential for false positives increases with the flexibility of analysis methods. Functional MRI (fMRI) experiments can be analyzed using a large number of commonly used tools, with little consensus on how, when, or whether to apply each one. This situation may lead to substantial variability in analysis outcomes. Thus, the present study sought to estimate the flexibility of neuroimaging analysis by submitting a single event-related fMRI experiment to a large number of unique analysis procedures. Ten analysis steps for which multiple strategies appear in the literature were identified, and two to four strategies were enumerated for each step. Considering all possible combinations of these strategies yielded 6,912 unique analysis pipelines. Activation maps from each pipeline were corrected for multiple comparisons using five thresholding approaches, yielding 34,560 significance maps. While some outcomes were relatively consistent across pipelines, others showed substantial methods-related variability in activation strength, location, and extent. Some analysis decisions contributed to this variability more than others, and different decisions were associated with distinct patterns of variability across the brain. Qualitative outcomes also varied with analysis parameters: many contrasts yielded significant activation under some pipelines but not others. Altogether, these results reveal considerable flexibility in the analysis of fMRI experiments. This observation, when combined with mathematical simulations linking analytic flexibility with elevated false positive rates, suggests that false positive results may be more prevalent than expected in the literature. This risk of inflated false positive rates may be mitigated by constraining the flexibility of analytic choices or by abstaining from selective analysis …
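
    The combinatorics is plain multiplication. One per-step breakdown consistent with the reported totals (the paper's actual breakdown may differ):

        from math import prod

        # Ten steps with two to four strategies each; this particular
        # breakdown (six binary, three ternary, one quaternary step) is just
        # one combination that reproduces the totals in the abstract.
        strategies = [2, 2, 2, 2, 2, 2, 3, 3, 3, 4]
        pipelines = prod(strategies)      # 6912 unique analysis pipelines
        print(pipelines, pipelines * 5)   # x5 thresholdings -> 34560 maps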

  18. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  19. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

    An analytical optimization method for a preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T₀ and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM actions and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for a periodic PM policy performs in response to changes in the distribution of the failure data and the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM are also analytically determined. • The model is validated by several numerical examples
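
    The MLE step for complete failure-time data reduces to a one-dimensional root-find for the Weibull shape, after which the scale is closed-form. A sketch of the standard two-parameter case; the paper's imperfect-PM likelihood adds structure not reproduced here:

        import numpy as np

        def weibull_mle(t, lo=0.01, hi=50.0, iters=100):
            """MLE of Weibull shape (beta) and scale (eta) for complete
            failure-time data; solves the standard profile equation
            g(beta) = sum(t^b ln t)/sum(t^b) - 1/b - mean(ln t) = 0
            by bisection (g is increasing in b), then eta in closed form."""
            t = np.asarray(t, dtype=float)
            logt = np.log(t)

            def g(b):
                tb = t**b
                return np.sum(tb * logt) / np.sum(tb) - 1.0 / b - logt.mean()

            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
            beta = 0.5 * (lo + hi)
            eta = np.mean(t**beta) ** (1.0 / beta)
            return beta, eta

        rng = np.random.default_rng(7)
        print(weibull_mle(rng.weibull(2.0, 500) * 100.0))   # ~(2.0, 100.0)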

  20. Model dependencies of risk aversion and working interest estimates

    International Nuclear Information System (INIS)

    Lerche, I.

    1996-01-01

    Working interest, W, and risk-adjusted value, RAV, are evaluated using both Cozzolino's formula, which assumes an exponential dependence of risk aversion, and a hyperbolic tangent dependence. In addition, a general method is given for constructing an RAV formula for any functional choice of the risk-aversion dependence. Two examples illustrate how the model dependencies influence choices of working interest and risk-adjusted value depending on whether the expected value of the project is positive or negative. In general, the Cozzolino formula provides a more conservative position for risk than does the hyperbolic tangent formula, reflecting the difference in corporate attitudes to risk aversion. The commonly used Cozzolino formula is shown to have simple exact arithmetic expressions for the maximum working interest and maximum RAV; the hyperbolic tangent formula has approximate analytic expressions. Both formulae also yield approximate analytical expressions for the working interest yielding a risk-neutral RAV of zero. These arithmetic results are useful for making quick estimates of working interest ranges and risk-adjusted values. (Author)
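
    Cozzolino's exponential-utility formula, in its commonly quoted form, together with a scan for the optimum working interest; all project numbers are illustrative:

        import numpy as np

        def rav_cozzolino(W, p, V, C, r):
            """Cozzolino's exponential-utility risk-adjusted value at working
            interest W: success probability p, success value V, failure cost
            C (> 0), risk-aversion coefficient r. Commonly quoted form:
            RAV = -(1/r) ln[ p e^(-rWV) + (1-p) e^(rWC) ]."""
            return -(1.0 / r) * np.log(p * np.exp(-r * W * V)
                                       + (1.0 - p) * np.exp(r * W * C))

        W = np.linspace(0.0, 1.0, 1001)
        rav = rav_cozzolino(W, p=0.3, V=100.0, C=20.0, r=0.05)
        print(W[np.argmax(rav)], rav.max())   # optimum working interest and RAV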

  1. A simple method for estimation of phosphorus in urine

    International Nuclear Information System (INIS)

    Chaudhary, Seema; Gondane, Sonali; Sawant, Pramilla D.; Rao, D.D.

    2016-01-01

    Following internal contamination with ³²P, it is preferentially eliminated from the body in urine. It is estimated by in-situ precipitation of ammonium molybdophosphate (AMP) in urine followed by gross beta counting. The amount of AMP formed in situ depends on the amount of stable phosphorus (P) present in the urine; hence, it is essential to generate information regarding the urinary excretion of stable P. If the amount of P excreted is significant, the amount of AMP formed would correspondingly increase, leading to absorption of some of the β particles. The present study was taken up to estimate the daily urinary excretion of P using the phospho-molybdate spectrophotometry method. A few urine samples received from radiation workers were analyzed and, based on the observed range of stable P in urine, the sample volume required for ³²P estimation was finalized.

  2. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H II REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez-Ramírez, J. C.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico)]; Lora, V. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany)]; Cantó, J., E-mail: juan.rodriguez@nucleares.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ap. 70-468, 04510 D. F., México (Mexico)]

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H II regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.

  3. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
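
    For orientation, the widely used first-order formula of Hsieh et al. for this setting, with p1 the event rate at the covariate mean and β* the log odds ratio per standard deviation of the covariate; the modification proposed in the paper replaces this with Schouten's unequal-variance t-test formula, which is not reproduced here:

        from math import ceil

        def n_hsieh(p1, beta_star, za=1.96, zb=0.8416):
            """First-order Hsieh-type sample size for simple logistic
            regression with one standard-normal covariate; p1 = event rate at
            the covariate mean, beta_star = log odds ratio per SD. Defaults:
            5% two-sided alpha (za), 80% power (zb)."""
            return ceil((za + zb) ** 2 / (p1 * (1 - p1) * beta_star ** 2))

        # OR of 1.5 per SD (beta* = ln 1.5 ~ 0.405), 20% event rate:
        print(n_hsieh(0.2, 0.405))   # ~300 subjects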

  4. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    Science.gov (United States)

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions on genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and the genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which hinders direct selection for this trait. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an elevated fit (R² > 0.9). REML/BLUP combined with sequential path analysis is effective in the evaluation of maize-breeding trials.

  5. Analytic Reflected Lightcurves for Exoplanets

    Science.gov (United States)

    Haggard, Hal M.; Cowan, Nicolas B.

    2018-04-01

    The disk-integrated reflected brightness of an exoplanet changes as a function of time due to orbital and rotational motion coupled with an inhomogeneous albedo map. We have previously derived analytic reflected lightcurves for spherical harmonic albedo maps in the special case of a synchronously-rotating planet on an edge-on orbit (Cowan, Fuentes & Haggard 2013). In this letter, we present analytic reflected lightcurves for the general case of a planet on an inclined orbit, with arbitrary spin period and non-zero obliquity. We do so for two different albedo basis maps: bright points (δ-maps) and spherical harmonics (Y_l^m-maps). In particular, we use Wigner D-matrices to express a harmonic lightcurve for an arbitrary viewing geometry as a non-linear combination of harmonic lightcurves for the simpler edge-on, synchronously rotating geometry. These solutions will enable future exploration of the degeneracies and information content of reflected lightcurves, as well as fast calculation of lightcurves for mapping exoplanets based on time-resolved photometry. To these ends we make available Exoplanet Analytic Reflected Lightcurves (EARL), a simple open-source code that allows rapid computation of reflected lightcurves.

  6. Estimating Cloud optical thickness from SEVIRI, for air quality research, by implementing a semi-analytical cloud retrieval algorithm

    Science.gov (United States)

    Pandey, Praveen; De Ridder, Koen; van Looy, Stijn; van Lipzig, Nicole

    2010-05-01

    Clouds play an important role in the Earth's climate system. As they affect radiation, and hence photolysis rate coefficients (ozone formation), they also affect air quality at the surface of the earth. Thus, a satellite remote sensing technique is used to retrieve cloud properties for air quality research. The geostationary satellite Meteosat Second Generation (MSG) carries the Spinning Enhanced Visible and Infrared Imager (SEVIRI). The channels at wavelengths 0.6 µm and 1.64 µm are used to retrieve the cloud optical thickness (COT). The study domain is over Europe, covering the region between 35°N-70°N and 5°W-30°E, centred over Belgium. The steps involved in pre-processing the EUMETSAT level 1.5 images are described, which include acquisition of the digital count number, radiometric conversion using offsets and slopes, estimation of radiance, and calculation of reflectance. The Sun-Earth-satellite geometry also plays an important role. A semi-analytical cloud retrieval algorithm (Kokhanovsky et al., 2003) is implemented for the estimation of the COT. This approach does not involve the conventional look-up table approach, making the retrieval independent of numerical radiative transfer solutions. The semi-analytical algorithm is applied to a monthly dataset of SEVIRI level 1.5 images. The minimum reflectance in the visible channel at each pixel during the month is taken as the surface albedo of the pixel. Thus, the monthly variation of the COT over the study domain is obtained. The result is compared with the COT products of the Satellite Application Facility on Climate Monitoring (CM SAF). Henceforth, an approach to assimilate the COT for air quality research is presented.
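
    The heart of such semi-analytical retrievals is the asymptotic thick-cloud relation between global transmittance t and optical thickness τ; a sketch of the inversion, with the ~1.07 and 0.75 constants taken from asymptotic radiative transfer theory for conservative scattering and an assumed asymmetry parameter g (the reflectance-to-transmittance step via escape functions is omitted):

        def cot_from_transmittance(t, g=0.85, alpha=1.07):
            """Invert the thick-cloud asymptotic relation
            t = 1 / (alpha + 0.75 * (1 - g) * tau) for tau. alpha ~ 1.07 and
            the 0.75 factor come from asymptotic radiative transfer for
            conservative scattering; the droplet asymmetry parameter g is an
            assumed value."""
            return (1.0 / t - alpha) / (0.75 * (1.0 - g))

        print(cot_from_transmittance(0.2))   # t = 0.2 -> tau ~ 35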

  7. Estimating true evolutionary distances under the DCJ model.

    Science.gov (United States)

    Lin, Yu; Moret, Bernard M E

    2008-07-01

    Modern techniques can yield the ordering and strandedness of genes on each chromosome of a genome; such data already exists for hundreds of organisms. The evolutionary mechanisms through which the set of the genes of an organism is altered and reordered are of great interest to systematists, evolutionary biologists, comparative genomicists and biomedical researchers. Perhaps the most basic concept in this area is that of evolutionary distance between two genomes: under a given model of genomic evolution, how many events most likely took place to account for the difference between the two genomes? We present a method to estimate the true evolutionary distance between two genomes under the 'double-cut-and-join' (DCJ) model of genome rearrangement, a model under which a single multichromosomal operation accounts for all genomic rearrangement events: inversion, transposition, translocation, block interchange and chromosomal fusion and fission. Our method relies on a simple structural characterization of a genome pair and is both analytically and computationally tractable. We provide analytical results to describe the asymptotic behavior of genomes under the DCJ model, as well as experimental results on a wide variety of genome structures to exemplify the very high accuracy (and low variance) of our estimator. Our results provide a tool for accurate phylogenetic reconstruction from multichromosomal gene rearrangement data as well as a theoretical basis for refinements of the DCJ model to account for biological constraints. All of our software is available in source form under GPL at http://lcbb.epfl.ch.

  8. Pharmaceutical supply chain risk assessment in Iran using analytic hierarchy process (AHP) and simple additive weighting (SAW) methods.

    Science.gov (United States)

    Jaberidoost, Mona; Olfat, Laya; Hosseini, Alireza; Kebriaeezadeh, Abbas; Abdollahi, Mohammad; Alaeddini, Mahdi; Dinarvand, Rassoul

    2015-01-01

    The pharmaceutical supply chain is a significant component of the health system in supplying medicines, particularly in countries where the main drugs are provided by local pharmaceutical companies. No previous studies exist that assess risks and disruptions in pharmaceutical companies as part of assessing the pharmaceutical supply chain. Any risk affecting the pharmaceutical companies could disrupt the supply of medicines and health system efficiency. The goal of this study was risk assessment in the pharmaceutical industry in Iran, considering the priority of each process and the hazard and probability of each risk. The study was carried out in four phases: risk identification through literature review; risk identification in Iranian pharmaceutical companies through interviews with experts; risk analysis through a questionnaire and consultation with experts using the group analytic hierarchy process (AHP) method and a rating scale (RS); and risk evaluation with the simple additive weighting (SAW) method. In total, 86 main risks were identified in the pharmaceutical supply chain from the perspective of pharmaceutical companies, classified into 11 classes. The majority of risks described in this study were related to the financial and economic category, and financial management was found to be the most important factor for consideration. Although the pharmaceutical industry and supply chain were affected by the political conditions in Iran during the study period, half of the total risks in the pharmaceutical supply chain were found to be internal risks which could be fixed by the companies themselves. Likewise, the political situation and related risks forced companies to focus more on financial and supply management, resulting in less attention to quality management.
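
    The computational skeleton of AHP weighting followed by SAW scoring; the pairwise comparison matrix and the decision matrix below are illustrative, not the study's data:

        import numpy as np

        def ahp_weights(pairwise):
            """Principal-eigenvector criteria weights from an AHP pairwise
            comparison matrix."""
            vals, vecs = np.linalg.eig(pairwise)
            w = np.real(vecs[:, np.argmax(np.real(vals))])
            return w / w.sum()

        # Illustrative 3-criterion comparison (e.g. financial, supply, quality)
        A = np.array([[1.0, 3.0, 5.0],
                      [1 / 3.0, 1.0, 2.0],
                      [1 / 5.0, 1 / 2.0, 1.0]])
        w = ahp_weights(A)

        # SAW: score matrix (rows = risks, columns = criteria, already
        # normalized to [0, 1]) times the criteria weights.
        scores = np.array([[0.9, 0.4, 0.2],
                           [0.5, 0.8, 0.6],
                           [0.3, 0.2, 0.9]])
        print(w, np.argsort(-(scores @ w)))   # weights and resulting risk ranking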

  9. A Semi-Analytical Method for Rapid Estimation of Near-Well Saturation, Temperature, Pressure and Stress in Non-Isothermal CO2 Injection

    Science.gov (United States)

    LaForce, T.; Ennis-King, J.; Paterson, L.

    2015-12-01

    Reservoir cooling near the wellbore is expected when fluids are injected into a reservoir or aquifer in CO2 storage, enhanced oil or gas recovery, enhanced geothermal systems, and water injection for disposal. Ignoring thermal effects near the well can lead to under-prediction of changes in reservoir pressure and stress due to competition between increased pressure and contraction of the rock in the cooled near-well region. In this work a previously developed semi-analytical model for immiscible, nonisothermal fluid injection is generalised to include partitioning of components between two phases. Advection-dominated radial flow is assumed so that the coupled two-phase flow and thermal conservation laws can be solved analytically. The temperature and saturation profiles are used to find the increase in reservoir pressure, tangential, and radial stress near the wellbore in a semi-analytical, forward-coupled model. Saturation, temperature, pressure, and stress profiles are found for parameters representative of several CO2 storage demonstration projects around the world. General results on maximum injection rates vs depth for common reservoir parameters are also presented. Prior to drilling an injection well there is often little information about the properties that will determine the injection rate that can be achieved without exceeding fracture pressure, yet injection rate and pressure are key parameters in well design and placement decisions. Analytical solutions to simplified models such as these can quickly provide order of magnitude estimates for flow and stress near the well based on a range of likely parameters.

  10. Assessment of Westinghouse Hanford Company methods for estimating radionuclide release from ground disposal of waste water at the N Reactor sites

    International Nuclear Information System (INIS)

    1988-09-01

    This report summarizes the results of an independent assessment by Golder Associates, Inc. of the methods used by Westinghouse Hanford Company (Westinghouse Hanford) and its predecessors to estimate the annual offsite release of radionuclides from ground disposal of cooling and other process waters from the N Reactor at the Hanford Site. This assessment was performed by evaluating the present and past disposal practices and radionuclide migration data within the context of the hydrology, geology, and physical layout of the N Reactor disposal site. The conclusions and recommendations are based upon the available data and simple analytical calculations. Recommendations are provided for conducting more refined analyses and for continued field data collection in support of estimating annual offsite releases. Recommendations are also provided for simple operational and structural measures that should reduce the quantities of radionuclides leaving the site. 5 refs., 9 figs., 1 tab

  11. Evaluation methodology for comparing memory and communication of analytic processes in visual analytics

    Energy Technology Data Exchange (ETDEWEB)

    Ragan, Eric D [ORNL]; Goodall, John R [ORNL]

    2014-01-01

    Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analysts' memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.

  12. Estimation of creep life of thick welded joints using a simple model. Creep characteristics in thick welded joint and their improvements. 2

    International Nuclear Information System (INIS)

    Nakacho, Keiji; Yamazaki, Masayoshi

    2001-01-01

    Information on the creep behavior of thick welded joints is very important for securing the safety of elevated-temperature vessels such as nuclear reactors. The creep behavior of a thick welded joint is very complex, and it is therefore difficult to study experimentally or by theoretical analysis. A simple, accurate model for theoretical analysis was developed in the first study of this series. The simple model is constructed of several one-dimensional finite elements which can analyze three-dimensional creep behavior under an assumption. The model is easy to handle and needs only a little labor and computation time to simulate the creep curve and local strain of the thick welded joint. In this second study, the capability of the model is expanded to estimate the creep life of the thick welded joint. The new model can easily estimate the rupture time of the thick welded joint. Comparison with experimental results verifies that the model can accurately predict the creep life. The histories of the local strains up to the rupture time may be observed in simulations using the model. This information will be useful for improving the creep characteristics of the joints. (author)

  13. Analytical Model for Estimating the Zenith Angle Dependence of Terrestrial Cosmic Ray Fluxes.

    Directory of Open Access Journals (Sweden)

    Tatsuhiko Sato

    Full Text Available A new model called "PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 4.0" was developed to facilitate instantaneous estimation of not only omnidirectional but also angular differential energy spectra of cosmic-ray fluxes anywhere in Earth's atmosphere at nearly any given time. It consists of its previous version, PARMA3.0, for calculating the omnidirectional fluxes, and several mathematical functions proposed in this study for expressing their zenith-angle dependences. The numerical values of the parameters used in these functions were fitted to reproduce the results of the extensive air shower simulation performed with the Particle and Heavy Ion Transport code System (PHITS). The angular distributions of ground-level muons at large zenith angles were specially determined by introducing an optional function developed on the basis of experimental data. The accuracy of PARMA4.0 was closely verified using multiple sets of experimental data obtained under various global conditions. This extension enlarges the model's applicability to more areas of research, including the design of cosmic-ray detectors, muon radiography, soil moisture monitoring, and cosmic-ray shielding calculations. PARMA4.0 is freely available and easy to use, as implemented in the open-access Excel-based Program for Calculating Atmospheric Cosmic-ray Spectrum (EXPACS).
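
    For scale, the zenith dependence of ground-level muons at GeV energies is often approximated by the textbook cos² law, which the PARMA4.0 functions refine (notably at the large zenith angles where this simple form fails):

        import numpy as np

        def muon_zenith_ratio(theta_deg, n=2.0):
            """I(theta)/I(0) ~ cos^n(theta), the textbook approximation for
            ~GeV muons at sea level; PARMA4.0 fits far more detailed
            functions, including the large-zenith behaviour this form misses."""
            return np.cos(np.radians(theta_deg)) ** n

        print(muon_zenith_ratio(np.array([0.0, 30.0, 60.0, 75.0])))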

  14. Theoretical, analytical, and statistical interpretation of environmental data

    International Nuclear Information System (INIS)

    Lombard, S.M.

    1974-01-01

    The reliability of data from radiochemical analyses of environmental samples cannot be determined from nuclear counting statistics alone. The rigorous application of the principles of propagation of errors, an understanding of the physics and chemistry of the species of interest in the environment, and the application of information from research on the analytical procedure are all necessary for a valid estimation of the errors associated with analytical results. The specific case of the determination of plutonium in soil is considered in terms of analytical problems and data reliability. (U.S.)

  15. Analytical estimation of emission zone mean position and width in organic light-emitting diodes from emission pattern image-source interference fringes

    International Nuclear Information System (INIS)

    Epstein, Ariel; Tessler, Nir; Einziger, Pinchas D.; Roberts, Matthew

    2014-01-01

    We present an analytical method for evaluating the first and second moments of the effective exciton spatial distribution in organic light-emitting diodes (OLED) from measured emission patterns. Specifically, the suggested algorithm estimates the emission zone mean position and width, respectively, from two distinct features of the pattern produced by interference between the emission sources and their images (induced by the reflective cathode): the angles in which interference extrema are observed, and the prominence of interference fringes. The relations between these parameters are derived rigorously for a general OLED structure, indicating that extrema angles are related to the mean position of the radiating excitons via Bragg's condition, and the spatial broadening is related to the attenuation of the image-source interference prominence due to an averaging effect. The method is applied successfully both on simulated emission patterns and on experimental data, exhibiting a very good agreement with the results obtained by numerical techniques. We investigate the method performance in detail, showing that it is capable of producing accurate estimations for a wide range of source-cathode separation distances, provided that the measured spectral interval is large enough; guidelines for achieving reliable evaluations are deduced from these results as well. As opposed to numerical fitting tools employed to perform similar tasks to date, our approximate method explicitly utilizes physical intuition and requires far less computational effort (no fitting is involved). Hence, applications that do not require highly resolved estimations, e.g., preliminary design and production-line verification, can benefit substantially from the analytical algorithm, when applicable. This introduces a novel set of efficient tools for OLED engineering, highly important in the view of the crucial role the exciton distribution plays in determining the device performance.
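
    The mean-position step can be illustrated with the stated Bragg-type condition: extrema occur where the source-image path difference 2·n·d·cosθ equals an integer-plus-phase multiple of λ, so one observed extremum angle yields d. A sketch; the reflection phase offset is treated as a free input (0.5, i.e. a half-wave shift at a metallic mirror, is an assumption, as are all values in the example call):

        import numpy as np

        def emitter_cathode_distance(wavelength_nm, n_eff, theta_deg, m,
                                     phase=0.5):
            """Mean emitter-cathode distance from one interference extremum,
            via 2 * n_eff * d * cos(theta) = (m + phase) * lambda, with theta
            the internal propagation angle. The phase term lumps the cathode
            reflection phase shift (assumed here)."""
            theta = np.radians(theta_deg)
            return (m + phase) * wavelength_nm / (2.0 * n_eff * np.cos(theta))

        # first extremum (m = 0) of 520 nm emission at 30 deg inside n = 1.7
        print(emitter_cathode_distance(520.0, 1.7, 30.0, m=0))   # ~88 nm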

  17. Analytical modeling of worldwide medical radiation use

    International Nuclear Information System (INIS)

    Mettler, F.A. Jr.; Davis, M.; Kelsey, C.A.; Rosenberg, R.; Williams, A.

    1987-01-01

    An analytical model was developed to estimate the availability and frequency of medical radiation use on a worldwide basis. This model includes medical and dental x-ray, nuclear medicine, and radiation therapy. The development of an analytical model is necessary as the first step in estimating the radiation dose to the world's population from this source. Since there are no data on the frequency of medical radiation use in more than half the countries in the world, and only fragmentary data in an additional one-fourth of the world's countries, such a model can be used to predict the uses of medical radiation in these countries. The model indicates that there are approximately 400,000 medical x-ray machines worldwide and that approximately 1.2 billion diagnostic medical x-ray examinations are performed annually. Dental x-ray examinations are estimated at 315 million annually, and in-vivo diagnostic nuclear medicine examinations at approximately 22 million. Approximately 4 million radiation therapy procedures or courses of treatment are undertaken annually.

  18. Test of a potential link between analytic and nonanalytic category learning and automatic, effortful processing.

    Science.gov (United States)

    Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J

    2001-08-01

    The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that, contrary to prediction, strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy, and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.

  19. A Unified Channel Charges Expression for Analytic MOSFET Modeling

    Directory of Open Access Journals (Sweden)

    Hugues Murray

    2012-01-01

    Based on a 1D Poisson's equation resolution, we present an analytic model of inversion charges allowing calculation of the drain current and transconductance in the metal-oxide-semiconductor field-effect transistor. The drain current and transconductance are described by analytical functions including mobility corrections and short-channel effects (CLM, DIBL). The comparison with the Pao-Sah integral shows excellent accuracy of the model in all inversion modes, from strong to weak inversion, in submicron MOSFETs. All calculations are implemented in a simple C program and give near-instantaneous results, providing an efficient tool for microelectronics users.

  20. Validation of Analytical Damping Ratio by Fatigue Stress Limit

    Science.gov (United States)

    Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul

    2018-03-01

    The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation, which describes the first-mode damping ratio of a clamped-free cantilever beam under harmonic base excitation, by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, as opposed to other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was determined to be correct for cases where the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and the analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.

  1. Analytical Computation of Energy-Energy Correlation at Next-to-Leading Order in QCD.

    Science.gov (United States)

    Dixon, Lance J; Luo, Ming-Xing; Shtabovenko, Vladyslav; Yang, Tong-Zhi; Zhu, Hua Xing

    2018-03-09

    The energy-energy correlation (EEC) between two detectors in e^{+}e^{-} annihilation was computed analytically at leading order in QCD almost 40 years ago, and numerically at next-to-leading order (NLO) starting in the 1980s. We present the first analytical result for the EEC at NLO, which is remarkably simple, and facilitates analytical study of the perturbative structure of the EEC. We provide the expansion of the EEC in the collinear and back-to-back regions through next-to-leading power, information which should aid resummation in these regions.

  2. Analytic number theory an introductory course

    CERN Document Server

    Bateman, Paul T

    2004-01-01

    This valuable book focuses on a collection of powerful methods of analysis that yield deep number-theoretical estimates. Particular attention is given to counting functions of prime numbers and multiplicative arithmetic functions. Both real variable ("elementary") and complex variable ("analytic") methods are employed.

  3. Analytical model of the optical vortex microscope.

    Science.gov (United States)

    Płocinniczak, Łukasz; Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2016-04-20

    This paper presents an analytical model of the optical vortex scanning microscope. In this microscope the Gaussian beam with an embedded optical vortex is focused into the sample plane. Additionally, the optical vortex can be moved inside the beam, which allows fine scanning of the sample. We provide an analytical solution for the whole path of the beam in the system (within the paraxial approximation), from the vortex lens to the observation plane situated on the CCD camera. The calculations are performed step by step from one optical element to the next. We show that at each step the expression for the light complex amplitude has the same form, with only four coefficients modified. We also derive a simple expression for the vortex trajectory for small vortex displacements.

  4. Validation of a simple and inexpensive method for the quantitation of infarct in the rat brain

    Directory of Open Access Journals (Sweden)

    C.L.R. Schilichting

    2004-04-01

    A gravimetric method was evaluated as a simple, sensitive, reproducible, low-cost alternative to quantify the extent of brain infarct after occlusion of the medial cerebral artery in rats. In ether-anesthetized rats, the left medial cerebral artery was occluded for 1, 1.5 or 2 h by inserting a 4-0 nylon monofilament suture into the internal carotid artery. Twenty-four hours later, the brains were processed for histochemical triphenyltetrazolium chloride (TTC) staining and quantitation of the ischemic infarct. In each TTC-stained brain section, the ischemic tissue was dissected with a scalpel and fixed in 10% formalin at 0ºC until its total mass could be estimated. The mass (mg) of the ischemic tissue was weighed on an analytical balance and compared to its volume (mm³), estimated either by plethysmometry using platinum electrodes or by computer-assisted image analysis. Infarct size as measured by the weighing method (mg), reported as a percent (%) of the affected (left) hemisphere, correlated closely with the volume (mm³), also reported as %, estimated by computerized image analysis (r = 0.88; P < 0.001; N = 10) or by plethysmography (r = 0.97-0.98; P < 0.0001; N = 41). This degree of correlation was maintained between different experimenters. The method was also sensitive for detecting the effect of different ischemia durations on infarct size (P < 0.005; N = 23), and the effect of drug treatments in reducing the extent of brain damage (P < 0.005; N = 24). The data suggest that, in addition to being simple and low cost, the weighing method is a reliable alternative for quantifying brain infarct in animal models of stroke.
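
    A minimal sketch of the validation arithmetic described above, correlating infarct mass from the weighing method with infarct volume from image analysis; the data here are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
volume_pct = rng.uniform(5.0, 40.0, size=10)             # infarct volume, % of left hemisphere
mass_pct = 0.95 * volume_pct + rng.normal(0.0, 2.0, 10)  # noisy proportional mass, %

r, p = pearsonr(mass_pct, volume_pct)
print(f"r = {r:.2f}, P = {p:.4g}")   # the study reported r = 0.88 for N = 10
```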

  5. Analytical treatment of the relationships between soil heat flux/net radiation ratio and vegetation indices

    International Nuclear Information System (INIS)

    Kustas, W.P.; Daughtry, C.S.T.; Oevelen, P.J. van

    1993-01-01

    Relationships between leaf area index (LAI) and the midday soil heat flux/net radiation ratio (G/Rn) and two more commonly used vegetation indices (VIs) were used to analytically derive formulas describing the relationship between G/Rn and VI. Use of VI for estimating G/Rn may be useful in operational remote sensing models that evaluate the spatial variation in the surface energy balance over large areas. While previous experimental data have shown that linear equations can adequately describe the relationship between G/Rn and VI, this analytical treatment indicated that nonlinear relationships are more appropriate. Data over bare soil and soybeans under a range of canopy cover conditions from a humid climate, and data collected over bare soil, alfalfa, and cotton fields in an arid climate, were used to evaluate model formulations derived for LAI and G/Rn, LAI and VI, and VI and G/Rn. In general, equations describing the LAI-G/Rn and LAI-VI relationships agreed with the data and supported the analytical result of a nonlinear relationship between VI and G/Rn. With the simple ratio (NIR/Red) as the VI, the nonlinear relationship with G/Rn was confirmed qualitatively. But with the normalized difference vegetation index (NDVI), a nonlinear relationship did not appear to fit the data. (author)

  6. An analytical method of estimating Value-at-Risk on the Belgrade Stock Exchange

    Directory of Open Access Journals (Sweden)

    Obadović Milica D.

    2009-01-01

    This paper presents market risk evaluation for a portfolio consisting of shares that are continuously traded on the Belgrade Stock Exchange, by applying the Value-at-Risk model (the analytical method). It describes the manner of application of the analytical method and compares the results obtained by implementing this method at different confidence levels. Method verification was carried out on the basis of the failure rate, which demonstrated the confidence level at which this method was acceptable under the given conditions.
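
    For concreteness, a minimal sketch of the analytical (variance-covariance) Value-at-Risk calculation such a method rests on, assuming normally distributed portfolio returns; the weights, covariance matrix, and portfolio value below are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

weights = np.array([0.5, 0.3, 0.2])                  # portfolio weights (assumed)
cov = np.array([[0.0004, 0.0001, 0.0000],            # daily return covariance (assumed)
                [0.0001, 0.0009, 0.0002],
                [0.0000, 0.0002, 0.0016]])
V = 1_000_000.0                                      # portfolio value

sigma_p = np.sqrt(weights @ cov @ weights)           # portfolio daily volatility
for conf in (0.95, 0.99):
    var = norm.ppf(conf) * sigma_p * V               # analytical 1-day VaR
    print(f"{conf:.0%} 1-day VaR = {var:,.0f}")
```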

  7. Comment on 'Approximation for a large-angle simple pendulum period'

    International Nuclear Information System (INIS)

    Yuan Qingxin; Ding Pei

    2009-01-01

    In a recent letter, Belendez et al (2009 Eur. J. Phys. 30 L25-8) proposed an alternative to the approximation for the period of a simple pendulum suggested earlier by Hite (2005 Phys. Teach. 43 290-2), who set out to improve on the Kidd and Fogg formula (2002 Phys. Teach. 40 81-3). As a response to that approximation scheme, we obtain another analytical approximation for the large-angle pendulum period, which combines simplicity with accuracy in evaluating the exact period; moreover, for amplitudes less than 144 deg. the analytical approximate expression is more accurate than others in the literature. (letters and comments)
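
    Any such approximation can be benchmarked against the exact large-angle period, which is given by the complete elliptic integral of the first kind; a minimal sketch of that standard textbook relation (not the letter's specific formula):

```python
import numpy as np
from scipy.special import ellipk

def exact_period_ratio(theta0):
    """Ratio T/T0 of the exact pendulum period to the small-angle period
    T0 = 2*pi*sqrt(L/g): T/T0 = (2/pi) * K(m), with m = sin^2(theta0/2)."""
    m = np.sin(theta0 / 2.0) ** 2
    return 2.0 * ellipk(m) / np.pi

for deg in (10, 60, 120, 144):
    print(f"amplitude {deg:3d} deg: T/T0 = {exact_period_ratio(np.radians(deg)):.4f}")
```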

  8. A simple free energy for the isotropic-nematic phase transition of rods

    NARCIS (Netherlands)

    Tuinier, R.

    2016-01-01

    A free energy expression is proposed that describes the isotropic-nematic binodal concentrations of hard rods. A simple analytical form for this free energy was previously available only by using a Gaussian trial function for the orientation distribution function (ODF), leading, however, to a significant

  9. Linear circuit transfer functions an introduction to fast analytical techniques

    CERN Document Server

    Basso, Christophe P

    2016-01-01

    Linear Circuit Transfer Functions: An Introduction to Fast Analytical Techniques teaches readers how to determine transfer functions of linear passive and active circuits by applying Fast Analytical Circuits Techniques. Building on their existing knowledge of classical loop/nodal analysis, the book improves and expands their skills to unveil transfer functions in a swift and efficient manner. Starting with simple examples, the author explains step-by-step how expressing circuit time constants in different configurations leads to writing transfer functions in a compact and insightful way. By learning how to organize numerators and denominators in the fastest possible way, readers will speed up analysis and predict the frequency response of simple to complex circuits. In some cases, they will be able to derive the final expression by inspection, without writing a line of algebra. Key features: * Emphasizes analysis through employing time constant-based methods discussed in other text books but not widely us...

  10. A semi-analytical modelling of multistage bunch compression with collective effects

    International Nuclear Information System (INIS)

    Zagorodnov, Igor; Dohlus, Martin

    2010-07-01

    In this paper we introduce an analytical solution (up to the third order) for a multistage bunch compression and acceleration system without collective effects. The solution for the system with collective effects is found by an iterative procedure based on this analytical result. The developed formalism is applied to the FLASH facility at DESY. Analytical estimations of RF tolerances are given. (orig.)

  11. A semi-analytical modelling of multistage bunch compression with collective effects

    Energy Technology Data Exchange (ETDEWEB)

    Zagorodnov, Igor; Dohlus, Martin

    2010-07-15

    In this paper we introduce an analytical solution (up to the third order) for a multistage bunch compression and acceleration system without collective effects. The solution for the system with collective effects is found by an iterative procedure based on this analytical result. The developed formalism is applied to the FLASH facility at DESY. Analytical estimations of RF tolerances are given. (orig.)

  12. An approach to estimate spatial distribution of analyte within cells using spectrally-resolved fluorescence microscopy

    Science.gov (United States)

    Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam

    2017-03-01

    While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, the non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on the relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where the relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of the sensor and its relative proportion with respect to the analyte can be extracted simultaneously for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by a spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of the relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract the spatial distribution of analytes in heterogeneous systems. The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shift in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in

  13. On the General Analytical Solution of the Kinematic Cosserat Equations

    KAUST Repository

    Michels, Dominik L.

    2016-09-01

    Based on a Lie symmetry analysis, we construct a closed form solution to the kinematic part of the (partial differential) Cosserat equations describing the mechanical behavior of elastic rods. The solution depends on two arbitrary analytical vector functions and is analytical everywhere except in a certain domain of the independent variables in which one of the arbitrary vector functions satisfies a simple explicitly given algebraic relation. As our main theoretical result, in addition to the construction of the solution, we prove its generality. Based on this observation, a hybrid semi-analytical solver for highly viscous two-way coupled fluid-rod problems is developed which allows for interactive high-fidelity simulations of flagellated microswimmers as a result of a substantial reduction of the numerical stiffness.

  14. On the General Analytical Solution of the Kinematic Cosserat Equations

    KAUST Repository

    Michels, Dominik L.; Lyakhov, Dmitry; Gerdt, Vladimir P.; Hossain, Zahid; Riedel-Kruse, Ingmar H.; Weber, Andreas G.

    2016-01-01

    Based on a Lie symmetry analysis, we construct a closed form solution to the kinematic part of the (partial differential) Cosserat equations describing the mechanical behavior of elastic rods. The solution depends on two arbitrary analytical vector functions and is analytical everywhere except in a certain domain of the independent variables in which one of the arbitrary vector functions satisfies a simple explicitly given algebraic relation. As our main theoretical result, in addition to the construction of the solution, we prove its generality. Based on this observation, a hybrid semi-analytical solver for highly viscous two-way coupled fluid-rod problems is developed which allows for interactive high-fidelity simulations of flagellated microswimmers as a result of a substantial reduction of the numerical stiffness.

  15. On the analyticity of Laguerre series

    International Nuclear Information System (INIS)

    Weniger, Ernst Joachim

    2008-01-01

    The transformation of a Laguerre series f(z) = Σ_{n=0}^{∞} λ_n^{(α)} L_n^{(α)}(z) to a power series f(z) = Σ_{n=0}^{∞} γ_n z^n is discussed. Since many nonanalytic functions can be expanded in terms of generalized Laguerre polynomials, success is not guaranteed, and such a transformation can easily lead to a mathematically meaningless expansion containing power series coefficients that are infinite in magnitude. Simple sufficient conditions based on the decay rates and sign patterns of the Laguerre series coefficients λ_n^{(α)} as n → ∞ can be formulated which guarantee that the resulting power series represents an analytic function. The transformation produces a mathematically meaningful result if the coefficients λ_n^{(α)} either decay exponentially or factorially as n → ∞. The situation is much more complicated, but also much more interesting, if the λ_n^{(α)} decay only algebraically as n → ∞. If the λ_n^{(α)} ultimately have the same sign, the series expansions for the power series coefficients diverge, and the corresponding function is not analytic at the origin. If the λ_n^{(α)} ultimately have strictly alternating signs, the series expansions for the power series coefficients still diverge, but are summable to something finite, and the resulting power series represents an analytic function. If algebraically decaying and ultimately alternating Laguerre series coefficients λ_n^{(α)} possess sufficiently simple explicit analytical expressions, the summation of the divergent series for the power series coefficients can often be accomplished with the help of analytic continuation formulae for hypergeometric series _{p+1}F_p, but if the λ_n^{(α)} have a complicated structure or if only their numerical values are available, numerical summation techniques have to be employed. It is shown that certain nonlinear sequence transformations, in particular the so-called delta transformation (Weniger 1989 Comput. Phys. Rep. 10 189-371 (equation (8.4-4))), are able to

  16. Analytical applications for delayed neutrons

    International Nuclear Information System (INIS)

    Eccleston, G.W.

    1983-01-01

    Analytical formulations that describe the time dependence of neutron populations in nuclear materials contain delayed-neutron dependent terms. These terms are important because the delayed neutrons, even though their yields in fission are small, permit control of the fission chain reaction process. Analytical applications that use delayed neutrons range from simple problems that can be solved with the point reactor kinetics equations to complex problems that can only be solved with large codes that couple fluid calculations with the neutron dynamics. Reactor safety codes, such as SIMMER, model transients of the entire reactor core using coupled space-time neutronics and comprehensive thermal-fluid dynamics. Nondestructive delayed-neutron assay instruments are designed and modeled using a three-dimensional continuous-energy Monte Carlo code. Calculations on high-burnup spent fuels and other materials that contain a mix of uranium and plutonium isotopes require accurate and complete information on the delayed-neutron periods, yields, and energy spectra. A continuing need exists for delayed-neutron parameters for all the fissioning isotopes.
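
    As a minimal illustration of the point reactor kinetics equations the record mentions, a one-delayed-group sketch follows; all parameter values are illustrative assumptions, not data from the record.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-delayed-group point kinetics:
#   dn/dt = (rho - beta)/Lambda * n + lam * C
#   dC/dt = beta/Lambda * n - lam * C
rho, beta, Lambda, lam = 0.001, 0.0065, 1e-4, 0.08   # illustrative values

def point_kinetics(t, y):
    n, C = y
    return [(rho - beta) / Lambda * n + lam * C,
            beta / Lambda * n - lam * C]

n0 = 1.0
y0 = [n0, beta * n0 / (Lambda * lam)]   # precursor equilibrium at t = 0
sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, method="LSODA", rtol=1e-8)
print(f"n(10 s)/n(0) = {sol.y[0, -1] / n0:.2f}")
```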

  17. Simple future weather files for estimating heating and cooling demand

    DEFF Research Database (Denmark)

    Cox, Rimante Andrasiunaite; Drews, Martin; Rode, Carsten

    2015-01-01

    useful estimates of future energy demand of a building. Experimental results based on both the degree-day method and dynamic simulations suggest that this is indeed the case. Specifically, heating demand estimates were found to be within a few per cent of one another, while estimates of cooling demand...... were slightly more varied. This variation was primarily due to the very few hours of cooling that were required in the region examined. Errors were found to be most likely when the air temperatures were close to the heating or cooling balance points, where the energy demand was modest and even...... relatively large errors might thus result in only modest absolute errors in energy demand....
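
    A minimal sketch of the degree-day arithmetic referred to above; the balance-point temperature, heat-loss coefficient, and synthetic daily temperatures are illustrative assumptions, not values from the record.

```python
import numpy as np

def heating_degree_days(daily_mean_temps, base_temp=17.0):
    """Degree-day method: sum of positive differences between an assumed
    balance-point temperature and the daily mean outdoor temperature."""
    t = np.asarray(daily_mean_temps, dtype=float)
    return np.sum(np.clip(base_temp - t, 0.0, None))

# Illustrative year of daily mean temperatures (synthetic, not measured)
rng = np.random.default_rng(0)
temps = 8.0 + 10.0 * np.sin(2 * np.pi * (np.arange(365) - 100) / 365) + rng.normal(0, 3, 365)

hdd = heating_degree_days(temps)
UA = 250.0                                   # overall heat-loss coefficient, W/K (assumed)
annual_heating_kWh = UA * hdd * 24.0 / 1000.0
print(f"HDD = {hdd:.0f} K*day, heating demand = {annual_heating_kWh:.0f} kWh")
```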

  18. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the method using the geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
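
    A minimal sketch of the thermodynamic (power-posterior) idea on a toy one-parameter Gaussian model: run MCMC at several heating coefficients beta and integrate the mean log-likelihood over beta to get the log marginal likelihood. The temperature ladder, sampler settings, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.5, 1.0, size=50)

def log_lik(theta):
    return -0.5 * np.sum((data - theta) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

def log_prior(theta):                        # N(0, 3^2) prior (assumed)
    return -0.5 * (theta / 3.0) ** 2 - np.log(3.0 * np.sqrt(2.0 * np.pi))

def mean_loglik_at(beta, n_samp=20000, step=1.0):
    """Random-walk Metropolis targeting prior * likelihood^beta; returns the
    post-burn-in average of the log-likelihood (the thermodynamic integrand)."""
    theta = 0.0
    lp = log_prior(theta) + beta * log_lik(theta)
    trace = []
    for _ in range(n_samp):
        prop = theta + step * rng.normal()
        lp_prop = log_prior(prop) + beta * log_lik(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        trace.append(log_lik(theta))
    return np.mean(trace[n_samp // 2:])

betas = np.linspace(0.0, 1.0, 11) ** 3       # ladder concentrated near beta = 0
integrand = [mean_loglik_at(b) for b in betas]
print(f"ln Z = {np.trapz(integrand, betas):.2f}")   # log marginal likelihood
```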

  19. Self-adaptive numerical integrator for analytic functions

    International Nuclear Information System (INIS)

    Garribba, S.; Quartapelle, L.; Reina, G.

    1978-01-01

    A new adaptive algorithm for the integration of analytical functions is presented. The algorithm processes the integration interval by generating local subintervals whose length is controlled through a feedback loop. The control is obtained by means of a relation derived on an analytical basis and valid for an arbitrary integration rule: two different estimates of an integral are used to compute the interval length necessary to obtain an integral estimate with accuracy within the assigned error bounds. The implied method for local generation of subintervals and an effective assumption of error partition among subintervals give rise to an adaptive algorithm provided with a highly accurate and very efficient integration procedure. The particular algorithm obtained by choosing the 6-point Gauss-Legendre integration rule is considered, and extensive comparisons are made with other outstanding integration algorithms.
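
    The core idea, two estimates of the same integral steering the local subinterval size, can be sketched as follows with the 6-point Gauss-Legendre rule; recursive bisection stands in here for the paper's feedback-loop length control, so this is an assumption-laden simplification rather than the published algorithm.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

NODES, WEIGHTS = leggauss(6)    # 6-point Gauss-Legendre rule on [-1, 1]

def gauss6(f, a, b):
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * np.sum(WEIGHTS * f(mid + half * NODES))

def adaptive(f, a, b, tol=1e-10):
    """Compare a single-panel estimate with the sum over two half-panels;
    refine where they disagree, splitting the tolerance between the halves."""
    whole = gauss6(f, a, b)
    mid = 0.5 * (a + b)
    halves = gauss6(f, a, mid) + gauss6(f, mid, b)
    if abs(whole - halves) < tol:
        return halves
    return adaptive(f, a, mid, tol / 2) + adaptive(f, mid, b, tol / 2)

print(adaptive(np.sin, 0.0, np.pi))   # exact value: 2
```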

  20. On the estimation of the wake potential for an ultrarelativistic charge in an accelerating structure

    International Nuclear Information System (INIS)

    Novokhatskij, A.V.

    1988-01-01

    A method is presented for deriving analytic estimates of the wake fields of an ultrarelativistic charge in an accelerating structure, valid in the range of distances smaller than or comparable to the effective structure dimensions. The method is based on approximate space-time domain integration of the Maxwell equations in the Kirchhoff formulation. It is demonstrated on the examples of obtaining the wake potentials for the energy loss of a bunch traversing a scraper, a cavity, or a periodic iris-loaded structure. Likewise, formulae are derived for the Green functions that describe the transverse force action of the wake fields. Simple formulae for evaluating the total energy loss of a bunch with a Gaussian charge density distribution are derived as well. The derived estimates are compared with computed results and the predictions of other models.

  1. Fast and Simple Method for Evaluation of Polarization Correction to Propagation Constant of Arbitrary Order Guided Modes in Optical Fibers with Arbitrary Refractive Index Profile

    Directory of Open Access Journals (Sweden)

    Anton Bourdine

    2015-01-01

    This work presents a fast and simple method for evaluation of the polarization correction to the scalar propagation constant of arbitrary-order guided modes propagating over weakly guiding optical fibers. The proposed solution is based on the earlier developed modified Gaussian approximation, extended for the analysis of weakly guiding optical fibers with an arbitrary refractive index profile in the core region bounded by a single solid outer cladding. Some results are presented that illustrate the decrease in computational error during the estimation of the propagation constant when polarization corrections are taken into account. Analytical expressions for the first and second derivatives of the polarization correction are derived and presented.

  2. Analytical method for solving radioactive transformations

    International Nuclear Information System (INIS)

    Vudakin, Z.

    1999-01-01

    An analytical method for solving radioactive transformations is presented in this paper. A high-accuracy series expansion of the depletion function and nonsingular Bateman coefficients are used to overcome numerical difficulties when applying the well-known Bateman solution of a simple radioactive decay. Generality and simplicity of the method are found to be useful in evaluating nuclide chains with one hundred or more nuclides in the chain. The method enables evaluation of the complete chain, without elimination of short-lived nuclides. It is efficient and accurate.
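
    For reference, the classic Bateman solution that the record's nonsingular coefficients improve upon can be sketched as follows. This plain form assumes all decay constants are distinct (the nearly equal case is exactly where the record's method helps), and the chain below is illustrative.

```python
import numpy as np

def bateman(t, lambdas, n1_0=1.0):
    """Bateman solution for a linear decay chain starting with N1(0) = n1_0
    and zero daughters: returns [N_1(t), ..., N_K(t)]. Assumes all decay
    constants are distinct, so the denominators never vanish."""
    lambdas = np.asarray(lambdas, dtype=float)
    out = []
    for k in range(1, len(lambdas) + 1):
        lam = lambdas[:k]
        coeff = np.prod(lam[:-1]) if k > 1 else 1.0
        total = 0.0
        for i in range(k):
            denom = np.prod(np.delete(lam, i) - lam[i]) if k > 1 else 1.0
            total += np.exp(-lam[i] * t) / denom
        out.append(n1_0 * coeff * total)
    return out

# Illustrative three-member chain (decay constants are assumptions)
print(bateman(t=5.0, lambdas=[0.3, 0.1, 0.05]))
```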

  3. A simple red-ox titrimetric method for the evaluation of photo ...

    Indian Academy of Sciences (India)

    Unknown

    tal conditions in a relatively short duration in R&D laboratories having basic analytical facilities. The method suggested here could also be adopted to study the photocatalytic activity of other transition metal oxide based catalysts. For establishing this technique, we have monitored a simple one-electron transfer red-ox ...

  4. Simple Analytic Collisional Rates for non-LTE Vibrational Populations in Astrophysical Environments: the Cases of Circumstellar SiO Masers and Shocked H2

    Science.gov (United States)

    Bieniek, Ronald

    2008-05-01

    Rates for collisionally induced transitions between molecular vibrational levels are important in modeling a variety of non-LTE processes in astrophysical environments. Two examples are SiO masering in circumstellar envelopes in certain late-type stars [1] and the vibrational populations of molecular hydrogen in shocked interstellar medium [cf 2]. A simple exponential-potential model of molecular collisions leads to a two-parameter analytic expression for state-to-state and thermally averaged rates for collisionally induced vibrational-translational (VT) transitions in diatomic molecules [3,4]. The thermally averaged rates predicted by this formula have been shown to be in excellent numerical agreement with absolute experimental and quantum mechanical rates over large temperature ranges and initial vibrational excitation levels in a variety of species, e.g., OH, O2, N2 [3] and even for the rate of H2(v=1)+H2, which changes by five orders of magnitude in the temperature range 50-2000 K [4]. Analogous analytic rates will be reported for vibrational transitions in SiO due to collisions with H2 and compared to the numerical fit of quantum-mechanical rates calculated by Bieniek and Green [5]. [1] Palov, A.P., Gray, M.D., Field, D., & Balint-Kurti, G.G. 2006, ApJ, 639, 204. [2] Flower, D. 2007, Molecular Collisions in the Interstellar Medium (Cambridge: Cambridge Univ. Press) [3] Bieniek, R.J. & Lipson, S.J. 1996, Chem. Phys. Lett. 263, 276. [4] Bieniek, R.J. 2006, Proc. NASA LAW (Lab. Astrophys. Workshop) 2006, 299; http://www.physics.unlv.edu/labastro/nasalaw2006proceedings.pdf. [5] Bieniek, R.J., & Green, S. 1983, ApJ, 265, L29 and 1983, ApJ, 270, L101.

  5. Vertical and pitching resonance of train cars moving over a series of simple beams

    Science.gov (United States)

    Yang, Y. B.; Yau, J. D.

    2015-02-01

    The resonant response, including both vertical and pitching motions, of an undamped sprung mass unit moving over a series of simple beams is studied by a semi-analytical approach. For a sprung mass that is very small compared with the beam, we first simplify the sprung mass as a constant moving force and obtain the response of the beam in closed form. With this, we then solve for the response of the sprung mass passing over a series of simple beams, and validate the solution by an independent finite element analysis. To evaluate the pitching resonance, we consider the cases of a two-axle model and a coach model traveling over rough rails supported by a series of simple beams. The resonance of a train car is characterized by the fact that its response continues to build up, as it travels over more and more beams. For train cars with long axle intervals, the vertical acceleration induced by pitching resonance dominates the peak response of the train traveling over a series of simple beams. The present semi-analytical study allows us to grasp the key parameters involved in the primary/sub-resonant responses. Other phenomena of resonance are also discussed in the exemplar study.

  6. Simplified semi-analytical model for mass transport simulation in unsaturated zone

    International Nuclear Information System (INIS)

    Sa, Bernadete L. Vieira de; Hiromoto, Goro

    2001-01-01

    This paper describes a simple model to determine the flux of radionuclides released from a concrete vault repository, and its implementation through the development of a computer program. The radionuclide leach rate from the waste is calculated using a model based on simple first-order kinetics, and the transport through the porous media below the waste is determined using a semi-analytical solution of the mass transport equation. Results obtained in the IAEA intercomparison program are also reported in this communication. (author)
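
    A minimal sketch of the first-order leach step described above, under the assumption (not stated in the record) that the inventory is depleted by leaching and radioactive decay and the release flux is proportional to the remaining inventory; all values are illustrative.

```python
import numpy as np

def release_flux(t, inventory0, k_leach, lam_decay):
    """Release rate (Bq/yr) at time t (years) for a first-order leach model:
    the inventory decays at the combined rate k_leach + lam_decay, and the
    instantaneous release is k_leach times the remaining inventory."""
    inventory = inventory0 * np.exp(-(k_leach + lam_decay) * t)
    return k_leach * inventory

t = np.linspace(0.0, 500.0, 6)
print(release_flux(t, inventory0=1.0e12, k_leach=1.0e-3, lam_decay=2.4e-2))
```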

  7. An analytic solution to the homogeneous EIT problem on the 2D disk and its application to estimation of electrode contact impedances

    International Nuclear Information System (INIS)

    Demidenko, Eugene

    2011-01-01

    An analytic solution of the potential distribution on a 2D homogeneous disk for electrical impedance tomography under the complete electrode model is expressed via an infinite system of linear equations. For the shunt electrode model with two electrodes, our solution coincides with the previously derived solution expressed via an elliptic integral (Pidcock et al 1995 Physiol. Meas. 16 77–90). The Dirichlet-to-Neumann map is derived for statistical estimation via nonlinear least squares. The solution is validated in phantom experiments and applied to breast contact impedance estimation in vivo. Statistical hypothesis testing is used to test whether the contact impedances are the same across electrodes or all equal zero. Our solution can be especially useful as a rapid real-time test for bad surface contact in a clinical setting

  8. Analytic result for the two-loop six-point NMHV amplitude in N=4 super Yang-Mills theory

    CERN Document Server

    Dixon, Lance J.; Henn, Johannes M.

    2012-01-01

    We provide a simple analytic formula for the two-loop six-point ratio function of planar N = 4 super Yang-Mills theory. This result extends the analytic knowledge of multi-loop six-point amplitudes beyond those with maximal helicity violation. We make a natural ansatz for the symbols of the relevant functions appearing in the two-loop amplitude, and impose various consistency conditions, including symmetry, the absence of spurious poles, the correct collinear behaviour, and agreement with the operator product expansion for light-like (super) Wilson loops. This information reduces the ansatz to a small number of relatively simple functions. In order to fix these parameters uniquely, we utilize an explicit representation of the amplitude in terms of loop integrals that can be evaluated analytically in various kinematic limits. The final compact analytic result is expressed in terms of classical polylogarithms, whose arguments are rational functions of the dual conformal cross-ratios, plus precisely two function...

  9. Galaxy-galaxy lensing estimators and their covariance properties

    Science.gov (United States)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose

    2017-11-01

    We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  10. Galaxy–galaxy lensing estimators and their covariance properties

    International Nuclear Information System (INIS)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; Slosar, Anze; Gonzalez, Jose Vazquez

    2017-01-01

    Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  11. Multifractal rainfall extremes: Theoretical analysis and practical estimation

    International Nuclear Information System (INIS)

    Langousis, Andreas; Veneziano, Daniele; Furcolo, Pierluigi; Lepore, Chiara

    2009-01-01

    We study the extremes generated by a multifractal model of temporal rainfall and propose a practical method to estimate the Intensity-Duration-Frequency (IDF) curves. The model assumes that rainfall is a sequence of independent and identically distributed multiplicative cascades of the beta-lognormal type, with common duration D. When properly fitted to data, this simple model was found to produce accurate IDF results [Langousis A, Veneziano D. Intensity-duration-frequency curves from scaling representations of rainfall. Water Resour Res 2007;43. (doi:10.1029/2006WR005245)]. Previous studies also showed that the IDF values from multifractal representations of rainfall scale with duration d and return period T under either d → 0 or T → ∞, with different scaling exponents in the two cases. We determine the regions of the (d, T)-plane in which each asymptotic scaling behavior applies in good approximation, find expressions for the IDF values in the scaling and non-scaling regimes, and quantify the bias when estimating the asymptotic power-law tail of rainfall intensity from finite-duration records, as was often done in the past. Numerically calculated exact IDF curves are compared to several analytic approximations. The approximations are found to be accurate and are used to propose a practical IDF estimation procedure.

  12. Simple models of the thermal structure of the Venusian ionosphere

    International Nuclear Information System (INIS)

    Whitten, R.C.; Knudsen, W.C.

    1980-01-01

    Analytical and numerical models of plasma temperatures in the Venusian ionosphere are proposed. The magnitudes of plasma thermal parameters are calculated using thermal-structure data obtained by the Pioneer Venus Orbiter. The simple models are found to be in good agreement with the more detailed models of thermal balance. Daytime and nighttime temperature data along with corresponding temperature profiles are provided

  13. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 2, structural implementation and validation

    Science.gov (United States)

    Milani, G.; Bertolesi, E.

    2017-07-01

    The simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry presented in Part 1 is here implemented at a structural level and validated. For this implementation, a Rigid Body and Spring Mass (RBSM) model is adopted, relying on a numerical model constituted of rigid elements interconnected by homogenized inelastic normal and shear springs placed at the interfaces between adjoining elements. This approach is also known as HRBSM. The inherent advantage is that it is not necessary to solve a homogenization problem at each load step in each Gauss point, and a direct implementation into a commercial software package by means of an external user-supplied subroutine is straightforward. In order to gain insight into the capabilities of the present approach to reasonably reproduce masonry behavior at a structural level, non-linear static analyses are conducted on a shear wall, for which experimental and numerical data are available in the technical literature. Quite accurate results are obtained with a very limited computational effort.

  14. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    Science.gov (United States)

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides

  15. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservativism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
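
    A minimal sketch of the central point above, that correlation between the two measurements shrinks the uncertainty of a growth (difference) estimate; the standard deviations and correlation values below are illustrative, not the paper's data.

```python
import numpy as np

# Growth estimate g = d2 - d1 from two depth measurements with correlated
# errors: var(g) = s1^2 + s2^2 - 2*rho*s1*s2.
sigma1, sigma2 = 0.10, 0.10     # per-measurement sizing std dev (assumed)
for rho in (0.0, 0.5, 0.9):     # correlation between the two measurements' errors
    var_g = sigma1**2 + sigma2**2 - 2.0 * rho * sigma1 * sigma2
    print(f"rho = {rho:.1f}: sigma_growth = {np.sqrt(var_g):.3f}")
```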

  16. Simple estimate of fission rate during JCO criticality accident

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, Kazuhiro [Faculty of Studies on Contemporary Society, Aichi Shukutoku Univ., Nagakute, Aichi (Japan)

    2000-03-01

    The fission rate during the JCO criticality accident is estimated from fission-product (FP) radioactivities in a uranium solution sample taken from the preparation basin 20 days after the accident. The FP radioactivity data are taken from a report by JAERI released in the Accident Investigation Committee. The total fission number is found to be quite dependent on the FP radioactivities and is estimated to be about 4x10^16 per liter, or 2x10^18 per 16 kgU (assuming a uranium concentration of 278.9 g/liter). On the contrary, the time dependence of the fission rate is rather insensitive to the FP radioactivities. Hence, it is difficult to determine the fission number in the initial burst from the radioactivity data. (author)

  17. Simple estimate of fission rate during JCO criticality accident

    International Nuclear Information System (INIS)

    Oyamatsu, Kazuhiro

    2000-01-01

    The fission rate during the JCO criticality accident is estimated from fission-product (FP) radioactivities in a uranium solution sample taken from the preparation basin 20 days after the accident. The FP radioactivity data are taken from a report by JAERI released in the Accident Investigation Committee. The total fission number is found to be quite dependent on the FP radioactivities and is estimated to be about 4x10^16 per liter, or 2x10^18 per 16 kgU (assuming a uranium concentration of 278.9 g/liter). On the contrary, the time dependence of the fission rate is rather insensitive to the FP radioactivities. Hence, it is difficult to determine the fission number in the initial burst from the radioactivity data. (author)

  18. Analytical errors in measuring radioactivity in cell proteins and their effect on estimates of protein turnover in L cells

    International Nuclear Information System (INIS)

    Silverman, J.A.; Mehta, J.; Brocher, S.; Amenta, J.S.

    1985-01-01

    Previous studies on protein turnover in ³H-labelled L-cell cultures have shown recovery of total ³H at the end of a three-day experiment to be always significantly in excess of the ³H recovered at the beginning of the experiment. A number of possible sources for this error in measuring radioactivity in cell proteins have been reviewed. ³H-labelled proteins, when dissolved in NaOH and counted for radioactivity in a liquid-scintillation spectrometer, showed losses of 30-40% of the radioactivity; neither external nor internal standardization compensated for this loss. Hydrolysis of these proteins with either Pronase or concentrated HCl significantly increased the measured radioactivity. In addition, 5-10% of the cell protein is left on the plastic culture dish when cells are recovered in phosphate-buffered saline. Furthermore, this surface-adherent protein, after pulse labelling, contains proteins of high radioactivity that turn over rapidly and make a major contribution to the accumulating radioactivity in the medium. These combined errors can account for up to 60% of the total radioactivity in the cell culture. Similar analytical errors have been found in studies of other cell cultures. The effect of these analytical errors on estimates of protein turnover in cell cultures is discussed. (author)

  19. Illustration of an analytical method for quantification of the safety of technical appliances

    International Nuclear Information System (INIS)

    Tegel, M.

    1981-01-01

    The safety analysis of technical products will in future be required more and more, also for simple technical systems. Fault-tree analysis is a method for safety assessment used in particular in aviation and space engineering as well as in energy engineering. This analytical method can also be applied to simple technical constructions, as the article shows, using as an example an axially rotatable load hook. (orig.) [de]
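
    A minimal sketch of the fault-tree arithmetic such an analysis rests on, with independent basic events combined through AND/OR gates; the event names and probabilities are invented for illustration, not taken from the load-hook analysis.

```python
def p_and(*ps):
    """Probability that all independent basic events occur (AND gate)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """Probability that at least one independent event occurs (OR gate),
    computed via the complement to stay exact for any number of inputs."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical basic-event probabilities (illustrative only)
p_basic = {"weld_fails": 1e-4, "overload": 5e-3, "sensor_fails": 1e-3}

# Top event: weld failure, OR simultaneous overload and sensor failure
p_top = p_or(p_basic["weld_fails"],
             p_and(p_basic["overload"], p_basic["sensor_fails"]))
print(f"P(top event) = {p_top:.2e}")
```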

  20. Analytic Bayesian solution of the two-stage poisson-type problem in probabilistic risk analysis

    International Nuclear Information System (INIS)

    Frohner, F.H.

    1985-01-01

    The basic purpose of probabilistic risk analysis is to make inferences about the probabilities of various postulated events, with an account of all relevant information such as prior knowledge and operating experience with the specific system under study, as well as experience with other similar systems. Estimation of the failure rate of a Poisson-type system leads to an especially simple Bayesian solution in closed form if the prior probability implied by the invariance properties of the problem is properly taken into account. This basic simplicity persists if a more realistic prior, representing order-of-magnitude knowledge of the rate parameter, is employed instead. Moreover, the more realistic prior allows direct incorporation of experience gained from other similar systems, without the need to postulate a statistical model for an underlying ensemble. The analytic formalism is applied to actual nuclear reactor data.
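
    A minimal sketch of the closed-form conjugate update behind such an analysis: a Gamma prior on a Poisson failure rate yields a Gamma posterior. The prior parameters and observed counts below are illustrative assumptions, not the paper's data.

```python
from scipy.stats import gamma

# Gamma(a, scale=1/b) prior on the failure rate; observing k failures in
# exposure time T gives the posterior Gamma(a + k, scale=1/(b + T)).
a, b = 0.5, 100.0          # illustrative "order of magnitude" prior
k, T = 2, 1.0e4            # illustrative: 2 failures in 1e4 operating hours

post = gamma(a + k, scale=1.0 / (b + T))
print(f"posterior mean = {post.mean():.2e} per hour")
print(f"90% credible interval = ({post.ppf(0.05):.2e}, {post.ppf(0.95):.2e})")
```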

  1. Capacity Calculation of Shunt Active Power Filters for Electric Vehicle Charging Stations Based on Harmonic Parameter Estimation and Analytical Modeling

    Directory of Open Access Journals (Sweden)

    Niancheng Zhou

    2014-08-01

    The influence of electric vehicle charging stations on power grid harmonics is becoming increasingly significant as their presence continues to grow. This paper studies the operational principles of the charging current in the continuous and discontinuous modes, which are affected by the charging power, for a three-phase uncontrolled rectification charger with a passive power factor correction link. A parameter estimation method is proposed for the equivalent circuit of the charger by using the measured characteristic AC (alternating current) voltage and current data combined with the charging circuit constraints in the conduction process, and this method is verified using an experimental platform. The sensitivity of the current harmonics to changes in the parameters is analyzed. An analytical harmonic model of the charging station is created by separating the chargers into groups by type. Then, the harmonic current amplification caused by the shunt active power filter is investigated, and an analytical formula for the overload factor is derived to further correct the capacity of the shunt active power filter. Finally, this method is validated through a field test of a charging station.

  2. Solution of a simple inelastic scattering problem

    International Nuclear Information System (INIS)

    Knudson, S.K.

    1975-01-01

    Simple examples of elastic scattering, typically from square wells, serve as important pedagogical tools in discussion of the concepts and processes involved in elastic scattering events. An analytic solution of a model inelastic scattering system is presented here to serve in this role for inelastic events. The model and its solution are simple enough to be of pedagogical utility, but also retain enough of the important physical features to include most of the special characteristics of inelastic systems. The specific model chosen is the collision of an atom with a harmonic oscillator, interacting via a repulsive square well potential. Pedagogically important features of inelastic scattering, including its multistate character, convergence behavior, and dependence on an ''inelastic potential'' are emphasized as the solution is determined. Results are presented for various energies and strengths of inelastic scattering, which show that the model is capable of providing an elementary representation of vibrationally inelastic scattering

  3. THE ESTIMATION OF STAR FORMATION RATES AND STELLAR POPULATION AGES OF HIGH-REDSHIFT GALAXIES FROM BROADBAND PHOTOMETRY

    International Nuclear Information System (INIS)

    Lee, Seong-Kook; Ferguson, Henry C.; Somerville, Rachel S.; Wiklind, Tommy; Giavalisco, Mauro

    2010-01-01

    We explore methods to improve the estimates of star formation rates and mean stellar population ages from broadband photometry of high-redshift star-forming galaxies. We use synthetic spectral templates with a variety of simple parametric star formation histories to fit broadband spectral energy distributions. These parametric models are used to infer ages, star formation rates, and stellar masses for a mock data set drawn from a hierarchical semi-analytic model of galaxy evolution. Traditional parametric models generally assume an exponentially declining rate of star formation after an initial instantaneous rise. Our results show that star formation histories with a much more gradual rise in the star formation rate are likely to be better templates, and are likely to give better overall estimates of the age distribution and star formation rate distribution of Lyman break galaxies (LBGs). For B- and V-dropouts, we find the best simple parametric model to be one where the star formation rate increases linearly with time. The exponentially declining model overpredicts the age by 100% and 120% for B- and V-dropouts, on average, while for a linearly increasing model, the age is overpredicted by 9% and 16%, respectively. Similarly, the exponential model underpredicts star formation rates by 56% and 60%, while the linearly increasing model underpredicts by 15% and 22%, respectively. For U-dropouts, the models where the star formation rate has a peak (near z ∼ 3) provide the best match: age overprediction is reduced from 110% to 26%, and star formation rate underprediction is reduced from 58% to 22%. We classify different types of star formation histories in the semi-analytic models and show how the biases behave for the different classes. We also provide two-band calibration formulae for stellar mass and star formation rate estimation.

  4. Optimal design under uncertainty of a passive defense structure against snow avalanches: from a general Bayesian framework to a simple analytical model

    Directory of Open Access Journals (Sweden)

    N. Eckert

    2008-10-01

    Full Text Available For snow avalanches, passive defense structures are generally designed by considering high return period events. In this paper, taking inspiration from other natural hazards, an alternative method based on the maximization of the economic benefit of the defense structure is proposed. A general Bayesian framework is described first. Special attention is given to the problem of taking the poor local information into account in the decision-making process. Therefore, simplifying assumptions are made. The avalanche hazard is represented by a Peak Over Threshold (POT) model. The influence of the dam is quantified in terms of runout distance reduction with a simple relation derived from small-scale experiments using granular media. The costs corresponding to dam construction and the damage to the element at risk are roughly evaluated for each dam height-hazard value pair, with damage evaluation corresponding to the maximal expected loss. Both the classical and the Bayesian risk functions can then be computed analytically. The results are illustrated with a case study from the French avalanche database. A sensitivity analysis is performed and modelling assumptions are discussed in addition to possible further developments.
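
    The benefit-maximization idea can be illustrated with a toy expected-cost calculation: choose the dam height minimizing construction cost plus expected damage, with a POT-style exponential runout tail and a roughly linear runout reduction per metre of dam. All functional forms and numbers below are assumptions, not the paper's calibration.

        import numpy as np

        heights = np.linspace(0.0, 20.0, 201)       # candidate dam heights (m)
        rate, damage, horizon = 0.2, 1.0e6, 50      # events/yr, loss per hit (EUR), years
        c_per_m = 5.0e4                             # construction cost per metre (EUR)

        def p_hit(h):
            # P(runout reaches the element at risk | dam height h): exponential
            # POT-style tail, runout shortened ~20 m per metre of dam (assumed)
            return np.exp(-(100.0 + 20.0 * h) / 120.0)

        total = c_per_m * heights + horizon * rate * damage * p_hit(heights)
        print(f"cost-optimal dam height ~ {heights[np.argmin(total)]:.1f} m")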

  5. Development and validation of simple RP-HPLC-PDA analytical protocol for zileuton assisted with Design of Experiments for robustness determination

    Directory of Open Access Journals (Sweden)

    Saurabh B. Ganorkar

    2017-02-01

    Full Text Available A simple, rapid, sensitive, robust, stability-indicating RP-HPLC-PDA analytical protocol was developed and validated for the analysis of zileuton racemate in bulk and in tablet formulation. Method development and resolution of degradation products from forced degradation, namely hydrolytic (acidic, basic, neutral), oxidative, photolytic (acidic, basic, neutral, solid state) and thermal (dry heat) degradation, were achieved on a LC-GC Qualisil BDS C18 column (250 mm × 4.6 mm × 5 μm) in isocratic mode at ambient temperature, employing a mobile phase of methanol and orthophosphoric acid (0.2%, v/v) in a ratio of 80:20 (v/v) at a flow rate of 1.0 mL min⁻¹ and detection at 260 nm. 'Design of Experiments' (DOE) employing 'Central Composite Design' (CCD) and 'Response Surface Methodology' (RSM) was applied as an advancement over the traditional 'One Variable at a Time' (OVAT) approach to evaluate the effects of variations in selected factors (methanol content, flow rate, concentration of orthophosphoric acid); robustness was assessed graphically, and statistical interpretation was achieved with Multiple Linear Regression (MLR) and ANOVA. The method passed all validation parameters: linearity, precision, accuracy, limit of detection and limit of quantitation, and robustness. The method was applied effectively for the analysis of in-house zileuton tablets.

  6. Shielding Characteristics Using an Ultrasonic Configurable Fan Artificial Noise Source to Generate Modes - Experimental Measurements and Analytical Predictions

    Science.gov (United States)

    Sutliff, Daniel L.; Walker, Bruce E.

    2014-01-01

    An Ultrasonic Configurable Fan Artificial Noise Source (UCFANS) was designed, built, and tested in support of the NASA Langley Research Center's 14x22 wind tunnel test of the Hybrid Wing Body (HWB) full 3-D 5.8% scale model. The UCFANS is a 5.8% rapid prototype scale model of a high-bypass turbofan engine that can generate the tonal signature of proposed engines using artificial sources (no flow). The purpose of the program was to provide an estimate of the acoustic shielding benefits possible from mounting an engine on the upper surface of a wing; a flat plate model was used as the shielding surface. Simple analytical simulations were used to preview the radiation patterns - Fresnel knife-edge diffraction was coupled with a dense phased array of point sources to compute shielded and unshielded sound pressure distributions for potential test geometries and excitation modes. Contour plots of sound pressure levels, and integrated power levels, from nacelle alone and shielded configurations for both the experimental measurements and the analytical predictions are presented in this paper.
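
    The knife-edge ingredient of the simulation can be sketched with the standard single-edge Fresnel diffraction approximation (as given, e.g., in ITU-R P.526); the geometry values below are illustrative assumptions, not the test configuration.

        import numpy as np

        def knife_edge_loss_db(h, d1, d2, wavelength):
            """Single knife-edge diffraction loss (dB); ITU-R P.526 approximation."""
            v = h * np.sqrt(2.0 * (d1 + d2) / (wavelength * d1 * d2))   # Fresnel parameter
            if v <= -0.78:
                return 0.0               # approximation not valid / loss negligible
            return 6.9 + 20.0 * np.log10(np.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

        # e.g. a ~2 kHz tone (lambda ~ 0.17 m), edge 0.5 m above the direct path
        print(f"{knife_edge_loss_db(h=0.5, d1=1.0, d2=10.0, wavelength=0.17):.1f} dB")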

  7. Atmospheric water vapor transport: Estimation of continental precipitation recycling and parameterization of a simple climate model. M.S. Thesis

    Science.gov (United States)

    Brubaker, Kaye L.; Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    The advective transport of atmospheric water vapor and its role in global hydrology and the water balance of continental regions are discussed and explored. The data set consists of ten years of global wind and humidity observations interpolated onto a regular grid by objective analysis. Atmospheric water vapor fluxes across the boundaries of selected continental regions are displayed graphically. The water vapor flux data are used to investigate the sources of continental precipitation. The total amount of water that precipitates on large continental regions is supplied by two mechanisms: (1) advection from surrounding areas external to the region; and (2) evaporation and transpiration from the land surface recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. In a separate, but related, study estimates of ocean to land water vapor transport are used to parameterize an existing simple climate model, containing both land and ocean surfaces, that is intended to mimic the dynamics of continental climates.

  8. An analytical method for estimating the 14N nuclear quadrupole resonance parameters of organic compounds with complex free induction decays for radiation effects studies

    International Nuclear Information System (INIS)

    Iselin, L.H.

    1992-01-01

    The use of 14N nuclear quadrupole resonance (NQR) as a radiation dosimetry tool has only recently been explored. An analytical method for analyzing 14N NQR complex free induction decays is presented, together with the background necessary to conduct pulsed NQR experiments. The 14N NQR energy levels and possible transitions are derived in step-by-step detail. The components of a pulsed NQR spectrometer are discussed along with the experimental techniques for conducting radiation effects experiments using the spectrometer. Three data analysis techniques are explained: the power spectral density Fourier transform, state-space singular value decomposition (HSVD), and nonlinear curve fitting (using the downhill simplex method of global optimization and the Levenberg-Marquardt method). These three techniques are integrated into an analytical method which applies them in this order to determine the physical NQR parameters. Sample data sets of urea and guanidine sulfate are used to demonstrate how these methods can be employed to analyze both simple and complex free induction decays. By determining baseline values for biologically significant organics, radiation effects on the NQR parameters can be studied to provide a link between current radiation dosimetry techniques and the biological effects of radiation.
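
    The final, curve-fitting stage can be sketched as a Levenberg-Marquardt fit of a damped sinusoid to a synthetic one-component free induction decay (the record's actual data are of course multi-component):

        import numpy as np
        from scipy.optimize import curve_fit

        def fid(t, a, f, phi, t2):
            return a * np.exp(-t / t2) * np.cos(2.0 * np.pi * f * t + phi)

        t = np.linspace(0.0, 5.0e-3, 2000)                       # 5 ms record
        rng = np.random.default_rng(0)
        y = fid(t, 1.0, 3.0e3, 0.4, 1.2e-3) + 0.05 * rng.standard_normal(t.size)

        popt, _ = curve_fit(fid, t, y, p0=[0.8, 2.95e3, 0.0, 1.0e-3], method="lm")
        print("fitted (a, f, phi, T2*):", np.round(popt, 4))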

  9. Development of simple kinetic models and parameter estimation for ...

    African Journals Online (AJOL)

    PANCHIGA

    2016-09-28

    Sep 28, 2016 ... estimation for simulation of recombinant human serum albumin ... and recombinant protein production by P. pastoris without requiring complex models. Key words: ..... SDS-PAGE and showed the same molecular size as.

  10. Performance of analytical methods for tomographic gamma scanning

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Mercer, D.J.

    1997-01-01

    The use of gamma-ray computerized tomography for nondestructive assay of radioactive materials has led to the development of specialized analytical methods. Over the past few years, Los Alamos has developed and implemented a computer code, called ARC-TGS, for the analysis of data obtained by tomographic gamma scanning (TGS). ARC-TGS reduces TGS transmission and emission tomographic data, providing the user with images of the sample contents, the activity or mass of selected radionuclides, and an estimate of the uncertainty in the measured quantities. The results provided by ARC-TGS can be corrected for self-attenuation when the isotope of interest emits more than one gamma-ray. In addition, ARC-TGS provides information needed to estimate TGS quantification limits and to estimate the scan time needed to screen for small amounts of radioactivity. In this report, an overview of the analytical methods used by ARC-TGS is presented along with an assessment of the performance of these methods for TGS.

  11. Varying stiffness and load distributions in defective ball bearings: Analytical formulation and application to defect size estimation

    Science.gov (United States)

    Petersen, Dick; Howard, Carl; Prime, Zebb

    2015-02-01

    This paper presents an analytical formulation of the load distribution and varying effective stiffness of a ball bearing assembly with a raceway defect of varying size, subjected to static loading in the radial, axial and rotational degrees of freedom. The analytical formulation is used to study the effect of the size of the defect on the load distribution and varying stiffness of the bearing assembly. The study considers a square-shaped outer raceway defect centered in the load zone and the bearing is loaded in the radial and axial directions while the moment loads are zero. Analysis of the load distributions shows that as the defect size increases, defect-free raceway sections are subjected to increased static loading when one or more balls completely or partly destress when positioned in the defect zone. The stiffness variations that occur when balls pass through the defect zone are significantly larger and change more rapidly at the defect entrance and exit than the stiffness variations that occur for the defect-free bearing case. These larger, more rapid stiffness variations generate parametric excitations which produce the low frequency defect entrance and exit events typically observed in the vibration response of a bearing with a square-shaped raceway defect. Analysis of the stiffness variations further shows that as the defect size increases, the mean radial stiffness decreases in the loaded radial and axial directions and increases in the unloaded radial direction. The effects of such stiffness changes on the low frequency entrance and exit events in the vibration response are simulated with a multi-body nonlinear dynamic model. Previous work used the time difference between the low frequency entrance event and the high frequency exit event to estimate the size of the defect. However, these previous defect size estimation techniques cannot distinguish between defects that differ in size by an integer number of the ball angular spacing, and a third feature
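
    The kinematics behind defect-size estimation can be sketched from bearing geometry: the cage speed sets how fast a ball traverses the defect, so the entrance-to-exit time difference maps to an arc length. The bearing numbers below are illustrative assumptions; note that the record's point is precisely that this naive mapping becomes ambiguous for large defects.

        import numpy as np

        n_balls, d_ball, d_pitch, phi = 9, 7.9e-3, 38.5e-3, 0.0   # geometry (m, rad), assumed
        f_shaft = 25.0                                            # shaft speed (Hz), outer race fixed

        ratio = d_ball / d_pitch * np.cos(phi)
        f_cage = 0.5 * f_shaft * (1.0 - ratio)        # cage (ball-set) rotation rate
        bpfo = n_balls * f_cage                       # ball pass frequency, outer race

        dt = 0.9e-3                                   # entrance-to-exit time (s), assumed measured
        defect_len = dt * f_cage * 2.0 * np.pi * (d_pitch / 2.0)  # ball-centre travel (m)
        print(f"BPFO = {bpfo:.1f} Hz; naive defect length = {defect_len * 1e3:.2f} mm")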

  12. Two simple ansaetze for obtaining exact solutions of high dispersive nonlinear Schroedinger equations

    International Nuclear Information System (INIS)

    Palacios, Sergio L.

    2004-01-01

    We propose two simple ansaetze that allow us to obtain different analytical solutions of the high dispersive cubic and cubic-quintic nonlinear Schroedinger equations. Among these solutions we can find solitary wave and periodic wave solutions representing the propagation of different waveforms in nonlinear media.

  13. Can we estimate the cellular phone RF peak output power with a simple experiment?

    Science.gov (United States)

    Fioreze, Maycon; dos Santos Junior, Sauli; Goncalves Hönnicke, Marcelo

    2016-07-01

    Cellular phones are becoming increasingly useful tools for students. Since cell phones operate in the microwave bandwidth, they can be used to motivate students to demonstrate and better understand the properties of electromagnetic waves. However, since these waves operate at higher frequencies (L-band, from 800 MHz to 2 GHz), they are not simple to detect. Usually, expensive real-time high-frequency oscilloscopes are required. Indirect measurements are also possible through heat-based and diode-detector-based radio-frequency (RF) power sensors. Another didactic and intuitive way is to explore a simple and inexpensive detection system, based on the interference effect caused in the electronic circuits of TV and PC loudspeakers, and to investigate different properties of the cell phone's RF electromagnetic waves, such as their power and modulation frequency. This manuscript proposes a way to quantify these measurements, based on a simple Friis equation model and the time constant of the circuit used in the detection system, in order to present them didactically to students and even allow them to explore such a simple detection system at home.
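
    A minimal sketch of the Friis-equation estimate: invert the free-space link budget to recover the transmit power from a received-power reading. The gains, distance and received power below are illustrative guesses, not the authors' measurements.

        import numpy as np

        def tx_power_from_friis(p_rx_w, d_m, f_hz, g_tx=1.0, g_rx=1.0):
            """Invert Friis: P_tx = P_rx (4 pi d / lambda)^2 / (G_tx G_rx)."""
            lam = 3.0e8 / f_hz
            return p_rx_w * (4.0 * np.pi * d_m / lam) ** 2 / (g_tx * g_rx)

        # e.g. 0.2 mW detected 3 m from a 900 MHz handset with unity-gain antennas
        print(f"estimated peak output power ~ {tx_power_from_friis(2.0e-4, 3.0, 900e6):.2f} W")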

  14. Can we estimate the cellular phone RF peak output power with a simple experiment?

    International Nuclear Information System (INIS)

    Fioreze, Maycon; Hönnicke, Marcelo Goncalves; Dos Santos Junior, Sauli

    2016-01-01

    Cellular phones are becoming increasingly useful tools for students. Since cell phones operate in the microwave bandwidth, they can be used to motivate students to demonstrate and better understand the properties of electromagnetic waves. However, since these waves operate at higher frequencies (L-band, from 800 MHz to 2 GHz), they are not simple to detect. Usually, expensive real-time high-frequency oscilloscopes are required. Indirect measurements are also possible through heat-based and diode-detector-based radio-frequency (RF) power sensors. Another didactic and intuitive way is to explore a simple and inexpensive detection system, based on the interference effect caused in the electronic circuits of TV and PC loudspeakers, and to investigate different properties of the cell phone's RF electromagnetic waves, such as their power and modulation frequency. This manuscript proposes a way to quantify these measurements, based on a simple Friis equation model and the time constant of the circuit used in the detection system, in order to present them didactically to students and even allow them to explore such a simple detection system at home. (paper)

  15. Quantification of analytes affected by relevant interfering signals under quality controlled conditions

    International Nuclear Information System (INIS)

    Bettencourt da Silva, Ricardo J.N.; Santos, Julia R.; Camoes, M. Filomena G.F.C.

    2006-01-01

    The analysis of organic contaminants or residues in biological samples is frequently affected by the presence of compounds producing interfering instrumental signals. This feature is responsible for the higher complexity and cost of these analyses and/or for a significant reduction in the number of studied analytes in a multi-analyte method. This work presents a methodology to estimate the impact of the interfering compounds on the quality of the analysis of complex samples, based on separative instrumental methods of analysis, aiming at supporting the inclusion of analytes affected by interfering compounds in the list of compounds analysed in the studied samples. The proposed methodology involves the study of the magnitude of the signal produced by the interfering compounds in the analysed matrix, and is applicable to analytical systems affected by interfering compounds with varying concentration in the studied matrix. The proposed methodology is based on the comparison of the signals from a representative number of examples of the studied matrix, in order to estimate the impact of the presence of such compounds on the measurement quality. The treatment of the chromatographic signals necessary to collect these data can be easily performed using the algorithms for subtraction of chromatographic signals available in most analytical instrumentation software. The subtraction of the interfering compound signal from the sample signal allows compensation of the interfering effect irrespective of the relative magnitude of the interfering and analyte signals, supporting the applicability of the same model of method performance over a broader concentration range. The quantification of the measurement uncertainty was performed using the differential approach, which allows the estimation of the contribution of the presence of the interfering compounds to the quality of the measurement. The proposed methodology was successfully applied to the analysis of
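
    The core subtraction step can be sketched in a few lines of numpy on synthetic chromatograms; real signals would additionally need retention-time alignment and scaling of the interferent profile.

        import numpy as np

        t = np.linspace(0.0, 10.0, 2001)                    # retention time (min)
        gauss = lambda c, w: np.exp(-0.5 * ((t - c) / w) ** 2)

        interferent = 0.8 * gauss(5.0, 0.15)                # interfering compound signal
        sample = 0.5 * gauss(5.1, 0.10) + interferent       # analyte + interference

        corrected = sample - interferent                    # subtract interferent signal
        area = corrected.sum() * (t[1] - t[0])              # crude peak integration
        print(f"analyte area ~ {area:.4f} (true {0.5 * 0.10 * np.sqrt(2.0 * np.pi):.4f})")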

  16. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    Science.gov (United States)

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e., initial conditions and constant control signals) can be provided that are necessary for remedying the non-identifiability and for unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models including an insulin receptor dynamics model are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a
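
    A hedged sketch of the central step: build the output sensitivity matrix of a small linear compartment model by finite differences and examine the linear dependence of its columns with an SVD. The model below is illustrative, not the paper's insulin-receptor example.

        import numpy as np
        from scipy.linalg import expm

        def simulate(p, T=5.0, n=100):
            """Output y = x1(t) of a linear two-compartment model dx/dt = A(p) x."""
            k1, k2, k3 = p
            A = np.array([[-(k1 + k3), k2], [k1, -k2]])
            x0 = np.array([1.0, 0.0])
            return np.array([(expm(A * t) @ x0)[0] for t in np.linspace(0.0, T, n)])

        p0, eps = np.array([0.7, 0.3, 0.2]), 1.0e-6
        y0 = simulate(p0)
        # Output sensitivity matrix: column j approximates dy/dp_j
        S = np.column_stack([(simulate(p0 + eps * e) - y0) / eps for e in np.eye(3)])
        sv = np.linalg.svd(S, compute_uv=False)
        print("singular values:", sv, "-> condition number:", sv[0] / sv[-1])
        # A near-zero singular value flags a non-identifiable parameter combination;
        # the corresponding right-singular vector gives the correlated direction.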

  17. Simple methodologies to estimate the energy amount stored in a tree due to an explosive seed dispersal mechanism

    Science.gov (United States)

    do Carmo, Eduardo; Goncalves Hönnicke, Marcelo

    2018-05-01

    There are different ways to introduce and illustrate energy concepts to introductory physics students. The explosive seed dispersal mechanism found in a variety of trees could be one of them. Sibipiruna trees bear fruits (pods) that exhibit such an explosive mechanism. During the explosion, the pods throw seeds several meters away. In this manuscript we show simple methodologies to estimate the amount of energy stored in a Sibipiruna tree due to such a process. Two different physics approaches were used to carry out this study: monitoring the explosive seed dispersal mechanism indoors and in situ, and measuring the elastic constant of the pod shell. An energy of the order of kJ was found to be stored in a single tree due to this explosive mechanism.
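
    A rough sketch of the two estimation routes (with invented numbers, not the paper's measurements): infer per-seed launch energy from a projectile-range model, the per-pod elastic energy from a spring model, and scale to a whole tree.

        import numpy as np

        g, m_seed, R, theta = 9.8, 1.0e-3, 5.0, np.deg2rad(45.0)  # SI units; all assumed
        v2 = R * g / np.sin(2.0 * theta)             # from the range formula R = v^2 sin(2θ)/g
        E_seed = 0.5 * m_seed * v2                   # launch kinetic energy per seed (J)

        k_shell, x = 300.0, 0.01                     # pod-shell spring constant (N/m), deflection (m)
        E_pod = 0.5 * k_shell * x ** 2               # elastic-energy route, per pod (J)

        pods, seeds_per_pod = 10_000, 4
        print(f"per seed: {E_seed * 1e3:.1f} mJ; kinetic route per tree: "
              f"{E_seed * pods * seeds_per_pod:.0f} J (order of kJ)")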

  18. A simple method for estimating the convection- dispersion equation ...

    African Journals Online (AJOL)

    Jane

    2011-08-31

    Aug 31, 2011 ... approach of modeling solute transport in porous media uses the deterministic ... Methods of estimating CDE transport parameters can be divided into statistical ..... diffusion-type model for longitudinal mixing of fluids in flow.

  19. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the individual estimator with the smallest variance. The importance of MCNP's batch statistics is demonstrated by an investigation of the effects of individual estimator variance bias on the combination of estimators, both heuristically with the analytical study and empirically with MCNP.
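
    The Gauss-Markov combination underlying the three-combined estimator is easy to sketch: given the three estimator values and a known covariance matrix (the one below is an illustrative stand-in), the minimum-variance unbiased weights are w = C^{-1} 1 / (1^T C^{-1} 1).

        import numpy as np

        k = np.array([0.9951, 0.9962, 0.9948])      # collision, absorption, track length
        C = 1.0e-6 * np.array([[1.0, 0.6, 0.5],
                               [0.6, 1.2, 0.4],
                               [0.5, 0.4, 0.9]])    # assumed covariance of the estimators

        w = np.linalg.solve(C, np.ones(3))
        w /= w.sum()                                # w = C^{-1} 1 / (1^T C^{-1} 1)
        var = 1.0 / (np.ones(3) @ np.linalg.solve(C, np.ones(3)))
        print(f"combined keff = {w @ k:.5f} +/- {np.sqrt(var):.5f}; weights = {np.round(w, 3)}")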

  20. A new estimator for vector velocity estimation [medical ultrasonics]

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...

  1. Quantum decay model with exact explicit analytical solution

    Science.gov (United States)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  2. Simple expressions to estimate the consequences of a RIA in a PWR

    International Nuclear Information System (INIS)

    Riverola Gurruchaga, J.

    2010-01-01

    The analysis of reactivity insertion accidents (RIA) for the current reactor fleet is gaining importance. Due to the reconsideration of the mechanisms of clad failure evidenced in experiments over the past two decades, a significant change in the regulatory environment is expected. The verification of the revised criteria on core coolability and clad integrity, taking into consideration PCMI or ballooning phenomena, will require the adoption of advanced calculation methods that take advantage of 3D kinetics and a more realistic simulation basis than today's. However, these methods entail the use of relatively complex codes whose results are sometimes difficult to compare with the results obtained by other authors and methods. In the present paper, we review the most important parameters related to the likely acceptance criteria and present simple expressions for fuel temperature, pulse width, and fuel enthalpy during the transient. These expressions have been derived from the Nordheim-Fuchs theoretical model, simplified adequately in terms of its fundamental parameters, such as ejected rod worth, delayed neutron fraction, and heat flux peaking factor, i.e. y = f(ρ, β, Fq, ...). Finally, regressions are obtained against the results computed by the author with a complete conservative RELAP/PARCS model and by other authors using advanced codes in the literature. These expressions are generally valid for typical PWRs with three and four loops, 12- and 14-foot active lengths, and up-to-date fuel designs. Because of their simplicity, these expressions are no substitute for a complete analysis, but they allow estimating expected values and analyzing trends. Finally, examples of the application to real Spanish core reloads are provided. (authors)
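
    The Nordheim-Fuchs closed forms that such simple expressions build on can be sketched directly; the kinetics parameters below are illustrative PWR-like values, not the paper's regressions.

        # Super-prompt-critical step insertion rho0 with adiabatic energy feedback:
        # closed forms for total energy, peak power and pulse width.
        beta, Lambda = 0.0065, 2.0e-5        # delayed fraction, prompt generation time (s)
        rho0 = 1.2 * beta                    # ejected rod worth: 1.2 $ (assumption)
        b = 1.0e-11                          # energy feedback coefficient d(rho)/dE (1/J, assumed)

        rho_p = rho0 - beta                  # prompt reactivity above prompt critical
        alpha = rho_p / Lambda               # initial inverse period (1/s)
        E_total = 2.0 * rho_p / b            # total energy release (J)
        P_peak = rho_p ** 2 / (2.0 * Lambda * b)
        fwhm = 3.52 / alpha                  # pulse full width at half maximum (s)
        print(f"E = {E_total:.2e} J, P_peak = {P_peak:.2e} W, FWHM = {fwhm * 1e3:.1f} ms")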

  3. A New Simple Model for Underwater Wireless Optical Channels in the Presence of Air Bubbles

    KAUST Repository

    Zedini, Emna

    2018-01-15

    A novel statistical model is proposed to characterize turbulence-induced fading in underwater wireless optical channels in the presence of air bubbles for fresh and salty waters, based on experimental data. In this model, the channel irradiance fluctuations are characterized by the mixture Exponential-Gamma distribution. We use the expectation maximization (EM) algorithm to obtain the maximum likelihood parameter estimation of the new model. Interestingly, the proposed model is shown to provide a perfect fit with the measured data under all the channel conditions for both types of water. The major advantage of the new model is that it has a simple mathematical form making it attractive from a performance analysis point of view. Indeed, the application of the Exponential-Gamma model leads to closed-form and analytically tractable expressions for key system performance metrics such as the outage probability and the average bit-error rate.
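
    A minimal sketch of the mixture Exponential-Gamma model (density, sampling and a moment check) with illustrative parameter values; the record obtains the parameters from measured data via the EM algorithm, which is not reproduced here.

        import numpy as np
        from scipy import stats

        w, lam, a, scale = 0.3, 0.8, 4.0, 0.35      # illustrative mixture parameters

        def pdf(x):                                  # mixture Exponential-Gamma density
            return (w * stats.expon(scale=lam).pdf(x)
                    + (1.0 - w) * stats.gamma(a, scale=scale).pdf(x))

        rng = np.random.default_rng(1)
        pick = rng.random(100_000) < w               # latent component labels
        x = np.where(pick, rng.exponential(lam, 100_000), rng.gamma(a, scale, 100_000))
        print("sample mean:", round(x.mean(), 3), "model mean:", w * lam + (1.0 - w) * a * scale)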

  4. Introduction, comparison, and validation of Meta‐Essentials: A free and simple tool for meta‐analysis

    Science.gov (United States)

    van Rhee, Henk; Hak, Tony

    2017-01-01

    We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932
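
    The DerSimonian-Laird machinery mentioned above is compact enough to sketch; the effect sizes below are synthetic, and the Knapp-Hartung adjustment that Meta-Essentials applies on top of it is not reproduced.

        import numpy as np

        y = np.array([0.35, 0.10, 0.48, 0.22, 0.30])     # study effect sizes (synthetic)
        v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])     # within-study variances

        w = 1.0 / v
        ybar = np.sum(w * y) / np.sum(w)                 # fixed-effect pooled estimate
        Q = np.sum(w * (y - ybar) ** 2)                  # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(y) - 1)) / c)          # DL between-study variance

        w_re = 1.0 / (v + tau2)
        mu = np.sum(w_re * y) / np.sum(w_re)             # random-effects pooled effect
        se = np.sqrt(1.0 / np.sum(w_re))
        print(f"tau^2 = {tau2:.4f}, pooled effect = {mu:.3f} +/- {1.96 * se:.3f}")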

  5. A New Simple Model for Underwater Wireless Optical Channels in the Presence of Air Bubbles

    KAUST Repository

    Zedini, Emna; Oubei, Hassan M.; Kammoun, Abla; Hamdi, Mounir; Ooi, Boon S.; Alouini, Mohamed-Slim

    2018-01-01

    A novel statistical model is proposed to characterize turbulence-induced fading in underwater wireless optical channels in the presence of air bubbles for fresh and salty waters, based on experimental data. In this model, the channel irradiance fluctuations are characterized by the mixture Exponential-Gamma distribution. We use the expectation maximization (EM) algorithm to obtain the maximum likelihood parameter estimation of the new model. Interestingly, the proposed model is shown to provide a perfect fit with the measured data under all the channel conditions for both types of water. The major advantage of the new model is that it has a simple mathematical form making it attractive from a performance analysis point of view. Indeed, the application of the Exponential-Gamma model leads to closed-form and analytically tractable expressions for key system performance metrics such as the outage probability and the average bit-error rate.

  6. Estimating Aquifer Properties Using Sinusoidal Pumping Tests

    Science.gov (United States)

    Rasmussen, T. C.; Haborak, K. G.; Young, M. H.

    2001-12-01

    We develop the theoretical and applied framework for using sinusoidal pumping tests to estimate aquifer properties for confined, leaky, and partially penetrating conditions. The framework 1) derives analytical solutions for three boundary conditions suitable for many practical applications, 2) validates the analytical solutions against a finite element model, 3) establishes a protocol for conducting sinusoidal pumping tests, and 4) estimates aquifer hydraulic parameters based on the analytical solutions. The analytical solutions to sinusoidal stimuli in radial coordinates are derived for boundary value problems that are analogous to the Theis (1935) confined aquifer solution, the Hantush and Jacob (1955) leaky aquifer solution, and the Hantush (1964) partially penetrated confined aquifer solution. The analytical solutions compare favorably to a finite-element solution of a simulated flow domain, except in the region immediately adjacent to the pumping well where the implicit assumption of zero borehole radius is violated. The procedure is demonstrated in one unconfined and two confined aquifer units near the General Separations Area at the Savannah River Site, a federal nuclear facility located in South Carolina. Aquifer hydraulic parameters estimated using this framework provide independent confirmation of parameters obtained from conventional aquifer tests. The sinusoidal approach also resulted in the elimination of investigation-derived wastes.
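
    A hedged sketch of the steady-periodic confined-aquifer response behind the Theis-analogue solution: for a discharge Q0·cos(ωt), a commonly used result gives the complex drawdown (Q0/2πT)·K0(r·sqrt(iωS/T)), whose modulus and argument yield the amplitude and phase lag at radius r. All parameter values are assumptions.

        import numpy as np
        from scipy.special import kv

        T_, S, Q0 = 1.0e-3, 1.0e-4, 5.0e-4      # transmissivity (m^2/s), storativity, discharge (m^3/s)
        omega = 2.0 * np.pi / 3600.0            # one-hour pumping period (rad/s)

        for r in (5.0, 10.0, 20.0):             # observation radii (m)
            s_c = Q0 / (2.0 * np.pi * T_) * kv(0, r * np.sqrt(1j * omega * S / T_))
            print(f"r = {r:4.1f} m: amplitude = {abs(s_c):.3f} m, "
                  f"phase lag = {np.degrees(-np.angle(s_c)):.1f} deg")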

  7. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, S.; Brincker, Rune

    An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside the elastic layer the deformations are modelled by the Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation, corresponding to a linear softening relation for the fictitious crack. For different beam sizes, results from the analytical model are compared with results from a more accurate model based on numerical methods. The analytical model is shown to be in good agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. Several general results are obtained. It is shown that the point on the load

  8. A simple system for in-droplet incubation and quantification of agglutination assays

    KAUST Repository

    Castro, David

    2013-10-28

    This work reports on a simple system for quantitative sensing of a target analyte based on agglutination in micro-channels. Functionalized microbeads and analyte with no prior incubation are flowed in droplets (~2μL) through a thin silicone tube filled with mineral oil at a flow rate of 150 μL/min. Hydrodynamic forces alone produce a highly efficient mixing of the beads within the droplet, without the need of complex mixing structures or magnetic actuation. The setup allows rapid observation of agglutination (<2 min), which is quantified using image analysis, and has potential application to high-throughput analysis.

  9. A simple system for in-droplet incubation and quantification of agglutination assays

    KAUST Repository

    Castro, David; Kodzius, Rimantas; Foulds, Ian G.

    2013-01-01

    This work reports on a simple system for quantitative sensing of a target analyte based on agglutination in micro-channels. Functionalized microbeads and analyte with no prior incubation are flowed in droplets (~2μL) through a thin silicone tube filled with mineral oil at a flow rate of 150 μL/min. Hydrodynamic forces alone produce a highly efficient mixing of the beads within the droplet, without the need of complex mixing structures or magnetic actuation. The setup allows rapid observation of agglutination (<2 min), which is quantified using image analysis, and has potential application to high-throughput analysis.

  10. Evaluation of analytical performance based on partial order methodology.

    Science.gov (United States)

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparisons of objects based on their data profile (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided, e.g., by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable to the comparison of analytical methods or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided, in order to evaluate the analytical performance taking into account all indicators simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently (2) a "distance" to the Reference laboratory, and (3) a classification due to the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
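
    The dominance test at the heart of a partial-order (Hasse-type) analysis is a one-liner; the sketch below compares synthetic laboratories on the indicator triple (|bias|, standard deviation, |skewness|), smaller being better.

        import numpy as np

        labs = {"Lab1": (0.02, 0.10, 0.3), "Lab2": (0.05, 0.08, 0.1),
                "Lab3": (0.06, 0.15, 0.4), "Ref":  (0.01, 0.09, 0.2)}

        def dominates(a, b):
            """True if a is at least as good on every indicator and better on one."""
            a, b = np.asarray(a), np.asarray(b)
            return bool(np.all(a <= b) and np.any(a < b))

        for x in labs:
            for y in labs:
                if x != y and dominates(labs[x], labs[y]):
                    print(f"{x} dominates {y}")
        # Pairs with no dominance either way (e.g. Ref vs Lab2) remain incomparable;
        # that retained incomparability is exactly what a single linear score hides.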

  11. Application of Depth-Averaged Velocity Profile for Estimation of Longitudinal Dispersion in Rivers

    Directory of Open Access Journals (Sweden)

    Mohammad Givehchi

    2010-01-01

    Full Text Available River bed profiles and depth-averaged velocities are used as basic data in empirical and analytical equations for estimating the longitudinal dispersion coefficient, which has always been a topic of great interest for researchers. The simple model proposed by Maghrebi is capable of predicting the normalized isovel contours in the cross section of rivers and channels as well as the depth-averaged velocity profiles. The required data in Maghrebi's model are the bed profile, shear stress, and roughness distributions. Comparison of the depth-averaged velocities and longitudinal dispersion coefficients observed in field data with those predicted by Maghrebi's model revealed that the model has acceptable accuracy in predicting depth-averaged velocity.

  12. A simple oblique dip model for geomagnetic micropulsations

    Directory of Open Access Journals (Sweden)

    J. A. Lawrie

    Full Text Available It is pointed out that simple models adopted so far have tended to neglect the obliquity of the magnetic field lines entering the Earth's surface. A simple alternative model is presented, in which the ambient field lines are straight, but enter wedge-shaped boundaries at half a right angle. The model is illustrated by assuming an axially symmetric, compressional, impulse-type disturbance at the outer boundary, all other boundaries being assumed to be perfectly conducting. The numerical method used is checked, from the instant the excitation ceases, by an analytical method. The first harmonic along field lines is found to be of noticeable size, but appears to be mainly due to coupling with the fundamental, and with the first harmonic across field lines.

    Key words. Magnetospheric physics (MHD waves and instabilities).

  13. Comparison of Analytical and Measured Performance Results on Network Coding in IEEE 802.11 Ad-Hoc Networks

    DEFF Research Database (Denmark)

    Zhao, Fang; Médard, Muriel; Hundebøll, Martin

    2012-01-01

    CATWOMAN that can run on standard WiFi hardware. We present an analytical model to evaluate the performance of COPE in simple networks, and our results show the excellent predictive quality of this model. By closely examining the performance in two simple topologies, we observe that the coding gain results...

  14. A simple geometrical model describing shapes of soap films suspended on two rings

    Science.gov (United States)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

    We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical-frusta-based model, adopting the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced the well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and the known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
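
    The frusta idea translates directly from a spreadsheet into code: approximate the film between two coaxial rings by N stacked frusta and minimize the total lateral area over the intermediate radii, which reproduces the catenoid neck. Ring geometry below is illustrative.

        import numpy as np
        from scipy.optimize import brentq, minimize

        R, L, N = 1.0, 0.8, 40                    # ring radius, separation, frusta count
        dz = L / N

        def area(r_inner):
            r = np.concatenate(([R], r_inner, [R]))          # radii clamped at the rings
            slant = np.sqrt(np.diff(r) ** 2 + dz ** 2)
            return np.sum(np.pi * (r[:-1] + r[1:]) * slant)  # sum of frustum side areas

        res = minimize(area, x0=np.full(N - 1, R), method="L-BFGS-B")

        # catenoid neck radius a solves a*cosh(L/(2a)) = R (stable branch)
        a = brentq(lambda a: a * np.cosh(L / (2.0 * a)) - R, 0.5, 1.0)
        print(f"frusta neck: {res.x[(N - 1) // 2]:.4f}, catenoid neck: {a:.4f}")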

  15. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions.

  16. Parameter estimation in a simple stochastic differential equation for phytoplankton modelling

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Madsen, Henrik; Carstensen, Jacob

    2011-01-01

    The use of stochastic differential equations (SDEs) for simulation of aquatic ecosystems has attracted increasing attention in recent years. The SDE setting also provides the opportunity for statistical estimation of ecosystem parameters. We present an estimation procedure, based on Kalman filtering.
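
    As a minimal sketch of the modelling setting (not the paper's model or estimator), a logistic-growth SDE for biomass can be simulated with the Euler-Maruyama scheme:

        import numpy as np

        r, K, sigma = 0.5, 10.0, 0.15           # growth rate, carrying capacity, noise (assumed)
        dt, n = 0.01, 5_000
        rng = np.random.default_rng(42)

        x = np.empty(n)
        x[0] = 0.5
        for i in range(n - 1):                  # Euler-Maruyama step for dX = rX(1-X/K)dt + sigma X dW
            dw = rng.normal(0.0, np.sqrt(dt))
            x[i + 1] = x[i] + r * x[i] * (1.0 - x[i] / K) * dt + sigma * x[i] * dw

        print(f"mean over second half: {x[n // 2:].mean():.2f} (deterministic K = {K})")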

  17. A simple visual estimation of food consumption in carnivores.

    Directory of Open Access Journals (Sweden)

    Katherine R Potgieter

    Full Text Available Belly-size ratings or belly scores are frequently used in carnivore research as a method of rating whether and how much an animal has eaten. This method provides only a rough ordinal measure of fullness and does not quantify the amount of food an animal has consumed. Here we present a method for estimating the amount of meat consumed by individual African wild dogs Lycaon pictus. We fed 0.5 kg pieces of meat to wild dogs being temporarily held in enclosures and measured the corresponding change in belly size using lateral side photographs taken perpendicular to the animal. The ratio of belly depth to body length was positively related to the mass of meat consumed and provided a useful estimate of the consumption. Similar relationships could be calculated to determine amounts consumed by other carnivores, thus providing a useful tool in the study of feeding behaviour.

  18. Rapid, Simple, and Sensitive Spectrofluorimetric Method for the Estimation of Ganciclovir in Bulk and Pharmaceutical Formulations

    Directory of Open Access Journals (Sweden)

    Garima Balwani

    2013-01-01

    Full Text Available A new, simple, rapid, sensitive, accurate, and affordable spectrofluorimetric method was developed and validated for the estimation of ganciclovir in bulk as well as in marketed formulations. The method was based on measuring the native fluorescence of ganciclovir in 0.2 M hydrochloric acid buffer of pH 1.2 at 374 nm after excitation at 257 nm. The calibration graph was found to be rectilinear in the concentration range of 0.25–2.00 μg mL⁻¹. The limit of quantification and limit of detection were found to be 0.029 μg mL⁻¹ and 0.010 μg mL⁻¹, respectively. The method was fully validated for various parameters according to ICH guidelines. The results demonstrated that the procedure is accurate, precise, and reproducible (relative standard deviation <2%) and can be successfully applied for the determination of ganciclovir in its commercial capsules with average percentage recovery of 101.31 ± 0.90.

  19. Constant pressure mode extended simple gradient liquid chromatography system for micro and nanocolumns

    Czech Academy of Sciences Publication Activity Database

    Šesták, Jozef; Kahle, Vladislav

    2014-01-01

    Roč. 1350, Jul (2014), s. 68-71 ISSN 0021-9673 R&D Projects: GA MV VG20102015023 Institutional support: RVO:68081715 Keywords : constant pressure HPLC * gradient elution * simple liquid chromatograph Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 4.169, year: 2014 http://hdl.handle.net/11104/0233990

  20. Study on analytical modelling approaches to the performance of thin film PV modules in sunny inland climates

    International Nuclear Information System (INIS)

    Torres-Ramírez, M.; Nofuentes, G.; Silva, J.P.; Silvestre, S.; Muñoz, J.V.

    2014-01-01

    This work is aimed at verifying that analytical modelling approaches may provide an estimation of the outdoor performance of TF (thin film) PV (photovoltaic) technologies in inland sites with sunny climates with adequate accuracy for engineering purposes. Osterwald's and constant fill factor methods were tried to model the maximum power delivered and the annual energy produced by PV modules corresponding to four TF PV technologies. Only calibrated electrical parameters at STC (standard test conditions), on-plane global irradiance and module temperature are required as inputs. A 12-month experimental campaign carried out in Madrid and Jaén (Spain) provided the necessary data. Modelled maximum power and annual energy values obtained through both methods were statistically compared to the experimental ones. In power terms, the RMSE (root mean square error) stays below 3.8% and 4.5% for CdTe (cadmium telluride) and CIGS (copper indium gallium selenide sulfide) PV modules, respectively, while RMSE exceeds 5.4% for a-Si (amorphous silicon) or a-Si:H/μc-Si PV modules. Regarding energy terms, errors lie below 4.0% in all cases. Thus, the methods tried may be used to model the outdoor behaviour of the a-Si, a-Si:H/μc-Si, CIGS and CdTe PV modules tested (ordered from the lowest to the highest accuracy obtained) in sites with similar spectral characteristics to those of the two sites considered. - Highlights: • Simple analytical methods to model the outdoor behaviour of thin film PV (photovoltaic) technologies. • 8 PV modules were deployed outdoors over a 12-month period in two sunny inland sites. • RMSE (root mean square error) values stay below 3.8% and 4.5% in CdTe (cadmium telluride) and CIGS (copper indium gallium selenide sulfide) PV modules. • Errors remain below 4.0% for all the PV modules and sites in energy terms. • Simple methods: suitable estimation of PV outdoor behaviour for engineering purposes
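
    Osterwald's translation formula, as commonly stated, is a one-line model: maximum power scales linearly with irradiance and is corrected linearly in cell temperature. The STC rating and power temperature coefficient below are assumed values, not those of the tested modules.

        def p_osterwald(g, t_cell, p_stc=100.0, gamma=-0.0035, g_stc=1000.0, t_stc=25.0):
            """Module maximum power (W) at irradiance g (W/m^2) and cell temperature (C)."""
            return p_stc * (g / g_stc) * (1.0 + gamma * (t_cell - t_stc))

        # e.g. a sunny-inland operating point: 850 W/m^2 at 48 C cell temperature
        print(f"{p_osterwald(850.0, 48.0):.1f} W")   # ~78 W for this assumed module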

  1. Approaching near real-time biosensing: microfluidic microsphere based biosensor for real-time analyte detection.

    Science.gov (United States)

    Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania

    2015-04-15

    In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the adsorption kinetics reaction rate-limiting mechanism, which is diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling micro-liter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model of the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: the Tumor Necrosis Factor (TNF)-α cytokine and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides a continuous, sensitive and specific near real-time monitoring method for analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude, and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Rigid inclusions-Comparison between analytical and numerical methods

    International Nuclear Information System (INIS)

    Gomez Perez, R.; Melentijevic, S.

    2014-01-01

    This paper compares different analytical methods for the analysis of rigid inclusions with finite element modelling. First, the load transfer in the distribution layer is analyzed for different layer thicknesses and different inclusion grids to define the range between results obtained by analytical and numerical methods. The interaction between the soft soil and the inclusion in the estimation of settlements is studied as well. Considering different stiffnesses of the soft soil, settlements obtained analytically and numerically are compared. The influence of the soft soil modulus of elasticity on the neutral point depth was also investigated by finite elements. This depth is of great importance for the definition of the total length of the rigid inclusion. (Author)

  3. A two-dimensional analytical well model with applications to groundwater flow and convective transport modelling in the geosphere

    International Nuclear Information System (INIS)

    Chan, T.; Nakka, B.W.

    1994-12-01

    A two-dimensional analytical well model has been developed to describe steady groundwater flow in an idealized, confined aquifer intersected by a withdrawal well. The aquifer comprises a low-dipping fracture zone. The model is useful for making simple quantitative estimates of the transport of contaminants along groundwater pathways in the fracture zone to the well from an underground source that intercepts the fracture zone. This report documents the mathematical development of the analytical well model. It outlines the assumptions and method used to derive an exact analytical solution, which is verified by two other methods. It presents expressions for calculating quantities such as streamlines (groundwater flow paths), fractional volumetric flow rates, contaminant concentration in well water and minimum convective travel time to the well. In addition, this report presents the results of applying the analytical model to a site-specific conceptual model of the Whiteshell Research Area in southeastern Manitoba, Canada. This hydrogeological model includes the presence of a 20-m-thick, low-dipping (18 deg) fracture zone (LD1) that intercepts the horizon of a hypothetical disposal vault located at a depth of 500 m. A withdrawal well intercepts LD1 between the vault level and the ground surface. Predictions based on parameters and boundary conditions specific to LD1 are presented graphically. The analytical model has specific applications in the SYVAC geosphere model (GEONET) to calculate the fraction of a plume of contaminants moving up the fracture zone that is captured by the well, and to describe the drawdown in the hydraulic head in the fracture zone caused by the withdrawal well. (author). 16 refs., 6 tabs., 35 figs

  4. Analytical approach to the evaluation of nuclide transmutations

    International Nuclear Information System (INIS)

    Vukadin, Z.; Osmokrovic, P.

    1995-01-01

    An analytical approach to the evaluation of nuclide concentrations in a transmutation chain is presented. Non-singular Bateman coefficients and depletion functions are used to overcome numerical difficulties when applying the well-known Bateman solution of simple radioactive decay. The method enables evaluation of complete decay chains without elimination of short-lived radionuclides. It is efficient and accurate. Practical application of the method is demonstrated by computing the neptunium series inventory in used CANDU fuel. (author)
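
    For contrast with the non-singular formulation, the classical Bateman solution that it generalizes can be sketched directly; it is valid only when all decay constants are distinct, which is exactly the singularity the paper's coefficients avoid. The decay constants below are illustrative.

        import numpy as np

        def bateman(lams, t, n1_0=1.0):
            """Concentrations N_n(t) for a linear decay chain N1 -> N2 -> ...,
            with the initial inventory entirely in the first nuclide."""
            lams = np.asarray(lams, dtype=float)
            out = []
            for n in range(1, len(lams) + 1):
                prefac = n1_0 * np.prod(lams[: n - 1])
                total = 0.0
                for i in range(n):
                    denom = np.prod([lams[j] - lams[i] for j in range(n) if j != i])
                    total += np.exp(-lams[i] * t) / denom
                out.append(prefac * total)
            return np.array(out)

        print(bateman([0.5, 0.1, 0.02], t=10.0))   # illustrative decay constants (1/yr)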

  5. Analytical work on local faults in LMFBR subassembly

    International Nuclear Information System (INIS)

    Yoshikawa, H.; Miyaguchi, K.; Hirata, N.; Kasahara, F.

    1979-01-01

    Analytical codes have been developed for evaluating various severe but highly unlikely events of local faults in the LMFBR subassembly (S/A). These include: (1) local flow blockage, (2) two-phase thermohydraulics under fission gas release, and (3) inter-S/A failure propagation. A simple inter-S/A thermal failure propagation analysis code, FUMES, is described that allows an easy parametric study of propagation potential of fuel fog in a S/A. 7 refs

  6. Investigation of clustering in sets of analytical data

    Energy Technology Data Exchange (ETDEWEB)

    Kajfosz, J [Institute of Nuclear Physics, Cracow (Poland)

    1993-04-01

    Foundations of the statistical method of cluster analysis are briefly presented and its usefulness for the examination and evaluation of analytical data obtained from series of samples investigated by PIXE, PIGE or other methods is discussed. A simple program for fast examination of dissimilarities between samples within an investigated series is described. Useful information on clustering for several hundreds of samples can be obtained with minimal time and storage requirements. (author). 5 refs, 10 figs.
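
    The kind of fast dissimilarity screening described can be sketched with a pairwise distance matrix and hierarchical clustering; the element-concentration profiles below are synthetic stand-ins for PIXE results.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(3)
        group_a = rng.normal([10.0, 2.0, 0.5], 0.2, size=(5, 3))   # two synthetic groups of
        group_b = rng.normal([6.0, 4.0, 1.5], 0.2, size=(5, 3))    # 3-element concentration profiles
        X = np.vstack([group_a, group_b])
        X = (X - X.mean(axis=0)) / X.std(axis=0)                   # normalize per element

        Z = linkage(pdist(X), method="average")                    # dissimilarity screening
        print(fcluster(Z, t=2, criterion="maxclust"))              # recovers the two groups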

  7. Investigation of clustering in sets of analytical data

    International Nuclear Information System (INIS)

    Kajfosz, J.

    1993-04-01

    Foundations of the statistical method of cluster analysis are briefly presented and its usefulness for the examination and evaluation of analytical data obtained from series of samples investigated by PIXE, PIGE or other methods is discussed. A simple program for fast examination of dissimilarities between samples within an investigated series is described. Useful information on clustering for several hundreds of samples can be obtained with minimal time and storage requirements. (author). 5 refs, 10 figs.

  8. Analytic analysis on asymmetrical micro arcing in high plasma potential RF plasma systems

    International Nuclear Information System (INIS)

    Yin, Y; McKenzie, D R; Bilek, M M M

    2006-01-01

    We report experimental and analytical results on asymmetrical micro arcing in an RF (radio frequency) plasma. Micro arcing, resulting from high plasma potential, in RF plasma was found to occur only on the grounded electrode for a variety of electrode and surface configurations. The analytic derivation was based on a simple RF time-dependent Child-Langmuir sheath model and electric current continuity. We found that the minimum potential difference in one RF period across the grounded electrode sheath depends on the area ratio of the grounded electrode to the powered electrode. As the area ratio increases, the minimum potential difference across a sheath increases for the grounded electrode but not for the RF powered electrode. We showed that the discharge time in micro arcing is more than 100 RF periods; thus the presence of a continuous high electric field in one RF cycle results in micro arcing on the grounded electrode. However, the minimum potential difference in one RF period across the powered electrode sheath is always small, which prevents micro arcing from occurring even though the average sheath voltage can be large. This simple analytic model is consistent with particle-in-cell simulation results.

  9. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune

    1995-01-01

    An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations...... are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods...... for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack...

  10. New methodology for analytical calculation of resonance integrals in a heterogeneous medium

    International Nuclear Information System (INIS)

    Campos, T.P.R. de; Martinez, A.S.

    1986-01-01

    A new methodology for the analytical calculation of the resonance integral in a typical fuel cell is presented. The expression obtained for the resonance integral has the advantage of being analytical. Its constituent terms are combinations of the well-known function J(xi,β) and its partial derivatives with respect to β. This is a general expression for all types of resonance. The parameters used in this method depend on the resonance type and are obtained as a function of the parameter lambda. A simple expression, depending on the resonance parameters, is proposed for this variable. (Author) [pt

  11. Pre-concentration technique for reduction in "Analytical instrument requirement and analysis"

    Science.gov (United States)

    Pal, Sangita; Singha, Mousumi; Meena, Sher Singh

    2018-04-01

    The limited availability of analytical instruments for the methodical detection of known and unknown effluents imposes a serious hindrance to qualification and quantification. Analytical instruments such as elemental analyzers, ICP-MS, ICP-AES, EDXRF, ion chromatography and electro-analytical instruments are not only expensive but also time consuming and maintenance-intensive, and the replacement of damaged essential parts is a serious concern. Moreover, for field studies and instant detection, installation of these instruments is not practical at every site. A pre-concentration technique for metal ions, especially for lean streams, is therefore elaborated and justified. Chelation/sequestration is the key immobilization technique: it is simple, user friendly, highly effective, inexpensive, time efficient, and easy to carry (10 g - 20 g vial) to the experimental field/site, as has been demonstrated.

  12. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

    Supplier selection is a decision with many criteria. Supplier selection models usually involve more than five main criteria and more than ten sub-criteria; in fact, many models include more than 20 criteria. Having too many criteria in a supplier selection model can make it difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria suffice for an easy and simple supplier selection: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real-case simulation shows that the simple model yields the same decision as a more complex model.
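
    To make the simple model concrete, a minimal weighted-sum scoring sketch follows; only the four weights come from the record, while the supplier names and 0-10 criterion scores are invented for illustration (a full AHP run would derive weights from pairwise comparison matrices).

      # Minimal sketch of a weighted-sum supplier ranking using the four
      # criteria and weights reported above. Supplier names and their 0-10
      # criterion scores are hypothetical.

      WEIGHTS = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

      suppliers = {
          "Supplier A": {"price": 7, "shipment": 9, "quality": 6, "service": 8},
          "Supplier B": {"price": 9, "shipment": 6, "quality": 7, "service": 5},
          "Supplier C": {"price": 6, "shipment": 8, "quality": 9, "service": 7},
      }

      def weighted_score(scores):
          return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

      ranking = sorted(suppliers, key=lambda s: -weighted_score(suppliers[s]))
      for name in ranking:
          print(f"{name}: {weighted_score(suppliers[name]):.2f}")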

  13. A closed analytic form for p-d elastic scattering at high energy

    International Nuclear Information System (INIS)

    Li, Y.; Lo, S.

    1983-01-01

    Using a simple harmonic-oscillator wave function for the deuteron, it is possible to give an analytic solution in closed form for p-d elastic scattering. It has the advantage of displaying all the contributions separately (D-wave, spin flip, etc.). It can also fit experimental data.

  14. Eigenvalue estimates of positive integral operators with analytic ...

    Indian Academy of Sciences (India)

    …will be used to denote, respectively, the complex line integral of f along γ and the integral of f with respect to arc-length measure. In the first case we assume γ has an orientation. The notation L^p(γ) will denote the L^p space of normalized arc-length measure on γ with…

  15. Developing automated analytical methods for scientific environments using LabVIEW.

    Science.gov (United States)

    Wagner, Christoph; Armenta, Sergio; Lendl, Bernhard

    2010-01-15

    The development of new analytical techniques often requires the building of specially designed devices, each requiring its own dedicated control software. Especially in the research and development phase, LabVIEW has proven to be a highly useful tool for developing this software. Yet it is still common practice to develop individual solutions for different instruments. In contrast, we present here a single LabVIEW-based program that can be directly applied to various analytical tasks without having to change the program code. Driven by a set of simple script commands, it can control a whole range of instruments, from valves and pumps to full-scale spectrometers. Fluid sample (pre-)treatment and separation procedures can thus be flexibly coupled to a wide range of analytical detection methods. Here, the capabilities of the program are demonstrated by using it to control both a sequential injection analysis - capillary electrophoresis (SIA-CE) system with UV detection, and an analytical setup for studying the inhibition of enzymatic reactions using a SIA system with FTIR detection.
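
    The script-driven architecture described above can be illustrated independently of LabVIEW; the following Python toy sketches the same pattern of one generic control loop dispatching simple script commands, with all command names and device handlers invented for the example (the actual tool is graphical LabVIEW code).

      # Toy sketch of the script-command pattern described above; commands
      # and handlers are hypothetical stand-ins for real instrument drivers.

      def set_valve(position):
          print(f"valve -> {position}")

      def run_pump(volume_ul, rate_ul_s):
          print(f"pump {volume_ul} uL at {rate_ul_s} uL/s")

      def acquire(detector, seconds):
          print(f"acquire {seconds} s on {detector}")

      HANDLERS = {"VALVE": set_valve, "PUMP": run_pump, "ACQUIRE": acquire}

      SCRIPT = """
      VALVE sample
      PUMP 50 5
      VALVE detector
      ACQUIRE uv 30
      """

      for line in SCRIPT.split():
          pass  # (tokenized below line by line instead)

      for line in [l.strip() for l in SCRIPT.strip().splitlines()]:
          cmd, *args = line.split()
          # convert numeric arguments, leave identifiers as strings
          HANDLERS[cmd](*[float(a) if a.replace(".", "", 1).isdigit() else a
                          for a in args])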

  16. Noise Induces Biased Estimation of the Correction Gain.

    Directory of Open Access Journals (Sweden)

    Jooeun Ahn

    Full Text Available The detection of an error in the motor output and its correction in the next movement are critical components of any form of motor learning. Accordingly, a variety of iterative learning models have assumed that a fraction of the error is adjusted in the next trial. This critical fraction (the correction gain, learning rate, or feedback gain) has frequently been estimated via least-squares regression of the obtained data set. Such data contain not only the inevitable noise from motor execution, but also noise from measurement. It is generally assumed that this noise averages out with large data sets and does not affect the parameter estimation. This study demonstrates that this is not the case and that in the presence of noise the conventional estimate of the correction gain has a significant bias, even with the simplest model. Furthermore, this bias does not decrease with increasing length of the data set. This study reveals this limitation of current system identification methods and proposes a new method that overcomes it. We derive an analytical form of the bias from a simple regression method (Yule-Walker) and develop an improved identification method. This bias is discussed as one of several examples of how the dynamics of noise can introduce significant distortions in data analysis.
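
    A minimal simulation makes the reported bias easy to reproduce; the model below is the generic trial-by-trial learning rule with process and measurement noise, and the parameter values are assumptions for illustration, not taken from the paper.

      import numpy as np

      # Simplest learning model:  e[t+1] = (1 - b) * e[t] + process_noise
      # Recorded data:            y[t]   = e[t] + measurement_noise
      # Regressing y[t+1] on y[t] (a Yule-Walker-style lag-1 estimate)
      # attenuates the slope (1 - b), so the gain b is overestimated no
      # matter how long the series is.

      rng = np.random.default_rng(0)
      b_true, n_trials = 0.3, 100_000
      sig_process, sig_measure = 1.0, 1.0   # illustrative noise levels

      e = np.zeros(n_trials)
      for t in range(n_trials - 1):
          e[t + 1] = (1.0 - b_true) * e[t] + sig_process * rng.standard_normal()
      y = e + sig_measure * rng.standard_normal(n_trials)

      def gain_estimate(x):
          # slope of x[t+1] on x[t] estimates (1 - b)
          slope = np.cov(x[1:], x[:-1])[0, 1] / np.var(x[:-1])
          return 1.0 - slope

      print(f"true gain:                 {b_true:.3f}")
      print(f"estimate from noiseless e: {gain_estimate(e):.3f}")  # ~0.300
      print(f"estimate from measured y:  {gain_estimate(y):.3f}")  # biased up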

  17. A Simple Measure of Price Adjustment Coefficients.

    OpenAIRE

    Damodaran, Aswath

    1993-01-01

    One measure of market efficiency is the speed with which prices adjust to new information. The author develops a simple approach to estimating these price adjustment coefficients by using the information in return processes. This approach is used to estimate the price adjustment coefficients for firms listed on the NYSE and the AMEX as well as for over-the-counter stocks. The author finds evidence of a lagged adjustment to new information in shorter return intervals for firms in all market ...

  18. PCCE-A Predictive Code for Calorimetric Estimates in actively cooled components affected by pulsed power loads

    International Nuclear Information System (INIS)

    Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.

    2011-01-01

    The analytical interpretative models for calorimetric measurements currently available in the literature can consider closed systems in steady-state and transient conditions, or open systems in steady-state conditions only. The PCCE code (Predictive Code for Calorimetric Estimates), presented here, introduces some novelties. It can simulate, with an analytical approach, both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes in both steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, which are not easily analyzed with purely numerical approaches (like the Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on lumped parameters, has been developed and is described here in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.

  19. A Modified Gash Model for Estimating Rainfall Interception Loss of Forest Using Remote Sensing Observations at Regional Scale

    Directory of Open Access Journals (Sweden)

    Yaokui Cui

    2014-04-01

    Full Text Available Rainfall interception loss of forest is an important component of the water balance in a forested ecosystem. The Gash analytical model has been widely used to estimate forest interception loss at the field scale. In this study, we proposed a simple model to estimate rainfall interception loss of heterogeneous forest at the regional scale, under several reasonable assumptions, using remote sensing observations. The model is a modified Gash analytical model that uses easily measured parameters of forest structure from satellite data and extends the original Gash model from the point scale to the regional scale. Preliminary results, using remote sensing data from Moderate Resolution Imaging Spectroradiometer (MODIS) products, field-measured rainfall data, and meteorological data of the Automatic Weather Station (AWS) over a Picea crassifolia forest in the upper reaches of the Heihe River Basin in northwestern China, showed reasonable accuracy in estimating rainfall interception loss at both the Dayekou experimental site (R2 = 0.91, RMSE = 0.34 mm·d⁻¹) and the Pailugou experimental site (R2 = 0.82, RMSE = 0.6 mm·d⁻¹), compared with ground measurements based on per unit area of forest. The interception loss map of the study area was shown to be strongly heterogeneous. The modified model has robust physics and is insensitive to the input parameters, according to a sensitivity analysis using numerical simulations. The modified model appears to be stable and easy to apply for operational estimation of interception loss over large areas.

  1. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Science.gov (United States)

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging-station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially in areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric…
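
    The record does not spell out the ERDEC cost function, but the flavor of an analytical optimal-density result can be sketched with a generic trade-off, assumed purely for illustration: an installation cost proportional to station density plus a user access cost inversely proportional to it.

      import numpy as np

      # Illustrative sketch only; the ERDEC model's actual cost function is
      # not given in this record. Assume a generic per-area total cost
      #     C(d) = a * d + k / d
      # where d is station density, a bundles installation/operation cost
      # per station, and k bundles drivers' access cost. Minimizing C gives
      #     d* = sqrt(k / a),
      # the usual square-root form of such density trade-offs.

      a = 120_000.0   # assumed annualized cost per station (currency units)
      k = 3.0e9       # assumed aggregate access-cost coefficient

      d_star = np.sqrt(k / a)
      print(f"optimal density d* = {d_star:.1f} stations per unit area")
      print(f"minimal total cost = {a * d_star + k / d_star:,.0f}")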

  2. Analytical study on the criticality of the stochastic optimal velocity model

    International Nuclear Information System (INIS)

    Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2006-01-01

    In recent works, we have proposed a stochastic cellular automaton model of traffic flow connecting two exactly solvable stochastic processes, i.e., the asymmetric simple exclusion process and the zero-range process, with an additional parameter. It can also be regarded as an extended version of the optimal velocity model, and it shows particularly notable properties. In this paper, we report that when the optimal velocity function is taken to be a step function, the entire flux-density graph (i.e., the fundamental diagram) can be estimated. We first find that the fundamental diagram consists of two line segments resembling an inverse-λ form, and then identify their end points from the microscopic behaviour of vehicles. Notably, by using a microscopic parameter that indicates a driver's sensitivity to the traffic situation, we give an explicit formula for the critical point at which a traffic jam phase arises. We also compare these analytical results with those of the optimal velocity model and point out the crucial differences between them.

  3. The challenge of simple graphics for multimodal studies

    DEFF Research Database (Denmark)

    Johannessen, Christian Mosbæk

    2018-01-01

    This article suggests that a Multimodal Social Semiotics (MSS) approach to graphics is severely challenged by structurally very simple texts. Methodologically, MSS favours the level at which elements from discrete modes are integrated grammatically into texts. Because the tradition has this focus, the analytical description of the expression plane of many modes is underdeveloped. In the case of graphics, we have no descriptive or explanatory readiness for graphic form. The article aims to remedy this problem by combining (i) a small inventory of formal dichotomies for graphic shape features at a general…

  4. Quantifying the measurement uncertainty of results from environmental analytical methods.

    Science.gov (United States)

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
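
    As a reminder of the arithmetic such uncertainty budgets rest on, the sketch below combines independent relative standard uncertainties by root-sum-of-squares and applies a coverage factor; the component values are invented for illustration.

      import math

      # Minimal sketch of the uncertainty-budget arithmetic underlying the
      # Eurachem/CITAC approach: independent relative standard uncertainties
      # are combined in quadrature and expanded with a coverage factor k.
      # The component values below are invented for illustration only.

      components = {
          "method reproducibility": 0.060,  # relative standard uncertainty
          "calibration standard":   0.015,
          "sample inhomogeneity":   0.025,
      }

      u_combined = math.sqrt(sum(u**2 for u in components.values()))
      k = 2.0   # ~95 % coverage under a normal model
      print(f"combined relative uncertainty: {u_combined:.3f}")
      print(f"expanded (k={k:.0f}):          {k * u_combined:.3f}")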

  5. Rethinking Visual Analytics for Streaming Data Applications

    Energy Technology Data Exchange (ETDEWEB)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    2017-01-01

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are necessary components of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis: in each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead of dividing the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources and to streamline the sensemaking process on massive data.

  6. Risk analysis of analytical validations by probabilistic modification of FMEA.

    Science.gov (United States)

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling the detection not only of technical risks but also of risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies and maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are reinterpreted by this probabilistic modification of FMEA. Using this probabilistic modification of FMEA, the frequency of occurrence of undetected failure mode(s) can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
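
    The proposed modification can be sketched in a few lines: categorical occurrence/detection scores are replaced by estimated relative frequencies whose product gives the frequency of an undetected failure, while severity stays categorical. The failure modes and numbers below are invented for illustration.

      # Minimal sketch of the probabilistic FMEA idea described above.
      # All failure modes and probabilities are hypothetical.

      failure_modes = [
          # (name, P(occurrence), P(not detected), severity class 1-5)
          ("wrong sample placement",        0.020, 0.10, 4),
          ("spectral library outdated",     0.005, 0.50, 5),
          ("operator transcription error",  0.010, 0.05, 3),
      ]

      total_undetected = 0.0
      for name, p_occ, p_miss, severity in failure_modes:
          p_undetected = p_occ * p_miss   # frequency of an undetected failure
          total_undetected += p_undetected
          print(f"{name:32s} P(undetected) = {p_undetected:.4f}  "
                f"severity = {severity}")

      # Assuming approximate independence, a simple aggregate for the
      # full procedure:
      print(f"expected undetected failures per analysis ~ {total_undetected:.4f}")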

  7. Financial bubbles analysis with a cross-sectional estimator

    OpenAIRE

    Abergel, Frederic; Huth, Nicolas; Toke, Ioane Muni

    2009-01-01

    We highlight a very simple statistical tool for the analysis of financial bubbles, which has already been studied in [1]. We provide extensive empirical tests of this statistical tool and analytically investigate its link with the correlation structure of stocks.

  8. Introduction, comparison, and validation of Meta-Essentials: A free and simple tool for meta-analysis.

    Science.gov (United States)

    Suurmond, Robert; van Rhee, Henk; Hak, Tony

    2017-12-01

    We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students. © 2017 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
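
    For readers who want the mechanics, a minimal sketch of the DerSimonian-Laird random-effects pooling underlying the tool is given below; the Knapp-Hartung interval adjustment is omitted for brevity, and the effect sizes are invented.

      import numpy as np

      # DerSimonian-Laird random-effects meta-analysis in a few lines.
      # Study effect sizes y and within-study variances v are illustrative.

      y = np.array([0.30, 0.10, 0.45, 0.22, 0.05])
      v = np.array([0.04, 0.02, 0.05, 0.03, 0.01])

      w = 1.0 / v                                  # fixed-effect weights
      y_fixed = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
      k = len(y)
      tau2 = max(0.0, (Q - (k - 1)) /
                 (np.sum(w) - np.sum(w**2) / np.sum(w)))

      w_star = 1.0 / (v + tau2)                    # random-effects weights
      y_random = np.sum(w_star * y) / np.sum(w_star)
      se = np.sqrt(1.0 / np.sum(w_star))

      print(f"tau^2 = {tau2:.4f}")
      print(f"pooled effect = {y_random:.3f} +/- {1.96 * se:.3f} "
            f"(normal-theory CI)")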

  9. A Simple Demonstration for Estimating the Persistence of Vision

    Science.gov (United States)

    MacInnes, Iain; Smith, Stuart

    2010-01-01

    In the "The Science Study Series" book "The Physics of Television", it is stated that persistence of vision lasts for about a tenth of a second. This will be a notional figure just as 25 cm is taken to be the least distance of distinct vision. Estimates range from 1/8 to 1/16 s.

  10. Forecasting Hotspots-A Predictive Analytics Approach.

    Science.gov (United States)

    Maciejewski, R; Hafen, R; Rudolph, S; Larew, S G; Mitchell, M A; Cleveland, W S; Ebert, D S

    2011-04-01

    Current visual analytics systems provide users with the means to explore trends in their data. Linked views and interactive displays provide insight into correlations among people, events, and places in space and time. Analysts search for events of interest through statistical tools linked to visual displays, drill down into the data, and form hypotheses based upon the available information. However, current systems stop short of predicting events. In spatiotemporal data, analysts are searching for regions of space and time with unusually high incidences of events (hotspots). In the cases where hotspots are found, analysts would like to predict how these regions may grow in order to plan resource allocation and preventative measures. Furthermore, analysts would also like to predict where future hotspots may occur. To facilitate such forecasting, we have created a predictive visual analytics toolkit that provides analysts with linked spatiotemporal and statistical analytic views. Our system models spatiotemporal events through the combination of kernel density estimation for event distribution and seasonal trend decomposition by loess smoothing for temporal predictions. We provide analysts with estimates of error in our modeling, along with spatial and temporal alerts to indicate the occurrence of statistically significant hotspots. Spatial data are distributed based on a modeling of previous event locations, thereby maintaining a temporal coherence with past events. Such tools allow analysts to perform real-time hypothesis testing, plan intervention strategies, and allocate resources to correspond to perceived threats.
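
    The modeling combination named above (kernel density estimation for where events fall, seasonal-trend decomposition for when) can be sketched with synthetic data; statsmodels' STL is used here as a stand-in for the paper's loess-based decomposition.

      import numpy as np
      import pandas as pd
      from scipy.stats import gaussian_kde
      from statsmodels.tsa.seasonal import STL

      rng = np.random.default_rng(1)

      # Spatial part: KDE over past event locations (one hotspot assumed).
      events_xy = rng.normal(loc=[2.0, 3.0], scale=0.5, size=(400, 2))
      kde = gaussian_kde(events_xy.T)
      print("relative intensity at hotspot center:", kde([[2.0], [3.0]])[0])
      print("relative intensity far away:         ", kde([[6.0], [6.0]])[0])

      # Temporal part: STL on weekly event counts with yearly seasonality.
      weeks = 208
      t = np.arange(weeks)
      counts = 50 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 3, weeks)
      series = pd.Series(
          counts, index=pd.date_range("2020-01-05", periods=weeks, freq="W"))
      res = STL(series, period=52).fit()
      print("last trend value:", round(float(res.trend.iloc[-1]), 1))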

  11. Analytic approaches to atomic response properties

    International Nuclear Information System (INIS)

    Lamm, E.E.

    1980-01-01

    Many important response properties, e.g., multipole polarizabilities and sum rules, photodetachment cross sections, and closely related long-range dispersion force coefficients, are insensitive to details of electronic structure. In this investigation, analytic asymptotic theories of atomic response properties are constructed that yield results as accurate as those obtained by more elaborate numerical methods. In the first chapter, a novel and simple method is used to determine the multipole sum rules S_l(-k), for positive and negative values of k, of the hydrogen atom and the hydrogen negative ion in the asymptotic approximation. In the second chapter, an analytically tractable extended asymptotic model for the response properties of weakly bound anions is proposed, and the multipole polarizability, multipole sum rules, and photodetachment cross section determined by the model are computed analytically. Dipole polarizabilities and photodetachment cross sections determined from the model for Li-, Na-, and K- are compared with the numerical results of Moores and Norcross. Agreement is typically within 15% if the pseudopotential is included. In the third chapter a comprehensive and unified treatment of atomic multipole oscillator strengths, dynamic multipole polarizabilities, and dispersion force constants in a variety of Coulomb-like approximations is presented. A theoretically and computationally superior modification of the original Bates-Damgaard (BD) procedure, referred to here simply as the Coulomb approximation (CA), is introduced. An analytic expression for the dynamic multipole polarizability is found which contains as special cases this quantity within the CA, the extended Coulomb approximation (ECA) of Adelman and Szabo, and the quantum defect orbital (QDO) method of Simons

  12. A simple methodology for obtaining X-ray color images in scanning electron microscopy

    International Nuclear Information System (INIS)

    Veiga, M.M. da; Pietroluongo, L.R.V.

    1985-01-01

    A simple methodology for obtaining X-ray images of at least three elements in a single photograph is described. The fluorescent X-ray image is obtained by scanning electron microscopy with an energy-dispersive analysis system. Changes of detector analytic channels, colored cellophane foils, and color films are used sequentially. (M.C.K.) [pt

  13. Design of laser-generated shockwave experiments. An approach using analytic models

    International Nuclear Information System (INIS)

    Lee, Y.T.; Trainor, R.J.

    1980-01-01

    Two of the target-physics phenomena which must be understood before a clean experiment can be confidently performed are preheating due to suprathermal electrons and shock decay due to a shock-rarefaction interaction. Simple analytic models are described for these two processes and the predictions of these models are compared with those of the LASNEX fluid physics code. We have approached this work not with the view of surpassing or even approaching the reliability of the code calculations, but rather with the aim of providing simple models which may be used for quick parameter-sensitivity evaluations, while providing physical insight into the problems

  14. Workshop on Analytical Methods in Statistics

    CERN Document Server

    Jurečková, Jana; Maciak, Matúš; Pešta, Michal

    2017-01-01

    This volume collects authoritative contributions on analytical methods and mathematical statistics. The methods presented include resampling techniques; the minimization of divergence; estimation theory and regression, possibly under shape or other constraints or with long memory; and iterative approximations when the optimal solution is difficult to achieve. It also investigates probability distributions with respect to their stability, heavy-tailedness, Fisher information and other aspects, both asymptotically and non-asymptotically. The book not only presents the latest mathematical and statistical methods and their extensions, but also offers solutions to real-world problems including option pricing. The selected, peer-reviewed contributions were originally presented at the workshop on Analytical Methods in Statistics, AMISTAT 2015, held in Prague, Czech Republic, November 10-13, 2015.

  15. Characterization, thermal stability studies, and analytical method development of Paromomycin for formulation development.

    Science.gov (United States)

    Khan, Wahid; Kumar, Neeraj

    2011-06-01

    Paromomycin (PM) is an aminoglycoside antibiotic, first isolated in the 1950s and approved in 2006 for the treatment of visceral leishmaniasis. Although isolated six decades ago, information essential for the development of a pharmaceutical formulation is not available for PM. The purpose of this paper was to determine the thermal stability of PM and to develop a new analytical method for its formulation development. PM was characterized by thermoanalytical (DSC, TGA, and HSM) and spectroscopic (FTIR) techniques, and these techniques were used to establish the thermal stability of PM after heating at 100, 110, 120, and 130 °C for 24 h. The biological activity of the heated samples was also determined by microbiological assay. Subsequently, a simple, rapid and sensitive RP-HPLC method for the quantitative determination of PM was developed using pre-column derivatization with 9-fluorenylmethyl chloroformate. The developed method was applied to quantify PM in two parenteral dosage forms. PM was successfully characterized by the stated techniques, which indicated stability of PM on heating up to 120 °C for 24 h; when heated at 130 °C, PM is liable to degradation. This degradation was also observed in the microbiological assay, where PM lost ~30% of its biological activity when heated at 130 °C for 24 h. The new analytical method covered the concentration range of 25-200 ng/ml; the intra-day and inter-day variability and the stability of PM were determined successfully. The developed analytical method was found to be sensitive, accurate, and precise for the quantification of PM. Copyright © 2010 John Wiley & Sons, Ltd.

  16. Simple approach for the fabrication of screen-printed carbon-based electrode for amperometric detection on microchip electrophoresis

    International Nuclear Information System (INIS)

    Petroni, Jacqueline Marques; Lucca, Bruno Gabriel; Ferreira, Valdir Souza

    2017-01-01

    This paper describes a simple method for the fabrication of screen-printed electrodes for amperometric detection on microchip electrophoresis (ME) devices. The procedure developed is quite simple and does not require the expensive instrumentation or sophisticated protocols commonly employed in the production of amperometric sensors, such as photolithography or sputtering steps. The electrodes were fabricated through manual deposition of a home-made conductive carbon ink over a patterned acrylic substrate. The morphological structure and electrochemical behavior of the carbon electrodes were investigated by scanning electron microscopy and cyclic voltammetry. The produced amperometric sensors were coupled to polydimethylsiloxane (PDMS) microchips in an end-channel configuration in order to evaluate their analytical performance. For this purpose, electrophoretic experiments were carried out using nitrite and ascorbic acid as model analytes. Separation of these substances was successfully performed within 50 s with good resolution (R = 1.2) and sensitivities (713.5 pA/μM for nitrite and 255.4 pA/μM for ascorbate). The reproducibility of the fabrication method was evaluated and revealed good values for the peak currents obtained (8.7% for nitrite and 9.3% for ascorbate). The electrodes obtained through this method exhibited a satisfactory lifetime (ca. 400 runs) at a low fabrication cost (less than $1 per piece). The feasibility of the proposed device for real analyses was demonstrated through the determination of nitrite concentration levels in drinking-water samples. Based on the results achieved, the approach proposed here is an interesting alternative for the simple fabrication of carbon-based electrodes. Furthermore, the devices show great promise for other kinds of analytical applications involving ME devices. - Highlights: • A novel method to fabricate screen-printed electrodes for amperometric detection in ME is demonstrated. • No sophisticated…

  18. Simple and Reliable Method to Estimate the Fingertip Static Coefficient of Friction in Precision Grip.

    Science.gov (United States)

    Barrea, Allan; Bulens, David Cordova; Lefevre, Philippe; Thonnard, Jean-Louis

    2016-01-01

    The static coefficient of friction (µ_static) plays an important role in dexterous object manipulation. The minimal normal force (i.e., grip force) needed to avoid dropping an object is determined by the tangential force at the fingertip-object contact and the frictional properties of the skin-object contact. Although frequently assumed to be constant for all levels of normal force (NF, the force normal to the contact), µ_static actually varies nonlinearly with NF and increases at low NF levels. No method is currently available to measure the relationship between µ_static and NF easily. Therefore, we propose a new method allowing the simple and reliable measurement of the fingertip µ_static at different NF levels, as well as an algorithm for determining µ_static from measured forces and torques. Our method is based on active, back-and-forth movements of a subject's finger on the surface of a fixed six-axis force and torque sensor. µ_static is computed as the ratio of the tangential to the normal force at slip onset. A negative power law captures the relationship between µ_static and NF. Our method allows the continuous estimation of µ_static as a function of NF during dexterous manipulation, based on the relationship between µ_static and NF measured before manipulation.
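
    The final fitting step described above reduces to a linear regression in log-log space; the sketch below fits the negative power law to synthetic (NF, µ_static) pairs, with all numbers invented for illustration.

      import numpy as np

      # Fit the negative power law  mu_static = a * NF**b  (b < 0) by linear
      # regression of log(mu) on log(NF). The data below are synthetic.

      rng = np.random.default_rng(2)
      nf = np.linspace(0.5, 8.0, 40)                 # normal force (N), assumed
      mu_true = 1.4 * nf ** (-0.25)                  # assumed true law
      mu_meas = mu_true * np.exp(0.03 * rng.standard_normal(nf.size))

      b, log_a = np.polyfit(np.log(nf), np.log(mu_meas), 1)
      a = np.exp(log_a)
      print(f"fitted power law: mu_static = {a:.2f} * NF^({b:.2f})")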

  19. A Simple Semi-Empirical Model for the Estimation of Photosynthetically Active Radiation from Satellite Data in the Tropics

    Directory of Open Access Journals (Sweden)

    S. Janjai

    2013-01-01

    Full Text Available This paper presents a simple semi-empirical model for estimating global photosynthetically active radiation (PAR) under all sky conditions. The model expresses PAR as a function of cloud index, aerosol optical depth, total ozone column, solar zenith angle, and air mass. The formulation of the model was based on a four-year period (2008-2011) of PAR data obtained from measurements at four solar monitoring stations in the tropical environment of Thailand: Chiang Mai (18.78°N, 98.98°E), Ubon Ratchathani (15.25°N, 104.87°E), Nakhon Pathom (13.82°N, 100.04°E), and Songkhla (7.20°N, 100.60°E). The cloud index was derived from the MTSAT-1R satellite, the aerosol optical depth was obtained from the MODIS/Terra satellite, and the total ozone column was retrieved from the OMI/Aura satellite. The model was validated against an independent data set from the four stations. It was found that hourly PAR estimated from the proposed model and that obtained from the measurements were in reasonable agreement, with a root mean square difference (RMSD) and mean bias difference (MBD) of 14.3% and -5.8%, respectively. In addition, for the case of monthly average hourly PAR, RMSD and MBD were reduced to 11.1% and -5.1%, respectively.

  20. A simple model for indentation creep

    Science.gov (United States)

    Ginder, Ryan S.; Nix, William D.; Pharr, George M.

    2018-03-01

    A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.

  1. Spectral reflectance of solar light from dirty snow: a simple theoretical model and its validation

    OpenAIRE

    A. Kokhanovsky

    2013-01-01

    A simple analytical equation for the snow albedo as a function of snow grain size, soot concentration, and soot mass absorption coefficient is presented. This simple equation can be used in climate models to assess the influence of snow pollution on snow albedo. It is shown that the squared logarithm of the albedo (in the visible) is directly proportional to the soot concentration. A new method for the determination of the soot mass absorption coefficient in snow is proposed. The equations d…
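
    The stated proportionality can be turned into a toy calculation: if the squared logarithm of the visible albedo grows linearly with soot concentration c, then A(c) = exp(-sqrt(ln²A₀ + k·c)). The clean-snow albedo A₀ and slope k below are invented; in the paper, the slope would follow from the grain size and the soot mass absorption coefficient.

      import numpy as np

      # Toy use of the stated relation ln^2(A) = ln^2(A0) + k * c, i.e.
      #     A(c) = exp(-sqrt(ln(A0)**2 + k * c)).
      # A0 and k are illustrative assumptions, not values from the paper.

      A0 = 0.97      # assumed clean-snow visible albedo
      k = 4.0e-4     # assumed slope per (ng of soot per g of snow)

      for c in [0.0, 10.0, 50.0, 200.0]:   # soot concentration, ng/g
          A = np.exp(-np.sqrt(np.log(A0) ** 2 + k * c))
          print(f"c = {c:6.1f} ng/g -> albedo = {A:.3f}")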

  2. Logarithmic residues of analytic Banach algebra valued functions possessing a simply meromorphic inverse

    NARCIS (Netherlands)

    H. Bart (Harm); T. Ehrhardt; B. Silbermann

    2001-01-01

    A logarithmic residue is a contour integral of a logarithmic derivative (left or right) of an analytic Banach algebra valued function. For functions possessing a meromorphic inverse with simple poles only, the logarithmic residues are identified as the sums of idempotents. With the help…

  3. The Structured Intuitive Model for Product Line Economics (SIMPLE)

    National Research Council Canada - National Science Library

    Clements, Paul C; McGregor, John D; Cohen, Sholom G

    2005-01-01

    This report presents the Structured Intuitive Model of Product Line Economics (SIMPLE), a general-purpose business model that supports the estimation of the costs and benefits in a product line development organization...

  4. Keeping it simple: flowering plants tend to retain, and revert to, simple leaves.

    Science.gov (United States)

    Geeta, R; Dávalos, Liliana M; Levy, André; Bohs, Lynn; Lavin, Mathew; Mummenhoff, Klaus; Sinha, Neelima; Wojciechowski, Martin F

    2012-01-01

    • A wide range of factors (developmental, physiological, ecological) with unpredictable interactions control variation in leaf form. Here, we examined the distribution of leaf morphologies (simple and complex forms) across angiosperms in a phylogenetic context to detect patterns in the directions of changes in leaf shape. • Seven datasets (diverse angiosperms and six nested clades, Sapindales, Apiales, Papaveraceae, Fabaceae, Lepidium, Solanum) were analysed using maximum likelihood and parsimony methods to estimate asymmetries in rates of change among character states. • Simple leaves are most frequent among angiosperm lineages today, were inferred to be ancestral in angiosperms and tended to be retained in evolution (stasis). Complex leaves slowly originated ('gains') and quickly reverted to simple leaves ('losses') multiple times, with a significantly greater rate of losses than gains. Lobed leaves may be a labile intermediate step between different forms. The nested clades showed mixed trends; Solanum, like the angiosperms in general, had higher rates of losses than gains, but the other clades had higher rates of gains than losses. • The angiosperm-wide pattern could be taken as a null model to test leaf evolution patterns in particular clades, in which patterns of variation suggest clade-specific processes that have yet to be investigated fully. © 2011 The Authors. New Phytologist © 2011 New Phytologist Trust.

  5. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, implementing the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.

  6. Design, construction and commissioning of a simple, low cost permanent magnet quadrupole doublet

    International Nuclear Information System (INIS)

    Conard, E.M.; Parcell, S.K.; Arnott, D.W.

    1999-01-01

    In the framework of new beam line developments at the Australian National Medical Cyclotron, a permanent magnet quadrupole doublet was designed and built entirely in house. The design proceeded from the classical work by Halbach et al. but emphasised the 'low cost' aspect by using simple rectangular NdFeB blocks and simple assembly techniques. Numerical simulations using the (2-D) Gemini code were performed to check the field strength and homogeneity predictions of analytical calculations. This paper gives the reasons for the selection of a permanent magnet, the design and construction details of the quadrupole doublet and its field measurement results. (authors)

  7. Least squares estimation in a simple random coefficient autoregressive model

    DEFF Research Database (Denmark)

    Johansen, S; Lange, T

    2013-01-01

    The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by y_t = s_t ρ y_{t-1} + ε_t, t = 1, …, n, where s_t is an i.i.d. binary variable with P(s_t = 1) = p. We prove a curious result on the limit of the least squares estimator of ρ. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the orders of magnitude of the numerator and the denominator of the estimator, and hence the limit of their ratio.

  8. Data Acquisition Programming (LabVIEW): An Aid to Teaching Instrumental Analytical Chemistry.

    Science.gov (United States)

    Gostowski, Rudy

    A course was developed at Austin Peay State University (Tennessee) which offered an opportunity for hands-on experience with the essential components of modern analytical instruments. The course aimed to provide college students with the skills necessary to construct a simple model instrument, including the design and fabrication of electronic…

  9. Analytical model for screening potential CO2 repositories

    Science.gov (United States)

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.

  10. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  11. Estimating linear temporal trends from aggregated environmental monitoring data

    Science.gov (United States)

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an autoregressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. Specifically, we estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression performed best of the given models because it recovered parameters best and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
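
    A minimal simulation, with assumptions chosen only for illustration, shows why ordinary least squares can remain serviceable on aggregated series: both process and sampling noise enter the yearly means, yet the slope estimate stays centered on the true trend.

      import numpy as np

      # Latent population follows a linear trend plus process noise; each
      # year, five noisy site observations are aggregated into one mean.
      # OLS on the aggregated series still recovers the trend on average.

      rng = np.random.default_rng(3)
      years = np.arange(20)
      trend_true = 0.5
      slopes = []
      for _ in range(1000):
          latent = (10 + trend_true * years
                    + np.cumsum(rng.normal(0, 0.3, years.size)))  # process noise
          observed = latent + rng.normal(0, 1.0, (5, years.size)).mean(axis=0)
          slopes.append(np.polyfit(years, observed, 1)[0])

      print(f"true trend: {trend_true}")
      print(f"mean OLS trend estimate: {np.mean(slopes):.3f} "
            f"(sd {np.std(slopes):.3f})")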

  12. Genetic Algorithms for Estimating Effective Parameters in a Lumped Reactor Model for Reactivity Predictions

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico

    2001-01-01

    The control system of a reactor should be able to predict, in real time, the amount of reactivity to be inserted (e.g., by control rod movements and boron injection and dilution) to respond to a given electrical load demand or to undesired, accidental transients. The real-time constraint renders impractical the use of a large, detailed dynamic reactor code. One then has to resort to simplified analytical models with lumped effective parameters suitably estimated from the reactor data. The simple and well-known Chernick model for describing the reactor power evolution in the presence of xenon is considered, and the feasibility of using genetic algorithms for estimating the effective nuclear parameters involved and the non-measurable initial xenon and iodine conditions is investigated. This approach has the advantage of counterbalancing the inherent simplicity of the model with the periodic re-estimation of the effective parameter values pertaining to each reactor on the basis of its recent history. In so doing, other effects, such as burnup, are automatically taken into account.
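
    The estimation scheme can be sketched generically: candidate parameter vectors are scored by the misfit between a lumped model and observed data, then evolved by selection, crossover, and mutation. The stand-in model below is a two-parameter exponential, not the Chernick xenon model, and all numbers are invented.

      import numpy as np

      rng = np.random.default_rng(4)

      def model(theta, t):
          # generic two-parameter lumped model (illustrative stand-in)
          a, lam = theta
          return a * np.exp(-lam * t)

      t_obs = np.linspace(0.0, 10.0, 50)
      y_obs = model((2.0, 0.3), t_obs) + rng.normal(0, 0.02, t_obs.size)

      def fitness(theta):
          return -np.sum((model(theta, t_obs) - y_obs) ** 2)  # higher = better

      pop = rng.uniform([0.1, 0.01], [5.0, 1.0], size=(60, 2))  # initial pop
      for _ in range(100):
          scores = np.array([fitness(th) for th in pop])
          parents = pop[np.argsort(scores)][-30:]              # selection
          kids = parents[rng.integers(0, 30, (30, 2)), [0, 1]] # crossover
          kids += rng.normal(0, 0.02, kids.shape)              # mutation
          pop = np.vstack([parents, kids])

      best = pop[np.argmax([fitness(th) for th in pop])]
      print(f"estimated (a, lambda) = ({best[0]:.3f}, {best[1]:.3f}); "
            f"true = (2.0, 0.3)")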

  13. Making advanced analytics work for you.

    Science.gov (United States)

    Barton, Dominic; Court, David

    2012-10-01

    Senior leaders who write off the move toward big data as a lot of big talk are making, well, a big mistake. So argue McKinsey's Barton and Court, who worked with dozens of companies to figure out how to translate advanced analytics into nuts-and-bolts practices that affect daily operations on the front lines. The authors offer a useful guide for leaders and managers who want to take a deliberative approach to big data, but who also want to get started now. First, companies must identify the right data for their business, seek to acquire the information creatively from diverse sources, and secure the necessary IT support. Second, they need to build analytics models that are tightly focused on improving performance, making the models only as complex as business goals demand. Third, and most important, companies must transform their capabilities and culture so that the analytical results can be implemented from the C-suite to the front lines. That means developing simple tools that everyone in the organization can understand and teaching people why the data really matter. Embracing big data is as much about changing mind-sets as it is about crunching numbers. Executed with the right care and flexibility, this cultural shift could have payoffs that are, well, bigger than you expect.

  14. A simple three dimensional wide-angle beam propagation method

    Science.gov (United States)

    Ma, Changbao; van Keuren, Edward

    2006-05-01

    The development of three dimensional (3-D) waveguide structures for chip scale planar lightwave circuits (PLCs) is hampered by the lack of effective 3-D wide-angle (WA) beam propagation methods (BPMs). We present a simple 3-D wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme along with a new 3-D wave equation splitting method. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation and comparing them with analytical solutions.

  15. Analytical approaches to the determination of simple biophenols in forest trees such as Acer (maple), Betula (birch), Coniferus, Eucalyptus, Juniperus (cedar), Picea (spruce) and Quercus (oak).

    Science.gov (United States)

    Bedgood, Danny R; Bishop, Andrea G; Prenzler, Paul D; Robards, Kevin

    2005-06-01

    Analytical methods are reviewed for the determination of simple biophenols in forest trees such as Acer (maple), Betula (birch), Coniferus, Eucalyptus, Juniperus (cedar), Picea (spruce) and Quercus (oak). Data are limited but nevertheless clearly establish the critical importance of sample preparation and pre-treatment in the analysis. For example, drying methods invariably reduce the recovery of biophenols; this is illustrated by data for birch leaves, where flavonoid glycosides were determined as 12.3 ± 0.44 mg g⁻¹ in fresh leaves but 9.7 ± 0.35 mg g⁻¹ in air-dried samples (data expressed on a dry-weight basis). Diverse sample-handling procedures have been employed for the recovery of biophenols. The range of biophenols and the diversity of sample types preclude general procedural recommendations. Caution is necessary in selecting appropriate procedures, as the high reactivity of these compounds complicates their analysis. Moreover, our experience suggests that their reactivity is very dependent on the matrix. The actual measurement is less contentious, and high-performance separation methods, particularly liquid chromatography, dominate analyses, while coupled techniques involving electrospray ionization are becoming routine, particularly for qualitative applications. Quantitative data are still the exception and are summarized for representative species that dominate the forest canopy of various habitats. Reported concentrations for simple phenols range from trace level (<0.1 μg g⁻¹) to in excess of 500 μg g⁻¹, depending on a range of factors. Plant tissue is one of these variables, but various biotic and abiotic processes such as stress are also important considerations.

  16. Analytic investigation of extended Heitler-Matthews model

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Stefan; Veberic, Darko; Engel, Ralph [KIT, IKP (Germany)

    2016-07-01

    Many features of extensive air showers are qualitatively well described by the Heitler cascade model and its extensions. The core of a shower is given by hadrons that interact with air nuclei. After each interaction some of these hadrons decay and feed the electromagnetic shower component. The most important parameters of such hadronic interactions are the inelasticity, the multiplicity, and the ratio of charged to neutral particles. However, in analytic considerations approximations are needed to include the characteristics of hadron production. We discuss analytic extensions of the simple cascade model that also include the elasticity, and derive the number of produced muons. In a second step we apply this model to calculate the dependence of the shower center of gravity on the model parameters. The depth of the center of gravity is closely related to that of the shower maximum, which is a commonly used composition-sensitive observable.
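
    For orientation, the standard (unextended) Heitler-Matthews muon-number estimate can be written down directly; the parameter values below are common textbook choices, and the extended model of this record modifies this picture through the elasticity.

      import numpy as np

      # Textbook Heitler-Matthews estimate (not the extended model of this
      # record): each hadronic generation produces N_mult pions, 2/3 of them
      # charged; charged pions re-interact until their energy falls below a
      # critical energy E_c, then decay to muons. This gives
      #     N_mu = (E_0 / E_c)**beta,  beta = ln(2/3 * N_mult) / ln(N_mult).

      N_mult = 50        # assumed total pion multiplicity per interaction
      E_c = 20.0e9       # assumed critical energy for charged pions, eV
      beta = np.log(2.0 / 3.0 * N_mult) / np.log(N_mult)

      for E0 in [1e17, 1e18, 1e19]:   # primary energy, eV
          N_mu = (E0 / E_c) ** beta
          print(f"E0 = {E0:.0e} eV -> beta = {beta:.2f}, N_mu ~ {N_mu:.2e}")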

  17. Simple parametrizations of parton distributions with Q$^{2}$ dependence given by asymptotic freedom

    CERN Document Server

    Buras, Andrzej J

    1978-01-01

    The Q^2 dependence of parton distributions as given by asymptotically free gauge theories can be represented by simple analytic expressions. In particular the sea distributions can be read directly from the first two moments. The results are compared with the SLAC ep data and with the recent Fermilab μp data. The agreement is good. (32 refs).
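
    A minimal sketch of the kind of expression meant here, in the leading-order textbook form (not necessarily the exact parametrization of the paper): non-singlet moments evolve in Q^2 as a power of the running coupling,

        M_n^{\mathrm{NS}}(Q^2) = M_n^{\mathrm{NS}}(Q_0^2)
        \left[\frac{\alpha_s(Q^2)}{\alpha_s(Q_0^2)}\right]^{d_n^{\mathrm{NS}}},
        \qquad
        \alpha_s(Q^2) = \frac{4\pi}{\beta_0\,\ln(Q^2/\Lambda^2)}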

  18. Semi-analytic approach to higher-order corrections in simple muonic bound systems: vacuum polarization, self-energy and radiative-recoil

    International Nuclear Information System (INIS)

    Jentschura, U.D.; Wundt, B.J.

    2011-01-01

    The current discrepancy of theory and experiment observed recently in muonic hydrogen necessitates a reinvestigation of all corrections that contribute to the Lamb shift in muonic hydrogen (μH), muonic deuterium (μD), the muonic ³He ion (denoted here as μ³He⁺), as well as in the muonic ⁴He ion (μ⁴He⁺). Here, we choose a semi-analytic approach and evaluate a number of higher-order corrections to vacuum polarization (VP) semi-analytically, while the remaining integrals over the spectral density of VP are performed numerically. We obtain semi-analytic results for the second-order correction, and for the relativistic correction to VP. The self-energy correction to VP is calculated, including the perturbations of the Bethe logarithms by vacuum polarization. Sub-leading logarithmic terms in the radiative-recoil correction to the 2S-2P Lamb shift of order α(Zα)⁵μ³ln(Zα)/(m_μ m_N), where α is the fine structure constant, are also obtained. All calculations are nonperturbative in the mass ratio of orbiting particle and nucleus. (authors)

  19. Using Fourier and Taylor series expansion in semi-analytical deformation analysis of thick-walled isotropic and wound composite structures

    Directory of Open Access Journals (Sweden)

    Jiran L.

    2016-06-01

    Full Text Available Thick-walled tubes made from isotropic and anisotropic materials are subjected to an internal pressure while the semi-analytical method is employed to investigate their elastic deformations. The contribution and novelty of this method is that it works universally for different loads, different boundary conditions, and different geometries of analyzed structures. Moreover, even when composite material is considered, the method requires no simplistic assumptions. The method uses curvilinear tensor calculus and works with the analytical expression of the total potential energy, while the unknown displacement functions are approximated by an appropriate series expansion. Fourier and Taylor series expansions are introduced into the analysis, in which they are tested and compared. The main potential of the proposed method is in analyses of wound composite structures, where a simple description of the geometry is made in a curvilinear coordinate system while material properties are described in their inherent Cartesian coordinate system. Validations of the introduced semi-analytical method are performed by comparing results with those obtained from three-dimensional finite element analysis (FEA). Calculations with the Fourier series expansion show noticeable disagreement with results from the finite element model because the Fourier series expansion is not able to capture the course of the radial deformation. Therefore, it can be used only for rough estimations of the shape after deformation. On the other hand, the semi-analytical method with the Taylor series expansion works very well for both types of material. Its predictions of deformations are reliable and widely exploitable.

  20. Estimation of the core-wide fuel rod damage during a LWR LOCA

    International Nuclear Information System (INIS)

    Mattila, L.; Sairanen, R.; Stengaard, J.-O.

    1975-01-01

    The number of fuel rods puncturing during a LWR LOCA must be estimated as a part of the plant radioactivity release analysis. Due to the great number of fuel rods in the core and the great number of contributing parameters, many of them associated with wide uncertainty and/or truly random variability limits, probabilistic methods are well applicable. A succession of computer models developed for this purpose is described together with applications to the WWER-440 PWR. Deterministic models are shown to be seriously inadequate and even misleading under certain circumstances. A simple analytical probabilistic model appears to be suitable for many applications. Monte Carlo techniques allow the development of such sophisticated models that errors in the presently available input data probably become dominant in the residual uncertainty of the core-wide fuel rod puncture analysis. (author)
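
    A toy Monte Carlo in the spirit of the probabilistic approach described above: a rod punctures if its (random) peak clad temperature exceeds its (random) failure threshold. All distributions and numbers below are hypothetical, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n_rods = 40_000                              # order of a full LWR core
        peak_temp = rng.normal(750.0, 60.0, n_rods)  # deg C, per-rod peak clad temperature
        fail_temp = rng.normal(850.0, 40.0, n_rods)  # deg C, per-rod puncture threshold
        fraction = np.mean(peak_temp > fail_temp)    # core-wide punctured fraction
        print(f"estimated punctured fraction: {fraction:.2%}")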

  1. Simple approach to approximate predictions of the vapor–liquid equilibrium curve near the critical point and its application to Lennard-Jones fluids

    International Nuclear Information System (INIS)

    Staśkiewicz, B.; Okrasiński, W.

    2012-01-01

    We propose a simple analytical form of the vapor–liquid equilibrium curve near the critical point for Lennard-Jones fluids. Coexistence density curves and the vapor pressure have been determined using the Van der Waals and Dieterici equations of state. In the described method, Bernoulli differential equations, critical-exponent theory and a form of Maxwell's criterion are used. This approach has not previously been used to determine the analytical form of phase curves as done in this Letter. Lennard-Jones fluids are considered for the analysis. A comparison with experimental data is presented, and the accuracy of the method is described. -- Highlights: ► We propose a new analytical way to determine the VLE curve. ► Simple, mathematically straightforward form of phase curves is presented. ► Comparison with experimental data is discussed. ► The accuracy of the method has been confirmed.
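
    A minimal sketch of the near-critical behaviour underlying such phase curves: mean-field equations of state such as Van der Waals and Dieterici give a coexistence width of the generic critical-scaling form

        \rho_l(T) - \rho_v(T) = B\,(T_c - T)^{\beta},
        \qquad \beta = \tfrac{1}{2}\ \text{(mean field)}

    with B a substance-dependent amplitude; the Letter's own closed-form curves are not reproduced here.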

  2. Analytical Modeling Approach to Study Harmonic Mitigation in AC Grids with Active Impedance at Selective Frequencies

    Directory of Open Access Journals (Sweden)

    Gonzalo Abad

    2018-05-01

    Full Text Available This paper presents an analytical model, oriented to study harmonic mitigation aspects in AC grids. As is well known, the presence of non-desired harmonics in AC grids can be palliated in several manners. However, in this paper, a power electronic-based active impedance at selective frequencies (ACISEF) is used, due to its already proven flexibility and adaptability to the changing characteristics of AC grids. Hence, the proposed analytical model approach is specially conceived to globally consider both the model of the AC grid itself with its electric equivalent impedances, together with the power electronic-based ACISEF, including its control loops. In addition, the proposed analytical model presents practical and useful properties: it is simple to understand and simple to use, it has a low computational cost and simple adaptability to different scenarios of AC grids, and it provides a sufficiently accurate representation of reality. The benefits of using the proposed analytical model are shown in this paper through some examples of its usefulness, including an analysis of stability and the identification of sources of instability for a robust design, an analysis of effectiveness in harmonic mitigation, an analysis to assist in the choice of the most suitable active impedance under a given state of the AC grid, an analysis of the interaction between different compensators, and so on. To conclude, experimental validation of a 2.15 kA ACISEF in a real 33 kV AC grid is provided, in which real users (household and industry loads) and crucial elements such as wind parks and HVDC systems are interconnected nearby.

  3. Waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-01-01

    The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and the volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, the authors have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.

  4. Estimating the Ground Water Resources of Atoll Islands

    Directory of Open Access Journals (Sweden)

    Arne E. Olsen

    2010-01-01

    Full Text Available Ground water resources of atolls, already minimal due to the small surface area and low elevation of the islands, are also subject to recurring, and sometimes devastating, droughts. As ground water resources become the sole fresh water source when rain catchment supplies are exhausted, it is critical to assess current groundwater resources and predict their depletion during drought conditions. Several published models, both analytical and empirical, are available to estimate the steady-state freshwater lens thickness of small oceanic islands. None fully incorporates the unique shallow geologic characteristics of atoll islands, and none incorporates time-dependent processes. In this paper, we provide a review of these models, and then present a simple algebraic model, derived from the results of a comprehensive numerical modeling study of steady-state atoll island aquifer dynamics, to predict the ground water response to changes in recharge on atoll islands. The model provides an estimate of the thickness of the freshwater lens as a function of annual rainfall rate, island width, Thurber Discontinuity depth, upper aquifer hydraulic conductivity, presence or absence of a confining reef flat plate, and, in the case of drought, time. Results compare favorably with published atoll island lens thickness observations. The algebraic model is incorporated into a spreadsheet interface for use by island water resources managers.

  5. A rapid and simple screening test to detect the radiation treatment of fat-containing foods

    International Nuclear Information System (INIS)

    Delincee, H.

    1993-01-01

    In recent years several international efforts have been made to develop analytical detection methods for the radiation treatment of foods, and a number of methods have indeed been developed. For fat-containing foods in particular, several methods are already at an advanced stage. In addition to sophisticated techniques such as gas chromatography/mass spectrometry, which require relatively expensive equipment and/or extended sample preparation time, it would be desirable to have quick and simple screening tests which immediately, on the spot, give some indication of whether a food product has been irradiated or not. A solution to this problem for lipid-containing foods has been put forward by Furuta and co-workers (1991, 1992), who estimated the amount of carbon monoxide originating from the lipid fraction in poultry meat after irradiation. The carbon monoxide was expelled from the frozen meat by quick microwave heating, and the carbon monoxide formed in the head space of the sample was determined by gas chromatography. In order to speed up the analysis, we have used an electrochemical CO sensor, as is also used to estimate CO in ambient air at workplaces, to determine the CO content in the vapor expelled from the irradiated samples. This CO test is very simple, cheap and easy to perform. It takes only a few minutes to screen food samples for evidence of their having been radiation processed. If doubts concerning the radiation treatment of a sample arise, the more sophisticated, and expensive, methods for analyzing lipid-containing foods can be applied. Certainly the test is limited to food products which contain a certain amount of fat. A preliminary test with lean shrimps showed practically no difference between irradiated (2.5 and 5 kGy) and non-irradiated samples. By relating CO production to the fat content, a better classification parameter can possibly be obtained. (orig./vhe)

  6. Methodology for estimating human perception to tremors in high-rise buildings

    Science.gov (United States)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
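
    A sketch of the proposed intensity parameter: the average spectral acceleration over the 0.1-2.0 s period range. In practice Sa(T) would come from a response-spectrum computation on a recorded accelerogram; the spectrum below is a placeholder.

        import numpy as np

        T = np.linspace(0.05, 4.0, 400)   # period grid, s
        Sa = 0.3 * np.exp(-T)             # placeholder response spectrum, g

        mask = (T >= 0.1) & (T <= 2.0)
        asi = np.trapz(Sa[mask], T[mask]) / (2.0 - 0.1)  # average intensity over 0.1-2.0 s, g
        print(f"average response spectrum intensity: {asi:.3f} g")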

  7. HTS axial flux induction motor with analytic and FEA modeling

    Energy Technology Data Exchange (ETDEWEB)

    Li, S., E-mail: alexlee.zn@gmail.com; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-11-15

    Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic method and finite element method have been adopted to model the motor and to calculate the force. •Magnetic field distribution in HTS coil is calculated by analytic method. •An effective method to improve the critical current of HTS coil is presented. •AC losses of HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. In order to analyze the character of the force, analytic method and finite element method are adopted to model the motor. To make sure the HTS can carry sufficiently large current and work well, the magnetic field distribution in HTS coil is calculated. An effective method to improve the critical current of HTS coil is presented. Then, AC losses in HTS windings in the motor are estimated and tested.

  8. HTS axial flux induction motor with analytic and FEA modeling

    International Nuclear Information System (INIS)

    Li, S.; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-01-01

    Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic method and finite element method have been adopted to model the motor and to calculate the force. •Magnetic field distribution in HTS coil is calculated by analytic method. •An effective method to improve the critical current of HTS coil is presented. •AC losses of HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. In order to analyze the character of the force, analytic method and finite element method are adopted to model the motor. To make sure the HTS can carry sufficiently large current and work well, the magnetic field distribution in HTS coil is calculated. An effective method to improve the critical current of HTS coil is presented. Then, AC losses in HTS windings in the motor are estimated and tested

  9. Methods for the calculation of uncertainty in analytical chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Sohn, S. C.; Park, Y. J.; Park, K. K.; Jee, K. Y.; Joe, K. S.; Kim, W. H

    2000-07-01

    This report describes the statistical rules for evaluating and expressing uncertainty in analytical chemistry. The procedures for the evaluation of uncertainty in chemical analysis are illustrated by worked examples. This report, in particular, gives guidance on how uncertainty can be estimated for various chemical analyses. It can also be used for planning the experiments which will provide the information required to obtain an estimate of uncertainty for the method.
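
    A minimal GUM-style sketch of the kind of arithmetic such rules prescribe: the combined standard uncertainty of y = f(x1, ..., xn) is the root-sum-square of sensitivity-weighted component uncertainties, expanded with a coverage factor k. The components listed are illustrative.

        import math

        components = [      # (sensitivity coefficient c_i, standard uncertainty u_i)
            (1.00, 0.012),  # e.g. repeatability
            (0.85, 0.008),  # e.g. calibration
            (1.00, 0.005),  # e.g. volumetric
        ]
        u_c = math.sqrt(sum((c * u) ** 2 for c, u in components))
        U = 2.0 * u_c       # expanded uncertainty, k = 2 (~95 % coverage)
        print(f"u_c = {u_c:.4f}, U(k=2) = {U:.4f}")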

  10. Analytical studies related to Indian PHWR containment system performance

    International Nuclear Information System (INIS)

    Haware, S.K.; Markandeya, S.G.; Ghosh, A.K.; Kushwaha, H.S.; Venkat Raj, V.

    1998-01-01

    The build-up of pressure in a multi-compartment containment after a postulated accident, and the growth, transport and removal of aerosols in the containment, are complex processes of vital importance in determining the source term. The release of hydrogen and its combustion increases the overpressure. In order to analyze these complex processes and to enable proper estimation of the source term, well tested analytical tools are necessary. This paper gives a detailed account of the analytical tools developed/adapted for PSA level 2 studies. (author)

  11. Analytical solution for beam with time-dependent boundary conditions versus response spectrum

    International Nuclear Information System (INIS)

    Gou, P.F.; Panahi, K.K.

    2001-01-01

    This paper studies the responses of a uniform simple beam for which the supports are subjected to time-dependent conditions. An analytical solution in terms of series is presented for two cases: (1) both supports of the beam are subjected to a harmonic motion, and (2) one of the two supports is stationary while the other is subjected to a harmonic motion. The results of the analytical solution were investigated and compared with the results of the conventional response spectrum method using a beam finite element model. One application of the results presented in this paper is to assess the adequacy and accuracy of engineering approaches such as the response spectrum method. It has been found that, when the excitation frequency equals the fundamental frequency of the beam, the results from the response spectrum method are in good agreement with the exact calculation. The effects of initial conditions on the responses are also examined. It seems that a non-zero initial velocity has a pronounced effect on the displacement time histories but no effect on the maximum accelerations. (author)
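
    For case (1), the setting can be summarized as the Euler-Bernoulli beam equation with harmonically moving pinned supports. This is the standard formulation of such problems, written here for orientation rather than copied from the paper:

        EI\,\frac{\partial^4 w}{\partial x^4} + \rho A\,\frac{\partial^2 w}{\partial t^2} = 0,
        \qquad
        w(0,t) = w(L,t) = u_0 \sin\Omega t,
        \quad
        \frac{\partial^2 w}{\partial x^2}(0,t) = \frac{\partial^2 w}{\partial x^2}(L,t) = 0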

  12. Analytically tractable climate-carbon cycle feedbacks under 21st century anthropogenic forcing

    Science.gov (United States)

    Lade, Steven J.; Donges, Jonathan F.; Fetzer, Ingo; Anderies, John M.; Beer, Christian; Cornell, Sarah E.; Gasser, Thomas; Norberg, Jon; Richardson, Katherine; Rockström, Johan; Steffen, Will

    2018-05-01

    Changes to climate-carbon cycle feedbacks may significantly affect the Earth system's response to greenhouse gas emissions. These feedbacks are usually analysed from numerical output of complex and arguably opaque Earth system models. Here, we construct a stylised global climate-carbon cycle model, test its output against comprehensive Earth system models, and investigate the strengths of its climate-carbon cycle feedbacks analytically. The analytical expressions we obtain aid understanding of carbon cycle feedbacks and the operation of the carbon cycle. Specific results include that different feedback formalisms measure fundamentally the same climate-carbon cycle processes; temperature dependence of the solubility pump, biological pump, and CO2 solubility all contribute approximately equally to the ocean climate-carbon feedback; and concentration-carbon feedbacks may be more sensitive to future climate change than climate-carbon feedbacks. Simple models such as that developed here also provide workbenches for simple but mechanistically based explorations of Earth system processes, such as interactions and feedbacks between the planetary boundaries, that are currently too uncertain to be included in comprehensive Earth system models.

  13. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    Science.gov (United States)

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  14. Analytic processor model for fast design-space exploration

    NARCIS (Netherlands)

    Jongerius, R.; Mariani, G.; Anghel, A.; Dittmann, G.; Vermij, E.; Corporaal, H.

    2015-01-01

    In this paper, we propose an analytic model that takes as inputs a) a parametric microarchitecture-independent characterization of the target workload, and b) a hardware configuration of the core and the memory hierarchy, and returns as output an estimation of processor-core performance. To validate

  15. Transformation of Bayesian posterior distribution into a basic analytical distribution

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2002-01-01

    Bayesian estimation is a well-known approach that is widely used in Probabilistic Safety Analyses for the estimation of input model reliability parameters, such as component failure rates or probabilities of failure upon demand. In this approach, a prior distribution, which contains some generic knowledge about a parameter, is combined with a likelihood function, which contains plant-specific data about the parameter. Depending on the type of prior distribution, the resulting posterior distribution can be estimated numerically or analytically. In many instances only a numerical Bayesian integration can be performed. In such a case the posterior is provided in the form of a tabular discrete distribution. On the other hand, it is much more convenient to have a parameter's uncertainty distribution that is to be input into a PSA model provided in the form of some basic analytical probability distribution, such as a lognormal, gamma or beta distribution. One reason is that this enables much more convenient propagation of parameters' uncertainties through the model up to the so-called top events, such as plant system unavailability or core damage frequency. Additionally, software tools used to run PSA models often require that a parameter's uncertainty distribution is defined as one of several allowed basic types of distributions. In such a case the posterior distribution produced by Bayesian estimation needs to be transformed into an appropriate basic analytical form. In this paper, some approaches to the transformation of a posterior distribution into a basic probability distribution are proposed and discussed. They are illustrated by an example from the NPP Krsko PSA model. (author)
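
    One simple transformation of the kind discussed is moment matching, sketched below: fit a lognormal to a tabular (discrete) posterior so that mean and variance agree. The tabulated posterior is a placeholder for a numerical Bayesian result, not data from the paper.

        import numpy as np

        x = np.array([1e-4, 2e-4, 5e-4, 1e-3, 2e-3])  # parameter values (e.g. failure rate)
        p = np.array([0.10, 0.25, 0.35, 0.20, 0.10])  # posterior probabilities (sum to 1)

        m = np.sum(p * x)                    # posterior mean
        v = np.sum(p * (x - m) ** 2)         # posterior variance
        sigma2 = np.log(1.0 + v / m**2)      # matched lognormal parameters
        mu = np.log(m) - 0.5 * sigma2
        print(f"lognormal: mu = {mu:.3f}, sigma = {np.sqrt(sigma2):.3f}")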

  16. Simple heuristic for the viscosity of polydisperse hard spheres

    Science.gov (United States)

    Farr, Robert S.

    2014-12-01

    We build on the work of Mooney [Colloids Sci. 6, 162 (1951)] to obtain a heuristic analytic approximation to the viscosity of a suspension of any size distribution of hard spheres in a Newtonian solvent. The result agrees reasonably well with rheological data on monodisperse and bidisperse hard spheres, and also provides an approximation to the random close packing fraction of polydisperse spheres. The implied packing fraction is less accurate than that obtained by Farr and Groot [J. Chem. Phys. 131(24), 244104 (2009)], but has the advantage of being quick and simple to evaluate.
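
    The monodisperse starting point of that construction is Mooney's self-crowding form, quoted here as the standard expression (the paper's polydisperse generalization is not reproduced):

        \eta_r = \exp\!\left(\frac{[\eta]\,\phi}{1 - \phi/\phi_m}\right),
        \qquad [\eta] = 2.5\ \text{for hard spheres},

    where φ is the solids volume fraction and φ_m the crowding (packing) limit.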

  17. Multiplier ideal sheaves and analytic methods in algebraic geometry

    International Nuclear Information System (INIS)

    Demailly, J.-P.

    2001-01-01

    Our main purpose here is to describe a few analytic tools which are useful for studying questions such as linear series and vanishing theorems for algebraic vector bundles. One of the early successes of analytic methods in this context is Kodaira's use of the Bochner technique in relation with the theory of harmonic forms during the decade 1950-60. The idea is to represent cohomology classes by harmonic forms and to prove vanishing theorems by means of suitable a priori curvature estimates. We pursue the study of L2 estimates, in relation with the Nullstellensatz and with the extension problem. We show how subadditivity can be used to derive an approximation theorem for (almost) plurisubharmonic functions: any such function can be approximated by a sequence of (almost) plurisubharmonic functions which are smooth outside an analytic set, and which define the same multiplier ideal sheaves. From this, we derive a generalized version of the hard Lefschetz theorem for cohomology with values in a pseudo-effective line bundle; namely, the Lefschetz map is surjective when the cohomology groups are twisted by the relevant multiplier ideal sheaves. These notes are essentially written with the idea of serving as an analytic toolbox for algebraic geometers. Although efficient algebraic techniques exist, our feeling is that the analytic techniques are very flexible and offer a large variety of guidelines for more algebraic questions (including applications to number theory which are not discussed here). We have made a special effort to use as few prerequisites as possible and to be as self-contained as possible; hence the rather long preliminary sections dealing with basic facts of complex differential geometry

  18. Multiplier ideal sheaves and analytic methods in algebraic geometry

    Energy Technology Data Exchange (ETDEWEB)

    Demailly, J -P [Universite de Grenoble I, Institut Fourier, Saint-Martin d' Heres (France)

    2001-12-15

    Our main purpose here is to describe a few analytic tools which are useful for studying questions such as linear series and vanishing theorems for algebraic vector bundles. One of the early successes of analytic methods in this context is Kodaira's use of the Bochner technique in relation with the theory of harmonic forms during the decade 1950-60. The idea is to represent cohomology classes by harmonic forms and to prove vanishing theorems by means of suitable a priori curvature estimates. We pursue the study of L2 estimates, in relation with the Nullstellensatz and with the extension problem. We show how subadditivity can be used to derive an approximation theorem for (almost) plurisubharmonic functions: any such function can be approximated by a sequence of (almost) plurisubharmonic functions which are smooth outside an analytic set, and which define the same multiplier ideal sheaves. From this, we derive a generalized version of the hard Lefschetz theorem for cohomology with values in a pseudo-effective line bundle; namely, the Lefschetz map is surjective when the cohomology groups are twisted by the relevant multiplier ideal sheaves. These notes are essentially written with the idea of serving as an analytic toolbox for algebraic geometers. Although efficient algebraic techniques exist, our feeling is that the analytic techniques are very flexible and offer a large variety of guidelines for more algebraic questions (including applications to number theory which are not discussed here). We have made a special effort to use as few prerequisites as possible and to be as self-contained as possible; hence the rather long preliminary sections dealing with basic facts of complex differential geometry.

  19. Spectral reflectance of solar light from dirty snow: a simple theoretical model and its validation

    Directory of Open Access Journals (Sweden)

    A. Kokhanovsky

    2013-08-01

    Full Text Available A simple analytical equation for the snow albedo as a function of snow grain size, soot concentration, and soot mass absorption coefficient is presented. This simple equation can be used in climate models to assess the influence of snow pollution on snow albedo. It is shown that the squared logarithm of the albedo (in the visible) is directly proportional to the soot concentration. A new method for the determination of the soot mass absorption coefficient in snow is proposed. The equations derived are applied to a dusty snow layer as well.
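
    Read schematically, the stated proportionality amounts to the following, where k(λ) is a lumped coefficient absorbing grain size and the soot mass absorption coefficient and absorption by clean snow is neglected (a paraphrase of the abstract, not the paper's exact equation):

        \ln^2\alpha(\lambda) = k(\lambda)\,c_{\mathrm{soot}}
        \quad\Longrightarrow\quad
        \alpha(\lambda) = \exp\!\left(-\sqrt{k(\lambda)\,c_{\mathrm{soot}}}\right)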

  20. Analytical evaluation of fission product sensitivities

    International Nuclear Information System (INIS)

    Sola, A.

    1977-01-01

    Evaluating the concentration of a fission product produced in a reactor requires knowledge of a fairly large number of variables. Sensitivity studies were made to ascertain the important variables. Analytical formulae sufficiently simple to allow numerical computation were developed. Some simplified formulae are also given and applied to the following isotopes: 80Se, 82Se, 81Br, 82Br, 82Kr, 83Kr, 84Kr, 85Kr, 86Kr. Their sensitivities to capture cross-sections, fission yields, ratios of activation cross-sections, half-lives (during and after irradiation) and branching ratios, as well as to the neutron flux and to time, are considered

  1. Analytic Treatment of Deep Neural Networks Under Additive Gaussian Noise

    KAUST Repository

    Alfadly, Modar

    2018-01-01

    Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit uncouth, yet-to-be-understood behaviours. One puzzling behaviour is the reaction of DNNs to various noise attacks: it has been shown that there exist small adversarial noise perturbations that can result in a severe degradation of DNN performance. To treat this rigorously, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network with a single rectified linear unit (ReLU) layer subject to general Gaussian input. We experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, especially for popular architectures in the literature (e.g. LeNet and AlexNet). Extensive experiments on image classification show that these expressions can be used to study the behaviour of the output mean of the logits for each class, the inter-class confusion and the pixel-level spatial noise sensitivity of the network. Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks. We then propose a special estimator DNN, named mixture of linearizations (MoL), and derive the analytic expressions for its output mean and variance as well. We employ these expressions to train the model to be particularly robust against Gaussian attacks without the need for data augmentation. Upon training this network on a loss consolidated with the derived output probabilistic moments, the network is not only robust under very high variance Gaussian attacks but is also as robust as networks trained with 20-fold data augmentation.
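
    The single-ReLU building block the abstract refers to has closed-form moments under Gaussian input, sketched below and checked against Monte Carlo; the paper's full network-level expressions are not reproduced here.

        import numpy as np
        from scipy.stats import norm

        def relu_moments(mu, sigma):
            # Mean and variance of max(0, X) for X ~ N(mu, sigma^2).
            a = mu / sigma
            mean = mu * norm.cdf(a) + sigma * norm.pdf(a)
            second = (mu**2 + sigma**2) * norm.cdf(a) + mu * sigma * norm.pdf(a)
            return mean, second - mean**2

        mu, sigma = 0.3, 1.2
        m, v = relu_moments(mu, sigma)
        samples = np.maximum(0.0, np.random.default_rng(0).normal(mu, sigma, 1_000_000))
        print(m, v)                           # analytic moments
        print(samples.mean(), samples.var())  # Monte Carlo check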

  2. Analytic Treatment of Deep Neural Networks Under Additive Gaussian Noise

    KAUST Repository

    Alfadly, Modar M.

    2018-04-12

    Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit uncouth, yet-to-be-understood behaviours. One puzzling behaviour is the reaction of DNNs to various noise attacks: it has been shown that there exist small adversarial noise perturbations that can result in a severe degradation of DNN performance. To treat this rigorously, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network with a single rectified linear unit (ReLU) layer subject to general Gaussian input. We experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, especially for popular architectures in the literature (e.g. LeNet and AlexNet). Extensive experiments on image classification show that these expressions can be used to study the behaviour of the output mean of the logits for each class, the inter-class confusion and the pixel-level spatial noise sensitivity of the network. Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks. We then propose a special estimator DNN, named mixture of linearizations (MoL), and derive the analytic expressions for its output mean and variance as well. We employ these expressions to train the model to be particularly robust against Gaussian attacks without the need for data augmentation. Upon training this network on a loss consolidated with the derived output probabilistic moments, the network is not only robust under very high variance Gaussian attacks but is also as robust as networks trained with 20-fold data augmentation.

  3. Approximate analytic theory of the multijunction grill

    International Nuclear Information System (INIS)

    Hurtak, O.; Preinhaelter, J.

    1991-03-01

    An approximate analytic theory of the general multijunction grill is developed. Omitting the evanescent modes in the subsidiary waveguides, both at the junction and at the grill mouth, and neglecting multiple wave reflection, simple formulae are derived for the reflection coefficient, the amplitudes of the incident and reflected waves and the spectral power density. These quantities are expressed through the basic grill parameters (the electric length of the structure and the phase shift between adjacent waveguides) and two sets of reflection coefficients describing wave reflections in the subsidiary waveguides at the junction and at the plasma. Approximate expressions for these coefficients are also given. The results are compared with a numerical solution of two specific examples and are shown to be useful for the optimization and design of multijunction grills. For the JET structure it is shown that, in the case of a dense plasma, many results can be obtained from the simple formulae for a two-waveguide multijunction grill. (author) 12 figs., 12 refs

  4. Analytical modeling of thin film neutron converters and its application to thermal neutron gas detectors

    Energy Technology Data Exchange (ETDEWEB)

    Piscitelli, F; Esch, P Van, E-mail: piscitelli@ill.fr [Institut Laue-Langevin (ILL), 6, Jules Horowitz, 38042 Grenoble (France)

    2013-04-15

    A simple model is explored, mainly analytically, to calculate and understand the pulse-height spectra (PHS) of single- and multi-layer thermal neutron detectors and to help optimize the design in different circumstances. Several theorems are deduced that can help guide the design.
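
    The basic ingredient of such models is the Beer-Lambert capture fraction of a single converter film, sketched below. The macroscopic absorption cross-section Sigma is an assumed order-of-magnitude value for an enriched boron-carbide film at thermal energy, not a figure from the paper, and charged-fragment escape is ignored.

        import numpy as np

        Sigma = 0.02                   # 1/um, hypothetical macroscopic absorption cross-section
        d = np.linspace(0.0, 5.0, 6)   # film thickness, um
        captured = 1.0 - np.exp(-Sigma * d)  # fraction of neutrons captured in the film
        for di, ci in zip(d, captured):
            print(f"d = {di:.1f} um -> captured fraction = {ci:.3f}")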

  5. Effect of primary and secondary parameters on analytical estimation of effective thermal conductivity of two phase materials using unit cell approach

    Science.gov (United States)

    S, Chidambara Raja; P, Karthikeyan; Kumaraswamidhas, L. A.; M, Ramu

    2018-05-01

    Most thermal design systems involve two-phase materials, and the analysis of such systems requires a detailed understanding of the thermal characteristics of the two-phase material. This article develops a geometry-dependent unit-cell-approach model that considers the effects of all primary parameters (conductivity ratio and concentration) and secondary parameters (geometry, contact resistance, natural convection, Knudsen effects and radiation) for the estimation of the effective thermal conductivity of two-phase materials. The analytical equations are formulated on the basis of the isotherm approach for 2-D and 3-D spatially periodic media. The developed models are validated against standard models and are suited to all kinds of operating conditions. The results show substantial improvement over existing models and are in good agreement with experimental data.
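
    A classical baseline that depends only on the primary parameters, against which such unit-cell models are typically compared, is the Maxwell-Eucken relation for a dispersed phase d (volume fraction φ) in a continuous phase c. This is a standard result, not the model derived in the article:

        k_{\mathrm{eff}} = k_c\,
        \frac{2k_c + k_d - 2\phi\,(k_c - k_d)}{2k_c + k_d + \phi\,(k_c - k_d)}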

  6. Accurate and simple wavefunctions for the helium isoelectronic sequence with correct cusp conditions

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, K V [Departamento de Física, Universidad Nacional del Sur and Consejo Nacional de Investigaciones Científicas y Técnicas, 8000 Bahía Blanca, Buenos Aires (Argentina); Gasaneo, G [Departamento de Física, Universidad Nacional del Sur and Consejo Nacional de Investigaciones Científicas y Técnicas, 8000 Bahía Blanca, Buenos Aires (Argentina); Mitnik, D M [Instituto de Astronomía y Física del Espacio, and Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, C C 67, Suc. 28 (C1428EGA) Buenos Aires (Argentina)]

    2007-10-14

    Simple and accurate wavefunctions for the He atom and He-like isoelectronic ions are presented. These functions, the product of hydrogenic one-electron solutions and a fully correlated part, satisfy all the coalescence cusp conditions at the Coulomb singularities. Functions with different numbers of parameters and different degrees of accuracy are discussed. Simple analytic expressions for the wavefunction and the energy, valid for a wide range of nuclear charges, are presented. The wavefunctions are tested, in the case of helium, through the calculations of various cross sections which probe different regions of the configuration space, mostly those close to the two-particle coalescence points.
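
    The cusp conditions referred to are Kato's (written here in Hartree atomic units, with the bar denoting spherical averaging about the coalescence point): at an electron-nucleus coalescence and at the electron-electron coalescence,

        \left.\frac{\partial\bar{\psi}}{\partial r_{iN}}\right|_{r_{iN}=0} = -Z\,\psi(r_{iN}=0),
        \qquad
        \left.\frac{\partial\bar{\psi}}{\partial r_{12}}\right|_{r_{12}=0} = \tfrac{1}{2}\,\psi(r_{12}=0)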

  7. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Science.gov (United States)

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  8. Dual-domain mass-transfer parameters from electrical hysteresis: theory and analytical approach applied to laboratory, synthetic streambed, and groundwater experiments

    Science.gov (United States)

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Harvey, Judson W.; Lane, John W.

    2014-01-01

    Models of dual-domain mass transfer (DDMT) are used to explain anomalous aquifer transport behavior such as the slow release of contamination and solute tracer tailing. Traditional tracer experiments to characterize DDMT are performed at the flow path scale (meters), which inherently incorporates heterogeneous exchange processes; hence, estimated “effective” parameters are sensitive to experimental design (i.e., duration and injection velocity). Recently, electrical geophysical methods have been used to aid in the inference of DDMT parameters because, unlike traditional fluid sampling, electrical methods can directly sense less-mobile solute dynamics and can target specific points along subsurface flow paths. Here we propose an analytical framework for graphical parameter inference based on a simple petrophysical model explaining the hysteretic relation between measurements of bulk and fluid conductivity arising in the presence of DDMT at the local scale. Analysis is graphical and involves visual inspection of hysteresis patterns to (1) determine the size of paired mobile and less-mobile porosities and (2) identify the exchange rate coefficient through simple curve fitting. We demonstrate the approach using laboratory column experimental data, synthetic streambed experimental data, and field tracer-test data. Results from the analytical approach compare favorably with results from calibration of numerical models and also independent measurements of mobile and less-mobile porosity. We show that localized electrical hysteresis patterns resulting from diffusive exchange are independent of injection velocity, indicating that repeatable parameters can be extracted under varied experimental designs, and these parameters represent the true intrinsic properties of specific volumes of porous media of aquifers and hyporheic zones.

  9. Semi-analytic approach to higher-order corrections in simple muonic bound systems: vacuum polarization, self-energy and radiative-recoil

    Energy Technology Data Exchange (ETDEWEB)

    Jentschura, U.D. [Department of Physics, Missouri University of Science and Technology, Rolla MO65409 (United States); Institut fur Theoretische Physik, Universitat Heidelberg, Philosophenweg 16, 69120 Heidelberg (Germany); Wundt, B.J. [Department of Physics, Missouri University of Science and Technology, Rolla MO65409 (United States)

    2011-12-15

    The current discrepancy of theory and experiment observed recently in muonic hydrogen necessitates a reinvestigation of all corrections that contribute to the Lamb shift in muonic hydrogen (μH), muonic deuterium (μD), the muonic ³He ion (denoted here as μ³He⁺), as well as in the muonic ⁴He ion (μ⁴He⁺). Here, we choose a semi-analytic approach and evaluate a number of higher-order corrections to vacuum polarization (VP) semi-analytically, while the remaining integrals over the spectral density of VP are performed numerically. We obtain semi-analytic results for the second-order correction, and for the relativistic correction to VP. The self-energy correction to VP is calculated, including the perturbations of the Bethe logarithms by vacuum polarization. Sub-leading logarithmic terms in the radiative-recoil correction to the 2S-2P Lamb shift of order α(Zα)⁵μ³ln(Zα)/(m_μ m_N), where α is the fine structure constant, are also obtained. All calculations are nonperturbative in the mass ratio of orbiting particle and nucleus. (authors)

  10. Estimation of fatigue characteristics of asphaltic mixes using simple tests

    NARCIS (Netherlands)

    Medani, T.O.; Molenaar, A.A.A.

    2000-01-01

    A simplified procedure for the estimation of the fatigue characteristics of asphaltic mixes is presented. The procedure requires the determination of the so-called master curve (i.e. the relationship between the mix stiffness, the loading time and the temperature), the asphalt properties and the mix

  11. An overview of analytical activities of control laboratory in NFC

    International Nuclear Information System (INIS)

    Balaji Rao, Y.; Subba Rao, Y.; Saibaba, N.

    2015-01-01

    As per the mandate of the Department of Atomic Energy (DAE), Nuclear Fuel Complex (NFC) was established in 1971 for manufacturing fuel sub-assemblies on an industrial scale for both the PHWRs and BWRs operating in India. Control Laboratory (C. Lab) was envisaged as a centralized analytical facility to achieve the objectives of NFC, on similar lines to its predecessor, the Analytical Chemistry Division at BARC. With the highest-ever production of 1200 MT of PHWR fuel and 16 lakh PHWR fuel tubes achieved during the production year 2014-15, and with a further increase in fuel demand, NFC faces a demanding situation in the coming year; accordingly, C. Lab has also geared up to meet the challenging demands of all the production plants. The average annual analytical load comes to around 5 lakh estimations, and to manage such a massive analytical load a proper synergy between good chemistry, process conditions and analytical methods is a necessity; the laboratory is able to meet this important requirement consistently

  12. No Impact of the Analytical Method Used for Determining Cystatin C on Estimating Glomerular Filtration Rate in Children.

    Science.gov (United States)

    Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T

    2017-01-01

    Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient

  13. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  14. Method of analytic continuation by duality in QCD: Beyond QCD sum rules

    International Nuclear Information System (INIS)

    Kremer, M.; Nasrallah, N.F.; Papadopoulos, N.A.; Schilcher, K.

    1986-01-01

    We present the method of analytic continuation by duality which allows the approximate continuation of QCD amplitudes to small values of the momentum variables where direct perturbative calculations are not possible. This allows a substantial extension of the domain of applications of hadronic QCD phenomenology. The method is illustrated by a simple example which shows its essential features

  15. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  16. Paper-based microfluidic devices on the crime scene: A simple tool for rapid estimation of post-mortem interval using vitreous humour.

    Science.gov (United States)

    Garcia, Paulo T; Gabriel, Ellen F M; Pessôa, Gustavo S; Santos Júnior, Júlio C; Mollo Filho, Pedro C; Guidugli, Ruggero B F; Höehr, Nelci F; Arruda, Marco A Z; Coltro, Wendell K T

    2017-06-29

    This paper describes for the first time the use of paper-based analytical devices at crime scenes to estimate the post-mortem interval (PMI), based on the colorimetric determination of Fe2+ in vitreous humour (VH) samples. Experimental parameters such as the paper substrate, the microzone diameter, the sample volume and the 1,10-phenanthroline (o-phen) concentration were optimised in order to ensure the best analytical performance. Grade 1 CHR paper, a microzone diameter of 5 mm, a sample volume of 4 μL and an o-phen concentration of 0.05 mol/L were chosen as the optimum experimental conditions. A good linear response was observed for Fe2+ concentrations between 2 and 10 mg/L, and the calculated values for the limit of detection (LOD) and limit of quantification (LOQ) were 0.3 and 0.9 mg/L, respectively. The specificity of the Fe2+ colorimetric response was tested in the presence of the main interfering agents and no significant differences were found. After selecting the ideal experimental conditions, four VH samples were investigated on paper-based devices. The concentration levels of Fe2+ achieved for samples #1, #2, #3 and #4 were 0.5 ± 0.1, 0.7 ± 0.1, 1.2 ± 0.1 and 15.1 ± 0.1 mg/L, respectively. These values are in good agreement with those calculated by ICP-MS. It is important to note that the concentration levels measured using both techniques are proportional to the PMI. The limitation of the proposed analytical device is that it is restricted to a PMI greater than 1 day. The capability of providing an immediate answer about the PMI at the crime scene without any sophisticated instrumentation is a great achievement in modern instrumentation for forensic chemistry. The strategy proposed in this study could be helpful in many criminal investigations. Copyright © 2017 Elsevier B.V. All rights reserved.
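
    A sketch of the calibration arithmetic behind such colorimetric assays: a straight-line fit of signal versus Fe2+ standards, with LOD and LOQ from the common 3.3·s/slope and 10·s/slope conventions. All numbers are illustrative, not the paper's data.

        import numpy as np

        conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])        # mg/L standards
        signal = np.array([0.11, 0.20, 0.31, 0.40, 0.52])  # e.g. mean colour intensity

        slope, intercept = np.polyfit(conc, signal, 1)
        resid = signal - (slope * conc + intercept)
        s_y = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # residual standard deviation

        lod = 3.3 * s_y / slope
        loq = 10.0 * s_y / slope
        print(f"LOD = {lod:.2f} mg/L, LOQ = {loq:.2f} mg/L")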

  17. A simple method for estimating the length density of convoluted tubular systems.

    Science.gov (United States)

    Ferraz de Carvalho, Cláudio A; de Campos Boldrini, Silvia; Nishimaru, Flávio; Liberti, Edson A

    2008-10-01

    We present a new method for estimating the length density (Lv) of convoluted tubular structures exhibiting an isotropic distribution. Although the traditional equation Lv=2Q/A is used, the parameter Q is obtained by considering the collective perimeters of tubular sections. This measurement is converted to a standard model of the structure, assuming that all cross-sections are approximately circular and have an average perimeter similar to that of actual circular cross-sections observed in the same material. The accuracy of this method was tested in eight experiments using hollow macaroni bent into helical shapes. After measuring the length of the macaroni segments, they were boiled and randomly packed into cylindrical volumes along with an aqueous suspension of gelatin and India ink. The solidified blocks were cut into slices 1.0 cm thick and 33.2 cm2 in area (A). The total perimeter of the macaroni cross-sections so revealed was stereologically estimated using a test system of straight parallel lines. Given Lv and the reference volume, the total length of macaroni in each section could be estimated. Additional corrections were made for the changes induced by boiling, and the off-axis position of the thread used to measure length. No statistical difference was observed between the corrected estimated values and the actual lengths. This technique is useful for estimating the length of capillaries, renal tubules, and seminiferous tubules.
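
    A sketch of the perimeter-based count described above: the collective perimeter of tubular profiles on a section is converted into an equivalent number Q of circular cross-sections, then Lv = 2Q/A, and the total length follows from the reference volume. All numbers except the section area are illustrative.

        # Perimeter-based length-density estimate (illustrative values).
        total_perimeter = 480.0  # mm, summed profile perimeters on one section
        mean_circ_perim = 12.0   # mm, mean perimeter of truly circular profiles
        area = 33.2 * 100.0      # mm^2, section area (33.2 cm^2, as in the test)
        thickness = 10.0         # mm, slice thickness (1.0 cm)

        Q = total_perimeter / mean_circ_perim  # equivalent profile count
        Lv = 2.0 * Q / area                    # length density, mm per mm^3
        total_length = Lv * area * thickness   # mm of tubule in the slice
        print(f"Q = {Q:.1f}, Lv = {Lv:.4f} mm/mm^3, length = {total_length:.0f} mm")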

  18. Separate patient serum sodium medians from males and females provide independent information on analytical bias

    DEFF Research Database (Denmark)

    Hansen, Steen Ingemann; Petersen, Per Hyltoft; Lund, Flemming

    2017-01-01

    BACKGROUND: During monitoring of monthly medians of results from patients undertaken to assess analytical stability in routine laboratory performance, the medians for serum sodium for male and female patients were found to be significantly related. METHODS: Daily, weekly and monthly patient medians...... all instruments. CONCLUSIONS: The tight relationship between the gender medians for serum sodium is only possible when raw laboratory data are used for calculation. The two patient medians can be used to confirm both and are useful as independent estimates of analytical bias during constant...... calibration periods. In contrast to the gender combined median, the estimate of analytical bias can be confirmed further by calculation of the ratios of medians for males and females....

  19. Analytical Model for Hook Anchor Pull-Out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, Jens Peder; Adamsen, Peter

    1995-01-01

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...... allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results, it predicts size effects and the model parameters found by calibration of the model on experimental data are in good agreement with what should...

  20. Analytical Model for Hook Anchor Pull-out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, J. P.; Adamsen, P.

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...... allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results, it predicts size effects and the model parameters found by calibration of the model on experimental data are in good agreement with what should...