WorldWideScience

Sample records for normal curve equivalent

  1. A note on families of fragility curves

    International Nuclear Information System (INIS)

    Kaplan, S.; Bier, V.M.; Bley, D.C.

    1989-01-01

    In the quantitative assessment of seismic risk, uncertainty in the fragility of a structural component is usually expressed by putting forth a family of fragility curves, with probability serving as the parameter of the family. Commonly, a lognormal shape is used both for the individual curves and for the expression of uncertainty over the family. A so-called composite single curve can also be drawn and used for purposes of approximation. This composite curve is often regarded as equivalent to the mean curve of the family. The equality seems intuitively reasonable but, according to the authors, has never been proven. This paper proves the equivalence hypothesis mathematically. Moreover, the authors show that this equivalence hypothesis between fragility curves is itself equivalent to an identity property of the standard normal probability curve. Thus, in the course of proving the fragility curve hypothesis, the authors have also proved a rather obscure, but interesting and perhaps previously unrecognized, property of the standard normal curve.
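
    The claimed equivalence can be checked numerically: averaging a lognormal fragility family over epistemic uncertainty in the median reproduces a single lognormal curve with the combined log-standard deviation. The sketch below uses illustrative parameter values, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Illustrative (not from the paper) lognormal fragility parameters:
# median capacity Am, aleatory log-std beta_R, epistemic log-std beta_U.
Am, beta_R, beta_U = 0.6, 0.3, 0.4

a = np.linspace(0.05, 2.0, 40)  # ground-motion levels (g)

# Mean curve of the family: average Phi((ln a - ln A)/beta_R) over the
# epistemic uncertainty ln A ~ Normal(ln Am, beta_U^2), using
# Gauss-Hermite quadrature for the expectation.
z, w = np.polynomial.hermite_e.hermegauss(64)
w = w / np.sqrt(2.0 * np.pi)                 # weights for a standard normal
lnA = np.log(Am) + beta_U * z                # quadrature nodes for ln A
mean_curve = norm.cdf((np.log(a)[:, None] - lnA) / beta_R) @ w

# Composite curve: one lognormal with combined log-std sqrt(beta_R^2 + beta_U^2).
composite = norm.cdf(np.log(a / Am) / np.hypot(beta_R, beta_U))

print(np.max(np.abs(mean_curve - composite)))  # near machine precision
```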

  2. On the projective normality of Artin-Schreier curves

    Directory of Open Access Journals (Sweden)

    Alberto Ravagnani

    2013-11-01

    In this paper we study the projective normality of certain Artin-Schreier curves Y_f defined over a field F of characteristic p by the equations y^q + y = f(x), q being a power of p and f in F[x] a polynomial in x of degree m, with (m, p) = 1. Many Y_f curves are singular and so, to be precise, here we study the projective normality of appropriate projective models of their normalization.

  3. DUAL TIMELIKE NORMAL AND DUAL TIMELIKE SPHERICAL CURVES IN DUAL MINKOWSKI SPACE

    OpenAIRE

    ÖNDER, Mehmet

    2009-01-01

    In this paper, we give characterizations of dual timelike normal and dual timelike spherical curves in the dual Minkowski 3-space and we show that every dual timelike normal curve is also a dual timelike spherical curve. Keywords: Normal curves, Dual Minkowski 3-Space, Dual Timelike curves. Mathematics Subject Classifications (2000): 53C50, 53C40.

  4. According to Jim: The Flawed Normal Curve of Intelligence

    Science.gov (United States)

    Gallagher, James J.

    2008-01-01

    In this article, the author discusses the normal curve of intelligence, which he regards as flawed, and contends that wrong conclusions have been drawn from this spurious normal curve. An example is that of racial and ethnic differences, wherein some authors maintain that some ethnic and racial groups are clearly superior to others based on…

  5. Computerized tomography and head growth curve infantile macrocephaly with normal psychomotor development

    International Nuclear Information System (INIS)

    Eda, Isematsu; Kitahara, Tadashi; Takashima, Sachio; Takeshita, Kenzo

    1982-01-01

    Macrocephaly was defined as a head measuring larger than the 98th percentile. We evaluated CT findings and head growth curves in 25 infants with large heads. Ten (40%) of the 25 infants were developmentally and neurologically normal, and five (20%) were mentally retarded. The other 10 infants (40%) had hydrocephalus (4 cases), malformation syndrome (3 cases), brain tumor (1 case), metabolic disorder (1 case) or degenerative disorder (1 case). Their head growth curves were classified into three types: Type (I), head growth excessive, rising beyond 2 SDs above normal; Type (II), head growth gradually approaching 2 SDs above normal; Type (III), head growth parallel to 2 SDs above normal. The 10 infants with macrocephaly and normal psychomotor development, all male, were studied clinically and radiologically in detail. Their CT pictures showed normal or variously abnormal findings: ventricular dilatation, wide frontal and temporal subdural spaces, wide interhemispheric fissures, wide cerebral sulci, and large sylvian fissures. In two of these infants, whose scans became normal on repeated CT examination, the findings resembled benign subdural collection; in one, the findings indicated external hydrocephalus. Head growth curves were obtained for 8 of these infants: six showed type (II) and two showed type (III); the remaining 2 could not be followed up. We consider that CT findings in infants with macrocephaly and normal psychomotor development may be normal or variously abnormal (ventricular dilatation, benign subdural collection, external hydrocephalus) and that their head growth curves are, at least, not excessive. Infants with mental retardation showed CT findings and head growth curves similar to those of infants with normal psychomotor development; it was difficult to distinguish normal from mentally retarded infants by either CT findings or head growth curves. (author)

  6. Incorporating Measurement Non-Equivalence in a Cross-Study Latent Growth Curve Analysis.

    Science.gov (United States)

    Flora, David B; Curran, Patrick J; Hussong, Andrea M; Edwards, Michael C

    2008-10-01

    A large literature emphasizes the importance of testing for measurement equivalence in scales that may be used as observed variables in structural equation modeling applications. When the same construct is measured across more than one developmental period, as in a longitudinal study, it can be especially critical to establish measurement equivalence, or invariance, across the developmental periods. Similarly, when data from more than one study are combined into a single analysis, it is again important to assess measurement equivalence across the data sources. Yet, how to incorporate non-equivalence when it is discovered is not well described for applied researchers. Here, we present an item response theory approach that can be used to create scale scores from measures while explicitly accounting for non-equivalence. We demonstrate these methods in the context of a latent curve analysis in which data from two separate studies are combined to create a single longitudinal model spanning several developmental periods.
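
    The scoring idea can be caricatured with a small sketch, assuming a 2PL IRT model: the same response pattern is scored under group-specific item parameters, so an item flagged as non-equivalent contributes differently per study. The item parameters, response pattern, and the `eap_score` helper are all hypothetical, not the authors' procedure.

```python
import numpy as np

def eap_score(responses, a, b, grid=np.linspace(-4, 4, 161)):
    """Expected a posteriori theta under a 2PL model with a N(0,1) prior."""
    # P(correct | theta) for each item at each grid point.
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
    like = np.prod(np.where(np.array(responses)[:, None] == 1, p, 1 - p), axis=0)
    post = like * np.exp(-grid ** 2 / 2)          # unnormalized posterior
    return np.sum(grid * post) / np.sum(post)

resp = [1, 1, 0, 1]                               # one examinee's responses
a_g1 = np.array([1.2, 0.8, 1.5, 1.0])             # discriminations (group 1)
b_g1 = np.array([-0.5, 0.0, 0.5, 1.0])            # difficulties (group 1)
b_g2 = b_g1.copy()
b_g2[2] = 0.1   # item 3 is easier in group 2: a non-equivalent item

# Failing an easier item is more diagnostic of low ability, so the
# group-2 scoring of the same pattern yields a lower theta estimate.
print(eap_score(resp, a_g1, b_g1), eap_score(resp, a_g1, b_g2))
```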

  7. Sketching Curves for Normal Distributions--Geometric Connections

    Science.gov (United States)

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…
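
    The geometric landmarks that anchor an accurate sketch can be stated and checked directly; the facts below are standard properties of the normal density (peak height 1/(sigma*sqrt(2*pi)), inflection points at mu +/- sigma), not claims specific to the article.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 10.0, 2.0

# Landmark 1: the peak height is 1/(sigma*sqrt(2*pi)).
peak = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
assert np.isclose(norm.pdf(mu, mu, sigma), peak)

# Landmark 2: the inflection points sit exactly at mu +/- sigma.
# Numerical check: find where the second derivative changes sign.
x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 100001)
d2 = np.gradient(np.gradient(norm.pdf(x, mu, sigma), x), x)
inflections = x[:-1][np.sign(d2[:-1]) != np.sign(d2[1:])]
print(inflections)  # close to [mu - sigma, mu + sigma] = [8, 12]
```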

  8. Principal normal indicatrices of closed space curves

    DEFF Research Database (Denmark)

    Røgen, Peter

    1999-01-01

    A theorem due to J. Weiner, which is also proven by B. Solomon, implies that a principal normal indicatrix of a closed space curve with nonvanishing curvature has integrated geodesic curvature zero and contains no subarc with integrated geodesic curvature pi. We prove that the inverse problem alw...

  9. The Hartshorne-Rao module of curves on rational normal scrolls

    Directory of Open Access Journals (Sweden)

    Roberta Di Gennaro

    2000-09-01

    We study the Hartshorne-Rao module of curves lying on a rational normal scroll S_e of invariant e ≥ 0 in P^{e+3}. We calculate the Rao function and characterize the aCM curves on S_e. Finally, we give an algorithm to check whether a curve is aCM or not and, in the second case, to calculate the Rao function.

  10. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfactory quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.
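
    The normalization idea can be caricatured as a linear model in which nuisance terms (total spotted protein, spatial bias) are estimated and subtracted from the log-signal. This is a toy sketch of the general approach on synthetic data, not the NormaCurve implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200                                    # number of spots
row = rng.uniform(0, 1, n)                 # spot coordinates on the array
col = rng.uniform(0, 1, n)
expr = rng.normal(0.0, 1.0, n)             # true log-expression (unknown target)
total = rng.normal(2.0, 0.3, n)            # log total protein per spot
spatial = 0.8 * row - 0.5 * col            # smooth spatial bias
signal = expr + total + spatial + rng.normal(0, 0.05, n)

# Regress the observed log-signal on the nuisance covariates (assumed
# measured from control arrays) and keep the residual as the
# normalized expression estimate.
X = np.column_stack([np.ones(n), total, row, col])
coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
normalized = signal - X @ coef

corr = np.corrcoef(normalized, expr - expr.mean())[0, 1]
print(corr)  # close to 1: nuisance variation removed
```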

  11. Modelling growth curves of Nigerian indigenous normal feather ...

    African Journals Online (AJOL)

    This study was conducted to predict the growth curve parameters using Bayesian Gompertz and logistic models and also to compare the two growth function in describing the body weight changes across age in Nigerian indigenous normal feather chicken. Each chick was wing-tagged at day old and body weights were ...

  12. A measurable Lawson criterion and hydro-equivalent curves for inertial confinement fusion

    International Nuclear Information System (INIS)

    Zhou, C. D.; Betti, R.

    2008-01-01

    It is shown that the ignition condition (Lawson criterion) for inertial confinement fusion (ICF) can be cast in a form dependent on the only two parameters of the compressed fuel assembly that can be measured with existing techniques: the hot-spot ion temperature (T_i^h) and the total areal density (rhoR_tot), which includes the cold shell contribution. A marginal ignition curve is derived in the (rhoR_tot, T_i^h) plane and current implosion experiments are compared with the ignition curve. On this plane, hydrodynamic equivalent curves show how a given implosion would perform with respect to the ignition condition when scaled up in the laser-driver energy. For hot-spot temperatures of a few keV, the ignition condition takes the approximate form <T_i^h>_n^2.6 · <rhoR_tot>_n > 50 keV^2.6 · g/cm^2, where <rhoR_tot>_n and <T_i^h>_n are the burn-averaged total areal density and hot-spot ion temperature, respectively. Both quantities are calculated without accounting for the alpha-particle energy deposition. Such a criterion can be used to determine how surrogate D_2 and subignited DT target implosions perform with respect to the one-dimensional ignition threshold.
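
    Assuming a measurable criterion of the form <T_i^h>_n^2.6 · <rhoR_tot>_n > 50 keV^2.6 · g/cm^2, a minimal check of whether a measured implosion sits above or below the marginal ignition curve might look like this; the sample values are illustrative, not experimental data.

```python
def ignition_parameter(T_keV: float, rhoR_g_cm2: float) -> float:
    """Return chi = T^2.6 * rhoR (keV^2.6 * g/cm^2); ignition requires chi > 50."""
    return T_keV ** 2.6 * rhoR_g_cm2

def ignites(T_keV: float, rhoR_g_cm2: float) -> bool:
    return ignition_parameter(T_keV, rhoR_g_cm2) > 50.0

print(ignites(3.0, 1.2))   # sub-ignited: 3^2.6 * 1.2 ~ 21
print(ignites(8.0, 1.5))   # ignited: 8^2.6 * 1.5 ~ 335
```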

  13. Eyewitness identification: Bayesian information gain, base-rate effect equivalency curves, and reasonable suspicion.

    Science.gov (United States)

    Wells, Gary L; Yang, Yueran; Smalarz, Laura

    2015-04-01

    We provide a novel Bayesian treatment of the eyewitness identification problem as it relates to various system variables, such as instruction effects, lineup presentation format, lineup-filler similarity, lineup administrator influence, and show-ups versus lineups. We describe why eyewitness identification is a natural Bayesian problem and how numerous important observations require careful consideration of base rates. Moreover, we argue that the base rate in eyewitness identification should be construed as a system variable (under the control of the justice system). We then use prior-by-posterior curves and information-gain curves to examine data obtained from a large number of published experiments. Next, we show how information-gain curves are moderated by system variables and by witness confidence and we note how information-gain curves reveal that lineups are consistently more proficient at incriminating the guilty than they are at exonerating the innocent. We then introduce a new type of analysis that we developed called base rate effect-equivalency (BREE) curves. BREE curves display how much change in the base rate is required to match the impact of any given system variable. The results indicate that even relatively modest changes to the base rate can have more impact on the reliability of eyewitness identification evidence than do the traditional system variables that have received so much attention in the literature. We note how this Bayesian analysis of eyewitness identification has implications for the question of whether there ought to be a reasonable-suspicion criterion for placing a person into the jeopardy of an identification procedure. (c) 2015 APA, all rights reserved.
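
    The core Bayesian step can be sketched in a few lines: the posterior probability that the culprit is present, given a suspect identification, follows from the base rate and the procedure's hit and false-identification rates. The rates below are invented for illustration, not the authors' data.

```python
def posterior_guilt(base_rate: float, hit_rate: float, false_id_rate: float) -> float:
    """P(culprit present | suspect identified), by Bayes' rule."""
    p_id_guilty = base_rate * hit_rate              # culprit present and identified
    p_id_innocent = (1.0 - base_rate) * false_id_rate  # absent, innocent suspect picked
    return p_id_guilty / (p_id_guilty + p_id_innocent)

# Information gain = movement from prior (base rate) to posterior:
for br in (0.3, 0.5, 0.8):
    post = posterior_guilt(br, hit_rate=0.6, false_id_rate=0.1)
    print(f"base rate {br:.1f} -> posterior {post:.3f}, gain {post - br:+.3f}")
```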

  14. Universal Survival Curve and Single Fraction Equivalent Dose: Useful Tools in Understanding Potency of Ablative Radiotherapy

    International Nuclear Information System (INIS)

    Park, Clint; Papiez, Lech; Zhang, Shichuan; Story, Michael; Timmerman, Robert D.

    2008-01-01

    Purpose: Overprediction of the potency and toxicity of high-dose ablative radiotherapy such as stereotactic body radiotherapy (SBRT) by the linear quadratic (LQ) model has led many clinicians to hesitate to adopt this efficacious and well-tolerated therapeutic option. The aim of this study was to offer an alternative method of analyzing the effect of SBRT by constructing a universal survival curve (USC) that provides superior approximation of the experimentally measured survival curves in the ablative, high-dose range without losing the strengths of the LQ model around the shoulder. Methods and Materials: The USC was constructed by hybridizing two classic radiobiologic models: the LQ model and the multitarget model. We have assumed that the LQ model gives a good description for conventionally fractionated radiotherapy (CFRT) for doses up to the shoulder. For ablative doses beyond the shoulder, the survival curve is better described as a straight line, as predicted by the multitarget model. The USC smoothly interpolates from the parabola predicted by the LQ model to the terminal asymptote of the multitarget model in the high-dose region. From the USC, we derived two equivalence functions, the biologically effective dose and the single fraction equivalent dose, for both CFRT and SBRT. Results: The validity of the USC was tested by using previously published parameters of the LQ and multitarget models for non-small-cell lung cancer cell lines. A comparison of the goodness-of-fit of the LQ and USC models was made to a high-dose survival curve of the H460 non-small-cell lung cancer cell line. Conclusion: The USC can be used to compare the dose fractionation schemes of both CFRT and SBRT. The USC provides an empirically and a clinically well-justified rationale for SBRT while preserving the strengths of the LQ model for CFRT.
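
    A hedged sketch of the hybrid construction described above: the LQ survival expression below a transition dose d_T, the straight multitarget asymptote beyond it, joined so that both the value and the slope of ln(survival) match at d_T. The radiosensitivity parameters are assumed for illustration, not the paper's fitted H460 values.

```python
import numpy as np

alpha, beta, D0 = 0.3, 0.03, 1.25   # Gy^-1, Gy^-2, Gy (illustrative)

# Slope matching: -(alpha + 2*beta*d_T) = -1/D0  =>  d_T as below.
d_T = (1.0 / D0 - alpha) / (2.0 * beta)
# Value matching fixes the asymptote intercept ln(n) (extrapolation number).
ln_n = beta * d_T ** 2

def log_survival(d):
    """ln S(d): LQ parabola up to d_T, multitarget straight line beyond."""
    d = np.asarray(d, dtype=float)
    lq = -(alpha * d + beta * d ** 2)     # shoulder region (d <= d_T)
    line = -d / D0 + ln_n                 # ablative high-dose asymptote
    return np.where(d <= d_T, lq, line)

print(d_T)  # transition dose in Gy
print(float(log_survival(d_T + 1e-6) - log_survival(d_T - 1e-6)))  # ~ 0
```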

  15. Analysis of normalized-characteristic curves and determination of the granulometric state of dissolved uranium dioxides

    International Nuclear Information System (INIS)

    Melichar, F.; Neumann, L.

    1977-01-01

    Methods are presented for the analysis of normalized-characteristic curves, which make it possible to determine the granulometric composition of a dissolved polydispersion - the cumulative mass distribution of particles - as a function of the relative particle size. If the size of the largest particle in the dissolved polydispersion is known, these methods allow the determination of the dependence of the cumulative mass ratios of particles on their absolute sizes. In the inverse method of the geometrical model for determining the granulometric composition of a dissolved polydispersion, the polydispersion is represented by a finite number of monodispersions. An accurate analysis of the normalized-characteristic equations leads to the Akselrud dissolution model; unlike the other two methods, it allows the determination of the granulometric composition for an arbitrary number of particle sizes. The method of the granulometric atlas estimates the granulometric composition of a dissolved polydispersion by comparing a normalized-characteristic curve for an unknown granulometric composition with an atlas of normalized-characteristic curves for selected granulometric spectra of polydispersions. (author)

  16. On the possible "normalization" of experimental curves of 230Th vertical distribution in abyssal oceanic sediments

    International Nuclear Information System (INIS)

    Kuznetsov, Yu.V.; Al'terman, Eh.I.; Lisitsyn, A.P. (AN SSSR, Moscow. Inst. Okeanologii)

    1981-01-01

    The possibilities of the method of normalizing experimental ionium (230Th) curves, with reference to dating abyssal sediments and establishing their accumulation rates, are studied. The method is based on the correlation between the extrema of the ionium curves and variations of Fe, Mn, organic C, and P contents in abyssal oceanic sediments. It has been found that the method can be successfully applied to correct 230Th vertical distribution data obtained by low-background γ-spectrometry. The method gives the most reliable results when the vertical distribution curves of the elements acting as concentrators of 230Th vary in concert with one another. Normalization of the experimental ionium curves in many cases makes it possible to establish the age stratification of the sediment.

  17. Optimization of equivalent uniform dose using the L-curve criterion

    International Nuclear Information System (INIS)

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R

    2007-01-01

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning
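
    The L-curve criterion itself is generic and easy to demonstrate on a small Tikhonov-regularized least-squares problem: sweep the regularization weight, trace the log-log curve of residual norm versus smoothness norm, and pick the corner of maximum curvature. This is a sketch of the general technique, not the authors' EUD objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
A = np.tril(np.ones((n, n))) / n            # ill-conditioned forward operator
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-3 * rng.standard_normal(n)
L = np.diff(np.eye(n), axis=0)              # first-difference smoothing operator

lams = np.logspace(-8, 1, 60)
res, smo = [], []
for lam in lams:
    # Regularized normal equations: (A^T A + lam L^T L) x = A^T b
    x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    res.append(np.log(np.linalg.norm(A @ x - b)))   # data misfit
    smo.append(np.log(np.linalg.norm(L @ x)))       # roughness
res, smo = np.array(res), np.array(smo)

# Corner = point of maximum curvature of the parametric curve (res, smo).
dr, ds = np.gradient(res), np.gradient(smo)
d2r, d2s = np.gradient(dr), np.gradient(ds)
curvature = np.abs(dr * d2s - ds * d2r) / ((dr ** 2 + ds ** 2) ** 1.5 + 1e-30)
lam_corner = lams[np.argmax(curvature)]
print(lam_corner)
```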

  18. Optimization of equivalent uniform dose using the L-curve criterion

    Energy Technology Data Exchange (ETDEWEB)

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)

    2007-09-21

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.

  19. Optimization of equivalent uniform dose using the L-curve criterion.

    Science.gov (United States)

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R

    2007-10-07

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.

  20. Contractibility of curves

    Directory of Open Access Journals (Sweden)

    Janusz Charatonik

    1991-11-01

    Results concerning contractibility of curves (equivalently, of dendroids) are collected and discussed in the paper. Interrelations between various conditions which are either sufficient or necessary for a curve to be contractible are studied.

  1. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility : A Model Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

    First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. We then deduce an expansion of the implied normal volatility in terms of the time-value of a European call option, and formulate an equivalence between the implied normal volatility and the lognormal implied volatility for any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.
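
    The normal/lognormal equivalence can be illustrated numerically in a model-free way by matching prices: given a Black (lognormal) implied volatility, solve for the Bachelier (normal) volatility that reproduces the same call price. All market inputs below are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def black_call(F, K, T, sigma):
    """Undiscounted Black (lognormal) call price on forward F."""
    d1 = (np.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return F * norm.cdf(d1) - K * norm.cdf(d2)

def bachelier_call(F, K, T, sigma_n):
    """Undiscounted Bachelier (normal) call price on forward F."""
    d = (F - K) / (sigma_n * np.sqrt(T))
    return (F - K) * norm.cdf(d) + sigma_n * np.sqrt(T) * norm.pdf(d)

F, K, T, sigma_black = 100.0, 105.0, 1.0, 0.20
target = black_call(F, K, T, sigma_black)
sigma_normal = brentq(lambda s: bachelier_call(F, K, T, s) - target, 1e-6, 1e3)
print(sigma_normal)  # roughly sigma_black * sqrt(F*K) for near-ATM strikes
```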

  2. Equivalent dose determination in foraminifera: analytical description of the CO_2^- signal dose-response curve

    International Nuclear Information System (INIS)

    Hoffmann, D.; Woda, C.; Mangini, A.

    2003-01-01

    The dose response of the CO_2^- signal (g = 2.0006) in foraminifera with ages between 19 and 300 ka is investigated. The sum of two exponential saturation functions adequately describes the dose-response curve up to an additional dose of 8000 Gy. It yields excellent dating results but requires artificial doses of at least 5000 Gy. For small additional doses of about 500 Gy, a single exponential saturation function can be used to calculate a reliable equivalent dose D_E, although it does not describe the dose response at higher doses. The CO_2^- dose response indicates that the signal has two components, one of which is less stable than the other.
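
    The additive-dose logic can be sketched on synthetic data: fit a single exponential saturation function to signal intensity versus added dose and read the equivalent dose D_E off the negative-dose intercept where the fitted curve reaches zero. All parameters are invented for illustration, not the foraminifera measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(D, I_max, D0, D_E):
    """Single exponential saturation; the signal vanishes at D = -D_E."""
    return I_max * (1.0 - np.exp(-(D + D_E) / D0))

rng = np.random.default_rng(1)
true = dict(I_max=10.0, D0=3000.0, D_E=450.0)      # Gy, illustrative
doses = np.linspace(0.0, 5000.0, 21)               # additive lab doses
signal = saturation(doses, **true) + 0.02 * rng.standard_normal(doses.size)

popt, _ = curve_fit(saturation, doses, signal, p0=(8.0, 2000.0, 300.0))
print(popt[2])  # recovered equivalent dose, close to 450 Gy
```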

  3. High-fructose corn syrup and sucrose have equivalent effects on energy-regulating hormones at normal human consumption levels.

    Science.gov (United States)

    Yu, Zhiping; Lowndes, Joshua; Rippe, James

    2013-12-01

    Intake of high-fructose corn syrup (HFCS) has been suggested to contribute to the increased prevalence of obesity, whereas a number of studies and organizations have reported metabolic equivalence between HFCS and sucrose. We hypothesized that HFCS and sucrose would have similar effects on energy-regulating hormones and metabolic substrates at normal levels of human consumption and that these values would not change over a 10-week, free-living period at these consumption levels. This was a randomized, prospective, double-blind, parallel group study in which 138 adult men and women consumed 10 weeks of low-fat milk sweetened with either HFCS or sucrose at levels of the 25th, 50th, and 90th percentile population consumption of fructose (the equivalent of 40, 90, or 150 g of sugar per day in a 2000-kcal diet). Before and after the 10-week intervention, 24-hour blood samples were collected. The area under the curve (AUC) for glucose, insulin, leptin, active ghrelin, triglyceride, and uric acid was measured. There were no group differences at baseline or posttesting for all outcomes (interaction, P > .05). The AUC response of glucose, active ghrelin, and uric acid did not change between baseline and posttesting (P > .05), whereas the AUC response of insulin (P < .05), leptin (P < .001), and triglyceride (P < .01) increased over the course of the intervention when the 6 groups were averaged. We conclude that there are no differences in the metabolic effects of HFCS and sucrose when compared at low, medium, and high levels of consumption. © 2013 Elsevier Inc. All rights reserved.

  4. Asymptotic and numerical prediction of current-voltage curves for an organic bilayer solar cell under varying illumination and comparison to the Shockley equivalent circuit

    KAUST Repository

    Foster, J. M.

    2013-01-01

    In this study, a drift-diffusion model is used to derive the current-voltage curves of an organic bilayer solar cell consisting of slabs of electron acceptor and electron donor materials sandwiched together between current collectors. A simplified version of the standard drift-diffusion equations is employed in which minority carrier densities are neglected. This is justified by the large disparities in electron affinity and ionisation potential between the two materials. The resulting equations are solved (via both asymptotic and numerical techniques) in conjunction with (i) Ohmic boundary conditions on the contacts and (ii) an internal boundary condition, imposed on the interface between the two materials, that accounts for charge pair generation (resulting from the dissociation of excitons) and charge pair recombination. Current-voltage curves are calculated from the solution to this model as a function of the strength of the solar charge generation. In the physically relevant power generating regime, it is shown that these current-voltage curves are well-approximated by a Shockley equivalent circuit model. Furthermore, since our drift-diffusion model is predictive, it can be used to directly calculate equivalent circuit parameters from the material parameters of the device. © 2013 AIP Publishing LLC.
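
    For reference, the Shockley equivalent-circuit model that the drift-diffusion results are compared against can be evaluated directly; its implicit diode equation is solved numerically below with illustrative (assumed) parameter values.

```python
import numpy as np
from scipy.optimize import brentq

q_over_kT = 1.0 / 0.0257       # 1/V at room temperature
J0, Jph = 1e-12, 20e-3         # dark saturation and photo-current densities (A/cm^2)
n_id, Rs, Rsh = 1.5, 2.0, 5e3  # ideality factor, series/shunt resistance (ohm cm^2)

def current(V):
    """Solve the implicit Shockley equation
    J = Jph - J0*(exp(q(V + J*Rs)/(n*kT)) - 1) - (V + J*Rs)/Rsh for J."""
    f = lambda J: (Jph - J0 * (np.exp(q_over_kT * (V + J * Rs) / n_id) - 1.0)
                   - (V + J * Rs) / Rsh - J)
    return brentq(f, -1.0, 1.0)   # f is strictly decreasing in J: unique root

Jsc = current(0.0)                # short-circuit current, close to Jph
Voc = brentq(current, 0.0, 1.5)   # open-circuit voltage (J = 0)
print(Jsc, Voc)
```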

  5. Friction characteristics of the curved sidewall surfaces of a rotary MEMS device in oscillating motion

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Shao; Miao, Jianmin

    2009-01-01

    A MEMS device with a configuration similar to that of a micro-bearing was developed to study the friction behavior of curved sidewall surfaces. This friction-testing device consists of two sets of actuators, for normal motion and rotation, respectively. Friction measurements were performed at the curved sidewall surfaces of single-crystal silicon. Two general models were developed to determine the equivalent tangential stiffness of the bush-flexure assembly at the contact point by reducing a matrix equation to a one-dimensional formulation. With this simplification, the motions of the contacting surfaces were analyzed using a recently developed quasi-static stick-slip model. The measurement results show that the coefficient of static friction exhibits a nonlinear dependence on the normal load. The true coefficient of static friction was determined by fitting the experimental friction curve.

  6. Detecting overpressure using the Eaton and Equivalent Depth methods in Offshore Nova Scotia, Canada

    Science.gov (United States)

    Ernanda; Primasty, A. Q. T.; Akbar, K. A.

    2018-03-01

    Overpressure is an abnormally high subsurface pressure of any fluid, exceeding the hydrostatic pressure of the column of water or formation brine. In Offshore Nova Scotia, Canada, the values and depth of the overpressure zone are determined using the Eaton and equivalent-depth methods, based on well data and normal compaction trend analysis; the equivalent-depth method uses the effective vertical stress principle, while the Eaton method considers a physical property ratio (velocity). In this research, pressure evaluation was only applicable to the Penobscot L-30 well. An abnormal pressure is detected at a depth of 11,804 feet as a possible overpressure zone, based on the pressure gradient curve and on calculations with the Eaton method (7241.3 psi) and the equivalent-depth method (6619.4 psi). Shales within the Abenaki Formation, especially the Baccaro Member, are estimated to be the possible overpressure zone, attributed to a hydrocarbon generation mechanism.
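
    Eaton's velocity-ratio relation can be written down directly; the sketch below assumes the commonly used exponent of 3 and invented single-depth values, not the Penobscot L-30 data.

```python
def eaton_pore_pressure(sigma_ov, p_hydro, v_obs, v_normal, exponent=3.0):
    """Pore pressure from Eaton's velocity-ratio method:
    Pp = S - (S - Phyd) * (V/Vn)^exponent.
    A velocity slower than the normal-compaction trend (V < Vn)
    pushes Pp above hydrostatic, i.e. overpressure."""
    return sigma_ov - (sigma_ov - p_hydro) * (v_obs / v_normal) ** exponent

# Example at one depth (psi and ft/s, assumed values):
S, Phyd = 11000.0, 5460.0          # overburden stress, hydrostatic pressure
print(eaton_pore_pressure(S, Phyd, v_obs=9000.0, v_normal=10500.0))
```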

  7. Maturation of the auditory system in clinically normal puppies as reflected by the brain stem auditory-evoked potential wave V latency-intensity curve and rarefaction-condensation differential potentials.

    Science.gov (United States)

    Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F

    2000-11-01

    To evaluate auditory maturation, 10 clinically normal Beagle puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory-evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level down to the wave V threshold, using steps of 10 dB. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine the thresholds of RCDP and wave V. The slope of the low-intensity segment of the wave V latency-intensity curve was calculated, and the intensity range over which RCDP could not be recorded (i.e., the pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of RCDP. The slope of the low-intensity segment of the wave V latency-intensity curve evolved with age, changing from (mean +/- SD) -90.8 +/- 41.6 to -27.8 +/- 4.1 microseconds/dB; similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 +/- 7.5 to 20.5 +/- 6.4 dB. Changes in the slope of the latency-intensity curve with age suggest enlargement of the audible range of frequencies toward high frequencies up to the third week after birth, while the decrease in the pre-RCDP range may indicate an increase of the audible range toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.

  8. Alexander-equivalent Zariski pairs of irreducible sextics

    DEFF Research Database (Denmark)

    Eyral, Christophe; Oka, Mutsuo

    2009-01-01

    The existence of Alexander-equivalent Zariski pairs dealing with irreducible curves of degree 6 was proved by Degtyarev. However, no explicit example of such a pair is available (only the existence is known) in the literature. In this paper, we construct the first concrete example.

  9. Contrast-enhanced transrectal ultrasound for prediction of prostate cancer aggressiveness: The role of normal peripheral zone time-intensity curves.

    Science.gov (United States)

    Huang, Hui; Zhu, Zheng-Qiu; Zhou, Zheng-Guo; Chen, Ling-Shan; Zhao, Ming; Zhang, Yang; Li, Hong-Bo; Yin, Li-Ping

    2016-12-08

    To assess the role of time-intensity curves (TICs) of the normal peripheral zone (PZ) in the identification of biopsy-proven prostate nodules using contrast-enhanced transrectal ultrasound (CETRUS), this study included 132 patients with 134 prostate PZ nodules. Arrival time (AT), peak intensity (PI), mean transit time (MTT), area under the curve (AUC), time from peak to one half (TPH), wash-in slope (WIS) and time to peak (TTP) were analyzed using multivariate linear logistic regression and receiver operating characteristic (ROC) curves to assess whether combining nodule TICs with normal PZ TICs improved the prediction of prostate cancer (PCa) aggressiveness. The PI, AUC (p < 0.001 for both), MTT and TPH (p = 0.011 and 0.040, respectively) values of the malignant nodules were significantly higher than those of the benign nodules. Incorporating the PI and AUC values (both p < 0.001) of the normal PZ TIC, but not the MTT and TPH values (p = 0.076 and 0.159, respectively), significantly improved the AUC for prediction of malignancy (PI: 0.784 to 0.923; AUC: 0.758 to 0.891) and the assessment of cancer aggressiveness (p < 0.001). These findings indicate that incorporating normal PZ TICs with nodule TICs in CETRUS readings can improve the diagnostic accuracy for PCa and the assessment of cancer aggressiveness.
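
    An ROC AUC such as those quoted above (e.g. 0.784 rising to 0.923) equals the probability that a randomly chosen malignant case scores higher than a randomly chosen benign one, with ties counting one half. A minimal sketch, using invented toy scores rather than the study's data:

```python
# AUC via the rank (Mann-Whitney) interpretation; toy data only.

def auc(pos_scores, neg_scores):
    """AUC = P(random positive score > random negative score), ties count 1/2."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

malignant_pi = [9.1, 8.4, 7.9, 8.8]   # invented peak-intensity values
benign_pi    = [6.2, 7.5, 8.0, 5.9]
print(auc(malignant_pi, benign_pi))
```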

  10. On the variability of the salting-out curves of proteins of normal human plasma and serum

    NARCIS (Netherlands)

    Steyn-Parvé, Elizabeth P.; Hout, A.J. van den

    1953-01-01

    Salting-out curves of proteins of normal human plasma reflect the influence of a number of other factors besides the protein composition: the manner of obtaining the blood, the nature of the anti-coagulant used, the non-protein components of the plasma. Diagrams of serum and plasma obtained from

  11. Remote sensing used for power curves

    International Nuclear Information System (INIS)

    Wagner, R; Joergensen, H E; Paulsen, U S; Larsen, T J; Antoniou, I; Thesbjerg, L

    2008-01-01

    Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the standard deviation of power in the power curve significantly. Two LiDARs and a SoDAR were used to measure the wind profile in front of a wind turbine, and these profiles were used to calculate the equivalent wind speed. The comparison of the power curves obtained with the three instruments to the traditional power curve, obtained using a cup anemometer measurement, confirms the results obtained from the simulations. Using LiDAR profiles reduces the error in power curve measurement when the LiDAR is used as a relative instrument together with a cup anemometer. Results from the SoDAR are less promising, probably because of noisy measurements resulting in distorted profiles.
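
    The shear-aware equivalent wind speed described above can be sketched as a rotor-area-weighted cube average of the profile speeds, in the spirit of the rotor equivalent wind speed of IEC 61400-12-1 Ed.2. The segment geometry below (horizontal slices of a circular rotor disc with boundaries midway between measurement heights) and the toy profile are assumptions for illustration:

```python
# Rotor-equivalent wind speed from a measured profile; geometry assumed.
import math

def segment_areas(heights, hub, radius):
    """Area of the horizontal rotor slice centred on each measurement height."""
    bounds = [hub - radius]
    bounds += [(h1 + h2) / 2 for h1, h2 in zip(heights, heights[1:])]
    bounds.append(hub + radius)
    def strip(z1, z2):
        # area of the disc between horizontal chords at z1 and z2 (z relative to hub)
        def f(z):
            z = max(-radius, min(radius, z))
            return z * math.sqrt(radius**2 - z**2) + radius**2 * math.asin(z / radius)
        return f(z2) - f(z1)
    return [strip(b1 - hub, b2 - hub) for b1, b2 in zip(bounds, bounds[1:])]

def equivalent_wind_speed(heights, speeds, hub, radius):
    """Area-weighted cube average: v_eq = (sum(v_i^3 * A_i) / A)^(1/3)."""
    areas = segment_areas(heights, hub, radius)
    total = sum(areas)
    return (sum(v**3 * a for v, a in zip(speeds, areas)) / total) ** (1.0 / 3.0)

# Toy profile: 80 m hub, 40 m rotor radius, speeds at five lidar heights.
h = [50.0, 65.0, 80.0, 95.0, 110.0]
v = [7.2, 7.8, 8.2, 8.5, 8.8]
print(round(equivalent_wind_speed(h, v, hub=80.0, radius=40.0), 2))
```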

  12. Evaluation of the directional dose equivalent H'(0.07) for ring dosemeters

    International Nuclear Information System (INIS)

    Alvarez R, J.T.; Tovar M, V.M.

    2006-01-01

    The personnel dosimetry laboratory (LDP) of the Metrology department received a request from a user of beta radiation who had accidentally irradiated 14 pairs of TLD-100 extremity ring dosemeters supplied by the LDP. This sample of 14 pairs of rings was tentatively irradiated during July-August 2004, and the user requested an expedited evaluation of the received dose equivalent. The LSCD built two calibration curves in terms of the directional dose equivalent H'(0.07) using two standard 90Sr-90Y beta radiation sources: one of 74 MBq and another of 1850 MBq, with traceability to the PTB. The first curve covers the interval from 0 to 5 mSv and the second the range from 5 to 50 mSv, taking into account effects of the positioning of the rings on the phantom. Both calibration curves were validated by lack-of-fit tests and by symmetry and normality of the residuals. H'(0.07) was evaluated and analyzed for the 14 pairs of rings using a one-way Tukey test of means. It was found that the H'(0.07) values could be classified into 4 groups, and that the probability that the rings had been irradiated in a random way was smaller than the significance level α = 0.05. (Author)

  13. Adaptive robust polynomial regression for power curve modeling with application to wind power forecasting

    DEFF Research Database (Denmark)

    Xu, Man; Pinson, Pierre; Lu, Zongxiang

    2016-01-01

    Wind farm power curve modeling, which characterizes the relationship between meteorological variables and power production, is a crucial procedure for wind power forecasting. In many cases, power curve modeling is more impacted by the limited quality of input data than by the stochastic nature of the energy conversion process. Such nature may be due to the varying wind conditions, aging and state of the turbines, etc. An equivalent steady-state power curve, estimated under normal operating conditions with the intention to filter abnormal data, is not sufficient to solve the problem because of the lack of time adaptivity. In this paper, a refined local polynomial regression algorithm is proposed to yield an adaptive robust model of the time-varying scattered power curve for forecasting applications. The time adaptivity of the algorithm is considered with a new data-driven bandwidth selection...
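
    The core of a local polynomial power-curve model can be sketched as a locally weighted least-squares fit. This is a generic local linear smoother with a Gaussian kernel and a fixed bandwidth, not the paper's robust, adaptively bandwidth-selected algorithm; the toy data are invented:

```python
# Local linear regression evaluated at one query point; illustrative only.
import math

def local_linear(x_query, xs, ys, bandwidth):
    """Weighted least-squares line fitted around x_query; returns the fit there."""
    s0 = s1 = s2 = t0 = t1 = 0.0
    for x, y in zip(xs, ys):
        u = (x - x_query) / bandwidth
        w = math.exp(-0.5 * u * u)          # Gaussian kernel weight
        d = x - x_query
        s0 += w; s1 += w * d; s2 += w * d * d
        t0 += w * y; t1 += w * y * d
    det = s0 * s2 - s1 * s1
    if abs(det) < 1e-12:                    # degenerate design: weighted mean
        return t0 / s0
    # Solve [s0 s1; s1 s2][a; b] = [t0; t1]; the intercept a is the estimate.
    return (t0 * s2 - t1 * s1) / det

# Toy scattered "power curve": power rising with wind speed.
xs = [4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
ys = [0.10, 0.21, 0.35, 0.55, 0.80, 1.10, 1.45, 1.80]
print(round(local_linear(7.5, xs, ys, bandwidth=1.5), 3))
```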

  14. Rational Multi-curve Models with Counterparty-risk Valuation Adjustments

    DEFF Research Database (Denmark)

    Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai

    2016-01-01

    We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offer Rate swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data with accuracy. We elucidate the relationship between the models developed and calibrated under a risk-neutral measure Q and their consistent equivalence class under the real-world probability measure P. The consistent P-pricing models are applied to compute the risk exposures which may be required to comply with regulatory obligations. In order to compute counterparty-risk valuation adjustments, such as credit valuation adjustment, we show how default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract.

  15. Mapping of isoexposure curves for evaluation of equivalent environmental doses for radiodiagnostic mobile equipment

    International Nuclear Information System (INIS)

    Bacelar, Alexandre; Andrade, Jose Rodrigo Mendes; Fischer, Andreia Caroline Fischer da Silveira; Accurso, Andre; Hoff, Gabriela

    2011-01-01

    This paper generates isoexposure curves in areas where mobile radiodiagnostic equipment is used, for evaluation of an isokerma map and the ambient dose equivalent H*(d). A Shimadzu mobile unit and two Siemens units were used with a non-anthropomorphic scatterer. The exposure was measured in a mesh of 4.20 x 4.20 square meters in steps of 30 cm, at half height from the scatterer. H*(d) was estimated for a worker present at all procedures over a period of 11 months, considering 3.55 mAs/examination and 44.5 procedures/month (adult ICU) and 3.16 mAs/examination and 20.1 procedures/month (pediatric ICU). It was observed that there are points where H*(d) exceeded the limit established for free areas within a radius of 30 cm from the central beam of radiation in the case of the pediatric ICU, and 60 cm for the adult ICU. Points located 2.1 m from the center presented values lower than 25% of those limits.

  16. Evidence of non-coincidence of normalized sigmoidal curves of two different structural properties for two-state protein folding/unfolding

    International Nuclear Information System (INIS)

    Rahaman, Hamidur; Khan, Md. Khurshid Alam; Hassan, Md. Imtaiyaz; Islam, Asimul; Moosavi-Movahedi, Ali Akbar; Ahmad, Faizan

    2013-01-01

    Highlights: ► Non-coincidence of normalized sigmoidal curves of two different structural properties is consistent with two-state protein folding/unfolding. ► DSC measurements of denaturation show two-state behavior of g-cyt-c at pH 6.0. ► Urea-induced denaturation of g-cyt-c is a variable two-state process at pH 6.0. ► GdmCl-induced denaturation of g-cyt-c is a fixed two-state process at pH 6.0. -- Abstract: In practice, the observation of non-coincidence of normalized sigmoidal transition curves measured by two different structural properties constitutes a proof of the existence of thermodynamically stable intermediate(s) on the folding ↔ unfolding pathway of a protein. Here we give the first experimental evidence that this non-coincidence is also observed for a two-state protein denaturation. Proof of this evidence comes from our studies of the denaturation of goat cytochrome-c (g-cyt-c) at pH 6.0. These studies involve differential scanning calorimetry (DSC) measurements in the absence of urea and measurements of urea-induced denaturation curves monitored by observing changes in absorbance at 405, 530, and 695 nm and circular dichroism (CD) at 222, 405, and 416 nm. DSC measurements showed that denaturation of the protein is a two-state process, for the calorimetric and van’t Hoff enthalpy changes are, within experimental error, identical. Normalization of urea-induced denaturation curves monitored by optical properties leads to non-coincident sigmoidal curves. The heat-induced transition of g-cyt-c in the presence of different urea concentrations was monitored by CD at 222 nm and absorption at 405 nm. These two different structural probes gave not only identical values of T_m (transition temperature), ΔH_m (enthalpy change at T_m) and ΔC_p (constant-pressure heat capacity change), but these thermodynamic parameters in the absence of urea are also in agreement with those obtained from DSC measurements.

  17. Design of elliptic curve cryptoprocessors over GF(2^163) using the Gaussian normal basis

    Directory of Open Access Journals (Sweden)

    Paulo Cesar Realpe

    2014-05-01

    This paper presents the efficient hardware implementation of cryptoprocessors that carry out the scalar multiplication kP over the finite field GF(2^163) using two digit-level multipliers. The finite field arithmetic operations were implemented using a Gaussian normal basis (GNB) representation, and the scalar multiplication kP was implemented using the Lopez-Dahab algorithm, the 2-NAF halve-and-add algorithm and the w-tNAF method for Koblitz curves. The processors were designed using a VHDL description, synthesized on a Stratix-IV FPGA using Quartus II 12.0, and verified using SignalTAP II and Matlab. The simulation results show that the cryptoprocessors achieve very good performance for the scalar multiplication kP. In this case, the computation times of the multiplication kP using Lopez-Dahab, 2-NAF halve-and-add and 16-tNAF for Koblitz curves were 13.37 µs, 16.90 µs and 5.05 µs, respectively.
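
    The scalar multiplication kP at the heart of such cryptoprocessors can be illustrated with the basic double-and-add algorithm. For brevity this toy runs over a small prime field rather than GF(2^163) with a Gaussian normal basis, and uses none of the paper's algorithms (Lopez-Dahab, 2-NAF, w-tNAF); the curve y^2 = x^3 + 2x + 2 over F_17 with generator (5, 1) is a standard textbook example:

```python
# Toy elliptic-curve scalar multiplication by left-to-right double-and-add.
P_MOD, A = 17, 2          # field modulus and curve coefficient a

def point_add(p, q):
    """Add two affine points; None represents the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                      # p == -q
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Left-to-right double-and-add computation of kP."""
    result = None
    for bit in bin(k)[2:]:
        result = point_add(result, result)       # double
        if bit == '1':
            result = point_add(result, p)        # add
    return result

G = (5, 1)                 # generator of order 19 on this curve
print(scalar_mult(7, G))
```

    Hardware designs replace the modular inversions above with projective (e.g. Lopez-Dahab) coordinates precisely to avoid this per-step inversion cost.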

  18. Low-loss, compact, and fabrication-tolerant Si-wire 90° waveguide bend using clothoid and normal curves for large scale photonic integrated circuits.

    Science.gov (United States)

    Fujisawa, Takeshi; Makino, Shuntaro; Sato, Takanori; Saitoh, Kunimasa

    2017-04-17

    An ultimately low-loss 90° waveguide bend composed of clothoid and normal curves is proposed for dense optical interconnect photonic integrated circuits. By using clothoid curves at the input and output of the 90° bend, straight and bent waveguides are smoothly connected without increasing the footprint. We found that there is an optimum ratio of clothoid curves in the bend, for which the bending loss can be significantly reduced compared with a normal bend. A 90% reduction of the bending loss for a bending radius of 4 μm is experimentally demonstrated, with excellent agreement between theory and experiment. The performance is compared with a waveguide bend with offset, and the proposed bend is superior in terms of fabrication tolerance.

  19. Non normal and non quadratic anisotropic plasticity coupled with ductile damage in sheet metal forming: Application to the hydro bulging test

    International Nuclear Information System (INIS)

    Badreddine, Houssem; Saanouni, Khemaies; Dogui, Abdelwaheb

    2007-01-01

    In this work an improved material model is proposed that shows good agreement with experimental data for both hardening curves and plastic strain ratios in uniaxial and equibiaxial proportional loading paths for sheet steel up to final fracture. The model is based on a non-associative, non-normal flow rule using two different orthotropic equivalent stresses in the yield criterion and the plastic potential functions. For the plastic potential the classical Hill 1948 quadratic equivalent stress is considered, while for the yield criterion the Karafillis and Boyce 1993 non-quadratic equivalent stress is used, taking into account nonlinear mixed (kinematic and isotropic) hardening. Applications are made to hydro-bulging tests using both circular and elliptical dies. The results obtained with different particular cases of the model, such as the normal quadratic and the non-normal non-quadratic cases, are compared and discussed with respect to the experimental results.

  20. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    Science.gov (United States)

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.

  1. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD)

    International Nuclear Information System (INIS)

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-01

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data
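
    The standard LKB ingredients that the paper's exponential formula approximates can be sketched directly: Niemierko's generalized EUD reduces a non-uniform dose distribution to a single equivalent dose, and the Lyman probit turns it into an NTCP. The organ parameters and dose bins below are placeholders, not the Emami-Burman fits, and this is the classic probit form rather than the paper's new exponential function:

```python
# LKB building blocks: generalized EUD and the Lyman probit NTCP.
import math

def eud(dose_bins, volume_fractions, n):
    """Niemierko generalized EUD: (sum v_i * d_i^(1/n))^n, with sum(v_i) = 1."""
    return sum(v * d ** (1.0 / n) for d, v in zip(dose_bins, volume_fractions)) ** n

def lyman_ntcp(eud_gy, td50, m):
    """Lyman probit model: NTCP = Phi((EUD - TD50) / (m * TD50))."""
    t = (eud_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Placeholder parameters (n, m, TD50) and a toy two-bin dose distribution.
n, m, td50 = 0.25, 0.15, 65.0
d = eud([70.0, 40.0], [0.6, 0.4], n)
print(round(d, 2), round(lyman_ntcp(d, td50, m), 4))
```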

  2. Equivalent models of wind farms by using aggregated wind turbines and equivalent winds

    International Nuclear Information System (INIS)

    Fernandez, L.M.; Garcia, C.A.; Saenz, J.R.; Jurado, F.

    2009-01-01

    As a result of the increasing penetration of wind farms into power systems, wind farms have begun to influence power system behavior, and the modeling of wind farms has therefore become an interesting research topic. In this paper, new equivalent models of wind farms equipped with wind turbines based on squirrel-cage induction generators and doubly-fed induction generators are proposed to represent their collective behavior in large power system simulations, instead of using a complete model in which all the wind turbines are modeled. The models proposed here are based on aggregating wind turbines into an equivalent wind turbine which receives an equivalent wind of those incident on the aggregated wind turbines. The equivalent wind turbine has re-scaled power capacity and the same complete model as the individual wind turbines, which constitutes the main feature of the present equivalent models. Two equivalent winds are evaluated in this work: (1) the average of the winds incident on aggregated wind turbines with similar winds, and (2) an equivalent incoming wind derived from the power curve and the wind incident on each wind turbine. The effectiveness of the equivalent models in representing the collective response of the wind farm at the point of common coupling to the grid is demonstrated by comparison with the wind farm response obtained from the detailed model during power system dynamic simulations, such as wind fluctuations and a grid disturbance. The present models can be used for grid integration studies of large power systems with an important reduction of the model order and the computation time.
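
    One plausible reading of the second equivalent wind (item 2 above) is to invert the turbine power curve at the mean of the individual turbine powers, so the equivalent turbine reproduces the farm's aggregate output. A minimal sketch under that assumption; the tabulated power curve and wind values are invented:

```python
# Equivalent wind by power-curve inversion; illustrative assumption only.
from bisect import bisect_left

CURVE_V = [3.0, 5.0, 7.0, 9.0, 11.0, 13.0]             # wind speed, m/s
CURVE_P = [0.0, 150.0, 500.0, 1100.0, 1800.0, 2000.0]  # power, kW

def power(v):
    """Piecewise-linear interpolation of the tabulated power curve."""
    if v <= CURVE_V[0]: return CURVE_P[0]
    if v >= CURVE_V[-1]: return CURVE_P[-1]
    i = bisect_left(CURVE_V, v)
    f = (v - CURVE_V[i - 1]) / (CURVE_V[i] - CURVE_V[i - 1])
    return CURVE_P[i - 1] + f * (CURVE_P[i] - CURVE_P[i - 1])

def equivalent_wind(winds):
    """Wind speed whose power-curve output equals the mean turbine power."""
    target = sum(power(v) for v in winds) / len(winds)
    lo, hi = CURVE_V[0], CURVE_V[-1]
    for _ in range(60):                  # bisection on the monotone segment
        mid = (lo + hi) / 2
        if power(mid) < target: lo = mid
        else: hi = mid
    return (lo + hi) / 2

print(round(equivalent_wind([6.0, 8.0, 10.0]), 3))
```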

  3. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    International Nuclear Information System (INIS)

    Yao Dezhong; He Bin

    2003-01-01

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping

  4. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yao Dezhong [School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu City, 610054, Sichuan Province (China); He Bin [The University of Illinois at Chicago, IL (United States)

    2003-11-07

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping.

  5. A neutron dose equivalent meter at CAEP

    International Nuclear Information System (INIS)

    Tian Shihai; Lu Yan; Wang Heyi; Yuan Yonggang; Chen Xu

    2012-01-01

    The measurement of neutron dose equivalent is a widespread need in industry and research. In this paper, aimed at improving the accuracy of a neutron dose equivalent meter, a neutron dose counter is simulated with MCNP5 and its energy response curve is optimized. The results show that the energy response factor ranges from 0.2 to 1.8 for neutrons in the energy range of 2.53×10^-8 MeV to 10 MeV. Comparison with other related meters confirms that the design of this meter is sound. (authors)

  6. Development of a statistically-based lower bound fracture toughness curve (K_IR curve)

    International Nuclear Information System (INIS)

    Wullaert, R.A.; Server, W.L.; Oldfield, W.; Stahlkopf, K.E.

    1977-01-01

    A program of initiation fracture toughness measurements on fifty heats of nuclear pressure vessel production materials (including weldments) was used to develop a methodology for establishing a revised reference toughness curve. The new methodology was statistically developed and provides a predefined confidence limit (or tolerance limit) for fracture toughness based upon many heats of a particular type of material. Overall reference curves were developed for seven specific materials using large specimen static and dynamic fracture toughness results. The heat-to-heat variation was removed by normalizing both the fracture toughness and temperature data with the precracked Charpy tanh curve coefficients for each particular heat. The variance and distribution about the curve were determined, and lower bounds of predetermined statistical significance were drawn based upon a Pearson distribution in the lower shelf region (since the data were skewed to high values) and a t-distribution in the transition temperature region (since the data were normally distributed)
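
    The heat-to-heat normalization described above can be sketched as evaluating each heat's tanh transition fit, K(T) = A + B*tanh((T - T0)/C), and expressing measured toughness and temperature in the dimensionless coordinates of that fit, so many heats collapse onto one curve. The coefficients and measurement below are invented placeholders, not values from the program:

```python
# Normalizing toughness data with a per-heat tanh curve; placeholder values.
import math

def tanh_curve(T, A, B, T0, C):
    """Charpy-style tanh transition curve K(T) = A + B*tanh((T - T0)/C)."""
    return A + B * math.tanh((T - T0) / C)

def normalize(T, K, A, B, T0, C):
    """Dimensionless coordinates used to pool many heats onto one curve."""
    return (T - T0) / C, (K - A) / B

# One heat's placeholder coefficients and a measurement on that heat.
A, B, T0, C = 100.0, 80.0, 20.0, 50.0
t_norm, k_norm = normalize(70.0, tanh_curve(70.0, A, B, T0, C), A, B, T0, C)
print(round(t_norm, 3), round(k_norm, 3))
```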

  7. Uniformly accelerating charged particles. A threat to the equivalence principle

    International Nuclear Information System (INIS)

    Lyle, Stephen N.

    2008-01-01

    There has been a long debate about whether uniformly accelerated charges should radiate electromagnetic energy and how one should describe their worldline through a flat spacetime, i.e., whether the Lorentz-Dirac equation is right. There are related questions in curved spacetimes, e.g., do different varieties of equivalence principle apply to charged particles, and can a static charge in a static spacetime radiate electromagnetic energy? The problems with the LD equation in flat spacetime are spelt out in some detail here, and its extension to curved spacetime is discussed. Different equivalence principles are compared and some vindicated. The key papers are discussed in detail and many of their conclusions are significantly revised by the present solution. (orig.)

  8. Families of bitangent planes of space curves and minimal non-fibration families

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A cone curve is a reduced sextic space curve which lies on a quadric cone and does not pass through the vertex. We classify families of bitangent planes of cone curves. The methods we apply can be used for any space curve with ADE singularities, though in this paper we concentrate on cone curves. An embedded complex projective surface which is adjoint to a degree one weak Del Pezzo surface contains families of minimal degree rational curves, which cannot be defined by the fibers of a map. Such families are called minimal non-fibration families. Families of bitangent planes of cone curves correspond to minimal non-fibration families. The main motivation of this paper is to classify minimal non-fibration families. We present algorithms which compute all bitangent families of a given cone curve and their geometric genus. We consider cone curves to be equivalent if they have the same singularity configuration. For each equivalence class of cone curves we determine the possible number of bitangent families and the number of rational bitangent families. Finally we compute an example of a minimal non-fibration family on an embedded weak degree one Del Pezzo surface.

  9. Object-Image Correspondence for Algebraic Curves under Projections

    Directory of Open Access Journals (Sweden)

    Joseph M. Burdis

    2013-03-01

    We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of a number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  10. Equivalent distributed capacitance model of oxide traps on frequency dispersion of C – V curve for MOS capacitors

    International Nuclear Information System (INIS)

    Lu Han-Han; Xu Jing-Ping; Liu Lu; Lai Pui-To; Tang Wing-Man

    2016-01-01

    An equivalent distributed capacitance model is established by considering only the gate oxide-trap capacitance to explain the frequency dispersion in the C – V curve of MOS capacitors measured for a frequency range from 1 kHz to 1 MHz. The proposed model is based on the Fermi–Dirac statistics and the charging/discharging effects of the oxide traps induced by a small ac signal. The validity of the proposed model is confirmed by the good agreement between the simulated results and experimental data. Simulations indicate that the capacitance dispersion of an MOS capacitor under accumulation and near flatband is mainly caused by traps adjacent to the oxide/semiconductor interface, with negligible effects from the traps far from the interface, and the relevant distance from the interface at which the traps can still contribute to the gate capacitance is also discussed. In addition, by excluding the negligible effect of oxide-trap conductance, the model avoids the use of imaginary numbers and complex calculations, and thus is simple and intuitive. (paper)

  11. Change of annual collective dose equivalent of radiation workers at KURRI

    International Nuclear Information System (INIS)

    Okamoto, Kenichi

    1994-01-01

    The change in the exposure dose equivalent of radiation workers at KURRI (Kyoto University Research Reactor Institute) over the past 30 years is reported, together with the operational record. The reactor achieved criticality on June 24, 1964, reached its rated power of 1000 kW on August 17 of the same year, and was raised to a rated power of 5000 kW on July 16, 1968, where it remains today. The changes in the annual effective dose equivalent, the collective dose equivalent, the average annual dose equivalent and the maximum dose equivalent are shown in the table and figure. A chronological table of the reactor's activities is added. (T.H.)

  12. Power Curve Measurements REWS

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    This report describes the power curve measurements carried out on a given wind turbine in a chosen period. The measurements were carried out following the measurement procedure in the draft of IEC 61400-12-1 Ed.2 [1], with some deviations, mostly regarding uncertainty calculation. Here, the reference wind speed used in the power curve is the equivalent wind speed obtained from lidar measurements at several heights between the lower and upper blade tip, in combination with a hub height meteorological mast. The measurements have been performed using DTU’s measurement equipment, the analysis...

  13. Exact equivalent straight waveguide model for bent and twisted waveguides

    DEFF Research Database (Denmark)

    Shyroki, Dzmitry

    2008-01-01

    Exact equivalent straight waveguide representation is given for a waveguide of arbitrary curvature and torsion. No assumptions regarding refractive index contrast, isotropy of materials, or particular morphology in the waveguide cross section are made. This enables rigorous full-vector modeling...... of in-plane curved or helically wound waveguides with use of available simulators for straight waveguides without the restrictions of the known approximate equivalent-index formulas....

  14. Multi-MW wind turbine power curve measurements using remote sensing instruments - the first Hoevsoere campaign

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, R.; Courtney, M.

    2009-02-15

    Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the scatter in the power curve significantly. Two LiDARs and a SoDAR are used to measure the wind profile in front of a wind turbine. These profiles are used to calculate the equivalent wind speed. LiDARs are found to be more accurate than SoDARs and therefore more suitable for power performance measurement. The equivalent wind speed calculated from LiDAR profile measurements gave a small reduction of the power curve uncertainty. Several factors can explain why this difference is smaller than expected, including the experimental design and errors pertaining to the LiDAR at that time. This first measurement campaign shows that use of the equivalent wind speed at least results in a power curve with no more scatter than using the conventional method. (au)
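An equivalent wind speed of the kind described above is, in essence, a cube-weighted average of the speeds measured at several heights across the rotor. A minimal sketch follows; the function name, the example speeds and the segment-area weights are illustrative assumptions, not data or code from this campaign:

```python
def rotor_equivalent_wind_speed(speeds, segment_areas):
    """Cube-root of the area-weighted mean of the cubed wind speeds,
    one speed per measurement height, one swept-area share per segment."""
    total_area = sum(segment_areas)
    weighted_cubes = sum(u ** 3 * a for u, a in zip(speeds, segment_areas))
    return (weighted_cubes / total_area) ** (1.0 / 3.0)

# Illustrative lidar speeds (m/s) at five heights between lower and upper
# blade tip, with the swept-area share of each horizontal segment.
speeds = [7.2, 7.8, 8.0, 8.3, 8.5]
areas = [0.12, 0.24, 0.28, 0.24, 0.12]
u_eq = rotor_equivalent_wind_speed(speeds, areas)
```

Because the speeds enter cubed, a sheared profile yields an equivalent speed that can differ noticeably from the hub-height value, which is what reduces the scatter in the power curve.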

  15. Environmental bias and elastic curves on surfaces

    International Nuclear Information System (INIS)

    Guven, Jemal; María Valencia, Dulce; Vázquez-Montejo, Pablo

    2014-01-01

    The behavior of an elastic curve bound to a surface will reflect the geometry of its environment. This may occur in an obvious way: the curve may deform freely along directions tangent to the surface, but not along the surface normal. However, even if the energy itself is symmetric in the curve's geodesic and normal curvatures, which control these modes, very distinct roles are played by the two. If the elastic curve binds preferentially on one side, or is itself assembled on the surface, not only would one expect the bending moduli associated with the two modes to differ; binding along specific directions, reflected in spontaneous values of these curvatures, may also be favored. The shape equations describing the equilibrium states of a surface curve described by an elastic energy accommodating environmental factors are identified by adapting the method of Lagrange multipliers to the Darboux frame associated with the curve. The forces transmitted to the surface along the surface normal are determined. Features associated with a number of different energies, both of physical relevance and of mathematical interest, are described. The conservation laws associated with trajectories on surface geometries exhibiting continuous symmetries are also examined. (paper)

  16. Wind Turbine Power Curves Incorporating Turbulence Intensity

    DEFF Research Database (Denmark)

    Sørensen, Emil Hedevang Lohse

    2014-01-01

    The performance of a wind turbine in terms of power production (the power curve) is important to the wind energy industry. The current IEC 61400-12-1 standard for power curve evaluation recognizes only the mean wind speed at hub height and the air density as relevant to the power production... The model and method are parsimonious in the sense that only a single function (the zero-turbulence power curve) and a single auxiliary parameter (the equivalent turbulence factor) are needed to predict the mean power at any desired turbulence intensity. The method requires only ten-minute statistics...

  17. Remote sensing used for power curves

    DEFF Research Database (Denmark)

    Wagner, Rozenn; Ejsing Jørgensen, Hans; Schmidt Paulsen, Uwe

    2008-01-01

    Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the power standard deviat...

  18. Equivalent distributed capacitance model of oxide traps on frequency dispersion of C-V curve for MOS capacitors

    Science.gov (United States)

    Lu, Han-Han; Xu, Jing-Ping; Liu, Lu; Lai, Pui-To; Tang, Wing-Man

    2016-11-01

    An equivalent distributed capacitance model is established by considering only the gate oxide-trap capacitance to explain the frequency dispersion in the C-V curve of MOS capacitors measured for a frequency range from 1 kHz to 1 MHz. The proposed model is based on the Fermi-Dirac statistics and the charging/discharging effects of the oxide traps induced by a small ac signal. The validity of the proposed model is confirmed by the good agreement between the simulated results and experimental data. Simulations indicate that the capacitance dispersion of an MOS capacitor under accumulation and near flatband is mainly caused by traps adjacent to the oxide/semiconductor interface, with negligible effects from the traps far from the interface, and the relevant distance from the interface at which the traps can still contribute to the gate capacitance is also discussed. In addition, by excluding the negligible effect of oxide-trap conductance, the model avoids the use of imaginary numbers and complex calculations, and thus is simple and intuitive. Project supported by the National Natural Science Foundation of China (Grant Nos. 61176100 and 61274112), the University Development Fund of the University of Hong Kong, China (Grant No. 00600009), and the Hong Kong Polytechnic University, China (Grant No. 1-ZVB1).

  19. Equivalent intraperitoneal doses of ibuprofen supplemented in drinking water or in diet: a behavioral and biochemical assay using antinociceptive and thromboxane inhibitory dose–response curves in mice

    Directory of Open Access Journals (Sweden)

    Raghda A.M. Salama

    2016-07-01

    Background. Ibuprofen is used chronically in different animal models of inflammation by administration in drinking water or in diet, due to its short half-life. Though this practice has been used for years, ibuprofen doses were never assayed against parenteral dose–response curves. This study aims at identifying the equivalent intraperitoneal (i.p.) doses of ibuprofen when it is administered in drinking water or in diet. Methods. Bioassays were performed using the formalin test and the incisional pain model for antinociceptive efficacy, and serum TXB2 for eicosanoid inhibitory activity. The dose–response curve of i.p. administered ibuprofen was constructed for each test using 50, 75, 100 and 200 mg/kg body weight (b.w.). The dose–response curves were constructed from phase 2a of the formalin test (the phase most sensitive to COX inhibitory agents), the area under the ‘change in mechanical threshold’–time curve in the incisional pain model, and serum TXB2 levels. The assayed ibuprofen concentrations administered in drinking water were 0.2, 0.35 and 0.6 mg/ml, and those administered in diet were 82, 263 and 375 mg/kg diet. Results. The 3 concentrations applied in drinking water lay between 73.6 and 85.5 mg/kg b.w., i.p., in the case of the formalin test; between 58.9 and 77.8 mg/kg b.w., i.p., in the case of the incisional pain model; and between 71.8 and 125.8 mg/kg b.w., i.p., in the case of serum TXB2 levels. The 3 concentrations administered in diet lay between 67.6 and 83.8 mg/kg b.w., i.p., in the case of the formalin test; between 52.7 and 68.6 mg/kg b.w., i.p., in the case of the incisional pain model; and between 63.6 and 92.5 mg/kg b.w., i.p., in the case of serum TXB2 levels. Discussion. The increments in the pharmacological effects of different doses of continuously administered ibuprofen in drinking water or diet do not parallel those of i.p. administered ibuprofen. It is therefore difficult to assume the equivalent parenteral daily doses based on mathematical calculations.

  20. Phonon transport across nano-scale curved thin films

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, Saad B.; Yilbas, Bekir S., E-mail: bsyilbas@kfupm.edu.sa

    2016-12-15

    Phonon transport across a curved thin silicon film due to a temperature disturbance at the film edges is examined. The equation of radiative transport is considered, incorporating the Boltzmann transport equation for the energy transfer. The effect of the thin film curvature on phonon transport characteristics is assessed. In the analysis, the film arc length along the film centerline is kept constant and the film arc angle is varied to obtain various film curvatures. An equivalent equilibrium temperature is introduced to assess the phonon intensity distribution inside the curved thin film. It is found that the equivalent equilibrium temperature decay along the arc length is sharper than that in the radial direction, which is more pronounced in the region close to the film inner radius. Reducing the film arc angle increases the film curvature, in which case the phonon intensity decay becomes sharp in the region close to the high-temperature edge. The equivalent equilibrium temperature demonstrates a non-symmetric distribution along the radial direction, which is more pronounced in the region near the high-temperature edge.

  1. Phonon transport across nano-scale curved thin films

    International Nuclear Information System (INIS)

    Mansoor, Saad B.; Yilbas, Bekir S.

    2016-01-01

    Phonon transport across a curved thin silicon film due to a temperature disturbance at the film edges is examined. The equation of radiative transport is considered, incorporating the Boltzmann transport equation for the energy transfer. The effect of the thin film curvature on phonon transport characteristics is assessed. In the analysis, the film arc length along the film centerline is kept constant and the film arc angle is varied to obtain various film curvatures. An equivalent equilibrium temperature is introduced to assess the phonon intensity distribution inside the curved thin film. It is found that the equivalent equilibrium temperature decay along the arc length is sharper than that in the radial direction, which is more pronounced in the region close to the film inner radius. Reducing the film arc angle increases the film curvature, in which case the phonon intensity decay becomes sharp in the region close to the high-temperature edge. The equivalent equilibrium temperature demonstrates a non-symmetric distribution along the radial direction, which is more pronounced in the region near the high-temperature edge.

  2. Gravitational leptogenesis, C, CP and strong equivalence

    International Nuclear Information System (INIS)

    McDonald, Jamie I.; Shore, Graham M.

    2015-01-01

    The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.

  3. Gravitational leptogenesis, C, CP and strong equivalence

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, Jamie I.; Shore, Graham M. [Department of Physics, Swansea University,Swansea, SA2 8PP (United Kingdom)

    2015-02-12

    The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.

  4. Experimental and statistical requirements for developing a well-defined K/sub IR/ curve. Final report

    International Nuclear Information System (INIS)

    Server, W.L.; Oldfield, W.; Wullaert, R.A.

    1977-05-01

    Further development of a statistically well-defined reference fracture toughness curve to verify and complement the K/sub IR/ curve presently specified in Appendix G, Section III of the ASME Code was accomplished by performing critical experiments in small-specimen fracture mechanics and improving techniques for statistical analysis of the data. Except for cleavage-initiated fracture, crack initiation was observed to occur prior to maximum load for all of the materials investigated. Initiation fracture toughness values (K/sub Jc/) based on R-curve heat-tinting studies were up to 50 percent less than the previously reported equivalent energy values (K*/sub d/). At upper shelf temperatures, the initiation fracture toughness (K/sub Jc/) generally increased with stress intensification rate. Both K/sub Jc/--Charpy V-notch and K/sub Ic/--specimen strength ratio correlations are promising methods for predicting thick-section behavior from small specimens. The previously developed tanh curve fitting procedure was improved to permit estimates of the variances and covariances of the regression coefficients to be computed. The distribution of the fracture toughness data was determined as a function of temperature. Instrumented precracked Charpy results were used to normalize the larger specimen fracture toughness data. The transformed large specimen fracture toughness data are used to generate statistically based lower-bound fracture toughness curves for either static or dynamic test results. A comparison of these lower-bound curves with the K/sub IR/ curve shows that the K/sub IR/ curve is more conservative over most of its range. 143 figures, 26 tables

  5. Atlas of stress-strain curves

    CERN Document Server

    2002-01-01

    The Atlas of Stress-Strain Curves, Second Edition is substantially bigger in page dimensions, number of pages, and total number of curves than the previous edition. It contains over 1,400 curves, almost three times as many as in the 1987 edition. The curves are normalized in appearance to aid making comparisons among materials. All diagrams include metric (SI) units, and many also include U.S. customary units. All curves are captioned in a consistent format with valuable information including (as available) standard designation, the primary source of the curve, mechanical properties (including hardening exponent and strength coefficient), condition of sample, strain rate, test temperature, and alloy composition. Curve types include monotonic and cyclic stress-strain, isochronous stress-strain, and tangent modulus. Curves are logically arranged and indexed for fast retrieval of information. The book also includes an introduction that provides background information on methods of stress-strain determination, on...

  6. Compact Hilbert Curve Index Algorithm Based on Gray Code

    Directory of Open Access Journals (Sweden)

    CAO Xuefeng

    2016-12-01

    The Hilbert curve has the best clustering among the various kinds of space-filling curves, and it has been used as an important tool in discrete global grid spatial index design. But there is much redundancy in the standard Hilbert curve index when the data set has large differences between dimensions. In this paper, the construction features of the Hilbert curve are analyzed on the basis of Gray code, and a compact Hilbert curve index algorithm is put forward, in which the redundancy problem is avoided while the clustering of the Hilbert curve is preserved. Finally, experimental results show that the compact Hilbert curve index outperforms the standard Hilbert index: their computational complexity is nearly equivalent, but tests on a real data set show that coding time and storage space decrease by 40%, and the speedup ratio for sorting is nearly 4.3.
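The Gray-code construction underlying Hilbert curve indexing rests on the binary-reflected Gray code, in which consecutive integers map to codewords differing in exactly one bit. A minimal self-contained sketch of the encode/decode pair (not the compact index algorithm of the paper itself):

```python
def to_gray(i):
    # Binary-reflected Gray code: consecutive values differ in one bit,
    # the property that Hilbert-curve index construction relies on.
    return i ^ (i >> 1)

def from_gray(g):
    # Invert by XOR-folding the higher bits down onto the lower ones.
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i
```

Along the Hilbert curve, each step changes only one coordinate of the traversed cell, which is exactly the one-bit-change property the Gray code provides within each recursion level.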

  7. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  8. Preparation of data relevant to ''Equivalent Uniform Burnup'' and ''Equivalent Initial Enrichment'' for burnup credit evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi; Okuno, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murazaki, Minoru [Tokyo Nuclear Service Inc., Tokyo (Japan)

    2001-11-01

    Based on the PWR spent fuel composition data measured at JAERI, two simplified methods, ''Equivalent Uniform Burnup'' and ''Equivalent Initial Enrichment'', have been introduced, and the relevant evaluation curves have been prepared for criticality safety evaluation of spent fuel storage pools and transport casks, taking the burnup of spent fuel into consideration. These simplified methods can be used to obtain an effective neutron multiplication factor for a spent fuel storage/transportation system by using the ORIGEN2.1 burnup code and the KENO-Va criticality code, without considering the axial burnup profile in spent fuel and various other factors introducing calculational errors. ''Equivalent Uniform Burnup'' is set up so that its criticality analysis is reactivity-equivalent to the detailed analysis, in which the experimentally obtained isotopic composition together with a typical axial burnup profile and various factors such as irradiation history are considered on the conservative side. On the other hand, ''Equivalent Initial Enrichment'' is set up so that its criticality analysis is reactivity-equivalent to the detailed analysis above when it is used with the so-called fresh fuel assumption. (author)

  9. Evaluation of treatment effects for high-performance dye-sensitized solar cells using equivalent circuit analysis

    International Nuclear Information System (INIS)

    Murayama, Masaki; Mori, Tatsuo

    2006-01-01

    Equivalent circuit analysis using a one-diode model was carried out as a simpler, more convenient method to evaluate the electric mechanism and the effect of treatments of a dye-sensitized solar cell (DSC). Cells treated using acetic acid or 4-t-butylpyridine were measured under irradiation (0.1 W/cm2, AM 1.5) to obtain current-voltage (I-V) curves. Cell performance and equivalent circuit parameters were calculated from the I-V curves. Evaluation based on residual factors was useful for better fitting of the equivalent circuit to the I-V curve. The diode factor value was often over two for high-performance DSCs. Acetic acid treatment was effective in increasing the short-circuit current by decreasing the series resistance of the cells. In contrast, 4-t-butylpyridine was effective in increasing the open-circuit voltage by increasing the cell shunt resistance. Previous explanations considered that acetic acid worked to decrease the internal resistance of the TiO2 layer and butylpyridine worked to lower the back-electron-transfer from the TiO2 to the electrolyte.

  10. Power curve report - with turbulence intensity normalization

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Wagner, Rozenn; Vesth, Allan

    ..., additional shear and turbulence intensity filters are applied on the measured data. Secondly, the method for normalization to a given reference turbulence intensity level (as described in Annex M of the draft of IEC 61400-12-1 Ed.2 [3]) is applied. The measurements have been performed using DTU...

  11. Mannheim Partner D-Curves in the Euclidean 3-space

    Directory of Open Access Journals (Sweden)

    Mustafa Kazaz

    2015-02-01

    In this paper, we consider the idea of Mannheim partner curves for curves lying on surfaces. By considering the Darboux frames of surface curves, we define Mannheim partner D-curves and give the characterizations for these curves. We also find the relations between geodesic curvatures, normal curvatures and geodesic torsions of these associated curves. Furthermore, we show that the definition and characterizations of Mannheim partner D-curves include those of Mannheim partner curves in some special cases.

  12. Multi-MW wind turbine power curve measurements using remote sensing instruments – the first Høvsøre campaign

    DEFF Research Database (Denmark)

    Wagner, Rozenn; Courtney, Michael

    Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the scatter in the power curve significantly. Two LiDARs and a SoDAR are used to measure the wind profile in front of a wind turbine. These profiles are used to calculate the equivalent wind speed. LiDARs are found to be more accurate than SoDARs and therefore more suitable for power performance measurement. This first measurement campaign shows that use of the equivalent wind speed at least results in a power curve with no more scatter than using the conventional method.

  13. Integrable motion of curves in self-consistent potentials: Relation to spin systems and soliton equations

    Energy Technology Data Exchange (ETDEWEB)

    Myrzakulov, R.; Mamyrbekova, G.K.; Nugmanova, G.N.; Yesmakhanova, K.R. [Eurasian International Center for Theoretical Physics and Department of General and Theoretical Physics, Eurasian National University, Astana 010008 (Kazakhstan); Lakshmanan, M., E-mail: lakshman@cnld.bdu.ac.in [Centre for Nonlinear Dynamics, School of Physics, Bharathidasan University, Tiruchirapalli 620 024 (India)

    2014-06-13

    Motion of curves and surfaces in R{sup 3} leads to nonlinear evolution equations which are often integrable. They are also intimately connected to the dynamics of spin chains in the continuum limit and integrable soliton systems through geometric and gauge symmetric connections/equivalence. Here we point out the fact that a more general situation, in which the curves evolve in the presence of additional self-consistent vector potentials, can lead to interesting generalized spin systems with self-consistent potentials or soliton equations with self-consistent potentials. We obtain the general form of the evolution equations of the underlying curves and report specific examples of generalized spin chains and soliton equations. These include the principal chiral model and various Myrzakulov spin equations in (1+1) dimensions and their geometrically equivalent generalized nonlinear Schrödinger (NLS) family of equations, including Hirota–Maxwell–Bloch equations, all in the presence of self-consistent potential fields. The associated gauge equivalent Lax pairs are also presented to confirm their integrability. - Highlights: • Geometry of continuum spin chain with self-consistent potentials explored. • Mapping on moving space curves in R{sup 3} in the presence of potential fields carried out. • Equivalent generalized nonlinear Schrödinger (NLS) family of equations identified. • Integrability of identified nonlinear systems proved by deducing appropriate Lax pairs.

  14. Problems associated with use of the logarithmic equivalent strain in high pressure torsion

    International Nuclear Information System (INIS)

    Jonas, J J; Aranas, C Jr

    2014-01-01

    The logarithmic 'equivalent' strain is frequently recommended for description of the experimental flow curves determined in high pressure torsion (HPT) tests. Some experimental results determined at -196 and 190 °C on a 2024 aluminum alloy are plotted using both the von Mises and logarithmic equivalent strains. Three types of problems associated with use of the latter are described. The first involves the lack of work conjugacy between the logarithmic and shear stress/shear strain curves, a topic that has been discussed earlier. The second concerns the problems associated with testing at constant logarithmic strain rate, a feature of particular importance when the material is rate sensitive. The third type of problem involves the 'history dependence' of this measure in that the incremental logarithmic strain depends on whether the prior strain accumulated in the sample is known or not. This is a difficulty that does not affect use of the von Mises equivalent strain. For these reasons, it is concluded that the qualifier 'equivalent' should not be used when the logarithmic strain is employed to describe HPT results
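For reference, the two strain measures compared above are commonly written as follows; these are the standard HPT expressions as usually quoted in the literature (the logarithmic form being the Hencky-based measure), not formulas taken from this particular paper:

```latex
% Shear strain in HPT at radius r, after N turns, for disc thickness h:
\gamma = \frac{2\pi N r}{h}
% von Mises equivalent strain (work-conjugate with the shear stress):
\varepsilon_{\mathrm{vM}} = \frac{\gamma}{\sqrt{3}}
% Logarithmic (Hencky-based) ``equivalent'' strain:
\varepsilon_{\log} = \frac{2}{\sqrt{3}}
  \ln\!\left(\sqrt{1+\frac{\gamma^{2}}{4}} + \frac{\gamma}{2}\right)
```

For large gamma the logarithmic measure grows only as ln(gamma) while the von Mises measure grows linearly, which is why flow curves plotted against the two measures diverge so strongly at the large strains reached in HPT.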

  15. Matching of equivalent field regions

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen; Rengarajan, S.B.

    2005-01-01

    In aperture problems, integral equations for equivalent currents are often found by enforcing matching of equivalent fields. The enforcement is made in the aperture surface region adjoining the two volumes on each side of the aperture. In the case of an aperture in a planar perfectly conducting screen, having the same homogeneous medium on both sides and an impressed current on one side, an alternative procedure is relevant. We make use of the fact that in the aperture the tangential component of the magnetic field due to the induced currents in the screen is zero. The use of such a procedure shows that equivalent currents can be found by a consideration of only one of the two volumes into which the aperture plane divides the space. Furthermore, from a consideration of an automatic matching at the aperture, additional information about tangential as well as normal field components is obtained.

  16. Irregular conformal block, spectral curve and flow equations

    International Nuclear Information System (INIS)

    Choi, Sang Kwan; Rim, Chaiho; Zhang, Hong

    2016-01-01

    Irregular conformal block is motivated by the Argyres-Douglas type of N=2 super conformal gauge theory. We investigate the classical/NS limit of irregular conformal block using the spectral curve on a Riemann surface with irregular punctures, which is equivalent to the loop equation of the irregular matrix model. The spectral curve is reduced to the second order (Virasoro symmetry, SU(2) for the gauge theory) and third order (W_3 symmetry, SU(3)) differential equations of a polynomial with finite degree. The conformal and W symmetry generate the flow equations in the spectral curve and determine the irregular conformal block, hence the partition function of the Argyres-Douglas theory à la the AGT conjecture.

  17. TORREFACTION OF CELLULOSE: VALIDITY AND LIMITATION OF THE TEMPERATURE/DURATION EQUIVALENCE

    OpenAIRE

    Lv , Pin; Almeida , Giana; Perré , Patrick

    2012-01-01

    During torrefaction of biomass, an equivalence between temperature and residence time is often reported, either in terms of the loss of mass or the alteration of properties. The present work proposes a rigorous investigation of this equivalence. Cellulose, as the main lignocellulosic biomass component, was treated under mild pyrolysis for 48 hours. Several T-D (temperature-duration) couples were selected from TGA curves to obtain mass losses of 11.6%, 25%, 50%, 74.4%, and 86.7%. The c...

  18. Validity of the normal fetal weight curve estimated by ultrasound for the diagnosis of neonatal weight

    Directory of Open Access Journals (Sweden)

    José Guilherme Cecatti

    2003-02-01

    PURPOSE: to compare the ultrasound estimation of fetal weight (EFW) with neonatal weight and to evaluate the performance of the normal EFW curve according to gestational age for the diagnosis of fetal/neonatal weight deviations, as well as associated factors. METHODS: 186 pregnant women attended from November 1998 to January 2000 participated in the study, with ultrasound evaluation up to 3 days before delivery, determination of the EFW and the amniotic fluid index, and delivery at the institution. The EFW was calculated and classified according to the curve of normal EFW values as: small for gestational age (SGA), adequate for gestational age (AGA), and large for gestational age (LGA). The same classification was applied to neonatal weight. The variability of the measurements and the degree of linear correlation between EFW and neonatal weight were calculated, as well as the sensitivity, specificity and predictive values of the normal EFW curve for the diagnosis of neonatal weight deviations. RESULTS: the difference between EFW and neonatal weight ranged from -540 to +594 g, with a mean of +47.1 g, and the two measurements showed a linear correlation coefficient of 0.94. The normal EFW curve had a sensitivity of 100% and a specificity of 90.5% in detecting SGA at birth, and of 94.4% and 92.8%, respectively, in detecting LGA, although the positive predictive values were low for both. CONCLUSIONS: the ultrasound estimate of fetal weight agreed with neonatal weight, overestimating it by only about 47 g, and the EFW curve performed well in the diagnostic screening of SGA and LGA newborns.

  19. Supply-cost curves for geographically distributed renewable-energy resources

    International Nuclear Information System (INIS)

    Izquierdo, Salvador; Dopazo, Cesar; Fueyo, Norberto

    2010-01-01

    The supply-cost curves of renewable-energy sources are an essential tool to synthesize and analyze large-scale energy-policy scenarios, both in the short and long terms. Here, we suggest and test a parametrization of such curves that allows their representation for modeling purposes with a minimal set of information. In essence, an economic potential is defined based on the mode of the marginal supply-cost curves; and, using this definition, a normalized log-normal distribution function is used to model these curves. The feasibility of this proposal is assessed with data from a GIS-based analysis of solar, wind and biomass technologies in Spain. The best agreement is achieved for solar energy.
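A supply-cost curve of the parametrized kind described above can be sketched as the cumulative resource available at or below a given marginal cost, modeled as the total economic potential scaled by a log-normal CDF. The function names, parameter values and units below are illustrative assumptions, not the paper's calibration:

```python
import math

def lognorm_cdf(x, mu, sigma):
    """CDF of a log-normal distribution; mu and sigma are the mean and
    standard deviation of log(x)."""
    if x <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def supply_at_cost(cost, potential, mu, sigma):
    """Cumulative exploitable resource (e.g. TWh/yr) available at or below
    a given marginal cost, as potential * lognormal CDF of the cost."""
    return potential * lognorm_cdf(cost, mu, sigma)

# Illustrative: 100 TWh/yr potential, median marginal cost exp(3) ~ 20 EUR/MWh.
available = supply_at_cost(25.0, 100.0, mu=3.0, sigma=0.5)
```

The appeal of this form is the minimal information it needs: one scale (the economic potential) and two shape parameters recoverable from the mode of the marginal supply-cost curve.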

  20. Denotational Aspects of Untyped Normalization by Evaluation

    DEFF Research Database (Denmark)

    Filinski, Andrzej; Rohde, Henning Korsholm

    2005-01-01

    We prove three correctness properties of the algorithm: soundness (the output term, if any, is in normal form and β-equivalent to the input term); identification (β-equivalent terms are mapped to the same result); and completeness (the function is defined for all terms that do have normal forms). We also show how the semantic construction enables a simple yet formal correctness proof for the normalization algorithm, expressed as a functional program in an ML-like, call-by-value language. Finally, we generalize the construction to produce an infinitary variant of normal forms, namely Böhm trees. We show that the three-part characterization of correctness...
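The paper expresses its algorithm in an ML-like, call-by-value language; purely as an illustration of the technique, here is a minimal untyped normalization-by-evaluation sketch in Python for closed lambda terms. The term representation and helper names are my own, not the authors' code:

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)
# Values: ("clo", python_function) | ("nvar", name) | ("napp", fun_val, arg_val)

def evaluate(term, env):
    """Evaluate a term to a semantic value; free variables must be in env."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, x, body = term
        # Represent object-language functions by host-language closures.
        return ("clo", lambda v: evaluate(body, {**env, x: v}))
    _, f, a = term
    fv = evaluate(f, env)
    if fv[0] == "clo":
        return fv[1](evaluate(a, env))
    return ("napp", fv, evaluate(a, env))  # stuck (neutral) application

def reify(value, fresh=0):
    """Read a semantic value back into a term in normal form."""
    tag = value[0]
    if tag == "clo":
        x = f"v{fresh}"  # fresh variable for the bound name
        return ("lam", x, reify(value[1](("nvar", x)), fresh + 1))
    if tag == "nvar":
        return ("var", value[1])
    _, f, a = value
    return ("app", reify(f, fresh), reify(a, fresh))

def normalize(term):
    return reify(evaluate(term, {}))
```

Because reification generates fresh names, β-equivalent inputs normalize to syntactically identical outputs, which is exactly the identification property proved in the paper.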

  1. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  2. Feynman propagator in curved space-time

    International Nuclear Information System (INIS)

    Candelas, P.; Raine, D.J.

    1977-01-01

    The Wick rotation is generalized in a covariant manner so as to apply to curved manifolds in a way that is independent of the analytic properties of the manifold. This enables us to show that various methods for defining a Feynman propagator to be found in the literature are equivalent where they are applicable. We are also able to discuss the relation between certain regularization methods that have been employed

  3. Experimental simulation of closed timelike curves.

    Science.gov (United States)

    Ringbauer, Martin; Broome, Matthew A; Myers, Casey R; White, Andrew G; Ralph, Timothy C

    2014-06-19

    Closed timelike curves are among the most controversial features of modern physics. As legitimate solutions to Einstein's field equations, they allow for time travel, which instinctively seems paradoxical. However, in the quantum regime these paradoxes can be resolved, leaving closed timelike curves consistent with relativity. The study of these systems therefore provides valuable insight into nonlinearities and the emergence of causal structures in quantum mechanics--essential for any formulation of a quantum theory of gravity. Here we experimentally simulate the nonlinear behaviour of a qubit interacting unitarily with an older version of itself, addressing some of the fascinating effects that arise in systems traversing a closed timelike curve. These include perfect discrimination of non-orthogonal states and, most intriguingly, the ability to distinguish nominally equivalent ways of preparing pure quantum states. Finally, we examine the dependence of these effects on the initial qubit state, the form of the unitary interaction and the influence of decoherence.

  4. Growth curves for Laron syndrome.

    OpenAIRE

    Laron, Z; Lilos, P; Klinger, B

    1993-01-01

    Growth curves for children with Laron syndrome were constructed on the basis of repeated measurements made throughout infancy, childhood, and puberty in 24 (10 boys, 14 girls) of the 41 patients with this syndrome investigated in our clinic. Growth retardation was already noted at birth, the birth length ranging from 42 to 46 cm in the 12/20 available measurements. The postnatal growth curves deviated sharply from the normal from infancy on. Both sexes showed no clear pubertal spurt. Girls co...

  5. Wind-Induced Fatigue Analysis of High-Rise Steel Structures Using Equivalent Structural Stress Method

    Directory of Open Access Journals (Sweden)

    Zhao Fang

    2017-01-01

    Full Text Available Welded beam-to-column connections of high-rise steel structures are susceptible to fatigue damage under wind loading. However, most fatigue assessments in civil engineering are based on nominal stress or hot spot stress theories, which depend on the meshing style and require selecting among a large number of stress-life (S-N) curves. To address this problem, the equivalent structural stress method, which is mesh-insensitive and capable of unifying different S-N curves into one, is applied here to the wind-induced fatigue assessment of a large-scale, complicated high-rise steel structure. A multi-scale finite element model is established and the corresponding wind loading is simulated. Fatigue life assessments using the equivalent structural stress method, the hot spot stress method and the nominal stress method are performed, the results are verified and compared, and the mesh insensitivity is confirmed. The results show that the lateral weld toe of the butt weld connecting the beam flange plate to the column is the location where fatigue damage is most likely to occur. The nominal stress method treats the fatigue assessment of welds more globally by averaging all the stress over the weld section, whereas the equivalent structural stress and hot spot methods capture local stress concentration more precisely.
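Stress-life fatigue assessments of the kind compared in this record typically combine a Basquin-type S-N curve with a Palmgren-Miner linear damage sum over the stress-range histogram produced by the wind-load time histories. A sketch with illustrative constants (C and m below are placeholders, not values from the paper):

```python
def cycles_to_failure(stress_range, C=1.0e12, m=3.0):
    """Basquin-type S-N curve: N = C / S^m (C, m are illustrative constants;
    real design codes tabulate them per weld detail category)."""
    return C / stress_range**m

def miner_damage(histogram, C=1.0e12, m=3.0):
    """Palmgren-Miner damage sum for a {stress_range: cycle_count} histogram.
    Failure is predicted when the sum reaches 1.0."""
    return sum(n / cycles_to_failure(s, C, m) for s, n in histogram.items())
```

The practical appeal of the equivalent structural stress method is that a single master S-N curve (one C, m pair) can serve many weld geometries, instead of one curve per detail category.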

  6. Electron fluence to dose equivalent conversion factors calculated with EGS3 for electrons and positrons with energies from 100 keV to 20 GeV

    International Nuclear Information System (INIS)

    Rogers, D.W.O.

    1983-01-01

    At NRC the general purpose Monte-Carlo electron-photon transport code EGS3 is being applied to a variety of radiation dosimetry problems. To test its accuracy at low energies, a detailed set of depth-dose curves for electrons and photons has been generated and compared to previous calculations. It was found that by changing the default step-size algorithm in EGS3, significant changes were obtained for incident electron beam cases, and that restricting the step size to a 4% energy loss was appropriate below incident electron beam energies of 10 MeV. With this change, the calculated depth-dose curves were found to be in reasonable agreement with other calculations right down to incident electron energies of 100 keV, although small (less than or equal to 10%) but persistent discrepancies with the NBS code ETRAN were obtained: EGS3 predicts a higher initial dose and a shorter range than ETRAN. These discrepancies are typical of a wide range of energies, as is the better agreement with the results of Nahum. Data are presented for the electron fluence to maximal dose equivalent in a 30 cm thick slab of ICRU 4-element tissue irradiated by broad parallel beams of electrons incident normal to the surface. On their own, these values only give an indication of the dose equivalent expected from a spectrum of electrons, since one needs to fold the spectrum with the fluence to maximal dose equivalent values. Calculations have also been done for incident positron beams; despite the large statistical uncertainties, the positron results are comparable, although their values are 5 to 10% lower in a band around 10 MeV
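Folding a spectrum with fluence-to-dose-equivalent conversion coefficients, as the abstract notes, is a discrete weighted sum over energy bins. A minimal sketch with hypothetical units and bin values (real coefficients would come from tables such as those computed in this work):

```python
def dose_equivalent(fluence_spectrum, conversion):
    """Fold a discrete fluence spectrum with fluence-to-dose-equivalent
    conversion coefficients: H = sum_i phi_i * h_i.

    fluence_spectrum: {energy: fluence in that bin}
    conversion:       {energy: dose equivalent per unit fluence}
    """
    return sum(phi * conversion[E] for E, phi in fluence_spectrum.items())
```

For a continuous spectrum the sum becomes the integral of phi(E) * h(E) over E, approximated here by binning.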

  7. A NURBS approximation of experimental stress-strain curves

    International Nuclear Information System (INIS)

    Fedorov, Timofey V.; Morrev, Pavel G.

    2016-01-01

    A compact universal representation of monotonic experimental stress-strain curves of metals and alloys is proposed. It is based on nonuniform rational Bezier splines (NURBS) of second order and may be used in a computer library of materials. Only six parameters per curve are needed; this is equivalent to specifying only three points in the stress-strain plane. NURBS functions of higher order prove to be superfluous. Explicit expressions for both the yield stress and the hardening modulus are given. Two types of curves are considered: over a finite interval of strain and over an infinite one. A broad class of metals and alloys of various chemical compositions, subjected to various types of preliminary thermo-mechanical working, is selected from a comprehensive data base in order to test the proposed methodology. The results demonstrate excellent correspondence to the experimental data. Keywords: work hardening, stress-strain curve, spline approximation, nonuniform rational B-spline, NURBS.
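A second-order rational Bezier segment of the kind used here is cheap to evaluate. In the conventional normalization w0 = w2 = 1, the free parameters are the three control points plus one middle weight; the sketch below is a generic evaluator, not the authors' specific parametrization:

```python
def rational_bezier2(t, p0, p1, p2, w=1.0):
    """Point on a second-order rational Bezier curve at t in [0, 1].

    p0, p1, p2: (strain, stress) control points; w: weight of the middle
    control point (end weights normalized to 1). With w = 1 this reduces
    to an ordinary quadratic Bezier; w != 1 yields conic-section arcs.
    """
    b0, b1, b2 = (1 - t)**2, 2 * t * (1 - t), t**2
    denom = b0 + w * b1 + b2
    x = (b0 * p0[0] + w * b1 * p1[0] + b2 * p2[0]) / denom
    y = (b0 * p0[1] + w * b1 * p1[1] + b2 * p2[1]) / denom
    return x, y
```

The endpoints are interpolated exactly (t = 0 gives p0, t = 1 gives p2), which is why specifying three points in the stress-strain plane pins down the curve shape.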

  8. SU-F-T-181: Proton Therapy Tissue-Equivalence of 3D Printed Materials

    International Nuclear Information System (INIS)

    Taylor, P; Craft, D; Followill, D; Howell, R

    2016-01-01

    Purpose: This work investigated the proton tissue-equivalence of various 3D printed materials. Methods: Three 3D printers were used to create 5 cm cubic phantoms made of different plastics with varying percentages of infill. White resin, polylactic acid (PLA), and NinjaFlex plastics were used. The infills ranged from 15% to 100%. Each phantom was scanned with a CT scanner to obtain the HU value. The relative linear stopping power (RLSP) was then determined using a multi-layer ion chamber in a 200 MeV proton beam. The RLSP was measured both parallel and perpendicular to the print direction for each material. Results: The HU values of the materials ranged from lung-equivalent (−820 HU, σ = 160) when using a low infill, to soft-tissue-equivalent (159 HU, σ = 12). The RLSP of the materials depended on the orientation of the beam relative to the print direction. When the proton beam was parallel to the print direction, the RLSP was generally higher than the RLSP in the perpendicular orientation, by up to 45%. This difference was smaller (less than 6%) for the materials with 100% infill. For low infill cubes irradiated parallel to the print direction, the SOBP curve showed extreme degradation of the beam in the distal region. The materials with 15–25% infill had wide-ranging agreement with a clinical HU-RLSP conversion curve, with some measurements falling within 1% of the curve and others deviating up to 45%. The materials with 100% infill all fell within 7% of the curve. Conclusion: While some materials tested fall within 1% of a clinical HU-RLSP curve, caution should be taken when using 3D printed materials with proton therapy, as the orientation of the beam relative to the print direction can result in a large change in RLSP. Further investigation is needed to measure how the infill pattern affects the material RLSP. This work was supported by PHS grant CA180803.

  9. SU-F-T-181: Proton Therapy Tissue-Equivalence of 3D Printed Materials

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P; Craft, D; Followill, D; Howell, R [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: This work investigated the proton tissue-equivalence of various 3D printed materials. Methods: Three 3D printers were used to create 5 cm cubic phantoms made of different plastics with varying percentages of infill. White resin, polylactic acid (PLA), and NinjaFlex plastics were used. The infills ranged from 15% to 100%. Each phantom was scanned with a CT scanner to obtain the HU value. The relative linear stopping power (RLSP) was then determined using a multi-layer ion chamber in a 200 MeV proton beam. The RLSP was measured both parallel and perpendicular to the print direction for each material. Results: The HU values of the materials ranged from lung-equivalent (−820 HU, σ = 160) when using a low infill, to soft-tissue-equivalent (159 HU, σ = 12). The RLSP of the materials depended on the orientation of the beam relative to the print direction. When the proton beam was parallel to the print direction, the RLSP was generally higher than the RLSP in the perpendicular orientation, by up to 45%. This difference was smaller (less than 6%) for the materials with 100% infill. For low infill cubes irradiated parallel to the print direction, the SOBP curve showed extreme degradation of the beam in the distal region. The materials with 15–25% infill had wide-ranging agreement with a clinical HU-RLSP conversion curve, with some measurements falling within 1% of the curve and others deviating up to 45%. The materials with 100% infill all fell within 7% of the curve. Conclusion: While some materials tested fall within 1% of a clinical HU-RLSP curve, caution should be taken when using 3D printed materials with proton therapy, as the orientation of the beam relative to the print direction can result in a large change in RLSP. Further investigation is needed to measure how the infill pattern affects the material RLSP. This work was supported by PHS grant CA180803.

  10. Determination of electron depth-dose curves for water, ICRU tissue, and PMMA and their application to radiation protection dosimetry

    International Nuclear Information System (INIS)

    Grosswendt, B.

    1994-01-01

    For monoenergetic electrons in the energy range between 60 keV and 10 MeV, normally incident on water, 4-element ICRU tissue and PMMA phantoms, depth-dose curves have been calculated using the Monte Carlo method. The phantoms' shape was that of a rectangular solid with a square front face of 30 cm x 30 cm and a thickness of 15 cm; it corresponds to that recommended by the ICRU for use in the procedure of calibrating radiation protection dosemeters. The depth-dose curves have been used to determine practical ranges, half-value depths, electron fluence to maximum absorbed dose conversion factors, and conversion factors between electron fluence and absorbed dose at depths d corresponding to 0.007 g·cm⁻², 0.3 g·cm⁻², and 1.0 g·cm⁻². The latter data can be used as fluence to dose equivalent conversion factors for extended parallel electron beams. (Author)

  11. Using frequency equivalency in stability calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gruzdev, I.A.; Temirbulatov, R.A.; Tereshko, L.A.

    1981-01-01

    A methodology for calculating oscillatory instability that uses frequency equivalency is employed in carrying out the following procedures: dividing an electric power system into subsystems; determining the adjustments to the automatic excitation control in each subsystem; simplifying the mathematical description of the separate subsystems by using frequency equivalency; and gradually re-tuning the automatic excitation control in the separate subsystems to account for neighboring subsystems by using their equivalent frequency characteristics. The methodology is to be used with a computer program to determine the gains in the stabilization channels of the automatic excitation control unit such that static stability over the entire aggregate of normal and post-breakdown conditions, and acceptable damping of transient processes, are provided. The possibility of reducing the equation series to apply to chosen regions of the existing range of frequencies is demonstrated. The use of the methodology is illustrated in a sample study on stability in a Siberian unified power system.

  12. 51Cr - erythrocyte survival curves

    International Nuclear Information System (INIS)

    Paiva Costa, J. de.

    1982-07-01

    Sixteen subjects were studied: fifteen patients in a hemolytic state, and one normal individual as a control. The aim was to establish better techniques for the analysis of erythrocyte survival curves, according to the recommendations of the International Committee of Hematology. Radioactive chromium (51Cr) was used as the tracer. A review of the international literature on aspects relevant to this work was first carried out, making it possible to establish comparisons and to clarify phenomena observed in our investigation. Several parameters bearing on both the exponential and the linear curves were considered in this study. The analysis of the survival curves of the erythrocytes in the studied group revealed that the elution factor did not give a quantitatively homogeneous answer for all subjects, although the results of the analysis of these curves were established through programs run on an electronic calculator. (Author) [pt

  13. Direct Extraction of InP/GaAsSb/InP DHBT Equivalent-Circuit Elements From S-Parameters Measured at Cut-Off and Normal Bias Conditions

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Leblanc, Rémy; Poulain, Julien

    2016-01-01

    A unique direct parameter extraction method for the small-signal equivalent-circuit model of InP/GaAsSb/InP double heterojunction bipolar transistors (DHBTs) is presented. $S$-parameters measured at cut-off bias are used, at first, to extract the distribution factor $X_{0}$ for the base-collector capacitance at zero collector current and the collector-to-emitter overlap capacitance $C_{ceo}$ present in InP DHBT devices. Low-frequency $S$-parameters measured at normal bias conditions then allow the extraction of the external access resistances $R_{bx}$, $R_{e}$, and $R_{cx}$ as well as the intrinsic...

  14. Observer-dependent quantum vacua in curved space. II

    International Nuclear Information System (INIS)

    Castagnino, M.A.; Sztrajman, J.B.

    1989-01-01

    An observer-dependent Hamiltonian is introduced in order to describe massless spin-1 particles in curved space-times. The vacuum state is defined by means of Hamiltonian diagonalization and minimization, which turn out to be equivalent criteria. This method works in an arbitrary geometry, although a condition on the fluid of observers is required. Computations give the vacua commonly accepted in the literature

  15. Comparable attenuation of sympathetic nervous system activity in obese subjects with normal glucose tolerance, impaired glucose tolerance and treatment naïve type 2 diabetes following equivalent weight loss

    Directory of Open Access Journals (Sweden)

    Nora E. Straznicky

    2016-11-01

    Full Text Available Background and Purpose: Elevated sympathetic nervous system (SNS) activity is a characteristic of obesity and type 2 diabetes (T2D) that contributes to target organ damage and cardiovascular risk. In this study we examined whether baseline metabolic status influences the degree of sympathoinhibition attained following equivalent dietary weight loss. Methods: Un-medicated obese individuals categorized as normal glucose tolerant (NGT, n=15), impaired glucose tolerant (IGT, n=24) and newly-diagnosed T2D (n=15) consumed a hypocaloric diet (29% fat, 23% protein, 45% carbohydrate) for 4 months. The three groups were matched for baseline age (56 ± 1 years), body mass index (BMI, 32.9 ± 0.7 kg/m2) and gender. Clinical measurements included whole-body norepinephrine kinetics, muscle sympathetic nerve activity (MSNA, by microneurography), spontaneous cardiac baroreflex sensitivity (BRS) and an oral glucose tolerance test. Results: Weight loss averaged -7.5 ± 0.8, -8.1 ± 0.5 and -8.0 ± 0.9 % of body weight in the NGT, IGT and T2D groups, respectively. T2D subjects had significantly greater reductions in fasting glucose, 2-h glucose and glucose area under the curve (AUC0-120) compared to NGT and IGT (group effect, P<0.001). The insulinogenic index decreased in the IGT and NGT groups and increased in T2D (group x time, P=0.04). The magnitude of reduction in MSNA (-7 ± 3, -8 ± 4, -15 ± 4 bursts/100hb, respectively) and whole-body norepinephrine spillover rate (-28 ± 8, -18 ± 6 and -25 ± 7 %, respectively; time effect both P<0.001) did not differ between groups. After adjustment for age and change in body weight, ∆ insulin AUC0-120 was independently associated with the reduction in arterial norepinephrine concentration, whilst ∆ LDL-cholesterol and improvement in BRS were independently associated with the decrease in MSNA. Conclusions: Equivalent weight loss through a hypocaloric diet is accompanied by similar sympathoinhibition in matched obese subjects with different baseline glucose tolerance
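The glucose AUC(0-120) used above is conventionally computed with the trapezoidal rule over the OGTT sampling times. A minimal sketch (sampling times and values below are made up for illustration):

```python
def auc_trapezoid(times, values):
    """Area under the curve by the trapezoidal rule, e.g. glucose AUC(0-120)
    from OGTT samples at t = 0, 30, 60, 90, 120 min."""
    return sum((t1 - t0) * (v0 + v1) / 2.0
               for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:]))
```

The same helper applies to the incremental insulin AUC(0-120) used in the regression analysis, after subtracting the baseline value from each sample.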

  16. Relativistic electron-beam transport in curved channels

    International Nuclear Information System (INIS)

    Vittitoe, C.N.; Morel, J.E.; Wright, T.P.

    1982-01-01

    Collisionless single particle trajectories are modeled for a single plasma channel having one section curved in a circular arc. The magnetic field is developed by superposition of straight and curved channel segments. The plasma density gives charge and beam-current neutralization. High transport efficiencies are found for turning a relativistic electron beam 90° under reasonable conditions of plasma current, beam energy, arc radius, channel radius, and injection distributions in velocity and in position at the channel entrance. Channel exit distributions in velocity and position are found consistent with those for a straight plasma channel of equivalent length. Such transport problems are important in any charged particle-beam application constrained by large diode-to-target distance or by requirements of maximum power deposition in a confined area

  17. Equivalence principle and quantum mechanics: quantum simulation with entangled photons.

    Science.gov (United States)

    Longhi, S

    2018-01-15

    Einstein's equivalence principle (EP) states the complete physical equivalence of a gravitational field and corresponding inertial field in an accelerated reference frame. However, to what extent the EP remains valid in non-relativistic quantum mechanics is a controversial issue. To avoid violation of the EP, Bargmann's superselection rule forbids a coherent superposition of states with different masses. Here we suggest a quantum simulation of non-relativistic Schrödinger particle dynamics in non-inertial reference frames, which is based on the propagation of polarization-entangled photon pairs in curved and birefringent optical waveguides and Hong-Ou-Mandel quantum interference measurement. The photonic simulator can emulate superposition of mass states, which would lead to violation of the EP.

  18. Growth curves for Laron syndrome.

    Science.gov (United States)

    Laron, Z; Lilos, P; Klinger, B

    1993-01-01

    Growth curves for children with Laron syndrome were constructed on the basis of repeated measurements made throughout infancy, childhood, and puberty in 24 (10 boys, 14 girls) of the 41 patients with this syndrome investigated in our clinic. Growth retardation was already noted at birth, the birth length ranging from 42 to 46 cm in the 12/20 available measurements. The postnatal growth curves deviated sharply from the normal from infancy on. Both sexes showed no clear pubertal spurt. Girls completed their growth between the age of 16-19 years to a final mean (SD) height of 119 (8.5) cm whereas the boys continued growing beyond the age of 20 years, achieving a final height of 124 (8.5) cm. At all ages the upper to lower body segment ratio was more than 2 SD above the normal mean. These growth curves constitute a model not only for primary, hereditary insulin-like growth factor-I (IGF-I) deficiency (Laron syndrome) but also for untreated secondary IGF-I deficiencies such as growth hormone gene deletion and idiopathic congenital isolated growth hormone deficiency. They should also be useful in the follow up of children with Laron syndrome treated with biosynthetic recombinant IGF-I. PMID:8333769

  19. An Equivalent Circuit of Longitudinal Vibration for a Piezoelectric Structure with Losses.

    Science.gov (United States)

    Yuan, Tao; Li, Chaodong; Fan, Pingqing

    2018-03-22

    Equivalent circuits of piezoelectric structures such as bimorphs and unimorphs conventionally focus on the bending vibration modes. However, the longitudinal vibration modes are rarely considered even though they also play a remarkable role in piezoelectric devices. Losses, especially elastic loss in the metal substrate, are also generally neglected, which leads to discrepancies compared with experiments. In this paper, a novel equivalent circuit with four kinds of losses is proposed for a beamlike piezoelectric structure under the longitudinal vibration mode. This structure consists of a slender beam as the metal substrate, and a piezoelectric patch which covers a partial length of the beam. In this approach, first, complex numbers are used to deal with four kinds of losses: elastic loss in the metal substrate, and the piezoelectric, dielectric, and elastic losses in the piezoelectric patch. Next in this approach, based on Mason's model, a new equivalent circuit is developed. Using MATLAB, impedance curves of this structure are simulated by the equivalent circuit method. Experiments are conducted and good agreements are revealed between experiments and equivalent circuit results. It is indicated that the introduction of four losses in an equivalent circuit can increase the result accuracy considerably.

  20. Birth weight curves tailored to maternal world region.

    Science.gov (United States)

    Ray, Joel G; Sgro, Michael; Mamdani, Muhammad M; Glazier, Richard H; Bocking, Alan; Hilliard, Robert; Urquia, Marcelo L

    2012-02-01

    Newborns of certain immigrant mothers are smaller at birth than those of domestically born mothers. Contemporary, population-derived percentile curves for these newborns are lacking, as are estimates of their risk of being misclassified as too small or too large using conventional rather than tailored birth weight curves. We completed a population-based study of 766 688 singleton live births in Ontario from 2002 to 2007. Smoothed birth weight percentile curves were generated for males and females, categorized by maternal world region of birth: Canada (63.5%), Europe/Western nations (7.6%), Africa/Caribbean (4.9%), Middle East/North Africa (3.4%), Latin America (3.4%), East Asia/Pacific (8.1%), and South Asia (9.2%). We determined the likelihood of misclassifying an infant as small for gestational age (≤ 10th percentile for weight) or as large for gestational age (≥ 90th percentile for weight) on a Canadian-born maternal curve versus one specific to maternal world region of origin. Significantly lower birth weights were seen at gestation-specific 10th, 50th, and 90th percentiles among term infants born to mothers from each world region, with the exception of Europe/Western nations, compared with those for infants of Canadian-born mothers. For example, for South Asian babies born at 40 weeks' gestation, the absolute difference at the 10th percentile was 198 g (95% CI 183 to 212) for males and 170 g (95% CI 161 to 179) for females. Controlling for maternal age and parity, South Asian males had an odds ratio of 2.60 (95% CI 2.53 to 2.68) of being misclassified as small for gestational age, equivalent to approximately 116 in 1000 newborns; for South Asian females the OR was 2.41 (95% CI 2.34 to 2.48), equivalent to approximately 106 per 1000 newborns. Large for gestational age would be missed in approximately 61 per 1000 male and 57 per 1000 female South Asian newborns if conventional rather than ethnicity-specific birth weight curves were used. Birth weight curves

  1. Computational tools for the construction of calibration curves for use in dose calculations in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Oliveira, Alex C.H.; Vieira, Jose W.; Escola Politecnica de Pernambuco , Recife, PE

    2011-01-01

    The realization of tissue inhomogeneity corrections in image-based treatment planning improves the accuracy of radiation dose calculations for patients undergoing external-beam radiotherapy. Before the tissue inhomogeneity correction can be applied, the relationship between the computed tomography (CT) numbers and density must be established. This relationship is typically established by a calibration curve empirically obtained from CT images of a phantom that has several inserts of tissue-equivalent materials, covering a wide range of densities. This calibration curve is scanner-dependent and allows the conversion of CT numbers into densities for use in dose calculations. This paper describes the implementation of the computational tools necessary to construct calibration curves. These tools are used for reading and displaying CT images in DICOM format, determining the mean CT numbers (and their standard deviations) of each tissue-equivalent material, and constructing calibration curves by fits with bilinear equations. All these tools have been implemented in Microsoft Visual Studio 2010 in the C# programming language. (author)
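A bilinear CT-number-to-density calibration of the kind described is a two-slope piecewise-linear map with a single breakpoint. The sketch below is illustrative only: the coefficients are placeholders, since the real curve is scanner-dependent and must be fitted to the phantom measurements.

```python
def hu_to_density(hu, breakpoint=0.0, slope_low=0.001, slope_high=0.0006,
                  rho_water=1.0):
    """Bilinear CT-number-to-density calibration (illustrative coefficients).

    One slope covers the lung/soft-tissue region below the breakpoint,
    another the bone region above it; the two segments meet at water
    (0 HU -> ~1.0 g/cm^3) when breakpoint = 0.
    """
    if hu <= breakpoint:
        return rho_water + slope_low * hu
    return rho_water + slope_high * hu
```

In practice the two slopes, the breakpoint, and the intercept are obtained by least-squares fits to the mean CT numbers of the tissue-equivalent inserts, exactly as the record describes.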

  2. Normalization of flow-mediated dilation to shear stress area under the curve eliminates the impact of variable hyperemic stimulus.

    Science.gov (United States)

    Padilla, Jaume; Johnson, Blair D; Newcomer, Sean C; Wilhite, Daniel P; Mickleborough, Timothy D; Fly, Alyce D; Mather, Kieren J; Wallace, Janet P

    2008-09-04

    Normalization of brachial artery flow-mediated dilation (FMD) to individual shear stress area under the curve (peak FMD:SSAUC ratio) has recently been proposed as an approach to control for the large inter-subject variability in reactive hyperemia-induced shear stress; however, the adoption of this approach among researchers has been slow. The present study was designed to further examine the efficacy of FMD normalization to shear stress in reducing measurement variability. Five different magnitudes of reactive hyperemia-induced shear stress were applied to 20 healthy, physically active young adults (25.3 +/- 0.6 yrs; 10 men, 10 women) by manipulating forearm cuff occlusion duration: 1, 2, 3, 4, and 5 min, in a randomized order. A venous blood draw was performed for determination of baseline whole blood viscosity and hematocrit. The magnitude of occlusion-induced forearm ischemia was quantified by dual-wavelength near-infrared spectrometry (NIRS). Brachial artery diameters and velocities were obtained via high-resolution ultrasound. The SSAUC was individually calculated for the duration of time-to-peak dilation. One-way repeated measures ANOVA demonstrated distinct magnitudes of occlusion-induced ischemia (volume and peak), hyperemic shear stress, and peak FMD responses (all significant), whereas the peak FMD:SSAUC ratio did not differ across trials (p = 0.785), supporting the FMD:SSAUC ratio as an index of endothelial function.
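The peak FMD:SSAUC ratio combines a wall shear stress estimate (here a Poiseuille-type approximation from viscosity, velocity, and diameter) with a trapezoidal area under the shear stress curve up to time-to-peak dilation. Function names and units below are assumptions, not the study's code:

```python
def shear_stress(viscosity, velocity, diameter):
    """Poiseuille estimate of wall shear stress: tau = 8 * eta * V / D."""
    return 8.0 * viscosity * velocity / diameter

def normalized_fmd(peak_fmd_percent, times, stresses):
    """Peak FMD divided by the shear stress AUC up to time-to-peak dilation,
    computed with the trapezoidal rule (units depend on the inputs)."""
    ss_auc = sum((t1 - t0) * (s0 + s1) / 2.0
                 for t0, t1, s0, s1 in zip(times, times[1:], stresses, stresses[1:]))
    return peak_fmd_percent / ss_auc
```

The point of the normalization, as the record argues, is that two subjects with the same endothelial responsiveness but different hyperemic stimuli end up with comparable ratios.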

  3. Normalization of flow-mediated dilation to shear stress area under the curve eliminates the impact of variable hyperemic stimulus

    Directory of Open Access Journals (Sweden)

    Mickleborough Timothy D

    2008-09-01

    Full Text Available Abstract Background Normalization of brachial artery flow-mediated dilation (FMD) to individual shear stress area under the curve (peak FMD:SSAUC ratio) has recently been proposed as an approach to control for the large inter-subject variability in reactive hyperemia-induced shear stress; however, the adoption of this approach among researchers has been slow. The present study was designed to further examine the efficacy of FMD normalization to shear stress in reducing measurement variability. Methods Five different magnitudes of reactive hyperemia-induced shear stress were applied to 20 healthy, physically active young adults (25.3 ± 0.6 yrs; 10 men, 10 women) by manipulating forearm cuff occlusion duration: 1, 2, 3, 4, and 5 min, in a randomized order. A venous blood draw was performed for determination of baseline whole blood viscosity and hematocrit. The magnitude of occlusion-induced forearm ischemia was quantified by dual-wavelength near-infrared spectrometry (NIRS). Brachial artery diameters and velocities were obtained via high-resolution ultrasound. The SSAUC was individually calculated for the duration of time-to-peak dilation. Results One-way repeated measures ANOVA demonstrated distinct magnitudes of occlusion-induced ischemia (volume and peak), hyperemic shear stress, and peak FMD responses (all significant), whereas the peak FMD:SSAUC ratio did not differ across trials (p = 0.785). Conclusion Our data confirm that normalization of FMD to SSAUC eliminates the influence of variable shear stress and solidifies the utility of the FMD:SSAUC ratio as an index of endothelial function.

  4. An investigation on vulnerability assessment of steel structures with thin steel shear wall through development of fragility curves

    OpenAIRE

    Mohsen Gerami; Saeed Ghaffari; Amir Mahdi Heidari Tafreshi

    2017-01-01

    Fragility curves play an important role in damage assessment of buildings. The probability of damage induced in a structure by seismic events can be investigated upon generation of the aforementioned curves. In the current research, 360 time-history analyses were carried out on structures of 3, 10 and 20 stories, and fragility curves were subsequently developed. The curves are developed based on two indices: inter-story drift and equivalent strip axial strain of the shear wall. T...
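Fragility curves of the kind developed in this record are conventionally modeled as lognormal CDFs of an intensity measure (as in the note on fragility-curve families above). A minimal sketch, with a hypothetical median capacity `theta` and dispersion `beta`, not values from the study:

```python
import math

# Lognormal fragility curve sketch: P(damage | IM) = Phi(ln(im/theta)/beta).
# theta (median capacity, in g) and beta (dispersion) are hypothetical values.

def fragility(im, theta, beta):
    """Probability of reaching the damage state given intensity measure im."""
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

theta, beta = 0.45, 0.5
p_at_median = fragility(0.45, theta, beta)   # 0.5 exactly at the median
p_low = fragility(0.10, theta, beta)
p_high = fragility(1.00, theta, beta)
```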

  5. Synthesis of Titanium Dioxide Nanoparticles Using a Double-Slit Curved Wall-Jet Burner

    KAUST Repository

    Ismail, Mohamed

    2016-05-04

    A novel double-slit curved wall-jet (DS-CWJ) burner was proposed and utilized for flame synthesis. The burner comprised double curved wall-jet nozzles with coaxial slits; the inner slit delivered the titanium tetraisopropoxide (TTIP) precursor while the outer one supplied a premixed fuel/air mixture of ethylene (C2H4) or propane (C3H8). This configuration enabled rapid mixing between the precursor and reactants along the curved surface and inside the recirculation zone of the burner. Growth of titanium dioxide (TiO2) nanoparticles and their phases was investigated for varying equivalence ratio and Reynolds number. The flow field and flame structure were measured using particle image velocimetry (PIV) and OH planar laser-induced fluorescence (PLIF) techniques, respectively. The nanoparticles were characterized using high-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD), and nitrogen-adsorption Brunauer–Emmett–Teller (BET) analysis for surface area. The flow field consisted of a wall-jet region leading to a recirculation zone, an interaction-jet region, followed by a merged-jet region. The DS-CWJ burner exhibited appreciable mixing between the precursor and combustion gases near the nozzle regions, with a slight increase in the axial velocity due to the precursor injection. The precursor supply had a negligible effect on the flame structure. The burner produced nanoparticles of reasonably uniform size (13–18 nm) with a high BET surface area (>100 m2/g). The phase of the TiO2 nanoparticles depended mainly on the equivalence ratio and fuel type, which affect the flame height, heat release rate, and high-temperature residence time of the precursor vapor. For ethylene flames, the anatase content increased with the equivalence ratio, whereas it decreased in the case of propane flames. The synthesized TiO2 nanoparticles exhibited high crystallinity and the anatase phase was dominant at high equivalence

  6. Revealing the equivalence of two clonal survival models by principal component analysis

    International Nuclear Information System (INIS)

    Lachet, Bernard; Dufour, Jacques

    1976-01-01

    The principal component analysis of 21 chlorella cell survival curves, adjusted by one-hit and two-hit target models, led to quite similar projections on the principal plane: the homologous parameters of these models are linearly correlated; the reason for the statistical equivalence of these two models, in the present state of experimental inaccuracy, is revealed [fr]
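For reference, the one-hit and multi-target ("two-hit") survival models being compared are usually written S(D) = exp(-D/D0) and S(D) = 1 - (1 - exp(-D/D0))^n. A sketch of both under illustrative parameters (the `d0` values are invented, not fitted to the chlorella data):

```python
import math

# Standard target-theory forms assumed for the two models compared above.
# The d0 values are illustrative, not parameters fitted to the chlorella data.

def one_hit(dose, d0):
    """Single-hit, single-target survival: S = exp(-D/D0)."""
    return math.exp(-dose / d0)

def multi_target(dose, d0, n=2):
    """Multi-target ("two-hit" here, n = 2) survival: S = 1-(1-exp(-D/D0))^n."""
    return 1.0 - (1.0 - math.exp(-dose / d0)) ** n

doses = [0.0, 1.0, 2.0, 4.0]
s1 = [one_hit(d, d0=2.0) for d in doses]
s2 = [multi_target(d, d0=1.4) for d in doses]
```

With parameters chosen this way the two curves stay close over a moderate dose range, which is the kind of near-degeneracy the PCA in the abstract quantifies.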

  7. Estimation of Curve Tracing Time in Supercapacitor based PV Characterization

    Science.gov (United States)

    Basu Pal, Sudipta; Das Bhattacharya, Konika; Mukherjee, Dipankar; Paul, Debkalyan

    2017-08-01

    Smooth and noise-free characterisation of photovoltaic (PV) generators has been revisited with renewed interest in view of large-size PV arrays making inroads into the urban sector of major developing countries. Such practice has recently been hampered by the lack of a suitable data acquisition system and of a supporting theoretical analysis to justify the accuracy of curve tracing. However, the use of a selected bank of supercapacitors can mitigate the said problems to a large extent. Assuming a piecewise-linear analysis of the V-I characteristics of a PV generator, an accurate analysis of curve-plotting time has been possible. The analysis has been extended to consider the effect of the equivalent series resistance of the supercapacitor, leading to increased accuracy (90-95%) of curve-plotting times.
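The piecewise-linear analysis alluded to above reduces, on each linear segment of the V-I curve, to the capacitor charging relation t = C·ΔV/I. A back-of-envelope sketch with invented segment currents and voltage spans; the paper's equivalent-series-resistance correction is omitted:

```python
# Piecewise-linear charge-time sketch: on a segment where the PV source
# delivers a roughly constant current I, a capacitor C needs t = C*dV/I to
# sweep a voltage span dV. C and the (dV, I) segments are invented numbers.

def segment_time(c_farads, dv_volts, i_amps):
    """Time to sweep dv_volts at constant current i_amps into c_farads."""
    return c_farads * dv_volts / i_amps

C = 10.0                                          # F, supercapacitor bank
segments = [(5.0, 8.0), (5.0, 6.0), (5.0, 2.0)]   # (dV in V, I in A)
t_total = sum(segment_time(C, dv, i) for dv, i in segments)
```

The last segment, near open-circuit voltage where the PV current collapses, dominates the total tracing time.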

  8. An equivalent ground thermal test method for single-phase fluid loop space radiator

    Directory of Open Access Journals (Sweden)

    Xianwen Ning

    2015-02-01

    Full Text Available Thermal vacuum testing is widely used for ground validation of spacecraft thermal control systems; conduction and convection, however, can be fully simulated in a normal ground-pressure environment. With the employment of pumped-fluid-loop thermal control technology on spacecraft, conduction and convection become the main heat-transfer behavior between the radiator and the cabin interior. As long as the heat-transfer behavior between the radiator and outer space can be equivalently simulated at normal pressure, the thermal vacuum test can be substituted by a normal-pressure ground thermal test. In this paper, an equivalent normal-pressure thermal test method for a spacecraft single-phase fluid loop radiator is proposed. The heat radiation between the radiator and outer space is equivalently simulated by a combination of a group of refrigerators and a thermoelectric cooler (TEC) array. By adjusting the heat rejection of each device, the relationship between heat flux and surface temperature of the radiator can be maintained. To verify this method, a validating system has been built and experiments have been carried out. The results indicate that the proposed equivalent ground thermal test method simulates the heat-rejection performance of the radiator correctly, and the temperature error between the in-orbit theoretical value and the experimental result is less than 0.5 °C, except during the equipment startup period. This provides a potential method for thermal testing of space systems, especially extra-large spacecraft that employ a single-phase fluid loop radiator as the thermal control approach.

  9. Influence of thermoluminescence trapping parameter from abundant quartz powder on equivalent dose

    International Nuclear Information System (INIS)

    Zhao Qiuyue; Wei Mingjian; Song Bo; Pan Baolin; Zhou Rui

    2014-01-01

    Glow curves of abundant quartz powder were obtained with the RGD-3B thermoluminescence (TL) reader. TL peaks at 448, 551, 654 and 756 K were identified at a heating rate of 5 K/s. The activation energy, frequency factor and lifetime of trapped charge at ambient temperature were evaluated for the four peaks by the various-heating-rates method. Within a certain range of activation energy, the equivalent dose increases exponentially with the activation energy. The equivalent dose increases from 54 Gy to 485 Gy over the temperature range 548 K to 608 K, and it fluctuates around 531 Gy from 608 K to 748 K. (authors)
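The various-heating-rates method is commonly implemented as a Kissinger-type plot: for a first-order TL peak, ln(β/Tm²) is linear in 1/Tm with slope -E/k. The sketch below generates synthetic peak temperatures from an assumed activation energy and frequency factor, then recovers E from a two-point fit (all values are illustrative, not the quartz data above):

```python
import math

# Kissinger-type sketch of the various-heating-rates method. The frequency
# factor s and activation energy E_true are assumed values; peak temperatures
# are synthesized from the first-order peak condition and E is recovered.

K_BOLTZ = 8.617e-5  # eV/K

def peak_temperature(energy_ev, beta, s=1e12):
    """Solve beta*E/(k*Tm^2) = s*exp(-E/(k*Tm)) for Tm by fixed-point iteration."""
    t = 500.0
    for _ in range(200):
        t = energy_ev / (K_BOLTZ * math.log(s * K_BOLTZ * t * t / (beta * energy_ev)))
    return t

E_true = 1.1                 # eV (assumed)
betas = [1.0, 5.0]           # K/s heating rates
tms = [peak_temperature(E_true, b) for b in betas]

# ln(beta/Tm^2) is linear in 1/Tm with slope -E/k; two points fix the slope.
x = [1.0 / t for t in tms]
y = [math.log(b / (t * t)) for b, t in zip(betas, tms)]
E_est = -K_BOLTZ * (y[1] - y[0]) / (x[1] - x[0])
```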

  10. Comparison of NCHS, CDC, and WHO curves in children with cardiovascular risk.

    Science.gov (United States)

    Oliveira, Grasiela Junges de; Barbiero, Sandra Mari; Cesa, Claudia Ciceri; Pellanda, Lucia Campos

    2013-01-01

    The study aimed to compare the prevalence of overweight and obesity according to three growth curves, created by the World Health Organization (WHO/2006), by the National Center for Health Statistics (NCHS/1977), and by the Centers for Disease Control and Prevention (CDC/2000), in children with cardiovascular risk factors. Data from 118 children and adolescents, aged between 2 and 19 years, treated between 2001 and 2009 at the Pediatric Preventive Cardiology Outpatient Clinic of the Instituto de Cardiologia de Porto Alegre, were evaluated. The variables analyzed were weight, height, age, and gender, classified according to the following criteria: weight/age, height/age, and body mass index (BMI). The cutoff points used were obtained from the three growth curves: WHO/2006, NCHS/1977, and CDC/2000. Regarding the weight/age criterion, by the NCHS curve 18% of the children were classified as having normal weight and 82% had excess weight; by the CDC curve, 28% had normal and 72% had excess weight; by the WHO curve, 16.0% had normal weight and 84% had excess weight. According to the BMI, 0.8% of the population was underweight. According to the CDC and WHO curves, 7.6% and 6.8% had normal weight, 26.3% and 11.9% were overweight, and 65.3% and 80.5% were obese, respectively. Regarding the height/age criterion, there was no significant difference between the references and, on average, 98.3% of the population showed adequate height for age. The new WHO curves are more sensitive in identifying obesity in a population at risk, which has important implications for preventive and therapeutic management. Copyright © 2013 Elsevier Editora Ltda. All rights reserved.

  11. An index formula for the self-linking number of a space curve

    DEFF Research Database (Denmark)

    Røgen, Peter

    2008-01-01

    Given an embedded closed space curve with non-vanishing curvature, its self-linking number is defined as the linking number between the original curve and a curve pushed slightly off in the direction of its principal normals. We present an index formula for the self-linking number in terms of the...

  12. Analysis and comparison of immune reactivity in guinea-pigs immunized with equivalent numbers of normal or radiation-attenuated cercariae of Schistosoma mansoni

    International Nuclear Information System (INIS)

    Rogers, M.V.; McLaren, D.J.

    1987-01-01

    Guinea-pigs immunized with equivalent numbers of normal or radiation-attenuated cercariae of Schistosoma mansoni develop close-to-complete resistance to reinfection at weeks 12 and 4.5, respectively. Here we analyse and compare the immune responses induced by the two populations of cercariae. Both radiation-attenuated and normal parasites of S. mansoni elicited an extensive germinal centre response in guinea-pigs by week 4.5 post-immunization. The anti-parasite antibody titre and cytotoxic activity of serum from 4.5-week-vaccinated or 4.5-week-infected guinea-pigs were approximately equal, but sera from 12-week-infected individuals had high titres of anti-parasite antibody, which promoted significant larvicidal activity in vitro. In all cases, larvicidal activity was mediated by the IgG2 fraction of the immune serum. Lymphocyte transformation tests conducted on splenic lymphocytes from 4.5-week-vaccinated guinea-pigs revealed maximal stimulation against cercarial, 2-week and 3-week worm antigens, whereas spleen cells from 4.5-week-infected guinea-pigs were maximally stimulated by cercarial and 6-week worm antigens. The splenic lymphocyte responses of 12-week-infected animals were dramatic against antigens prepared from all life-stages of the parasite. (author)

  13. ACL graft can replicate the normal ligament's tension curve

    NARCIS (Netherlands)

    Arnold, MP; Verdonschot, N; van Kampen, A

    2005-01-01

    The anatomical femoral insertion of the normal anterior cruciate ligament (ACL) lies on the deep portion of the lateral wall of the intercondylar fossa. Following the deep bone-cartilage border, it stretches from 11 o'clock high in the notch all the way down to its lowest border at 8 o'clock. The

  14. The exp-normal distribution is infinitely divisible

    OpenAIRE

    Pinelis, Iosif

    2018-01-01

    Let $Z$ be a standard normal random variable (r.v.). It is shown that the distribution of the r.v. $\ln|Z|$ is infinitely divisible; equivalently, the standard normal distribution considered as the distribution on the multiplicative group over $\mathbb{R}\setminus\{0\}$ is infinitely divisible.

  15. Politico-economic equivalence

    DEFF Research Database (Denmark)

    Gonzalez Eiras, Martin; Niepelt, Dirk

    2015-01-01

    Traditional "economic equivalence" results, like the Ricardian equivalence proposition, define equivalence classes over exogenous policies. We derive "politico-economic equivalence" conditions that apply in environments where policy is endogenous and chosen sequentially. A policy regime and a st… We demonstrate their use in the context of several applications, relating to social security reform, tax-smoothing policies and measures to correct externalities.

  16. The radiobiology of boron neutron capture therapy: Are "photon-equivalent" doses really photon-equivalent?

    International Nuclear Information System (INIS)

    Coderre, J.A.; Diaz, A.Z.; Ma, R.

    2001-01-01

    Boron neutron capture therapy (BNCT) produces a mixture of radiation dose components. The high-linear energy transfer (LET) particles are more damaging in tissue than equal doses of low-LET radiation. Each of the high-LET components can be multiplied by an experimentally determined factor to adjust for the increased biological effectiveness, and the resulting sum expressed in photon-equivalent units (Gy-Eq). BNCT doses in photon-equivalent units are based on a number of assumptions. It may be possible to test the validity of these assumptions and the accuracy of the calculated BNCT doses by 1) comparing the effects of BNCT in other animal or biological models where the effects of photon radiation are known, or 2) determining whether endpoints reached in the BNCT dose-escalation clinical trials can be related to the known photon response of the tissue in question. The calculated Gy-Eq BNCT doses delivered to dogs and to humans with BPA and the epithermal neutron beam of the Brookhaven Medical Research Reactor were compared to expected responses to photon irradiation. The data indicate that Gy-Eq doses in brain may be underestimated. Doses to skin are consistent with the expected response to photons. Gy-Eq doses to tumor are significantly overestimated. A model system of cells in culture irradiated at various depths in a lucite phantom using the epithermal beam is under development. Preliminary data indicate that this approach can be used to detect differences in the relative biological effectiveness of the beam. The rat 9L gliosarcoma cell survival data were converted to photon-equivalent doses using the same factors assumed in the clinical studies. The results, superimposed on the survival curve derived from irradiation with Cs-137 photons, indicate the potential utility of this model system. (author)

  17. SU-F-T-02: Estimation of Radiobiological Doses (BED and EQD2) of Single Fraction Electronic Brachytherapy That Equivalent to I-125 Eye Plaque: By Using Linear-Quadratic and Universal Survival Curve Models

    International Nuclear Information System (INIS)

    Kim, Y; Waldron, T; Pennington, E

    2016-01-01

    Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single-fraction equivalent doses (SFED) of the tumor that are equivalent to 85 Gy of I125-BT for 20 patients. Corresponding organs-at-risk (OARs) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from the tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The % doses of OARs over tumor doses were assumed to be the same as for a single-fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2-Gy fractions (EQD2) by using both the L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and the lens were on average 118 ± 46% (p 14 Gy). Conclusion: The estimated single-fraction doses were feasible to deliver within 1 hour using a high-dose-rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112-118% higher than with the I125-BT technique. Continued exploration of alternative dose rates or fractionation schedules should follow.
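The linear-quadratic conversions underlying these estimates are the standard BED = D·(1 + d/(α/β)) and EQD2 = BED/(1 + 2/(α/β)). A sketch using one of the α/β values listed above and example dose numbers (not the study's doses):

```python
# Standard linear-quadratic conversions. alpha/beta = 2.58 Gy echoes the
# retina/macula value listed above; the 20 Gy single fraction is an example.

def bed(total_dose, dose_per_fraction, alpha_beta):
    """Biologically effective dose: BED = D * (1 + d/(alpha/beta))."""
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2-Gy fractions: EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(total_dose, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

b = bed(20.0, 20.0, 2.58)    # Gy, single 20 Gy fraction
e = eqd2(20.0, 20.0, 2.58)   # Gy
```

Note that the abstract's USC model replaces the L-Q curve beyond a transition dose; the sketch above covers only the conventional L-Q part.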

  18. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and a test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from the joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required among the endpoints. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation, under both crossover and parallel designs. We further discuss the differences in sample size between the naive method without and with correlation adjustment, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
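Under independence, the all-endpoints power is the product of the per-endpoint TOST powers, as the abstract notes. A normal-approximation sketch with hypothetical design inputs; this is not the paper's exact correlation-adjusted power function:

```python
import math

# Normal-approximation TOST power for one endpoint, then the naive product
# rule for two uncorrelated endpoints. All design inputs are hypothetical.

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tost_power(n, delta, sigma, margin, z_alpha=1.6449):
    """Approximate TOST power, two-group parallel design, n per group."""
    se = sigma * math.sqrt(2.0 / n)
    upper = phi((margin - delta) / se - z_alpha)
    lower = phi((-margin - delta) / se + z_alpha)
    return max(0.0, upper - lower)

margin = math.log(1.25)   # conventional log-scale bioequivalence margin
p_auc = tost_power(n=30, delta=0.02, sigma=0.20, margin=margin)
p_cmax = tost_power(n=30, delta=0.05, sigma=0.25, margin=margin)
p_both = p_auc * p_cmax   # independence assumption; correlation raises this
```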

  19. What is correct: equivalent dose or dose equivalent

    International Nuclear Information System (INIS)

    Franic, Z.

    1994-01-01

    In the Croatian language, some physical quantities in radiation protection dosimetry do not have precise names; consequently, in practice either English terms or mathematical formulas are used. The situation is made worse by the fact that only a limited number of textbooks, reference books and other papers are available in Croatian. This paper compares the concept of ''dose equivalent'' as outlined in International Commission on Radiological Protection (ICRP) recommendation No. 26 with the newer, conceptually different concept of ''equivalent dose'' introduced in ICRP 60. It was found that the Croatian terminology is neither uniform nor precise. Under the influence of the Russian and Serbian languages, the term ''equivalent dose'' was often used for ''dose equivalent'', which was not justified even from the point of view of the ICRP 26 recommendations. Unfortunately, even now the legal quantity in Croatia is still the ''dose equivalent'' defined as in ICRP 26, but the term used for it is ''equivalent dose''. Therefore, a modified set of quantities introduced in ICRP 60 should be incorporated into Croatian legislation as soon as possible

  20. Unsaturated aldehydes as alkene equivalents in the Diels-Alder reaction

    DEFF Research Database (Denmark)

    Taarning, Esben; Madsen, Robert

    2008-01-01

    A one-pot procedure is described for using alpha,beta-unsaturated aldehydes as olefin equivalents in the Diels-Alder reaction. The method combines the normal electron demand cycloaddition with aldehyde dienophiles and the rhodium-catalyzed decarbonylation of aldehydes to afford cyclohexenes...

  1. Energy response of detectors to alpha/beta particles and compatibility of the equivalent factors

    International Nuclear Information System (INIS)

    Lin Bingxing; Li Guangxian; Lin Lixiong

    2011-01-01

    By measuring the detection efficiency and equivalent factors of alpha/beta radiation of different energies with three types of detectors, this paper compares the compatibility of their equivalent factors and discusses the applicability of the detectors to measuring total alpha/beta radioactivity. The results show that the relationships between alpha/beta counting efficiency and particle energy are broadly identical across the three detector types (scintillation, proportional, and semiconductor counters). Alpha counting efficiency displays an exponential relation with alpha-particle energy, while beta counting efficiency displays a logarithmic relation with beta-particle energy, with the curves deflecting at low energy. Comparison tests of the energy response also show that the alpha and beta equivalent factors of the scintillation and proportional counters are well compatible, and the alpha equivalent factors of the semiconductor counters agree well with those of the other two counter types, but the beta equivalent factors differ markedly: the equivalent factors for low-energy beta particles are lower than those of the other detectors. The semiconductor counter therefore cannot be used for measuring total radioactivity or for measurements for food-safety purposes. (authors)

  2. Fasting plasma glucose and serum uric acid levels in a general Chinese population with normal glucose tolerance: A U-shaped curve.

    Directory of Open Access Journals (Sweden)

    Yunyang Wang

    Full Text Available Although several epidemiological studies have assessed the relationship between fasting plasma glucose (FPG) and serum uric acid (SUA) levels, the results were inconsistent. A cross-sectional study was conducted to investigate this relationship in Chinese individuals with normal glucose tolerance. A total of 5,726 women and 5,457 men with normal glucose tolerance were enrolled in the study. All subjects underwent a 75-g oral glucose tolerance test. Generalized additive models and two-piecewise linear regression models were applied to assess the relationship. A U-shaped relationship between FPG and SUA was observed. After adjusting for potential confounders, the inflection points of the FPG levels in the curves were 4.6 mmol/L in women and 4.7 mmol/L in men, respectively. SUA levels decreased with increasing fasting plasma glucose concentrations before the inflection points (regression coefficient [β] = -36.4, P < 0.001 for women; β = -33.5, P < 0.001 for men), then SUA levels increased (β = 17.8, P < 0.001 for women; β = 13.9, P < 0.001 for men). Additionally, serum insulin levels were positively associated with FPG and SUA (P < 0.05). A U-shaped relationship between FPG and SUA levels exists in Chinese individuals with normal glucose tolerance; the association is partly mediated through serum insulin levels.
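A two-piecewise linear regression like the one used here can be sketched as a grid search over candidate inflection points: regress y on (x, max(x - k, 0)) and keep the knot k with the smallest residual sum of squares. The data below are synthetic, with a known inflection placed at 4.6 to echo the reported value; they are not the study data:

```python
import numpy as np

# Grid-search sketch of a two-piecewise linear fit. Synthetic data with a
# known inflection at x = 4.6 (echoing the reported value); noise is Gaussian.

rng = np.random.default_rng(0)
x = rng.uniform(3.5, 6.5, 400)
y = np.where(x < 4.6, -30.0 * (x - 4.6), 15.0 * (x - 4.6)) + rng.normal(0.0, 2.0, x.size)

def piecewise_rss(k):
    """Residual sum of squares of y ~ 1 + x + max(x - k, 0)."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - k, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

candidates = np.arange(4.0, 5.3, 0.05)
k_best = float(min(candidates, key=piecewise_rss))
```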

  3. Investigation of 1-cm dose equivalent for photons behind shielding materials

    International Nuclear Information System (INIS)

    Hirayama, Hideo; Tanaka, Shun-ichi

    1991-03-01

    The ambient dose equivalent at 1-cm depth (assumed equivalent to the 1-cm dose equivalent in practical dose estimations) behind shielding slabs of water, concrete, iron or lead was calculated for normally incident photons of various energies by using conversion factors for a slab phantom. It was compared with the 1-cm depth dose calculated with the Monte Carlo code EGS4. It was concluded from this comparison that the ambient dose equivalent calculated by using the conversion factors for the ICRU sphere can be used for the evaluation of the 1-cm dose equivalent for the sphere phantom within 20% error. Average and practical conversion factors are defined as the conversion factors from exposure to ambient dose equivalent in a finite slab or an infinite one, respectively. The exposure calculated with simple estimation procedures such as point-kernel methods can easily be converted to ambient dose equivalent by using these conversion factors. The maximum value between 1 and 30 mfp can be adopted as the conversion factor, which depends only on the material and the incident photon energy; this gives an ambient dose equivalent on the safe side. 13 refs., 7 figs., 2 tabs

  4. MAGNETIC CIRCUIT EQUIVALENT OF THE SYNCHRONOUS MOTOR WITH INCORPORATED MAGNETS

    Directory of Open Access Journals (Sweden)

    Fyong Le Ngo

    2015-01-01

    Full Text Available Magnetic-circuit computation is one of the central stages in designing a synchronous motor with incorporated magnets, and it can be performed by a simplified method of equivalent magnetic-circuit modeling. The article studies the magnetic circuit of a motor with rotor-incorporated magnets, which includes four sectors: the constant magnets with pole pieces made of magnetically soft steel; the flux-leakage sections containing air barriers and steel bridges; the air gap; and the stator slots, teeth, and frame yoke. The authors introduce an equivalent model of the magnetic circuit. High-energy magnets with a linear demagnetization curve are employed as the constant magnets; two magnets create the magnetic flux for one pole. The drop of magnetic potential in the steel of the pole is negligible on the assumption that the pole's magnetic permeability µ = ∞. The rotor design provides air barriers and steel bridges that close the leakage flux. Saturation of the bridges is accounted for by linearizing the induction-permeability curve as a polygonal line consisting of two linear sections. The circuit section comprising the teeth and the frame yoke is evaluated with account of steel saturation, its magnetic conductivities being dependent on the saturation level. Relying on the equivalent model of the magnetic circuit, the authors derive a system of two equations written from Kirchhoff's first and second laws for magnetic circuits. These equations allow solving two problems: specifying the dimensions of the magnets from the preset value of the magnetic flux in the gap, and determining the gap magnetic flux for a given motor rotor-and-stator design.
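The two Kirchhoff-law equations mentioned above can be illustrated on a toy two-branch magnetic circuit (a useful air-gap branch plus a leakage branch). All MMF and reluctance values below are invented for illustration; a real design would derive them from geometry and B-H data:

```python
import numpy as np

# Toy two-branch magnetic circuit in the spirit of the abstract: one useful
# air-gap branch and one leakage branch (air barriers + bridges). All MMF and
# reluctance values are invented; unknowns are the branch fluxes (Wb).

F_m = 1200.0      # A-turns, magnet MMF (hypothetical)
R_m = 8.0e5       # 1/H, magnet internal reluctance
R_gap = 5.0e5     # 1/H, air-gap branch reluctance
R_leak = 2.0e6    # 1/H, leakage branch reluctance

# Kirchhoff's laws: node  phi_m = phi_gap + phi_leak (eliminated below);
# loop 1: F_m = (phi_gap + phi_leak)*R_m + phi_gap*R_gap
# loop 2: phi_gap*R_gap = phi_leak*R_leak
A = np.array([[R_m + R_gap, R_m],
              [R_gap, -R_leak]])
b = np.array([F_m, 0.0])
phi_gap, phi_leak = np.linalg.solve(A, b)
```

In the paper's nonlinear case the tooth/yoke reluctances depend on the fluxes, so such a system would be re-solved iteratively rather than once.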

  5. Investigation of radiological properties and water equivalency of PRESAGE dosimeters

    International Nuclear Information System (INIS)

    Gorjiara, Tina; Hill, Robin; Kuncic, Zdenka; Adamovics, John; Bosi, Stephen; Kim, Jung-Ha; Baldock, Clive

    2011-01-01

    Purpose: PRESAGE is a dosimeter made of polyurethane, which is suitable for 3D dosimetry in modern radiation treatment techniques. Since an ideal dosimeter is radiologically water equivalent, the authors investigated the water equivalency and radiological properties of three different PRESAGE formulations that differ primarily in their elemental compositions. Two of the formulations are new and have lower halogen content than the original formulation. Methods: The radiological water equivalence was assessed by comparing the densities, interaction probabilities, and radiation dosimetry properties of the three different PRESAGE formulations to the corresponding values for water. The relative depth doses were calculated using Monte Carlo methods for 50, 100, 200, and 350 kVp and 6 MV x-ray beams. Results: The mass densities of the three PRESAGE formulations ranged from 5.3% higher than that of water to as much as 10% higher for the original formulation. The probability of photoelectric absorption in the three different PRESAGE formulations varied from 2.2 times greater than that of water for the new formulations to 3.5 times greater for the original formulation. The mass attenuation coefficient for the three formulations is 12%-50% higher than the value for water. These differences occur over an energy range (10-100 keV) in which the photoelectric effect is the dominant interaction. The collision mass stopping powers of the lower halogen-containing PRESAGE formulations also exhibit marginally better water equivalency than the original higher halogen-containing formulation. Furthermore, the depth dose curves for the lower halogen-containing PRESAGE formulations are slightly closer to that of water for a 6 MV beam. In the kilovoltage energy range, the depth dose curves for the lower halogen-containing formulations are in better agreement with water than the original PRESAGE formulation. Conclusions: Based

  6. A COMPREHENSIVE ANALYSIS OF SWIFT/X-RAY TELESCOPE DATA. IV. SINGLE POWER-LAW DECAYING LIGHT CURVES VERSUS CANONICAL LIGHT CURVES AND IMPLICATIONS FOR A UNIFIED ORIGIN OF X-RAYS

    International Nuclear Information System (INIS)

    Liang Enwei; Lue Houjun; Hou Shujin; Zhang Binbin; Zhang Bing

    2009-01-01

    By systematically analyzing the Swift/XRT light curves detected before 2009 July, we find 19 light curves that monotonically decay as a single power law (SPL) with an index of ~1-1.7 from tens (or hundreds) of seconds to ~10^5 s after the gamma-ray burst (GRB) trigger. They are apparently different from the canonical light curves characterized by a shallow-to-normal decay transition. We compare the observations of the prompt gamma rays and the X-rays for these two samples of GRBs (SPL vs. canonical). No statistical difference is found in the prompt gamma-ray properties of the two samples. The X-ray properties of the two samples are also similar, although the SPL sample tends to have a slightly lower neutral-hydrogen absorption column for the host galaxies and a slightly larger energy release than the canonical sample. The SPL X-ray Telescope (XRT) light curves in the burst frame gradually merge into a conflux, and their luminosities at 10^5 s are normally distributed at log L/erg s^-1 = 45.6 ± 0.5. The normal decay segment of the canonical XRT light curves has the same feature. Similar to the normal decay segment, the SPL light curves satisfy the closure relations and therefore can be roughly explained with external shock models. In the scenario in which the X-rays are the afterglows of the GRB fireball, our results indicate that the shallow decay would be due to energy injection into the fireball, and the total energy budget after injection is comparable for both samples of GRBs. More intriguingly, we find that a prior X-ray emission model proposed by Yamazaki interprets the observed XRT data more straightforwardly. We show that the zero times (T_0) of the X-rays are 10^2-10^5 s prior to the GRB trigger for the canonical sample and satisfy a log-normal distribution. The negligible T_0's of the SPL sample are consistent with being the low-end tail of the T_0 distribution, suggesting that the SPL sample and the canonical sample may be from the same

  7. Curve fitting for RHB Islamic Bank annual net profit

    Science.gov (United States)

    Nadarajan, Dineswary; Noor, Noor Fadiya Mohd

    2015-05-01

    The RHB Islamic Bank net profit data are obtained for 2004 to 2012. Curve fitting is performed treating the data either as exact or as experimental data subject to a smoothing process. Higher-order Lagrange polynomial and cubic spline fits are constructed with a curve-fitting procedure using Maple software. A normality test is performed to check data adequacy. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level of ANOVA. Residual errors and absolute relative true errors are calculated and compared. The optimal model, based on the minimum average error, is proposed.
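The Lagrange-polynomial step (done in Maple in the paper) can be sketched in a few lines of plain Python; the year/profit pairs below are placeholders, not the bank's actual net-profit figures:

```python
# Plain-Python Lagrange interpolation through placeholder (year, profit) data.

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

years = [2004.0, 2006.0, 2008.0, 2010.0, 2012.0]
profit = [12.0, 18.0, 25.0, 31.0, 40.0]   # hypothetical units

exact = lagrange_eval(years, profit, 2008.0)   # reproduces the data point
p_2009 = lagrange_eval(years, profit, 2009.0)  # interpolates between points
```

High-order Lagrange polynomials can oscillate between nodes (Runge's phenomenon), which is one reason the paper also fits cubic splines.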

  8. The Source Equivalence Acceleration Method

    International Nuclear Information System (INIS)

    Everson, Matthew S.; Forget, Benoit

    2015-01-01

    Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse group and fine group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361 group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power

  9. An equivalent body surface charge model representing three-dimensional bioelectrical activity

    Science.gov (United States)

    He, B.; Chernyak, Y. B.; Cohen, R. J.

    1995-01-01

    A new surface-source model has been developed to account for the bioelectrical potential on the body surface. A single-layer surface-charge model on the body surface has been developed to equivalently represent bioelectrical sources inside the body. The boundary conditions on the body surface are discussed in relation to the surface-charge in a half-space conductive medium. The equivalent body surface-charge is shown to be proportional to the normal component of the electric field on the body surface just outside the body. The spatial resolution of the equivalent surface-charge distribution appears intermediate between those of the body surface potential distribution and the body surface Laplacian distribution. An analytic relationship between the equivalent surface-charge and the surface Laplacian of the potential was found for a half-space conductive medium. The effects of finite spatial sampling and noise on the reconstruction of the equivalent surface-charge were evaluated by computer simulations. It was found through computer simulations that the reconstruction of the equivalent body surface-charge from the body surface Laplacian distribution is very stable against noise and finite spatial sampling. The present results suggest that the equivalent body surface-charge model may provide additional insight into our understanding of bioelectric phenomena.

  10. Equivalent Lagrangians

    International Nuclear Information System (INIS)

    Hojman, S.

    1982-01-01

    We present a review of the inverse problem of the Calculus of Variations, emphasizing the ambiguities which appear due to the existence of equivalent Lagrangians for a given classical system. In particular, we analyze the properties of equivalent Lagrangians in the multidimensional case; we study the conditions for the existence of a variational principle for (second as well as first order) equations of motion and their solutions; we consider the inverse problem of the Calculus of Variations for singular systems; we state the ambiguities which emerge in the relationship between symmetries and conserved quantities in the case of equivalent Lagrangians; we discuss the problems which appear in trying to quantize classical systems which have different equivalent Lagrangians; we describe the situation which arises in the study of equivalent Lagrangians in field theory; and finally, we present some unsolved problems and discussion topics related to the content of this article. (author)

  11. Limbal Fibroblasts Maintain Normal Phenotype in 3D RAFT Tissue Equivalents Suggesting Potential for Safe Clinical Use in Treatment of Ocular Surface Failure.

    Science.gov (United States)

    Massie, Isobel; Dale, Sarah B; Daniels, Julie T

    2015-06-01

    Limbal epithelial stem cell deficiency can cause blindness, but transplantation of these cells on a carrier such as human amniotic membrane can restore vision. Unfortunately, clinical graft manufacture using amnion can be inconsistent. Therefore, we have developed an alternative substrate, Real Architecture for 3D Tissue (RAFT), which supports human limbal epithelial cells (hLE) expansion. Epithelial organization is improved when human limbal fibroblasts (hLF) are incorporated into RAFT tissue equivalent (TE). However, hLF have the potential to transdifferentiate into a pro-scarring cell type, which would be incompatible with therapeutic transplantation. The aim of this work was to assess the scarring phenotype of hLF in RAFT TEs in hLE+ and hLE- RAFT TEs and in nonairlifted and airlifted RAFT TEs. Diseased fibroblasts (dFib) isolated from the fibrotic conjunctivae of ocular mucous membrane pemphigoid (Oc-MMP) patients were used as a pro-scarring positive control against which hLF were compared using surrogate scarring parameters: matrix metalloproteinase (MMP) activity, de novo collagen synthesis, α-smooth muscle actin (α-SMA) expression, and transforming growth factor-β (TGF-β) secretion. Normal hLF and dFib maintained different phenotypes in RAFT TE. MMP-2 and -9 activity, de novo collagen synthesis, and α-SMA expression were all increased in dFib cf. normal hLF RAFT TEs, although TGF-β1 secretion did not differ between normal hLF and dFib RAFT TEs. Normal hLF do not progress toward a scarring-like phenotype during culture in RAFT TEs and, therefore, may be safe to include in therapeutic RAFT TE, where they can support hLE, although in vivo work is required to confirm this. dFib RAFT TEs (used in this study as a positive control) may be useful toward the development of an ex vivo disease model of Oc-MMP.

  12. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  13. Extracting the normal lung dose–response curve from clinical DVH data: a possible role for low dose hyper-radiosensitivity/increased radioresistance

    International Nuclear Information System (INIS)

    Gordon, J J; Snyder, K; Zhong, H; Barton, K; Sun, Z; Chetty, I J; Matuszak, M; Ten Haken, R K

    2015-01-01

    In conventionally fractionated radiation therapy for lung cancer, the dependence of radiation pneumonitis (RP) on the normal lung dose-volume histogram (DVH) is not well understood. Complication models alternatively make RP a function of a summary statistic, such as mean lung dose (MLD). This work searches over damage profiles, which quantify sub-volume damage as a function of dose. Profiles that achieve the best RP predictive accuracy on a clinical dataset are hypothesized to approximate the DVH dependence.

    Step function damage rate profiles R(D) are generated, having discrete steps at several dose points. A range of profiles is sampled by varying the step heights and dose point locations. Normal lung damage is the integral of R(D) with the cumulative DVH. Each profile is used in conjunction with a damage cutoff to predict grade 2 plus (G2+) RP for DVHs from a University of Michigan clinical trial dataset consisting of 89 CFRT patients, of which 17 were diagnosed with G2+ RP.

    Optimal profiles achieve a modest increase in predictive accuracy: erroneous RP predictions are reduced from 11 (using MLD) to 8. A novel result is that optimal profiles have a similar distinctive shape: an enhanced damage contribution from low doses (<20 Gy), a flat contribution from doses in the range ∼20–40 Gy, then a further enhanced contribution from doses above 40 Gy. These features resemble the hyper-radiosensitivity/increased radioresistance (HRS/IRR) observed in some cell survival curves, which can be modeled using Joiner's induced repair model.

    A novel search strategy is employed, which has the potential to estimate RP dependence on the normal lung DVH. When applied to a clinical dataset, identified profiles share a characteristic shape, which resembles HRS/IRR. This suggests that normal lung may have enhanced sensitivity to low doses, and that this sensitivity can affect RP risk. (paper)
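    As a rough illustration of the profile-times-DVH integral described above, the sketch below builds a piecewise-constant damage-rate profile and integrates it against a toy cumulative DVH. All numbers here (dose points, step heights, the damage cutoff, the DVH itself) are hypothetical, not the study's fitted values.

```python
import numpy as np

def step_damage_rate(dose, dose_points, step_heights):
    """Piecewise-constant damage rate R(D): step_heights[i] applies for
    doses >= dose_points[i] (later steps override earlier ones)."""
    rate = np.zeros_like(dose, dtype=float)
    for d0, h in zip(dose_points, step_heights):
        rate[dose >= d0] = h
    return rate

def normal_lung_damage(dose_bins, cum_dvh, dose_points, step_heights):
    """Damage = sum of R(D) weighted by the differential DVH, obtained
    here by differencing the cumulative DVH."""
    diff_dvh = -np.diff(cum_dvh, append=0.0)   # fractional volume per dose bin
    rate = step_damage_rate(dose_bins, dose_points, step_heights)
    return float(np.sum(rate * diff_dvh))

# Toy cumulative DVH: fraction of lung volume receiving >= each dose (Gy)
dose_bins = np.arange(0, 70, 10.0)             # 0, 10, ..., 60 Gy
cum_dvh = np.array([1.0, 0.8, 0.6, 0.4, 0.25, 0.1, 0.05])

# HRS/IRR-like shape: enhanced below 20 Gy, flat 20-40 Gy, enhanced above 40 Gy
damage = normal_lung_damage(dose_bins, cum_dvh,
                            dose_points=[0.0, 20.0, 40.0],
                            step_heights=[1.5, 1.0, 2.0])
predicts_rp = damage > 1.0                     # hypothetical damage cutoff
```

    Note that the special case R(D) = D reduces the integral to the mean lung dose, so MLD-based prediction is one point inside this search space.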

  14. Fragility curves for bridges under differential support motions

    DEFF Research Database (Denmark)

    Konakli, Katerina

    2012-01-01

    This paper employs the notion of fragility to investigate the seismic vulnerability of bridges subjected to spatially varying support motions. Fragility curves are developed for four highway bridges in California with vastly different structural characteristics. The input in this analysis consists of simulated ground motion arrays with temporal and spectral nonstationarities, and consistent with prescribed spatial variation patterns. Structural damage is quantified through displacement ductility demands obtained from nonlinear time-history analysis. The potential use of the 'equal displacement' rule to approximately evaluate displacement demands from analysis of the equivalent linear systems is examined.

  15. Pharmaceutical equivalence of metformin tablets with various binders

    Directory of Open Access Journals (Sweden)

    L. C. Block

    2009-01-01

    Full Text Available

    Metformin hydrochloride is a high-dose drug widely used as an oral anti-hyperglycemic agent. As it is highly crystalline and has poor compaction properties, it is difficult to form tablets by direct compression. The aim of this study was to develop adequate metformin tablets, pharmaceutically equivalent to the reference product, Glucophage® (marketed as Glifage® in Brazil). Metformin 500 mg tablets were produced by wet granulation with various binders (A = starch, B = starch 1500®, C = PVP K30®, D = PVP K90®). The tablets were analyzed for their hardness, friability, disintegration, dissolution, content uniformity and dissolution profile (basket apparatus at 50 rpm, pH 6.8 phosphate buffer). The 4 formulations, F1 (5% A and 5% C), F2 (5% B and 5% C), F3 (10% C) and F4 (5% D), demonstrated adequate uniformity of content, hardness, friability, disintegration and total drug dissolution after 30 minutes (F1, F2 and F4) and after 60 minutes (F3). The drug release time profiles fitted a Higuchi model (F1, F2 and F3), similarly to the pharmaceutical reference, or a zero order model (F4). The dissolution efficiency for all the formulations was 75%, except for F3 (45%). F1 and F2 were thus equivalent to Glifage®. Keywords: dissolution; metformin; tablet; binder; pharmaceutical equivalence

  16. Lactation Curve Pattern and Prediction of Milk Production Performance in Crossbred Cows

    Directory of Open Access Journals (Sweden)

    Suresh Jingar

    2014-01-01

    Full Text Available Data pertaining to 11728 test-day daily milk yields of normal and mastitis-affected Karan Fries cows were collected from the institute herd and divided into mastitis and non-mastitis groups, parity-wise. The lactation-curve data of the normal and mastitis crossbred cows were analyzed using a gamma-type function. FTDMY in normal and mastitis cows showed an increasing trend from TD-1 to TD-4 and a gradual decrease (P<0.01) thereafter until the end of lactation (TD-21) in different parities. The FTDMY was maximum (peak yield) in the fourth parity. Parity-wise lactation curves revealed a decrease in persistency, a steeper decline in the descending slope (c), and a steeper increase in the ascending slope (b) from the 1st to the 5th and above parity. The higher coefficient of determination (R2) and lower root mean square error (RMSE) indicated the goodness and accuracy of the model for the prediction of milk production performance under field conditions. Clinical mastitis resulted in a significantly higher loss of milk yield (P<0.05). The FTDMY was maximum (P<0.05) in the fourth parity in comparison to the rest of the parities. It is demonstrated that a gamma-type function can give the best-fit lactation curve in normal and mastitis-infected crossbred cows.
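    The gamma-type function referred to above is commonly written in Wood's form, y(t) = a·t^b·e^(−ct), with peak time b/c. The sketch below fits it to synthetic test-day yields with SciPy; the data, noise level and starting values are all hypothetical, not the Karan Fries records.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood_gamma(t, a, b, c):
    """Gamma-type lactation curve (Wood's model): y(t) = a * t**b * exp(-c*t)."""
    return a * np.power(t, b) * np.exp(-c * t)

# Hypothetical fortnightly test-day milk yields (kg) for one parity, TD-1..TD-21
t = np.arange(1, 22, dtype=float)
y = wood_gamma(t, 12.0, 0.25, 0.04) + np.random.default_rng(0).normal(0.0, 0.2, t.size)

popt, _ = curve_fit(wood_gamma, t, y, p0=(10.0, 0.2, 0.05))
a, b, c = popt

peak_time = b / c                           # test day of peak yield
peak_yield = wood_gamma(peak_time, a, b, c)
persistency = -(b + 1.0) * np.log(c)        # a common persistency measure for Wood's model
```

    In practice the fitted a, b, c would be compared across parities and between normal and mastitis groups, as the abstract describes.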

  17. Minimally invasive estimation of ventricular dead space volume through use of Frank-Starling curves.

    Directory of Open Access Journals (Sweden)

    Shaun Davidson

    Full Text Available This paper develops a means of more easily and less invasively estimating ventricular dead space volume (Vd, an important, but difficult to measure physiological parameter. Vd represents a subject and condition dependent portion of measured ventricular volume that is not actively participating in ventricular function. It is employed in models based on the time varying elastance concept, which see widespread use in haemodynamic studies, and may have direct diagnostic use. The proposed method involves linear extrapolation of a Frank-Starling curve (stroke volume vs end-diastolic volume and its end-systolic equivalent (stroke volume vs end-systolic volume, developed across normal clinical procedures such as recruitment manoeuvres, to their point of intersection with the y-axis (where stroke volume is 0 to determine Vd. To demonstrate the broad applicability of the method, it was validated across a cohort of six sedated and anaesthetised male Pietrain pigs, encompassing a variety of cardiac states from healthy baseline behaviour to circulatory failure due to septic shock induced by endotoxin infusion. Linear extrapolation of the curves was supported by strong linear correlation coefficients of R = 0.78 and R = 0.80 average for pre- and post- endotoxin infusion respectively, as well as good agreement between the two linearly extrapolated y-intercepts (Vd for each subject (no more than 7.8% variation. Method validity was further supported by the physiologically reasonable Vd values produced, equivalent to 44.3-53.1% and 49.3-82.6% of baseline end-systolic volume before and after endotoxin infusion respectively. This method has the potential to allow Vd to be estimated without a particularly demanding, specialised protocol in an experimental environment. 
Further, due to the common use of both mechanical ventilation and recruitment manoeuvres in intensive care, this method, subject to the availability of multi-beat echocardiography, has the potential to
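    The extrapolation step above can be sketched in a few lines: fit a line to stroke volume versus ventricular volume and read off the volume at which stroke volume reaches zero. The data below are synthetic (a linear Frank-Starling relation above an assumed Vd of 40 ml); in the paper the points come from recruitment manoeuvres. Note that because SV = EDV − ESV, both curves extrapolate to the same volume when SV = 0.

```python
import numpy as np

def dead_space_volume(volumes, stroke_volumes):
    """Fit SV = m*V + k and extrapolate to SV = 0; the volume-axis
    crossing V = -k/m is the dead space volume Vd."""
    m, k = np.polyfit(volumes, stroke_volumes, 1)
    return -k / m

# Hypothetical data: SV rises linearly with EDV above Vd = 40 ml
edv = np.array([90.0, 100.0, 110.0, 120.0])   # end-diastolic volume (ml)
sv = 0.6 * (edv - 40.0)                       # stroke volume (ml)
esv = edv - sv                                # end-systolic volume (ml)

vd_from_edv = dead_space_volume(edv, sv)      # Frank-Starling curve
vd_from_esv = dead_space_volume(esv, sv)      # end-systolic equivalent
```

    With real (noisy) measurements the two estimates would differ slightly, which is what the paper's 7.8% agreement figure quantifies.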

  18. Timescale stretch parameterization of Type Ia supernova B-band light curves

    International Nuclear Information System (INIS)

    Goldhaber, G.; Groom, D.E.; Kim, A.; Aldering, G.; Astier, P.; Conley, A.; Deustua, S.E.; Ellis, R.; Fabbro, S.; Fruchter, A.S.; Goobar, A.; Hook, I.; Irwin, M.; Kim, M.; Knop, R.A.; Lidman, C.; McMahon, R.; Nugent, P.E.; Pain, R.; Panagia, N.; Pennypacker, C.R.; Perlmutter, S.; Ruiz-Lapuente, P.; Schaefer, B.; Walton, N.A.; York, T.

    2001-01-01

    R-band intensity measurements along the light curves of Type Ia supernovae discovered by the Supernova Cosmology Project (SCP) are fitted in brightness to templates, allowing a free parameter: the time-axis width factor w ≡ s(1 + z). The data points are then individually aligned in the time axis, normalized and K-corrected back to the rest frame, after which the nearly 1300 normalized intensity measurements are found to lie on a well-determined common rest-frame B-band curve, which we call the "composite curve." The same procedure is applied to 18 low-redshift Calan/Tololo SNe with z < 0.11; these nearly 300 B-band photometry points are found to lie on the composite curve equally well. The SCP search technique produces several measurements before maximum light for each supernova. We demonstrate that the linear stretch factor, s, which parameterizes the light-curve timescale, appears independent of z and applies equally well to the declining and rising parts of the light curve. In fact, the B-band template that best fits this composite curve fits the individual supernova photometry data when stretched by a factor s with χ²/DoF ∼ 1, thus as well as any parameterization can, given the current data sets. The measurement of the date of explosion, however, is model dependent and not tightly constrained by the current data. We also demonstrate the 1 + z light-curve time-axis broadening expected from cosmological expansion. This argues strongly against alternative explanations, such as tired light, for the redshift of distant objects.
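    The width-factor normalization can be sketched directly: observer-frame epochs are divided by the fitted width w = s(1 + z), the product of the intrinsic stretch and cosmological time dilation, to land on the rest-frame template. The values below are hypothetical.

```python
import numpy as np

def to_rest_frame(t_obs, t_max, s, z):
    """Map observer-frame epochs onto the rest-frame composite curve by
    dividing by the time-axis width factor w = s * (1 + z)."""
    w = s * (1.0 + z)
    return (np.asarray(t_obs) - t_max) / w

# Hypothetical supernova: stretch s = 1.1 at z = 0.5 gives w = 1.65,
# so 16.5 observer-frame days correspond to 10 rest-frame template days.
t_obs = np.array([0.0, 16.5, 33.0])                 # days past maximum light
t_rest = to_rest_frame(t_obs, t_max=0.0, s=1.1, z=0.5)
```

    Plotting normalized fluxes against t_rest for many supernovae is what stacks them onto the single composite curve described above.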

  19. Radiobiological equivalent of low/high dose rate brachytherapy and evaluation of tumor and normal responses to the dose.

    Science.gov (United States)

    Manimaran, S

    2007-06-01

    The aim of this study was to compare the biological equivalent of low-dose-rate (LDR) and high-dose-rate (HDR) brachytherapy in terms of the more recent linear quadratic (LQ) model, which leads to a theoretical estimation of biological equivalence. One of the key features of the LQ model is that it allows a more systematic radiobiological comparison between different types of treatment, because the main parameters α/β and μ are tissue-specific. Such comparisons also allow assessment of the likely change in the therapeutic ratio when switching between LDR and HDR treatments. The main application of LQ methodology, encouraged by the increasing availability of remote afterloading units, has been to design fractionated HDR treatments that can replace existing LDR techniques. In this study, LDR treatments (39 Gy in 48 h) were equivalent to 11 fractions of HDR irradiation. At the experimental level, there are increasing reports of reproducible animal models that may be used to investigate the biological basis of brachytherapy and to help confirm theoretical predictions. This is a timely development owing to the lack of sufficient retrospective patient data for analysis. It appears that HDR brachytherapy is likely to be a viable alternative to LDR only if it is delivered without a prohibitively large number of fractions (e.g., fewer than 11). With increased scientific understanding and technological capability, the prospect of dose-equivalent HDR brachytherapy will allow greater utilization of the concepts discussed in this article.
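    One way to make such LDR/HDR comparisons concrete is through the biologically effective dose (BED) of the LQ model: BED = nd(1 + d/(α/β)) for n HDR fractions of size d, and BED = RT(1 + gRT/(α/β)) for continuous LDR, with the Lea-Catcheside dose-protraction factor g = 2(μT − 1 + e^(−μT))/(μT)². The sketch below uses the study's LDR schedule (39 Gy in 48 h) but assumed tissue parameters (α/β, repair rate μ) and an assumed HDR fraction size, so the numbers illustrate the arithmetic only, not the paper's result.

```python
import math

def bed_hdr(n, d, alpha_beta):
    """BED for n well-separated HDR fractions of size d (Gy)."""
    return n * d * (1.0 + d / alpha_beta)

def bed_ldr(total_dose, T, alpha_beta, mu):
    """BED for continuous LDR delivery of total_dose over T hours,
    with mono-exponential repair rate mu (1/h)."""
    mt = mu * T
    g = 2.0 * (mt - 1.0 + math.exp(-mt)) / mt**2   # Lea-Catcheside factor
    return total_dose * (1.0 + g * total_dose / alpha_beta)

alpha_beta = 10.0    # Gy, typical early-responding/tumour value (assumed)
mu = 0.46            # 1/h, repair half-time ~1.5 h (assumed)

ldr = bed_ldr(39.0, 48.0, alpha_beta, mu)   # study's LDR schedule
hdr = bed_hdr(11, 4.0, alpha_beta)          # 11 fractions, assumed d = 4 Gy
```

    Matching the two BEDs (by adjusting n and d for the chosen tissue parameters) is the design exercise the abstract refers to.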

  20. Statistical equivalence and test-retest reliability of delay and probability discounting using real and hypothetical rewards.

    Science.gov (United States)

    Matusiewicz, Alexis K; Carter, Anne E; Landes, Reid D; Yi, Richard

    2013-11-01

    Delay discounting (DD) and probability discounting (PD) refer to the reduction in the subjective value of outcomes as a function of delay and uncertainty, respectively. Elevated measures of discounting are associated with a variety of maladaptive behaviors, and confidence in the validity of these measures is imperative. The present research examined (1) the statistical equivalence of discounting measures when rewards were hypothetical or real, and (2) their 1-week reliability. While previous research has partially explored these issues using the low threshold of nonsignificant difference, the present study fully addressed this issue using the more compelling threshold of statistical equivalence. DD and PD measures were collected from 28 healthy adults using real and hypothetical $50 rewards during each of two experimental sessions, one week apart. Analyses using area-under-the-curve measures revealed a general pattern of statistical equivalence, indicating equivalence of real/hypothetical conditions as well as 1-week reliability. Exceptions are identified and discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
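    Statistical equivalence at a chosen margin is commonly demonstrated with two one-sided tests (TOST): non-equivalence is rejected only if the mean difference is significantly above the lower bound and significantly below the upper bound. A paired-sample sketch is below; the margin, simulated AUC values and the use of TOST itself are illustrative assumptions, not necessarily the study's exact procedure.

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, bound, alpha=0.05):
    """Two one-sided t-tests (TOST) for equivalence of paired measures:
    returns the larger of the two one-sided p-values; equivalence is
    concluded when it falls below alpha."""
    d = np.asarray(x) - np.asarray(y)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_low = (d.mean() + bound) / se           # H0: mean diff <= -bound
    t_high = (d.mean() - bound) / se          # H0: mean diff >= +bound
    p_low = 1.0 - stats.t.cdf(t_low, n - 1)
    p_high = stats.t.cdf(t_high, n - 1)
    return max(p_low, p_high)

# Hypothetical AUC discounting measures: real vs hypothetical $50 rewards,
# n = 28 as in the study, but the values themselves are simulated
rng = np.random.default_rng(1)
auc_real = rng.uniform(0.2, 0.6, 28)
auc_hypo = auc_real + rng.normal(0.0, 0.02, 28)

p = tost_paired(auc_real, auc_hypo, bound=0.1)   # equivalence margin (assumed)
equivalent = p < 0.05
```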

  1. Probabilistic evaluation of design S-N curve and reliability assessment of ASME code-based evaluation

    International Nuclear Information System (INIS)

    Zhao Yongxiang

    1999-01-01

    A probabilistic evaluation approach for the design S-N curve and a reliability assessment approach for the ASME code-based evaluation are presented on the basis of Langer S-N model-based P-S-N curves. The P-S-N curves are estimated by a so-called general maximum likelihood method. This method can be applied to virtual stress amplitude-crack initiation life data, which are characterized by two random variables. Investigation of a set of virtual stress amplitude-crack initiation life (S-N) data for a 1Cr18Ni9Ti austenitic stainless steel welded joint reveals that the P-S-N curves can give a good prediction of the scatter regularity of the S-N data. Probabilistic evaluation of the design S-N curve with 0.9999 survival probability takes into account various uncertainties, beyond the scatter of the S-N data, to an appropriate extent. The ASME code-based evaluation with a reduction factor of 20 on the mean life is much more conservative than that with a factor of 2 on the stress amplitude. The latter evaluation is equivalent to a survival probability of 0.999522 at a virtual stress amplitude of 666.61 MPa, and to 0.9999999995 at 2092.18 MPa. This means that the evaluation at low loading levels may be non-conservative and, in contrast, too conservative at high loading levels. The cause is that the reduction factors are constants and cannot take into account the general observation that the scatter of the N data increases as the loading level decreases. This indicates that it is necessary to apply the probabilistic approach to the evaluation of the design S-N curve

  2. Analytical relations between elastic-plastic fracture criteria

    International Nuclear Information System (INIS)

    Merkle, J.G.

    1976-07-01

    The equation of the normalized COD design curve recently proposed in the UK as a basis for determining allowable crack sizes is derived from the Equivalent Energy approximation for the J Integral. It is also shown that another approximation for the J Integral recently proposed by Westinghouse is mathematically equivalent to the normalized COD approach

  3. Curve Evolution in Subspaces and Exploring the Metameric Class of Histogram of Gradient Orientation based Features using Nonlinear Projection Methods

    DEFF Research Database (Denmark)

    Tatu, Aditya Jayant

    This thesis deals with two unrelated issues, restricting curve evolution to subspaces and computing image patches in the equivalence class of Histogram of Gradient orientation based features using nonlinear projection methods. Curve evolution is a well known method used in various applications like tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application-specific requirements like shape priors or a given data model, and due to limitations of the computer, the computed curve evolution forms a path in some finite dimensional subspace of the space of curves. We give methods to restrict the curve evolution to a finite dimensional linear or implicitly defined...

  4. Lagrangian Curves on Spectral Curves of Monopoles

    International Nuclear Information System (INIS)

    Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.

    2010-01-01

    We study Lagrangian points on smooth holomorphic curves in TP^1 equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP^1 with the space LE^3 of oriented affine lines in Euclidean 3-space, these Lagrangian curves give rise to ruled surfaces in E^3, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E^3, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E^3 where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.

  5. On a framework for generating PoD curves assisted by numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Subair, S. Mohamed, E-mail: prajagopal@iitm.ac.in; Agrawal, Shweta, E-mail: prajagopal@iitm.ac.in; Balasubramaniam, Krishnan, E-mail: prajagopal@iitm.ac.in; Rajagopal, Prabhu, E-mail: prajagopal@iitm.ac.in [Indian Institute of Technology Madras, Department of Mechanical Engineering, Chennai, T.N. (India); Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar [Indira Gandhi Centre for Atomic Research, Metallurgy and Materials Group, Kalpakkam, T.N. (India)

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
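    A common way to produce PoD curves from signal-response data is the log-linear "â versus a" model (the MIL-HDBK-1823 style approach): regress log signal on log defect size, assume normally distributed residuals, and take PoD(a) as the probability that the predicted log signal exceeds the decision threshold. The sketch below uses simulated data and hypothetical parameters, not the austenitic SS plate measurements, and omits the Bayesian updating the paper describes.

```python
import numpy as np
from scipy import stats

# Simulated signal-response data: amplitude grows with notch depth,
# with log-normal scatter (all parameter values hypothetical)
rng = np.random.default_rng(2)
depth = rng.uniform(0.2, 3.0, 200)                        # notch depth, mm
signal = 0.8 * depth * np.exp(rng.normal(0.0, 0.25, 200))  # UT amplitude

threshold = 0.6                                           # noise/decision threshold
beta1, beta0 = np.polyfit(np.log(depth), np.log(signal), 1)
resid = np.log(signal) - (beta0 + beta1 * np.log(depth))
resid_sd = np.std(resid, ddof=2)

def pod(a):
    """PoD(a) = P(log signal > log threshold) under the fitted linear model."""
    mu = beta0 + beta1 * np.log(a)
    return stats.norm.sf(np.log(threshold), loc=mu, scale=resid_sd)

# a90: defect size detected with 90% probability (z_0.90 = 1.2816)
a90 = np.exp((np.log(threshold) - beta0 + 1.2816 * resid_sd) / beta1)
```

    In a Bayesian variant, priors on beta0, beta1 and resid_sd would be updated with the FE-simulated responses rather than fitted by least squares.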

  6. Investigation of the bases for use of the KIc curve

    International Nuclear Information System (INIS)

    McCabe, D.E.; Nanstad, R.K.; Rosenfield, A.R.; Marschall, C.W.; Irwin, G.R.

    1991-01-01

    Title 10 of the Code of Federal Regulations, Part 50 (10CFR50), Appendix G, establishes the bases for setting allowable pressure and temperature limits on reactors during heatup and cooldown operation. Both the K_Ic and K_Ia curves are utilized in prescribed ways to maintain reactor vessel structural integrity in the presence of an assumed or actual flaw and operating stresses. Currently, the code uses the K_Ia curve, normalized to the RT_NDT, to represent the fracture toughness trend for unirradiated and irradiated pressure vessel steels. Although this is clearly a conservative policy, it has been suggested that the K_Ic curve is the more appropriate for application to a non-accident operating condition. A number of uncertainties have been identified, however, that might convert normal operating transients into a dynamic loading situation. Those include the introduction of running cracks from local brittle zones, crack pop-ins, reduced toughness from arrested cleavage cracks, description of the K_Ic curve for irradiated materials, and other related unresolved issues relative to elastic-plastic fracture mechanics. Some observations and conclusions can be made regarding various aspects of those uncertainties and they are discussed in this paper. A discussion of further work required and under way to address the remaining uncertainties is also presented

  7. Strength Estimation of Die Cast Beams Considering Equivalent Porous Defects

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon Shik [Hannam Univ., Daejeon (Korea, Republic of)

    2017-05-15

    As a shop practice, a strength estimation method for die cast parts is suggested, in which various defects such as pores can be allowed. The equivalent porosity is evaluated by combining the stiffness data from a simple elastic test at the part level during the shop practice and the theoretical stiffness data, which are defect free. A porosity equation is derived from Eshelby's inclusion theory. Then, using the Mori-Tanaka method, the porosity value is used to draw a stress-strain curve for the porous material. In this paper, the Hollomon equation is used to capture the strain hardening effect. This stress-strain curve can be used to estimate the strength of a die cast part with porous defects. An elastoplastic theoretical solution is derived for the three-point bending of a die cast beam by using the plastic hinge method as a reference solution for a part with porous defects.
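    The workflow above can be sketched as: estimate an equivalent porosity from the stiffness deficit of the cast part, then degrade a Hollomon stress-strain curve accordingly. The stiffness-porosity relation used below, E/E0 = (1 − p)², is a simple placeholder standing in for the paper's Eshelby/Mori-Tanaka derivation, and all material values are hypothetical.

```python
import numpy as np

def equivalent_porosity(E_measured, E_defect_free):
    """Placeholder stiffness-porosity relation E/E0 = (1 - p)**2,
    inverted to p = 1 - sqrt(E/E0). (Illustrative only; the paper
    derives its porosity equation from Eshelby's inclusion theory.)"""
    return 1.0 - np.sqrt(E_measured / E_defect_free)

def hollomon_stress(strain, K, n):
    """Hollomon strain-hardening law: sigma = K * eps**n."""
    return K * strain**n

# Hypothetical aluminium die casting: stiffness from a simple elastic
# part-level test vs the defect-free theoretical value (MPa)
E0, E_meas = 70.0e3, 60.0e3
p = equivalent_porosity(E_meas, E0)        # ~0.074 equivalent porosity

K, n = 400.0, 0.15                         # hypothetical hardening constants (MPa)
eps = np.linspace(1e-4, 0.05, 200)
# Crude load-bearing-area reduction of the flow curve by the porosity
sigma_porous = (1.0 - p) * hollomon_stress(eps, K, n)
```

    The degraded curve would then feed the plastic-hinge solution for the three-point bend strength estimate.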

  8. Estimation of blocking temperatures from ZFC/FC curves

    DEFF Research Database (Denmark)

    Hansen, Mikkel Fougt; Mørup, Steen

    1999-01-01

    We present a new method to extract the parameters of a log-normal distribution of energy barriers in an assembly of ultrafine magnetic particles from simple features of the zero-field cooled and field cooled magnetisation curves. The method is established using numerical simulations and is tested...

  9. Plasma flow in a curved magnetic field

    International Nuclear Information System (INIS)

    Lindberg, L.

    1977-09-01

    A beam of collisionless plasma is injected along a longitudinal magnetic field into a region of curved magnetic field. Two unpredicted phenomena are observed: the beam becomes deflected in the direction opposite to that in which the field is curved, and it contracts to a flat slab in the plane of curvature of the magnetic field. The phenomenon is of a general character and can be expected to occur in a very wide range of densities. The lower density limit is set by the condition for self-polarization, n m_i/(ε_0 B^2) >> 1 or, equivalently, c^2/v_A^2 >> 1, where c is the velocity of light and v_A the Alfven velocity. The upper limit is presumably set by the requirement ω_e τ_e >> 1. The phenomenon is likely to be of importance, e.g., for injection of plasma into magnetic bottles and in space and solar physics. The paper illustrates the complexity of plasma flow phenomena and the importance of close contact between experimental and theoretical work. (author)

  10. Method for linearizing the potentiometric curves of precipitation titration in nonaqueous and aqueous-organic solutions

    International Nuclear Information System (INIS)

    Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.

    1995-01-01

    The method for linearizing the potentiometric curves of precipitation titration is studied for its application in the determination of halide ions (Cl⁻, Br⁻, I⁻) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the method of linearization permits the determination of the titrant volume at the end point of titration to high accuracy in the case of titration curves without a potential jump in the proximity of the equivalence point (5 × 10⁻⁵ M). 3 refs., 2 figs., 3 tabs
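    The classic linearization of this kind is a Gran-type transform: before the equivalence point, (V0 + V)·10^(−E/S) is linear in titrant volume V and extrapolates to zero at the equivalence volume, so no potential jump is needed. The sketch below uses an ideal Nernstian electrode and synthetic data; the sign convention and slope S depend on the cell, and this is not necessarily the exact transform the paper develops.

```python
import numpy as np

S = 59.16                       # mV per decade, Nernst slope at 25 C (assumed)
V0 = 50.0                       # mL of sample solution
Veq_true = 10.0                 # true equivalence volume (mL), for the synthetic data

# Synthetic titration points well before the equivalence point
V = np.linspace(1.0, 8.0, 8)                 # titrant added, mL
conc = (Veq_true - V) / (V0 + V)             # remaining halide, relative units
E = -S * np.log10(conc)                      # ideal electrode response (mV)

# Gran function: (V0 + V) * 10**(-E/S) is proportional to (Veq - V)
gran = (V0 + V) * 10.0 ** (-E / S)
m, b = np.polyfit(V, gran, 1)                # fit the linear region
Veq = -b / m                                 # x-intercept = equivalence volume
```

    With real data, only the region where the electrode responds Nernstianly would be fitted, and side equilibria (as in the aprotic solvents above) would curve the plot away from the line.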

  11. CSI 2264: characterizing accretion-burst dominated light curves for young stars in NGC 2264

    International Nuclear Information System (INIS)

    Stauffer, John; Cody, Ann Marie; Rebull, Luisa; Carey, Sean; Baglin, Annie; Alencar, Silvia; Hillenbrand, Lynne A.; Carpenter, John; Findeisen, Krzysztof; Venuti, Laura; Bouvier, Jerome; Turner, Neal J.; Plavchan, Peter; Terebey, Susan; Morales-Calderón, María; Micela, Giusi; Flaccomio, Ettore; Song, Inseok; Gutermuth, Rob; Hartmann, Lee

    2014-01-01

    Based on more than four weeks of continuous high-cadence photometric monitoring of several hundred members of the young cluster NGC 2264 with two space telescopes, NASA's Spitzer and the CNES CoRoT (Convection, Rotation, and planetary Transits), we provide high-quality, multi-wavelength light curves for young stellar objects whose optical variability is dominated by short-duration flux bursts, which we infer are due to enhanced mass accretion rates. These light curves show many brief—several hours to one day—brightenings at optical and near-infrared wavelengths with amplitudes generally in the range of 5%-50% of the quiescent value. Typically, a dozen or more of these bursts occur in a 30 day period. We demonstrate that stars exhibiting this type of variability have large ultraviolet (UV) excesses and dominate the portion of the u – g versus g – r color-color diagram with the largest UV excesses. These stars also have large Hα equivalent widths, and either centrally peaked, lumpy Hα emission profiles or profiles with blueshifted absorption dips associated with disk or stellar winds. Light curves of this type have been predicted for stars whose accretion is dominated by Rayleigh-Taylor instabilities at the boundary between their magnetosphere and inner circumstellar disk, or where magneto-rotational instabilities modulate the accretion rate from the inner disk. Among the stars with the largest UV excesses or largest Hα equivalent widths, light curves with this type of variability greatly outnumber light curves with relatively smooth sinusoidal variations associated with long-lived hot spots. We provide quantitative statistics for the average duration and strength of the accretion bursts and for the fraction of the accretion luminosity associated with these bursts.

  12. Graphical interpretation of confidence curves in rankit plots

    DEFF Research Database (Denmark)

    Hyltoft Petersen, Per; Blaabjerg, Ole; Andersen, Marianne

    2004-01-01

    A well-known transformation from the bell-shaped Gaussian (normal) curve to a straight line in the rankit plot is investigated, and a tool for evaluation of the distribution of reference groups is presented. It is based on the confidence intervals for percentiles of the calculated Gaussian distri...
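
The rankit transformation described above is easy to sketch: plot the sorted observations against the expected normal order statistics, and a Gaussian reference group falls on a straight line whose slope and intercept estimate the SD and the mean. The sketch below uses Blom's approximation for the rankits and synthetic data; it illustrates the transformation only, not the paper's confidence-curve construction.

```python
from statistics import NormalDist, mean

def rankits(n):
    # Blom's approximation to the expected normal order statistics
    nd = NormalDist()
    return [nd.inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]

def rankit_fit(sample):
    """Least-squares line through (rankit, sorted value) pairs.

    For a Gaussian sample the points fall on a straight line whose
    slope estimates the SD and whose intercept estimates the mean."""
    ys = sorted(sample)
    xs = rankits(len(ys))
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# A synthetic "reference group": exact Gaussian quantiles, mean 5.0, SD 2.0
nd = NormalDist(mu=5.0, sigma=2.0)
data = [nd.inv_cdf((i + 0.5) / 200) for i in range(200)]
slope, intercept = rankit_fit(data)
```

Departures of the plotted points from this fitted line, judged against percentile confidence intervals, are what the graphical interpretation above is concerned with.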

  13. Equivalent Sensor Radiance Generation and Remote Sensing from Model Parameters. Part 1: Equivalent Sensor Radiance Formulation

    Science.gov (United States)

    Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.

    2013-01-01

    In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.

  14. The S-curve for forecasting waste generation in construction projects.

    Science.gov (United States)

    Lu, Weisheng; Peng, Yi; Chen, Xi; Skitmore, Martin; Zhang, Xiaoling

    2016-10-01

    Forecasting construction waste generation is the yardstick of any effort by policy-makers, researchers, practitioners and the like to manage construction and demolition (C&D) waste. This paper develops and tests an S-curve model to indicate accumulative waste generation as a project progresses. Using 37,148 disposal records generated from 138 building projects in Hong Kong in four consecutive years from January 2011 to June 2015, a wide range of potential S-curve models are examined, and as a result, the formula that best fits the historical data set is found. The S-curve model is then further linked to project characteristics using artificial neural networks (ANNs) so that it can be used to forecast waste generation in future construction projects. It was found that, among the S-curve models, cumulative logistic distribution is the best formula to fit the historical data. Meanwhile, contract sum, location, public-private nature, and duration can be used to forecast construction waste generation. The study provides contractors not only with an S-curve model to forecast overall waste generation before a project commences, but also with a detailed baseline to benchmark and manage waste during the course of construction. The major contribution of this paper is to the body of knowledge in the field of construction waste generation forecasting. By examining it with an S-curve model, the study elevates construction waste management to a level equivalent to project cost management, where the model has already been readily accepted as a standard tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
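
A cumulative logistic S-curve of the kind identified above can be fitted to cumulative waste fractions by linearising the logit, since logit(y) = (x - m) / s is linear in project progress x. This is a hedged illustration with invented data, not the paper's fitted model or its ANN link to project characteristics.

```python
import math

def fit_logistic_scurve(progress, cum_fraction):
    """Fit y = 1 / (1 + exp(-(x - m) / s)) by linearising the logit.

    Ordinary least squares on (x, logit(y)) recovers the midpoint m
    and the scale s of the S-curve."""
    pts = [(x, math.log(y / (1.0 - y)))
           for x, y in zip(progress, cum_fraction) if 0.0 < y < 1.0]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    ml = sum(l for _, l in pts) / n
    sxy = sum((x - mx) * (l - ml) for x, l in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    slope = sxy / sxx            # = 1 / s
    s = 1.0 / slope
    m = mx - ml * s              # from ml = (mx - m) / s
    return m, s

# Synthetic project: waste accumulates logistically around 60% progress
xs = [i / 20 for i in range(1, 20)]                      # progress 5%..95%
ys = [1 / (1 + math.exp(-(x - 0.6) / 0.1)) for x in xs]  # cumulative fraction
m, s = fit_logistic_scurve(xs, ys)
```

With the fitted m and s, the curve can serve as the kind of baseline described above: expected cumulative waste at any stage of the project.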

  15. Modeling Patterns of Activities using Activity Curves.

    Science.gov (United States)

    Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen

    2016-06-01

    Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual's normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics.
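
One simple way to compare two activity curves, here represented as one activity distribution per time slot, is the mean total-variation distance across slots. This metric and the toy data are assumptions for illustration, not necessarily the distance used by the authors.

```python
def tv_distance(p, q):
    """Total-variation distance between two activity distributions,
    each a dict mapping activity name -> probability."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def activity_change(curve_a, curve_b):
    """Toy change score between two activity curves (one distribution
    per time slot): mean total-variation distance across slots.
    0 means identical routines, 1 means completely disjoint ones."""
    return sum(tv_distance(a, b) for a, b in zip(curve_a, curve_b)) / len(curve_a)

# Two hypothetical two-slot routines (e.g. night and morning)
baseline = [{"sleep": 0.9, "other": 0.1}, {"cook": 0.5, "eat": 0.5}]
current  = [{"sleep": 0.7, "other": 0.3}, {"cook": 0.5, "eat": 0.5}]
score = activity_change(baseline, current)
```

A persistent upward drift in such a score over weeks of monitoring is the kind of signal the change detection approach above looks for.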

  16. TORREFACTION OF CELLULOSE: VALIDITY AND LIMITATION OF THE TEMPERATURE/DURATION EQUIVALENCE

    Directory of Open Access Journals (Sweden)

    Pin Lv,

    2012-06-01

    Full Text Available During torrefaction of biomass, equivalence between temperature and residence time is often reported, either in terms of the loss of mass or the alteration of properties. The present work proposes a rigorous investigation of this equivalence. Cellulose, as the main lignocellulosic biomass component, was treated under mild pyrolysis for 48 hours. Several couples of T-D (temperature-duration) points were selected from TGA curves to obtain mass losses of 11.6%, 25%, 50%, 74.4%, and 86.7%. The corresponding residues were subjected to Fourier transform infrared spectroscopy for analysis. According to the FTIR results, a suitably accurate match to global T-D equivalence is exhibited up to 50% mass loss: in this domain, mass loss is well correlated to the treatment intensity (molecular composition of the residue) except for slight differences in the production of C=C and C=O. For mass loss levels of 74.4% and 86.7%, distinct degradation mechanisms take place at different combinations of temperature and duration, and the correlation fails. Compared to the mass loss at 220°C and 250°C, the equivalent molecular composition can be achieved through treatment at 280°C with shorter treatment time and less depolymerization and oxidation. The main conclusion drawn is that mass loss can be used as a synthetic indicator of the treatment intensity in the temperature range of 220°C to 280°C up to a mass loss of 50%.
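
A common way to sketch a temperature/duration equivalence is a first-order Arrhenius severity model: equal conversion requires t2 = t1 * exp((Ea/R)(1/T2 - 1/T1)). This is a generic kinetics sketch, not the kinetics fitted in this work, and the activation energy below is an assumed illustrative value.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equivalent_duration(t1_h, temp1_c, temp2_c, ea=120e3):
    """Duration at temp2 giving the same first-order conversion as
    t1_h hours at temp1, under an Arrhenius rate law.
    ea (activation energy, J/mol) is an assumed illustrative value."""
    T1, T2 = temp1_c + 273.15, temp2_c + 273.15
    return t1_h * math.exp(ea / R * (1.0 / T2 - 1.0 / T1))

# e.g. 48 h at 220 degC maps to a much shorter hold at 280 degC
t_280 = equivalent_duration(48.0, 220.0, 280.0)
```

The model captures the trade-off reported above (hotter means shorter), while the FTIR results show where such a single-indicator equivalence breaks down (beyond about 50% mass loss).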

  17. Curved butterfly bileaflet prosthetic cardiac valve

    Science.gov (United States)

    McQueen, David M.; Peskin, Charles S.

    1991-06-25

    An annular valve body having a central passageway for the flow of blood therethrough, with two curved leaflets, each of which is pivotally supported on an eccentrically positioned axis in the central passageway for moving between a closed position and an open position. The leaflets are curved in a plane normal to the eccentric axis and positioned with the convex side of the leaflets facing each other when the leaflets are in the open position. Various parameters such as the curvature of the leaflets, the location of the eccentric axis, and the maximum opening angle of the leaflets are optimized according to the following performance criteria: maximize the minimum peak velocity through the valve, maximize the net stroke volume, and minimize the mean forward pressure difference, thereby reducing thrombosis and improving the hemodynamic performance.

  18. Determination of dose equivalent with tissue-equivalent proportional counters

    International Nuclear Information System (INIS)

    Dietze, G.; Schuhmacher, H.; Menzel, H.G.

    1989-01-01

    Low pressure tissue-equivalent proportional counters (TEPC) are instruments based on the cavity chamber principle and provide spectral information on the energy loss of single charged particles crossing the cavity. Hence such detectors measure absorbed dose or kerma and are able to provide estimates of radiation quality. During recent years TEPC based instruments have been developed for radiation protection applications in photon and neutron fields. This was mainly based on the expectation that the energy dependence of their dose equivalent response is smaller than that of other instruments in use. Recently, such instruments have been investigated by intercomparison measurements in various neutron and photon fields. Although their principles of measurement are more closely related to the definition of dose equivalent quantities than those of other existing dosemeters, there are distinct differences and limitations with respect to the irradiation geometry and the determination of the quality factor. The application of such instruments for measuring ambient dose equivalent is discussed. (author)

  19. Dirac equation on a curved surface

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, F.T., E-mail: fbrandt@usp.br; Sánchez-Monroy, J.A., E-mail: antosan@usp.br

    2016-09-07

    The dynamics of Dirac particles confined to a curved surface is examined employing the thin-layer method. We perform a perturbative expansion to first-order and split the Dirac field into normal and tangential components to the surface. In contrast to the known behavior of second order equations like Schrödinger, Maxwell and Klein–Gordon, we find that there is no geometric potential for the Dirac equation on a surface. This implies that the non-relativistic limit does not commute with the thin-layer method. Although this problem can be overcome when second-order terms are retained in the perturbative expansion, this would preclude the decoupling of the normal and tangential degrees of freedom. Therefore, we propose to introduce a first-order term which rescues the non-relativistic limit and also clarifies the effect of the intrinsic and extrinsic curvatures on the dynamics of the Dirac particles. - Highlights: • The thin-layer method is employed to derive the Dirac equation on a curved surface. • A geometric potential is absent at least to first-order in the perturbative expansion. • The effects of the extrinsic curvature are included to rescue the non-relativistic limit. • The resulting Dirac equation is consistent with the Heisenberg uncertainty principle.

  20. Global equivalent magnetization of the oceanic lithosphere

    Science.gov (United States)

    Dyment, J.; Choi, Y.; Hamoudi, M.; Lesur, V.; Thebault, E.

    2015-11-01

    As a by-product of the construction of a new World Digital Magnetic Anomaly Map over oceanic areas, we use an original approach based on the global forward modeling of seafloor spreading magnetic anomalies and their comparison to the available marine magnetic data to derive the first map of the equivalent magnetization over the World's ocean. This map reveals consistent patterns related to the age of the oceanic lithosphere, the spreading rate at which it was formed, and the presence of mantle thermal anomalies which affect seafloor spreading and the resulting lithosphere. As for the age, the equivalent magnetization decreases significantly during the first 10-15 Myr after its formation, probably due to the alteration of crustal magnetic minerals under pervasive hydrothermal alteration, then increases regularly between 20 and 70 Ma, reflecting variations in the field strength or source effects such as the acquisition of a secondary magnetization. As for the spreading rate, the equivalent magnetization is twice as strong in areas formed at fast rates as in those formed at slow rates, with a threshold at ∼40 km/Myr, in agreement with an independent global analysis of the amplitude of Anomaly 25. This result, combined with those from the study of the anomalous skewness of marine magnetic anomalies, allows building a unified model for the magnetic structure of normal oceanic lithosphere as a function of spreading rate. Finally, specific areas affected by thermal mantle anomalies at the time of their formation exhibit peculiar equivalent magnetization signatures, such as the cold Australian-Antarctic Discordance, marked by a lower magnetization, and several hotspots, marked by a high magnetization.

  1. Topological Equivalence of Objects. Teacher's Guide for Use with Stretching and Bending. Working Paper No. 18a.

    Science.gov (United States)

    Shah, Sair Ali

    The notions of topological equivalence for one-, two-, and three-dimensional figures, as well as for graphs and networks, are developed for classroom use with children between the ages of three and ten. Properties of open and closed curves are also examined. This manual, addressed to the teacher, describes several activities related to each…

  2. In silico sampling reveals the effect of clustering and shows that the log-normal rank abundance curve is an artefact

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    The impact of clustering on rank abundance, species-individual (S-N) and species-area curves was investigated using a computer programme for in silico sampling. In a rank abundance curve the abundances of species are plotted on log-scale against species sequence. In an S-N curve the number of species
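
The rank abundance curve described above is straightforward to compute: sort the species abundances in decreasing order and plot log abundance against rank. A minimal sketch with invented counts:

```python
import math

def rank_abundance(counts):
    """Rank abundance curve: abundances sorted in decreasing order,
    returned as (rank, log10 abundance) pairs for plotting."""
    ordered = sorted(counts, reverse=True)
    return [(rank, math.log10(n)) for rank, n in enumerate(ordered, start=1)]

# A small hypothetical community: 4 species with very uneven abundances
curve = rank_abundance([1000, 10, 100, 1])
```

Whether such a plotted curve follows a straight line, a log-normal shape, or something else, and how sampling and clustering distort that shape, is the question the abstract above addresses.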

  3. On some Closed Magnetic Curves on a 3-torus

    Energy Technology Data Exchange (ETDEWEB)

    Munteanu, Marian Ioan, E-mail: marian.ioan.munteanu@gmail.com [Alexandru Ioan Cuza University of Iaşi, Faculty of Mathematics (Romania); Nistor, Ana Irina, E-mail: ana.irina.nistor@gmail.com [Gh. Asachi Technical University of Iaşi, Department of Mathematics and Informatics (Romania)

    2017-06-15

    We consider two magnetic fields on the 3-torus obtained from two different contact forms on the Euclidean 3-space and we study when their corresponding normal magnetic curves are closed. We obtain periodicity conditions analogous to those for the closed geodesics on the torus.

  4. Distribution of Snow and Maximum Snow Water Equivalent Obtained by LANDSAT Data and Degree Day Method

    Science.gov (United States)

    Takeda, K.; Ochiai, H.; Takeuchi, S.

    1985-01-01

    Maximum snow water equivalence and snowcover distribution are estimated using several LANDSAT data taken during the snowmelt season over a four-year period. The test site is Okutadami-gawa Basin, located in the central position of Tohoku-Kanto-Chubu District. The year-to-year normalization for snowmelt volume computation on the snow line is conducted by year-to-year correction of degree days using the snowcover percentage within the test basin obtained from LANDSAT data. The maximum snow water equivalent map in the test basin is generated based on the normalized snowmelt volume on the snow line extracted from four LANDSAT data taken in different years. The snowcover distribution on an arbitrary day during the 1982 snowmelt season is estimated from the maximum snow water equivalent map. The estimated snowcover is compared with the snowcover area extracted from NOAA-AVHRR data taken on the same day. The applicability of the snow estimation using LANDSAT data is discussed.
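
The degree-day method named in the title accumulates melt in proportion to positive daily temperature excess: melt_d = DDF * max(T_d - T_base, 0). The sketch below uses an invented degree-day factor and temperature series for illustration.

```python
def degree_day_melt(daily_temps_c, ddf=4.0, t_base=0.0):
    """Cumulative snowmelt (mm water equivalent) from a daily mean
    temperature series using the degree-day method.
    ddf is a hypothetical degree-day factor in mm / (degC * day)."""
    total = 0.0
    series = []
    for t in daily_temps_c:
        total += ddf * max(t - t_base, 0.0)
        series.append(total)
    return series

temps = [-2.0, 1.0, 3.5, 0.5, 6.0]   # made-up daily means, degC
melt = degree_day_melt(temps)         # cumulative mm w.e. after each day
```

Summing such increments from the snow line upward, corrected year to year as described above, yields the snowmelt volumes behind the maximum snow water equivalent map.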

  5. Development of the curve of Spee.

    Science.gov (United States)

    Marshall, Steven D; Caspersen, Matthew; Hardinger, Rachel R; Franciscus, Robert G; Aquilino, Steven A; Southard, Thomas E

    2008-09-01

    Ferdinand Graf von Spee is credited with characterizing human occlusal curvature viewed in the sagittal plane. This naturally occurring phenomenon has clinical importance in orthodontics and restorative dentistry, yet we have little understanding of when, how, or why it develops. The purpose of this study was to expand our understanding by examining the development of the curve of Spee longitudinally in a sample of untreated subjects with normal occlusion from the deciduous dentition to adulthood. Records of 16 male and 17 female subjects from the Iowa Facial Growth Study were selected and examined. The depth of the curve of Spee was measured on their study models at 7 time points from ages 4 (deciduous dentition) to 26 (adult dentition) years. The Wilcoxon signed rank test was used to compare changes in the curve of Spee depth between time points. For each subject, the relative eruption of the mandibular teeth was measured from corresponding cephalometric radiographs, and its contribution to the developing curve of Spee was ascertained. In the deciduous dentition, the curve of Spee is minimal. At mean ages of 4.05 and 5.27 years, the average curve of Spee depths are 0.24 and 0.25 mm, respectively. With change to the transitional dentition, corresponding to the eruption of the mandibular permanent first molars and central incisors (mean age, 6.91 years), the curve of Spee depth increases significantly (P < 0.0001) to a mean maximum depth of 1.32 mm. The curve of Spee then remains essentially unchanged until eruption of the second molars (mean age, 12.38 years), when the depth increases (P < 0.0001) to a mean maximum depth of 2.17 mm. In the adolescent dentition (mean age, 16.21 years), the depth decreases slightly (P = 0.0009) to a mean maximum depth of 1.98 mm, and, in the adult dentition (mean age 26.98 years), the curve remains unchanged (P = 0.66), with a mean maximum depth of 2.02 mm. No significant differences in curve of Spee development were found between

  6. W-curve alignments for HIV-1 genomic comparisons.

    Directory of Open Access Journals (Sweden)

    Douglas J Cork

    2010-06-01

    Full Text Available The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly.We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A treebuilding method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison.The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE.Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time-short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison

  7. W-curve alignments for HIV-1 genomic comparisons.

    Science.gov (United States)

    Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A treebuilding method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time-short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison technique of
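
The W-curve construction itself can be sketched as a chaos-game-style walk: each nucleotide is assigned a corner of a square, the walker moves halfway toward the current base's corner, and the sequence index supplies the third coordinate. The corner layout below is an assumption for illustration, not necessarily the authors' exact mapping.

```python
# Assumed corner assignment for A, C, G, T in the xy-plane
CORNERS = {"A": (1, 1), "C": (-1, 1), "G": (-1, -1), "T": (1, -1)}

def w_curve(seq):
    """3D W-curve of a DNA sequence: at each base the point moves
    halfway toward that base's corner while z advances one unit."""
    x = y = 0.0
    pts = []
    for z, base in enumerate(seq, start=1):
        cx, cy = CORNERS[base]
        x += (cx - x) / 2.0
        y += (cy - y) / 2.0
        pts.append((x, y, z))
    return pts

path = w_curve("ACGT")
```

Because each point is computed from the previous one in a single pass, comparing two such curves avoids the recursion of string-based alignment, which is the speed advantage claimed above.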

  8. Diagnostic efficacy for coronary in-stent patency with parameters defined on Hounsfield CT value-spatial profile curves

    International Nuclear Information System (INIS)

    Yamazaki, Tadashi; Suzuki, Jun-ichi; Shimamoto, Ryoichi; Tsuji, Taeko; Ohmoto-Sekine, Yuki; Morita, Toshihiro; Yamashita, Hiroshi; Honye, Junko; Nagai, Ryozo; Komatsu, Shuhei; Akahane, Masaaki; Ohtomo, Kuni

    2008-01-01

    Purpose: Hounsfield CT values across coronary CT angiograms constitute CT value-spatial profile curves. These CT profile curves are independent of window settings, and therefore, parameters derived from the curves can be used for objective anatomic analyses. Applicability of parameters derived from the curves to quantification of coronary in-stent patency has not yet been evaluated. Methods: Twenty-five CT value-spatial profile curves were delineated from 10 consecutive coronary stents to test correlation between the curve-derived parameter (i.e., the minimum extreme value normalized by dividing by the maximum value of the curves obtained just outside the stents) and three intravascular ultrasound (IVUS) parameters. Results: Correlation coefficients between the normalized minimum extreme value of CT value-spatial profile curves and three IVUS parameters (such as patent cross-sectional in-stent area, the percentage of patent cross-sectional in-stent area, and coronary artery intra-stent diameter) were 0.65 (p < 0.01), 0.44 (p < 0.05) and 0.51 (p < 0.05), respectively. Conclusions: CT parameters defined on Hounsfield CT value-spatial profile curves correlated significantly with IVUS parameters for quantitative coronary in-stent patency. A new approach with CT coronary angiography is therefore indicated for the noninvasive assessment of in-stent re-stenosis.
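
The curve-derived parameter reduces to a one-line ratio: the minimum of the profile inside the stent divided by the maximum of the profile sampled just outside it. The Hounsfield values below are invented for illustration.

```python
def normalized_minimum(in_stent_hu, outside_hu):
    """Minimum of the CT value-spatial profile inside the stent,
    normalised by the maximum of the profile just outside it.
    As a ratio of Hounsfield values it is independent of the
    display window setting."""
    return min(in_stent_hu) / max(outside_hu)

# Hypothetical profile samples (HU): a patent stent keeps a bright lumen
score_patent = normalized_minimum([420, 390, 405], [450, 430])
score_restenosed = normalized_minimum([150, 120, 180], [450, 430])
```

A lower ratio indicates attenuated contrast inside the stent, which is why the parameter tracks the IVUS patency measures reported above.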

  9. A general method to quantify quasi-equivalence in icosahedral viruses.

    Science.gov (United States)

    Damodaran, K V; Reddy, Vijay S; Johnson, John E; Brooks, Charles L

    2002-12-06

    A quantitative, atom-based, method is described for comparing protein subunit interfaces in icosahedral virus capsids with quasi-equivalent surface lattices. An integrated, normalized value (between 0 and 1) based on equivalent residue contacts (Q-score) is computed for every pair of subunit interactions and scores that are significantly above zero readily identify interfaces that are quasi-equivalent to each other. The method was applied to all quasi-equivalent capsid structures (T=3, 4, 7 and 13) in the Protein Data Bank and the Q-scores were interpreted in terms of their structural underpinnings. The analysis allowed classification of T=3 structures into three groups with architectures that resemble different polyhedra with icosahedral symmetry. The preference of subunits to form dimers in the T=4 human Hepatitis B virus capsid (HBV) was clearly reflected in high Q-scores of quasi-equivalent dimers. Interesting differences between the classical T=7 capsid and polyoma-like capsids were also identified. Application of the method to the outer-shell of the T=13 Blue tongue virus core (BTVC) highlighted the modest distortion between the interfaces of the general trimers and the strict trimers of VP7 subunits. Furthermore, the method identified the quasi 2-fold symmetry in the inner capsids of the BTV and reovirus cores. The results show that the Q-scores of various quasi-symmetries represent a "fingerprint" for a particular virus capsid architecture allowing particle classification into groups based on their underlying structural and geometric features.
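
As a hedged sketch of the idea (the published Q-score is atom-based and weights equivalent contacts; this toy version merely counts shared residue-pair contacts), an interface-similarity score normalised to [0, 1] might look like:

```python
def q_score(contacts_a, contacts_b):
    """Toy stand-in for the interface Q-score: fraction of
    residue-residue contacts shared by two subunit interfaces,
    normalised to [0, 1]. Identical interfaces score 1.0,
    unrelated ones score near 0."""
    a, b = set(contacts_a), set(contacts_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical interfaces given as residue-pair contact lists
iface_ab = [(12, 87), (13, 88), (40, 102)]
iface_cd = [(12, 87), (13, 88), (41, 103)]
score = q_score(iface_ab, iface_cd)
```

Computing such a score for every pair of subunit interfaces in a capsid yields the "fingerprint" matrix described above, in which significantly nonzero entries mark quasi-equivalent interfaces.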

  10. Response of cultured normal human mammary epithelial cells to X rays

    International Nuclear Information System (INIS)

    Yang, T.C.; Stampfer, M.R.; Smith, H.S.

    1983-01-01

    The effect of X rays on the reproductive death of cultured normal human mammary epithelial cells was examined. Techniques were developed for isolating and culturing normal human mammary epithelial cells which provide sufficient cells at second passage for radiation studies, and an efficient clonogenic assay suitable for measuring radiation survival curves. It was found that the survival curves for epithelial cells from normal breast tissue were exponential and had D0 values of about 109-148 rad for 225 kVp X rays. No consistent change in cell radiosensitivity with the age of donor was observed, and no sublethal damage repair in these cells could be detected with the split-dose technique.

  11. Growth curves in Down syndrome with congenital heart disease

    Directory of Open Access Journals (Sweden)

    Caroline D’Azevedo Sica

    Full Text Available SUMMARY Introduction: To assess dietary habits, nutritional status and food frequency in children and adolescents with Down syndrome (DS) and congenital heart disease (CHD). Additionally, we attempted to compare body mass index (BMI) classifications according to the World Health Organization (WHO) curves and curves developed for individuals with DS. Method: Cross-sectional study including individuals with DS and CHD treated at a referral center for cardiology, aged 2 to 18 years. Weight, height, BMI, total energy and food frequency were measured. Nutritional status was assessed using BMI for age and gender, using curves for evaluation of patients with DS and those set by the WHO. Results: 68 subjects with DS and CHD were evaluated. Atrioventricular septal defect (AVSD) was the most common heart disease (52.9%). There were differences in BMI classification between the curves proposed for patients with DS and those proposed by the WHO. There was an association between consumption of vitamin E and polyunsaturated fatty acids. Conclusion: Results showed that individuals with DS are mostly considered normal weight for age when evaluated using specific curves for DS. Use of specific curves for DS would be the recommended practice for health professionals so as to avoid premature diagnosis of overweight and/or obesity in this population.

  12. Peak and Tail Scaling of Breakthrough Curves in Hydrologic Tracer Tests

    Science.gov (United States)

    Aquino, T.; Aubeneau, A. F.; Bolster, D.

    2014-12-01

    Power law tails, a marked signature of anomalous transport, have been observed in solute breakthrough curves time and time again in a variety of hydrologic settings, including in streams. However, due to the low concentrations at which they occur they are notoriously difficult to measure with confidence. This leads us to ask if there are other associated signatures of anomalous transport that can be sought. We develop a general stochastic transport framework and derive an asymptotic relation between the tail scaling of a breakthrough curve for a conservative tracer at a fixed downstream position and the scaling of the peak concentration of breakthrough curves as a function of downstream position, demonstrating that they provide equivalent information. We then quantify the relevant spatiotemporal scales for the emergence of this asymptotic regime, where the relationship holds, in the context of a very simple model that represents transport in an idealized river. We validate our results using random walk simulations. The potential experimental benefits and limitations of these findings are discussed.
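
The peak/tail relationship can be probed with a crude Monte Carlo sketch, not the authors' stochastic framework: particles take heavy-tailed (Pareto) step durations, and breakthrough curves recorded farther downstream arrive later with lower peaks. All parameters below are invented for illustration.

```python
import random

def first_passage_times(x_det, n_particles, seed):
    """Travel times to a detector x_det steps downstream, each step
    taking a heavy-tailed Pareto-distributed duration, a crude stand-in
    for anomalous transport with power-law trapping times."""
    rng = random.Random(seed)
    return [sum(rng.paretovariate(1.5) for _ in range(x_det))
            for _ in range(n_particles)]

def peak_concentration(times, bin_width=10.0):
    """Peak of the breakthrough curve: the largest fraction of
    particles arriving within any single time bin."""
    hist = {}
    for t in times:
        b = int(t // bin_width)
        hist[b] = hist.get(b, 0) + 1
    return max(hist.values()) / len(times)

near = first_passage_times(50, 5000, seed=1)
far = first_passage_times(100, 5000, seed=2)
```

Fitting power laws to peak_concentration as a function of detector position is, per the result above, asymptotically equivalent to fitting the late-time tail of a single breakthrough curve, but the peak is measured at much higher concentrations.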

  13. Temporary threshold shifts from exposures to equal equivalent continuous A-weighted sound pressure level

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; Hammershøi, Dorte

    2014-01-01

    According to existing methods for the assessment of hearing damage, signals with the same A-weighted equivalent level should pose the same hazard to the auditory system. As a measure of hazard, it is assumed that Temporary Threshold Shifts (TTS) reflect the onset of alterations to the hearing, and this study examines the assumptions made using the A-weighting curve for the assessment of hearing damage. By modifying exposure ratings to compensate for the build-up of energy at mid and high frequencies (above 1 kHz) due to the presence of the listener in the sound field, and for the levels below an effect threshold that does not induce changes in hearing (equivalent quiet levels), ratings of the sound exposure that reflect the observed temporary changes in auditory function can be obtained.

  14. Derivation of Path Independent Coupled Mixed Mode Cohesive Laws from Fracture Resistance Curves

    DEFF Research Database (Denmark)

    Goutianos, Stergios

    2016-01-01

    A generalised approach is presented to derive coupled mixed mode cohesive laws described with physical parameters such as peak traction, critical opening, fracture energy and cohesive shape. The approach is based on deriving mixed mode fracture resistance curves from an effective mixed mode cohesive law at different mode mixities. From the fracture resistance curves, the normal and shear stresses of the cohesive laws can be obtained by differentiation. Since the mixed mode cohesive laws are obtained from a fracture resistance curve (potential function), path independence is automatically satisfied.

  15. Technical note: Equivalent genomic models with a residual polygenic effect.

    Science.gov (United States)

    Liu, Z; Goddard, M E; Hayes, B J; Reinhardt, F; Reents, R

    2016-03-01

    Routine genomic evaluations in animal breeding are usually based on either a BLUP with genomic relationship matrix (GBLUP) or single nucleotide polymorphism (SNP) BLUP model. For a multi-step genomic evaluation, these 2 alternative genomic models were proven to give equivalent predictions for genomic reference animals. The model equivalence was verified also for young genotyped animals without phenotypes. Due to incomplete linkage disequilibrium of SNP markers to genes or causal mutations responsible for genetic inheritance of quantitative traits, SNP markers cannot explain all the genetic variance. A residual polygenic effect is normally fitted in the genomic model to account for the incomplete linkage disequilibrium. In this study, we start by showing the proof that the multi-step GBLUP and SNP BLUP models are equivalent for the reference animals, when they have a residual polygenic effect included. Second, the equivalence of both multi-step genomic models with a residual polygenic effect was also verified for young genotyped animals without phenotypes. Additionally, we derived formulas to convert genomic estimated breeding values of the GBLUP model to its components, direct genomic values and residual polygenic effect. Third, we made a proof that the equivalence of these 2 genomic models with a residual polygenic effect holds also for single-step genomic evaluation. Both the single-step GBLUP and SNP BLUP models lead to equal prediction for genotyped animals with phenotypes (e.g., reference animals), as well as for (young) genotyped animals without phenotypes. Finally, these 2 single-step genomic models with a residual polygenic effect were proven to be equivalent for estimation of SNP effects, too. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
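
The GBLUP/SNP BLUP equivalence for reference animals rests on the ridge-regression identity Z(Z'Z + lam I)^(-1) Z'y = ZZ'(ZZ' + lam I)^(-1) y with G = ZZ'. The toy numerical check below uses invented genotypes and omits the residual polygenic effect, fixed effects, and allele-frequency scaling discussed in the abstract.

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Toy data: 4 reference animals, 3 SNPs (invented), shrinkage lam = 2
Z = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]]
y = [2.0, 1.0, 3.0, 0.5]
lam = 2.0

# SNP BLUP: u = (Z'Z + lam I)^-1 Z'y, breeding values g = Z u
Zt = transpose(Z)
ZtZ = matmul(Zt, Z)
lhs = [[ZtZ[i][j] + (lam if i == j else 0.0) for j in range(3)] for i in range(3)]
u = solve(lhs, [sum(Zt[i][k] * y[k] for k in range(4)) for i in range(3)])
g_snp = [sum(Z[i][j] * u[j] for j in range(3)) for i in range(4)]

# GBLUP: g = G (G + lam I)^-1 y with genomic relationship matrix G = ZZ'
G = matmul(Z, Zt)
ghs = [[G[i][j] + (lam if i == j else 0.0) for j in range(4)] for i in range(4)]
v = solve(ghs, y)
g_gblup = [sum(G[i][j] * v[j] for j in range(4)) for i in range(4)]
```

The two prediction vectors agree to numerical precision, which is the core of the equivalence that the abstract extends to models with a residual polygenic effect and to single-step evaluations.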

  16. The kernel G1(x,x') and the quantum equivalence principle

    International Nuclear Information System (INIS)

    Ceccatto, H.; Foussats, A.; Giacomini, H.; Zandron, O.

    1981-01-01

In this paper, the formulation of the quantum equivalence principle (QEP) is re-examined, and its compatibility with the conditions which must be fulfilled by the kernel G1(x,x') is discussed. The basis of solutions which gives the particle model in a curved space-time is also determined in terms of Cauchy data for such a kernel. Finally, the creation of particles in this model is analyzed by studying the time evolution of the creation and annihilation operators. This method is an alternative to the one that uses Bogoliubov transformations as the mechanism of creation. (author)

  17. Particles and Dirac-type operators on curved spaces

    International Nuclear Information System (INIS)

    Visinescu, Mihai

    2003-01-01

We review the geodesic motion of pseudo-classical particles in curved spaces. Investigating the generalized Killing equations for spinning spaces, we express the constants of motion in terms of Killing-Yano tensors. Passing from the spinning spaces to the Dirac equation in curved backgrounds, we point out the role of the Killing-Yano tensors in the construction of the Dirac-type operators. The general results are applied to the case of the four-dimensional Euclidean Taub-Newman-Unti-Tamburino space. From the covariantly constant Killing-Yano tensors of this space we construct three new Dirac-type operators which are equivalent to the standard Dirac operator. Finally, the Runge-Lenz operator for the Dirac equation in this background is expressed in terms of the fourth Killing-Yano tensor, which is not covariantly constant. As a rule, the covariantly constant Killing-Yano tensors realize certain square roots of the metric tensor. Such a Killing-Yano tensor produces simultaneously a Dirac-type operator and the generator of a one-parameter Lie group connecting this operator with the standard Dirac one. On the other hand, the Killing-Yano tensors that are not covariantly constant are important in generating hidden symmetries. The presence of such tensors implies the existence of non-standard supersymmetries in point particle theories on curved backgrounds. (author)

  18. Normal-Mode Analysis of Circular DNA at the Base-Pair Level. 2. Large-Scale Configurational Transformation of a Naturally Curved Molecule.

    Science.gov (United States)

    Matsumoto, Atsushi; Tobias, Irwin; Olson, Wilma K

    2005-01-01

    Fine structural and energetic details embedded in the DNA base sequence, such as intrinsic curvature, are important to the packaging and processing of the genetic material. Here we investigate the internal dynamics of a 200 bp closed circular molecule with natural curvature using a newly developed normal-mode treatment of DNA in terms of neighboring base-pair "step" parameters. The intrinsic curvature of the DNA is described by a 10 bp repeating pattern of bending distortions at successive base-pair steps. We vary the degree of intrinsic curvature and the superhelical stress on the molecule and consider the normal-mode fluctuations of both the circle and the stable figure-8 configuration under conditions where the energies of the two states are similar. To extract the properties due solely to curvature, we ignore other important features of the double helix, such as the extensibility of the chain, the anisotropy of local bending, and the coupling of step parameters. We compare the computed normal modes of the curved DNA model with the corresponding dynamical features of a covalently closed duplex of the same chain length constructed from naturally straight DNA and with the theoretically predicted dynamical properties of a naturally circular, inextensible elastic rod, i.e., an O-ring. The cyclic molecules with intrinsic curvature are found to be more deformable under superhelical stress than rings formed from naturally straight DNA. As superhelical stress is accumulated in the DNA, the frequency, i.e., energy, of the dominant bending mode decreases in value, and if the imposed stress is sufficiently large, a global configurational rearrangement of the circle to the figure-8 form takes place. We combine energy minimization with normal-mode calculations of the two states to decipher the configurational pathway between the two states. We also describe and make use of a general analytical treatment of the thermal fluctuations of an elastic rod to characterize the

  19. Weighted curve-fitting program for the HP 67/97 calculator

    International Nuclear Information System (INIS)

    Stockli, M.P.

    1983-01-01

    The HP 67/97 calculator provides in its standard equipment a curve-fit program for linear, logarithmic, exponential and power functions that is quite useful and popular. However, in more sophisticated applications, proper weights for data are often essential. For this purpose a program package was created which is very similar to the standard curve-fit program but which includes the weights of the data for proper statistical analysis. This allows accurate calculation of the uncertainties of the fitted curve parameters as well as the uncertainties of interpolations or extrapolations, or optionally the uncertainties can be normalized with chi-square. The program is very versatile and allows one to perform quite difficult data analysis in a convenient way with the pocket calculator HP 67/97
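The original HP 67/97 program listing is not reproduced in this record; a minimal modern sketch of the same idea, a weighted linear least-squares fit that propagates the data weights into parameter uncertainties and a chi-square, might look as follows (the data points are invented for illustration):

```python
import numpy as np

# Invented data points with individual uncertainties sigma
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sig = np.array([0.2, 0.1, 0.3, 0.2, 0.4])

w = 1.0 / sig**2                           # statistical weights
A = np.vstack([np.ones_like(x), x]).T      # design matrix for y = a + b*x
C = np.linalg.inv(A.T @ (w[:, None] * A))  # covariance matrix of (a, b)
a, b = C @ A.T @ (w * y)                   # weighted least-squares estimates
da, db = np.sqrt(np.diag(C))               # uncertainties of the parameters

chi2 = np.sum(w * (y - (a + b * x))**2)    # goodness of fit (df = n - 2)
```

Scaling `C` by `chi2 / (len(x) - 2)` corresponds to the program's optional normalization of the uncertainties with chi-square.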

  20. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  1. Analysis of glow curves of TL readouts of CaSO4:Dy teflon based TLD badge in semiautomatic TLD badge reader

    International Nuclear Information System (INIS)

    Pradhan, S.M.; Sneha, C.; Adtani, M.M.

    2010-01-01

The facility for glow curve storage and recall provided in the reader software is helpful for manual screening of the glow curves; however, no further analysis is possible owing to the absence of numerical TL data at the sampling intervals. In the present study, glow curves were digitized by modifying the reader software and then normalized to make them independent of the dose. The normalized glow curves were then analyzed by dividing them into five equal parts on the time scale. This method of analysis is used to correlate the variation of the total TL counts of the three discs with the time elapsed post-irradiation.
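The normalize-and-split procedure described above can be sketched as follows; the digitized counts and the 10-sample resolution are invented for illustration:

```python
# Hypothetical digitized glow curve: TL counts at equal time intervals
counts = [2, 5, 14, 40, 85, 120, 96, 60, 30, 8]

total = sum(counts)
normalized = [c / total for c in counts]    # dose-independent curve shape

# Divide the normalized curve into five equal parts on the time scale
k = len(normalized) // 5
parts = [sum(normalized[i * k:(i + 1) * k]) for i in range(5)]
# `parts` now gives the fraction of the total TL signal in each fifth,
# which can be tracked against time elapsed post-irradiation
```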

  2. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    Science.gov (United States)

    Zhao, Zhen

    2011-02-28

Three statistical tests were proposed for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and their statistical power was compared under different trend curve data. For large sample sizes, with independent normal assumptions among strata and across consecutive time points, Z and Chi-square test statistics were developed; these are functions of the outcome estimates and their standard errors at each of the study time points for the two strata. For small sample sizes with an independent normal assumption, an F-test statistic was generated, which is a function of the sample sizes of the two strata and the estimated parameters across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If the two trend curves cross with low interaction, the power of the Z-test is higher than or equal to that of both the Chi-square and F-tests; at high interaction, however, the powers of the Chi-square and F-tests are higher than that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
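A per-time-point version of the large-sample Z comparison can be sketched as below; the coverage estimates, standard errors, and the pointwise reporting are illustrative assumptions, not the paper's exact test statistic:

```python
import math

# Coverage estimates (%) and standard errors for two strata over 4 years
p1  = [74.0, 75.5, 77.1, 78.0]; se1 = [0.9, 0.8, 0.8, 0.7]
p2  = [70.2, 71.0, 72.4, 73.1]; se2 = [1.1, 1.0, 0.9, 0.9]

# Z statistic for the difference of the two curves at each time point,
# assuming independent normal estimates
z = [(a - b) / math.hypot(sa, sb)
     for a, b, sa, sb in zip(p1, p2, se1, se2)]

# Two-sided p-value at each point under the standard normal
p_vals = [math.erfc(abs(zi) / math.sqrt(2)) for zi in z]
```

For roughly parallel curves like these, every pointwise difference is significant, which is the regime where the abstract reports the Z-test as most powerful.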

  3. Reliability Based Geometric Design of Horizontal Circular Curves

    Science.gov (United States)

    Rajbongshi, Pabitra; Kalita, Kuldeep

    2018-06-01

Geometric design of a horizontal circular curve primarily involves the radius of the curve and the stopping sight distance at the curve section. The minimum radius is decided based on the lateral thrust exerted on the vehicles, and the minimum stopping sight distance is provided to maintain safety in the longitudinal direction of travel. The sight distance available at a site can be regulated by changing the radius and the middle ordinate at the curve section. Both radius and sight distance depend on design speed. Since the speed of vehicles at any road section is a variable parameter, the 98th percentile speed is normally taken as the design speed. This work presents a probabilistic approach for evaluating stopping sight distance that considers the variability of all its input parameters. It is observed that the 98th percentile sight distance value is much lower than the sight distance corresponding to the 98th percentile speed. The distribution of the sight distance parameter is also studied and found to follow a lognormal distribution. Finally, reliability-based design charts are presented for both plain and hill regions, considering the effect of lateral thrust.
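The gap between the 98th percentile of the sight-distance distribution and the sight distance computed at the 98th percentile speed can be illustrated with a small Monte Carlo sketch; all distributions and design values below are assumptions for illustration, not the paper's inputs:

```python
import random

random.seed(42)
G = 9.81  # gravitational acceleration, m/s^2

def ssd(v, t, f):
    """Stopping sight distance (m): reaction distance + braking distance."""
    return v * t + v ** 2 / (2 * G * f)

# Assumed variability of the inputs (illustrative values)
sims = []
for _ in range(20000):
    v = random.gauss(22.2, 2.5)   # speed, m/s (~80 km/h mean)
    t = random.gauss(1.5, 0.3)    # driver reaction time, s
    f = random.gauss(0.35, 0.05)  # longitudinal friction coefficient
    sims.append(ssd(v, t, f))

sims.sort()
ssd_98 = sims[int(0.98 * len(sims))]  # 98th percentile of SSD itself

# Deterministic SSD at the 98th percentile speed with design values
v_98 = 22.2 + 2.054 * 2.5             # 98th percentile of the speed dist.
ssd_design = ssd(v_98, 2.5, 0.35)     # conservative 2.5 s reaction time

print(ssd_98 < ssd_design)            # direction of the paper's observation
```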

  5. On the electrical equivalent circuits of gravitational-wave antennas

    International Nuclear Information System (INIS)

    Pallottino, G.V.; Pizzella, G.; Rome Univ.

    1978-01-01

    The electrical equivalent circuit of a Weber gravitational-wave antenna with piezoelectric transducers is derived for the various longitudinal normal modes by using the Lagrangian formalism. The analysis is applied to the antenna without piezoelectric ceramics, as well as with one or more ceramics operated in both passive and active mode. Particular attention is given to the dissipation problem in order to obtain an expression of the overall merit factor directly related to the physics of the actual dissipation processes. As an example the results are applied to a cylindrical bar with two ceramics: one for calibrating the antenna, the other as sensor of the motion. The values of the physical parameters and of the pertinent parameters of the equivalent circuit for the small antenna (20 kg) and those (predicted) for the intermediate antenna (390 kg) of the Rome group are given in the appendix. (author)

  6. Right thoracic curvature in the normal spine

    Directory of Open Access Journals (Sweden)

    Masuda Keigo

    2011-01-01

Full Text Available Abstract Background Trunk asymmetry and vertebral rotation, at times observed in the normal spine, resemble the characteristics of adolescent idiopathic scoliosis (AIS). Right thoracic curvature has also been reported in the normal spine. If it is determined that the features of right thoracic curvature in the normal spine are the same as those observed in AIS, these findings might provide a basis for elucidating the etiology of this condition. For this reason, we investigated right thoracic curvature in the normal spine. Methods For normal spinal measurements, 1,200 patients who underwent posteroanterior chest radiographs were evaluated. These consisted of 400 children (ages 4-9), 400 adolescents (ages 10-19) and 400 adults (ages 20-29), with each group comprised of both genders. The exclusion criteria were obvious chest and spinal diseases. As side curvature is minimal in normal spines and the range at which curvature is measured is difficult to ascertain, first the typical curvature range in scoliosis patients was determined, and then the Cobb angle in normal spines was measured over the same range as the scoliosis curve, from T5 to T12. Right thoracic curvature was given a positive value. The curve pattern in each group was classified into three categories: neutral (from -1 degree to +1 degree), right (> +1 degree), and left (< -1 degree). Results In the child group, 120 subjects showed a left pattern, 125 neutral, and 155 right. In the adolescent group, 70 showed left, 114 neutral, and 216 right. In the adult group, 46 showed left, 102 neutral, and 252 right. The curvature pattern shifts significantly to the right side in the adolescent group. Conclusions Based on standing chest radiographic measurements, a right thoracic curvature was observed in normal spines after adolescence.
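The age-group counts in the abstract can be checked with a Pearson chi-square test of homogeneity; the test choice and the 1% critical value are our illustration, not the authors' analysis:

```python
# Curve-pattern counts (left / neutral / right) per age group, taken
# from the abstract above (400 subjects per group)
rows = [
    [120, 125, 155],  # children
    [ 70, 114, 216],  # adolescents
    [ 46, 102, 252],  # adults
]

row_tot = [sum(r) for r in rows]
col_tot = [sum(c) for c in zip(*rows)]
grand = sum(row_tot)

# Pearson chi-square statistic against homogeneity of the three groups
chi2 = sum((obs - rt * ct / grand) ** 2 / (rt * ct / grand)
           for r, rt in zip(rows, row_tot)
           for obs, ct in zip(r, col_tot))
# df = (3-1)*(3-1) = 4; the 1% critical value is 13.28, so the rightward
# shift of the pattern with age is highly significant under this test
```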

  7. Complex curve of the two-matrix model and its tau-function

    International Nuclear Information System (INIS)

    Kazakov, Vladimir A; Marshakov, Andrei

    2003-01-01

    We study the Hermitian and normal two-matrix models in planar approximation for an arbitrary number of eigenvalue supports. Its planar graph interpretation is given. The study reveals a general structure of the underlying analytic complex curve, different from the hyperelliptic curve of the one-matrix model. The matrix model quantities are expressed through the periods of meromorphic generating differential on this curve and the partition function of the multiple support solution, as a function of filling numbers and coefficients of the matrix potential, is shown to be a quasiclassical tau-function. The relation to N = 1 supersymmetric Yang-Mills theories is discussed. A general class of solvable multi-matrix models with tree-like interactions is considered

  8. Normal left ventricular emptying in coronary artery disease at rest: analysis by radiographic and equilibrium radionuclide ventriculography

    International Nuclear Information System (INIS)

    Denenberg, B.S.; Makler, P.T.; Bove, A.A.; Spann, J.F.

    1981-01-01

The volume ejected early in systole has been proposed as an indicator of abnormal left ventricular function that is present at rest in patients with coronary artery disease who have a normal ejection fraction and normal wall motion. The volume ejected in systole was examined by calculating the percent change in ventricular volume using both computer-assisted analysis of biplane radiographic ventriculograms at 60 frames/s and equilibrium gated radionuclide ventriculograms. Ventricular emptying was examined with radiographic ventriculography in 33 normal patients and 23 patients with coronary artery disease and a normal ejection fraction. Eight normal subjects and six patients with coronary artery disease had both radiographic ventriculography and equilibrium gated radionuclide ventriculography. In all patients, there was excellent correlation between the radiographic and radionuclide ventricular emptying curves (r = 0.971). There was no difference between the ventricular emptying curves of normal subjects and patients with coronary artery disease, whether volumes were measured by radiographic or equilibrium gated radionuclide ventriculography. It is concluded that the resting ventricular emptying curves are identical in normal subjects and patients with coronary artery disease who have a normal ejection fraction and normal wall motion

  9. Analysis on the time/activity curve of salivary gland scintigraphy in salivary gland diseases; The correlation between the pattern of time/activity curve and the amount of saliva

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kazufumi; Hosokawa, Yoichiro; Kaneko, Masanori; Ohmori, Keiichi; Minowa, Kazuyuki; Fukuda, Hiroshi; Yamasaki, Michio (Hokkaido Univ., Sapporo (Japan). School of Dentistry)

    1992-04-01

Salivary gland scintigraphy with 99mTcO4- is a simple method to evaluate salivary gland function and has been available as a technique using a time/activity curve for a number of years. However, there have been few reports on the relationship between the various patterns of the time/activity curves and the salivary flow rate from the gland. This study presents the correlation between the time/activity curve pattern and the salivary flow rate from the parotid gland. Sixty-five patients complaining of xerostomia were examined; 62 were female and 3 male (average age 45.6 years, range 17-69 years). Their diagnoses were Sjögren syndrome (26), suspected Sjögren syndrome (28), and parotiditis (11). The salivary flow rate from the parotid gland was measured under stimulation with 10% citric acid using a modified Carlson-Crittenden cup every 10 seconds for 5 min. 185 MBq of 99mTcO4- was injected intravenously and sequential scintigraphy was performed. Time/activity curves were recorded on film. Six basic patterns were used: the normal, median, flat, and sloped patterns (Mita et al., 1981), plus the reaccumulation flat pattern and the poor secretion (stimulant secretory ratio less than 70%) pattern defined by us. The amounts of saliva were as follows: normal pattern (n=31), 5.4±0.4 ml; reaccumulation flat pattern (n=3), 4.2±0.6 ml; poor secretion pattern (n=18), 4.1±0.5 ml; median pattern (n=20), 3.5±0.5 ml; flat pattern (n=11), 2.6±0.5 ml; and sloped pattern (n=1), 1.5 ml. The differences in salivary flow rate between the normal pattern and the poor secretion, median, and flat patterns were statistically significant as determined by Student's t-test. We assessed the correlation between the pattern of the time/activity curve in salivary gland scintigraphy and the amount of saliva. (author).
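As a rough illustration of the classification, a stimulant secretory ratio can be computed from a digitized time/activity curve; the curve, the stimulation index, and this particular definition of the ratio are all hypothetical, since the record does not give the exact formula:

```python
# Hypothetical time/activity curve (counts); citric-acid stimulation is
# applied at sample index `stim`
activity = [10, 40, 80, 110, 125, 130, 128, 60, 45, 42]
stim = 6

peak = max(activity[:stim + 1])   # accumulation peak before the stimulus
post_min = min(activity[stim:])   # residual activity after washout

# Assumed definition: percentage of peak activity released on stimulation;
# a ratio below 70% corresponds to the "poor secretion" pattern
ratio = 100.0 * (peak - post_min) / peak
pattern = "poor secretion" if ratio < 70 else "normal secretion"
```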

  10. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. I. The normal modes

    International Nuclear Information System (INIS)

    Comandi, G.L.; Chiofalo, M.L.; Toncelli, R.; Bramanti, D.; Polacco, E.; Nobili, A.M.

    2006-01-01

Recent theoretical work suggests that violation of the equivalence principle might be revealed in a measurement of the fractional differential acceleration η between two test bodies of different composition falling in the gravitational field of a source mass, if the measurement is made to the level of η ≅ 10^-13 or better. This being within the reach of ground-based experiments gives them a new impetus. However, while slowly rotating torsion balances in ground laboratories are close to reaching this level, only an experiment performed in a low orbit around the Earth is likely to provide a much better accuracy. We report on the progress made with the 'Galileo Galilei on the ground' (GGG) experiment, which aims to compete with torsion balances using an instrument design also capable of being converted into a much higher sensitivity space test. In the present and following articles (Part I and Part II), we demonstrate that the dynamical response of the GGG differential accelerometer set into supercritical rotation, in particular its normal modes (Part I) and its rejection of common mode effects (Part II), can be predicted by means of a simple but effective model that embodies all the relevant physics. Analytical solutions are obtained under special limits, which provide the theoretical understanding. A simulation environment is set up, obtaining quantitative agreement with the available experimental data on the frequencies of the normal modes and on the whirling behavior. This is a needed and reliable tool for controlling and separating perturbative effects from the expected signal, as well as for planning the optimization of the apparatus

  11. Characterizations of Space Curves According to Bishop Darboux Vector in Euclidean 3-Space E3

    OpenAIRE

    Huseyin KOCAYIGIT; Ali OZDEMIR

    2014-01-01

In this paper, we obtain some characterizations of space curves according to the Bishop frame in Euclidean 3-space E3 by using the Laplacian operator and the Levi-Civita connection. Furthermore, we give the general differential equations which characterize the space curves according to the Bishop Darboux vector and the normal Bishop Darboux vector.

  12. On the backreaction of scalar and spinor quantum fields in curved spacetimes

    International Nuclear Information System (INIS)

    Hack, Thomas-Paul

    2010-10-01

In the first instance, the present work is concerned with generalising constructions and results in quantum field theory on curved spacetimes from the well-known case of the Klein-Gordon field to Dirac fields. To this end, the enlarged algebra of observables of the Dirac field is constructed in the algebraic framework. This algebra contains normal-ordered Wick polynomials in particular, and an extended analysis of one of its elements, the stress-energy tensor, is performed. Based on detailed calculations of the Hadamard coefficients of the Dirac field, it is found that a local, covariant, and covariantly conserved construction of the stress-energy tensor is possible. Additionally, the mathematically sound Hadamard regularisation prescription of the stress-energy tensor is compared to the mathematically less rigorous DeWitt-Schwinger regularisation. It is found that both prescriptions are essentially equivalent; particularly, it turns out to be possible to formulate the DeWitt-Schwinger prescription in a well-defined way. While the aforementioned results hold in generic curved spacetimes, particular attention is also devoted to a specific class of Robertson-Walker spacetimes with a lightlike Big Bang hypersurface. Employing holographic methods, Hadamard states for the Klein-Gordon and the Dirac field are constructed. These states are preferred in the sense that they constitute asymptotic equilibrium states in the limit to the Big Bang hypersurface. Finally, solutions of the semiclassical Einstein equation for quantum fields of arbitrary spin are analysed in the flat Robertson-Walker case. One finds that these solutions explain the measured supernova Ia data as well as the ΛCDM model. Hence, one arrives at a natural explanation of dark energy and a simple quantum model of cosmological dark matter. (orig.)

  14. An investigation on vulnerability assessment of steel structures with thin steel shear wall through development of fragility curves

    Directory of Open Access Journals (Sweden)

    Mohsen Gerami

    2017-02-01

Full Text Available Fragility curves play an important role in the damage assessment of buildings. The probability of damage to a structure under seismic events can be investigated once such curves have been generated. In the current research, 360 time history analyses were carried out on structures of 3-, 10- and 20-story height, and fragility curves were subsequently developed. The curves are based on two indices: inter-story drift and the axial strain of the equivalent strips of the shear wall. Time history analysis was carried out in Perform 3D considering 10 far-field and 10 near-field seismograms. Analysis of the low-rise structures revealed that they are more vulnerable at accelerations lower than 0.8 g in near-field earthquakes because of higher-mode effects. From the generated fragility curves it was observed that the mid-rise and high-rise structures have more acceptable performance and lower damage levels than the low-rise structures in both near- and far-field seismic hazards.
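A common way to turn time-history results into a fragility curve is to fit a lognormal CDF to the intensities at which each run exceeds a damage state; the intensity values below are invented for illustration, and the lognormal form is a standard modeling assumption, not necessarily the authors' exact procedure:

```python
import math

# Hypothetical spectral accelerations (g) at which each analysis run
# first exceeded the drift-based damage state
im_at_damage = [0.31, 0.42, 0.45, 0.52, 0.58, 0.63, 0.71, 0.80, 0.95, 1.10]

logs = [math.log(x) for x in im_at_damage]
theta = sum(logs) / len(logs)   # log-median of the damaging intensity
beta = (sum((l - theta) ** 2 for l in logs) / (len(logs) - 1)) ** 0.5

def fragility(im):
    """P(damage state exceeded | intensity im) as a lognormal CDF."""
    z = (math.log(im) - theta) / beta
    return 0.5 * math.erfc(-z / math.sqrt(2))

p_low, p_high = fragility(0.3), fragility(0.8)  # compare two hazard levels
```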

  15. EQUIVALENCE VERSUS NON-EQUIVALENCE IN ECONOMIC TRANSLATION

    Directory of Open Access Journals (Sweden)

    Cristina, Chifane

    2012-01-01

Full Text Available This paper aims at highlighting the fact that "equivalence" represents a concept worth revisiting and detailing upon when tackling the translation process of economic texts both from English into Romanian and from Romanian into English. Far from being exhaustive, our analysis will focus upon the problems arising from the lack of equivalence at the word level. Consequently, relevant examples from the economic field will be provided to account for the following types of non-equivalence at word level: culture-specific concepts; the source language concept is not lexicalised in the target language; the source language word is semantically complex; differences in physical and interpersonal perspective; differences in expressive meaning; differences in form; differences in frequency and purpose of using specific forms; and the use of loan words in the source text. Likewise, we shall illustrate a number of translation strategies necessary to deal with the afore-mentioned cases of non-equivalence: translation by a more general word (superordinate); translation by a more neutral/less expressive word; translation by cultural substitution; translation using a loan word or loan word plus explanation; translation by paraphrase using a related word; translation by paraphrase using unrelated words; translation by omission; and translation by illustration.

  16. Antimelanogenic Efficacy of Melasolv (3,4,5-Trimethoxycinnamate Thymol Ester) in Melanocytes and Three-Dimensional Human Skin Equivalent.

    Science.gov (United States)

    Lee, John Hwan; Lee, Eun-Soo; Bae, Il-Hong; Hwang, Jeong-Ah; Kim, Se-Hwa; Kim, Dae-Yong; Park, Nok-Hyun; Rho, Ho Sik; Kim, Yong Jin; Oh, Seong-Geun; Lee, Chang Seok

    2017-01-01

    Excessive melanogenesis often causes unaesthetic hyperpigmentation. In a previous report, our group introduced a newly synthesized depigmentary agent, Melasolv™ (3,4,5-trimethoxycinnamate thymol ester). In this study, we demonstrated the significant whitening efficacy of Melasolv using various melanocytes and human skin equivalents as in vitro experimental systems. The depigmentary effect of Melasolv was tested in melan-a cells (immortalized normal murine melanocytes), α-melanocyte-stimulating hormone (α-MSH)-stimulated B16 murine melanoma cells, primary normal human melanocytes (NHMs), and human skin equivalent (MelanoDerm). The whitening efficacy of Melasolv was further demonstrated by photography, time-lapse microscopy, Fontana-Masson (F&M) staining, and 2-photon microscopy. Melasolv significantly inhibited melanogenesis in the melan-a and α-MSH-stimulated B16 cells. In human systems, Melasolv also clearly showed a whitening effect in NHMs and human skin equivalent, reflecting a decrease in melanin content. F&M staining and 2-photon microscopy revealed that Melasolv suppressed melanin transfer into multiple epidermal layers from melanocytes as well as melanin synthesis in human skin equivalent. Our study showed that Melasolv clearly exerts a whitening effect on various melanocytes and human skin equivalent. These results suggest the possibility that Melasolv can be used as a depigmentary agent to treat pigmentary disorders as well as an active ingredient in cosmetics to increase whitening efficacy. © 2017 S. Karger AG, Basel.

  17. Radioactive waste equivalence

    International Nuclear Information System (INIS)

    Orlowski, S.; Schaller, K.H.

    1990-01-01

    The report reviews, for the Member States of the European Community, possible situations in which an equivalence concept for radioactive waste may be used, analyses the various factors involved, and suggests guidelines for the implementation of such a concept. Only safety and technical aspects are covered. Other aspects such as commercial ones are excluded. Situations where the need for an equivalence concept has been identified are processes where impurities are added as a consequence of the treatment and conditioning process, the substitution of wastes from similar waste streams due to the treatment process, and exchange of waste belonging to different waste categories. The analysis of factors involved and possible ways for equivalence evaluation, taking into account in particular the chemical, physical and radiological characteristics of the waste package, and the potential risks of the waste form, shows that no simple all-encompassing equivalence formula may be derived. Consequently, a step-by-step approach is suggested, which avoids complex evaluations in the case of simple exchanges

  18. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  19. New recommendations for dose equivalent

    International Nuclear Information System (INIS)

    Bengtsson, G.

    1985-01-01

    In its report 39, the International Commission on Radiation Units and Measurements (ICRU), has defined four new quantities for the determination of dose equivalents from external sources: the ambient dose equivalent, the directional dose equivalent, the individual dose equivalent, penetrating and the individual dose equivalent, superficial. The rationale behind these concepts and their practical application are discussed. Reference is made to numerical values of these quantities which will be the subject of a coming publication from the International Commission on Radiological Protection, ICRP. (Author)

  20. Analysis of interacting quantum field theory in curved spacetime

    International Nuclear Information System (INIS)

    Birrell, N.D.; Taylor, J.G.

    1980-01-01

A detailed analysis of interacting quantized fields propagating in a curved background spacetime is given. Reduction formulas for S-matrix elements in terms of vacuum Green's functions are derived, special attention being paid to the possibility that the ''in'' and ''out'' vacuum states may not be equivalent. Green's function equations are obtained and a diagrammatic representation for them is given, allowing a formal, diagrammatic renormalization to be effected. Coordinate-space techniques for showing renormalizability are developed in Minkowski space for λφ³ field theories in four and six dimensions. The extension of these techniques to curved spacetimes is considered. It is shown that the possibility of field theories becoming nonrenormalizable there cannot be ruled out, although, allowing certain modifications to the theory, λφ³ in four dimensions is proven renormalizable in a large class of spacetimes. Finally, particle production from the vacuum by the gravitational field is discussed, with particular reference to Schwarzschild spacetime. We shed some light on the nonlocalizability of the production process and on the definition of the S matrix for such processes.

  1. Can Low-Resolution Airborne Laser Scanning Data Be Used to Model Stream Rating Curves?

    Directory of Open Access Journals (Sweden)

    Steve W. Lyon

    2015-03-01

Full Text Available This pilot study explores the potential of using low-resolution (0.2 points/m²) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge and are therefore integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m²) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.

  2. Can low-resolution airborne laser scanning data be used to model stream rating curves?

    Science.gov (United States)

    Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar

    2015-01-01

This pilot study explores the potential of using low-resolution (0.2 points/m²) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge and are therefore integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m²) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
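The empirical rating curve fitting mentioned above is conventionally a power law in stage, Q = a(h − h0)^b. A minimal sketch of fitting such a curve by log-log regression (synthetic stage-discharge data and parameter values invented for illustration, not taken from the study, and h0 assumed known):

```python
import numpy as np

def fit_rating_curve(stage, discharge, h0=0.0):
    """Fit the empirical power-law rating curve Q = a*(h - h0)**b
    by linear regression in log-log space (cease-to-flow stage h0 assumed known)."""
    x = np.log(np.asarray(stage, dtype=float) - h0)
    y = np.log(np.asarray(discharge, dtype=float))
    b, log_a = np.polyfit(x, y, 1)   # slope = exponent, intercept = ln(a)
    return np.exp(log_a), b

# Synthetic check: data generated from Q = 2.5*h**1.7 recovers a and b.
h = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0])
q = 2.5 * h**1.7
a, b = fit_rating_curve(h, q)
print(round(a, 3), round(b, 3))  # → 2.5 1.7
```

Once fitted, the curve translates any measured stage into a discharge estimate, which is the role the rating curve plays in the monitoring context above.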

  3. Stereoscopic visualization in curved spacetime: seeing deep inside a black hole

    International Nuclear Information System (INIS)

    Hamilton, Andrew J S; Polhemus, Gavin

    2010-01-01

    Stereoscopic visualization adds an additional dimension to the viewer's experience, giving them a sense of distance. In a general relativistic visualization, distance can be measured in a variety of ways. We argue that the affine distance, which matches the usual notion of distance in flat spacetime, is a natural distance to use in curved spacetime. As an example, we apply affine distance to the visualization of the interior of a black hole. Affine distance is not the distance perceived with normal binocular vision in curved spacetime. However, the failure of binocular vision is simply a limitation of animals that have evolved in flat spacetime, not a fundamental obstacle to depth perception in curved spacetime. Trinocular vision would provide superior depth perception.

  4. Three dimensions of the survival curve: horizontalization, verticalization, and longevity extension.

    Science.gov (United States)

    Cheung, Siu Lan Karen; Robine, Jean-Marie; Tu, Edward Jow-Ching; Caselli, Graziella

    2005-05-01

    Three dimensions of the survival curve have been developed: (1) "horizontalization," which corresponds to how long a cohort and how many survivors can live before aging-related deaths significantly decrease the proportion of survivors; (2) "verticalization," which corresponds to how concentrated aging-related ("normal") deaths are around the modal age at death (M); and (3) "longevity extension," which corresponds to how far the highest normal life durations can exceed M. Our study shows that the degree of horizontalization increased relatively less than the degree of verticalization in Hong Kong from 1976 to 2001. After age normalization, the highest normal life durations moved closer to M, implying that the increase in human longevity is meeting some resistance.

  5. Determination of isodose curves in Radiotherapy using an Alanine/ESR dosemeter

    International Nuclear Information System (INIS)

    Chen, F.; Baffa, O.; Graeff, C.F.O.

    1998-01-01

The possible use of an alanine/ESR dosemeter for mapping isodose curves in routine radiotherapy treatments was studied. A batch of 150 dosemeters was manufactured from a mixture of DL-alanine powder (80 %) and paraffin (20 %); each dosemeter is 4.7 mm in diameter and 12 mm long. A group of 100 dosemeters from the batch was arranged in 50 holes of slice 25 of the Rando Man phantom. The phantom was irradiated in two opposed projections (AP and PA) on a Co-60 unit. A further group of 15 dosemeters from the same batch was used to obtain the calibration curve over the 1-20 Gy range. After irradiation, the signal of each dosemeter was measured in an ESR spectrometer operating in the X-band (∼ 9.5 GHz), and the width of the central line of the alanine ESR spectrum was correlated with the radiation dose. The linewidth-dose calibration curve was linear, with a correlation coefficient of 0.9996. The isodose curves obtained show profiles quite similar to the theoretical curves. (Author)
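The linear dose calibration described in this record can be sketched as a straight-line fit that is then inverted to read unknown doses (the numbers below are synthetic placeholders, not the measured ESR data):

```python
import numpy as np

# Hypothetical calibration data: ESR central-line parameter (arb. units)
# versus absorbed dose over the 1-20 Gy range, assumed linear as in the record.
dose = np.array([1, 2, 5, 10, 15, 20], dtype=float)   # Gy
signal = 0.8 * dose + 0.1                             # simulated linear response

slope, intercept = np.polyfit(dose, signal, 1)
r = np.corrcoef(dose, signal)[0, 1]                   # correlation coefficient
assert r > 0.999  # linear calibration, as reported (r = 0.9996 in the record)

def dose_from_signal(s):
    """Invert the calibration line to read an unknown dose (Gy)."""
    return (s - intercept) / slope

print(round(dose_from_signal(0.8 * 7.5 + 0.1), 2))  # → 7.5
```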

  6. Statistical and biophysical aspects of survival curve

    International Nuclear Information System (INIS)

    Kellerer, A.M.

    1980-01-01

Statistical fluctuations in a series of consecutively measured survival curves of asynchronous hamster V79 cells under X-ray irradiation are considered. In each experiment the fluctuations are close to those expected on the basis of the Poisson distribution. The fluctuation of cell sensitivity between experiments of one series can reach 10%. Normalizing each experiment to its mean values yields an ''idealized'' survival curve. In this curve the logarithm of survival is proportional to the absorbed dose and its square only at low radiation doses; in V79 cells in the late S-phase such proportionality is observed at all doses. Using the microdosimetric approach, the distance over which radiolysis products or sublesions must interact to make the dependence of injury on dose nonlinear is determined. For interaction distances of 10-100 nm, the quadratic component is shown to become comparable in value with the linear injury component at doses of the order of several hundred rad only when the interaction distance is close to a micrometre [ru]
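The linear-quadratic dependence of the survival logarithm on dose discussed above can be written out directly (the coefficient values here are illustrative assumptions, not from the experiment):

```python
import math

def surviving_fraction(dose_gy, alpha, beta):
    """Linear-quadratic survival model: ln S = -(alpha*D + beta*D**2)."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

# With illustrative (hypothetical) coefficients alpha = 0.2 /Gy and
# beta = 0.02 /Gy^2, the quadratic term contributes beta*D/alpha = 10% of
# the linear term at 1 Gy but equals it at 10 Gy -- showing why -ln S is
# dominated by the linear term only at low doses, as the record describes.
alpha, beta = 0.2, 0.02
low = surviving_fraction(1, alpha, beta)    # exp(-0.22)
high = surviving_fraction(10, alpha, beta)  # exp(-4.0)
```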

  7. Study on soil-water retention curves for loess aerated zone

    International Nuclear Information System (INIS)

    Guo Zede; Cheng Jinru; Deng An; Masayuki Mukai; Hideo Kamiyama

    2000-01-01

The author introduces the measuring method and results of soil-water retention curves for 46 samples taken from the ground surface down to the water table at 28 m depth at CIRP's Field Test Site. The results indicate that the soil-water retention characteristics vary significantly with depth, and that the loess aerated zone at the site can be divided into five layers. From the results, unsaturated hydraulic parameters are deduced, such as conductivity, specific water capacity and equivalent pore diameter. The water velocity calculated from these parameters is satisfactorily consistent with that obtained from a ³H tracer test carried out at the site.

  8. AC and DC electrical behavior of MWCNT/epoxy nanocomposite near percolation threshold: Equivalent circuits and percolation limits

    Science.gov (United States)

    Alizadeh Sahraei, Abolfazl; Ayati, Moosa; Baniassadi, Majid; Rodrigue, Denis; Baghani, Mostafa; Abdi, Yaser

    2018-03-01

This study attempts to comprehensively investigate the effects of multi-walled carbon nanotubes (MWCNTs) on the AC and DC electrical conductivity of epoxy nanocomposites. The samples (0.2, 0.3, and 0.5 wt.% MWCNT) were produced using a combination of ultrasonication and shear mixing. DC measurements were performed by continuous measurement of the current-voltage response, and the results were analyzed via a numerical percolation approach, while for the AC behavior, the frequency response was studied by analyzing phase difference and impedance over the 10 Hz to 0.2 MHz frequency range. The results showed that the dielectric parameters, including relative permittivity, impedance phase, and magnitude, present completely different behaviors over the frequency range and MWCNT weight fractions studied. To better understand the nanocomposites' electrical behavior, equivalent electric circuits were also built for both DC and AC modes. The DC equivalent networks were developed based on the current-voltage curves, while the AC equivalent circuits were obtained by solving an optimization problem on the impedance magnitude and phase at different frequencies. The resulting equivalent electrical circuits were found to be highly useful tools for understanding the physical mechanisms involved in MWCNT-filled polymer nanocomposites.
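As a sketch of the kind of equivalent-circuit element such AC fits typically use (the record does not specify the circuit topology; the single parallel RC element and the component values below are assumptions for illustration):

```python
import cmath
import math

def parallel_rc_impedance(r_ohm, c_farad, freq_hz):
    """Complex impedance of a resistor in parallel with a capacitor,
    Z = R / (1 + j*omega*R*C) -- a common equivalent-circuit building block
    for composites near the percolation threshold."""
    omega = 2 * math.pi * freq_hz
    return r_ohm / (1 + 1j * omega * r_ohm * c_farad)

# At low frequency the element looks resistive (|Z| -> R, phase -> 0);
# at high frequency the capacitor dominates (|Z| falls, phase -> -90 deg).
z = parallel_rc_impedance(1e4, 1e-9, 1e3)
print(abs(z), math.degrees(cmath.phase(z)))
```

Fitting R and C to measured impedance magnitude and phase across frequency is then an ordinary least-squares optimization, which is the spirit of the AC procedure described in the abstract.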

  9. SU-F-T-02: Estimation of Radiobiological Doses (BED and EQD2) of Single Fraction Electronic Brachytherapy That Equivalent to I-125 Eye Plaque: By Using Linear-Quadratic and Universal Survival Curve Models

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y; Waldron, T; Pennington, E [University Of Iowa, College of Medicine, Iowa City, IA (United States)

    2016-06-15

Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single fraction equivalent doses (SFED) of the tumor equivalent to 85 Gy of I125-BT for 20 patients. Corresponding doses to organs-at-risk (OARs) were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from the tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC) model. The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The % doses of OARs over tumor doses were assumed to be the same as for a single fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) using both the L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and lens were on average 118 ± 46% (p < 0.0527), 126 ± 43% (p < 0.0354), and 112 ± 32% (p < 0.0265) higher than those of I125-BT, respectively. The BED and EQD2 doses of the opposite retina were 52 ± 9% lower than for I125-BT. The tumor SFED values were 25.2 ± 3.3 Gy and 29.1 ± 2.5 Gy when using the USC and L-Q models, respectively, and can be delivered within 1 hour. All BED and EQD2 values using the L-Q model were significantly larger than with the USC model (p < 0.0274) due to the large single fraction size (> 14 Gy). Conclusion: The estimated single fraction doses are feasible to deliver within 1 hour using a high dose rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112-118% higher than with the I125-BT technique. Continued exploration of alternative dose rates or fractionation schedules should follow.
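The BED and EQD2 conversions used in this abstract follow the standard L-Q formulas; a minimal sketch, using one of the record's α/β values but otherwise generic (and not reproducing the study's full dose calculation), is:

```python
def bed(total_dose, dose_per_fraction, alpha_beta):
    """Biologically effective dose under the linear-quadratic model:
    BED = D * (1 + d / (alpha/beta))."""
    return total_dose * (1 + dose_per_fraction / alpha_beta)

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions: EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(total_dose, dose_per_fraction, alpha_beta) / (1 + 2 / alpha_beta)

# Single 25.2 Gy fraction to a tissue with alpha/beta = 2.58
# (tumor SFED and retina alpha/beta values taken from the record):
print(round(eqd2(25.2, 25.2, 2.58), 1))
```

Note how strongly a single large fraction inflates the L-Q BED (the d/(α/β) term), which is the reason the abstract flags L-Q versus USC discrepancies for fraction sizes above 14 Gy.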

  10. The effects of environmental deuterium on normal and neoplastic cultured cell development

    International Nuclear Information System (INIS)

    Bild, W.; Schuller, T.; Zhihai, Qin; Blankenstein, T.; Nastasa, V.; Haulica, I.

    2000-01-01

The powdered culture media (RPMI-1640) were reconstituted either with normal distilled water (150 ppm deuterium) or with deuterium-depleted water (DDW) at various concentrations (30, 60, 90 ppm) and sterilized by filtration through 0.2 μm filters. The cell lines used were NIH (normal mouse fibroblasts), RAG (mouse renal carcinoma) and TS/A (mouse mammary adenocarcinoma). In auxiliary tests, BALB/c mouse splenocytes in direct culture were used, stimulated for growth with concanavalin A or LPS (bacterial lipopolysaccharide). Growth was estimated using the MTT assay or direct counting with trypan blue exclusion. The following results were obtained: deuterium-depleted water had a stimulating effect on cell growth, the strongest stimulation coming from the 90 ppm deuterium water. The growth curves show, in a first phase, a stimulation of the rapidly growing neoplastic cells, followed by a slower growth of the normal cells. Blocking the Na⁺/K⁺ membrane pump with 100 mM amiloride did not affect the cell growth curves, while blocking the K⁺/H⁺-ATPase with 100 mM lansoprazole brought the growth curves down to the level of those with normal water. This suggests a possible involvement of the K⁺/H⁺ antiport in the stimulating effects of the DDW. (authors)

  11. Delay Discounting Rates Are Temporally Stable in an Equivalent Present Value Procedure Using Theoretical and Area under the Curve Analyses

    Science.gov (United States)

    Harrison, Justin; McKay, Ryan

    2012-01-01

    Temporal discounting rates have become a popular dependent variable in social science research. While choice procedures are commonly employed to measure discounting rates, equivalent present value (EPV) procedures may be more sensitive to experimental manipulation. However, their use has been impeded by the absence of test-retest reliability data.…

  12. ASYMPT - a program to calculate asymptotics of hyperspherical potential curves and adiabatic potentials

    International Nuclear Information System (INIS)

    Abrashkevich, A.G.; Puzynin, I.V.; Vinitskij, S.I.

    1997-01-01

A FORTRAN 77 program is presented which calculates asymptotics of potential curves and adiabatic potentials with an accuracy of O(ρ⁻²) in the framework of the hyperspherical adiabatic (HSA) approach. It is shown that matrix elements of the equivalent operator corresponding to the perturbation ρ⁻² have a simple form in the basis of the Coulomb parabolic functions in the body-fixed frame and can be easily computed for high values of total orbital momentum and threshold number. The second-order corrections to the adiabatic curves are obtained as solutions of the corresponding secular equation. The asymptotic potentials obtained can be used for the calculation of the energy levels and radial wave functions of two-electron systems in the adiabatic and coupled-channel approximations of the HSA approach.

  13. Theoretical determination of transit time locus curves for ultrasonic pulse echo testing - ALOK. Pt. 4

    International Nuclear Information System (INIS)

    Grohs, B.

    1983-01-01

The ALOK technique allows the simultaneous detection of flaws and their evaluation with respect to type, location and dimension by interpreting the transit-time behaviour during scanning of the reflector. The accuracy of the information obtained with this technique can be further improved, both during interference elimination and during reconstruction, owing to the ability to calculate exactly the possible transit time locus curves of given reflectors. The mathematical solution for the transit time locus curve calculation refers here to pulse-echo testing, taking into account the refraction of sound at the forward-wedge/test-object interface. The method of solving the problem is equivalent to Fermat's principle in optics. (orig.) [de]

  14. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions.
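One plausible reading of the statistic described above is a quadratic form in the binormal parameter differences with the summed covariance matrices (the paper's exact construction may differ; the parameter values below are invented):

```python
import math

def roc_param_chi2(p1, cov1, p2, cov2):
    """Approximate chi-square statistic (2 df) for the difference between two
    binormal ROC parameter estimates (a, b): chi2 = d^T (S1 + S2)^-1 d,
    where d is the parameter difference and S1, S2 the 2x2 covariance
    matrices of the two independent estimates."""
    da, db = p1[0] - p2[0], p1[1] - p2[1]
    s11 = cov1[0][0] + cov2[0][0]
    s12 = cov1[0][1] + cov2[0][1]
    s22 = cov1[1][1] + cov2[1][1]
    det = s11 * s22 - s12 * s12
    chi2 = (s22 * da * da - 2 * s12 * da * db + s11 * db * db) / det
    p_value = math.exp(-chi2 / 2)  # exact survival function for 2 df
    return chi2, p_value

# Example with hypothetical estimates and diagonal covariances:
chi2, p = roc_param_chi2((1.2, 0.9), [[0.01, 0], [0, 0.01]],
                         (1.1, 0.9), [[0.01, 0], [0, 0.01]])
```

For two degrees of freedom the survival function has the closed form exp(−χ²/2), so no statistics library is needed for the p-value.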

  15. A NEW DOUBLE-SLIT CURVED WALL-JET (CWJ) BURNER FOR STABILIZING TURBULENT PREMIXED AND NON-PREMIXED FLAMES

    KAUST Repository

    Mansour, Morkous S.

    2015-06-30

A novel double-slit curved wall-jet (CWJ) burner was proposed and employed, which utilizes the Coanda effect by supplying fuel and air as annular-inward jets over a curved surface. We investigated the stabilization characteristics and structure of methane/air and propane/air turbulent premixed and non-premixed flames with varying global equivalence ratio and Reynolds number, Re. Simultaneous time-resolved measurements of particle image velocimetry (PIV) and planar laser-induced fluorescence of OH radicals were conducted. The burner showed potential for stable operation for methane flames with relatively large fuel loading and overall rich conditions; these flames have a non-sooting nature. Propane flames, however, exhibited a stable mode over a wider range of equivalence ratio and Re. Mixing characteristics in the cold flow of the non-premixed cases were first examined using the acetone fluorescence technique, indicating substantial transport between the fuel and air streams with appreciable premixing. PIV measurements revealed that velocity gradients in the shear layers at the boundaries of the annular jets generate the turbulence, which is enhanced by collisions in the interaction jet (IJ) region. Turbulent mean and rms velocities were influenced significantly by Re, and high rms turbulent velocities are generated within the recirculation zone, improving flame stabilization in this burner. Premixed and non-premixed flames with high equivalence ratio were found to be more resistant to local extinction and exhibited a more corrugated and folded nature, particularly at high Re. For flames with low equivalence ratio, the processes of local quenching in the IJ region and of re-ignition within the merged-jet region maintained these flames further downstream, particularly for the non-premixed methane flame, revealing a strong intermittency.

  16. Correspondences. Equivalence relations

    International Nuclear Information System (INIS)

    Bouligand, G.M.

    1978-03-01

We comment on paragraph 3, 'Correspondences', and paragraph 6, 'Equivalence Relations', in chapter II of 'Elements de mathematique' by N. Bourbaki, in order to simplify their comprehension. Paragraph 3 presents the notions of graph, correspondence and map (or function), and their composition laws. We draw attention to the following points: 1) Adopting the convention of writing from left to right, the composition law for two correspondences (A,F,B), (U,G,V) of graphs F, G is written in full generality (A,F,B)o(U,G,V) = (A,FoG,V). It is therefore not assumed that the co-domain B of the first correspondence is identical to the domain U of the second (EII.13 D.7), (1970). 2) The axiom of choice consists of creating the Hilbert terms from the only relations admitting a graph. 3) The statement of the existence theorem of a function h such that f = goh, where f and g are two given maps having the same domain (of definition), is completed if h is more precisely an injection. Paragraph 6 considers the generalisation of equality: first, by 'the equivalence relation associated with a map f of a set E', identical to (x is a member of the set E and y is a member of the set E and x:f = y:f). Consequently, every relation R(x,y) which is equivalent to this is an equivalence relation in E (symmetrical, transitive, reflexive); then R admits a graph included in E x E, etc. Secondly, by means of the Hilbert term of a relation R submitted to the equivalence. In this last case, if R(x,y) is separately collectivizing in x and y, theta(x) is not the class of objects equivalent to x for R (EII.47.9), (1970). The interest of bringing together these two subjects, apart from this logical order, resides also in the fact that the theorem mentioned in 3) can be expressed by means of the equivalence relations associated with the functions f and g. The solutions of the examples proposed reveal their simplicity [fr]

  17. A standard curve based method for relative real time PCR data processing

    Directory of Open Access Journals (Sweden)

    Krause Andreas

    2005-03-01

Full Text Available Abstract Background Currently real-time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data, and processing may notably influence the final results. Data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred in relative PCR, whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real-time PCR. Results We designed a procedure for data processing in relative real-time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from the regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from the coordinates of points where the threshold line crosses the fluorescence plots obtained after noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CPs' means. The CPs' variances are traced through to the results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found fit for routine laboratory practice. Different options are discussed for aggregation of data obtained from multiple reference genes. Conclusion A standard curve based procedure for PCR data processing has been compiled and validated.
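The standard curve core of such a procedure, regressing crossing points against the log template amount and inverting the line for unknowns, can be sketched as follows (the dilution amounts and CP values below are synthetic, not data from the article):

```python
import numpy as np

# Hypothetical dilution series: known template amounts and measured
# crossing points (CPs); the standard curve is CP = slope*log10(N) + intercept.
log_n = np.log10([1e6, 1e5, 1e4, 1e3, 1e2])
cp = np.array([12.0, 15.3, 18.6, 21.9, 25.2])

slope, intercept = np.polyfit(log_n, cp, 1)
# Amplification efficiency implied by the slope (1.0 = perfect doubling):
efficiency = 10 ** (-1 / slope) - 1

def quantity_from_cp(cp_value):
    """Read an unknown template amount off the fitted standard curve."""
    return 10 ** ((cp_value - intercept) / slope)
```

A relative result is then the ratio of two such quantities (target over reference gene), which is how a standard curve sidesteps explicit per-reaction efficiency assessment.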

  18. Contribution to the boiling curve of sodium

    International Nuclear Information System (INIS)

    Schins, H.E.J.

    1975-01-01

Sodium in a pool was preheated to saturation temperatures at system pressures of 200, 350 and 500 torr. A test section of normal stainless steel was then additionally heated by means of the conical fitting condenser zone of a heat pipe. Measurements were made of heat transfer fluxes, q in W/cm², as a function of wall excess temperature above saturation, Θ = T_w - T_s in °C, both in natural convection and in boiling regimes. These measurements make it possible to select the Subbotin natural convection and nucleate boiling curves among other variants proposed in the literature. Further, it is empirically demonstrated on water that the minimum film boiling point corresponds to the homogeneous nucleation temperature calculated by the Doering formula. Assuming that the minimum film boiling point of sodium can be obtained in the same manner, it is then possible to give an approximate boiling curve of sodium for use in thermal interaction studies. At 1 atm the heat transfer fluxes q versus wall excess temperatures Θ are: for a point on the natural convection curve, 0.3 W/cm² and 2 °C; for start of boiling, 1.6 W/cm² and 6 °C; for peak heat flux, 360 W/cm² and 37 °C; for minimum film boiling, 30 W/cm² and 905 °C; and for a point on the film boiling curve, 160 W/cm² and 2,000 °C. (orig.) [de]

  19. Variation of indoor radon concentration and ambient dose equivalent rate in different outdoor and indoor environments

    Energy Technology Data Exchange (ETDEWEB)

Stojanovska, Zdenka; Janevik, Emilija; Taleski, Vaso [Goce Delcev University, Faculty of Medical Sciences, Stip (Macedonia, The Former Yugoslav Republic of); Boev, Blazo [Goce Delcev University, Faculty of Natural and Technical Sciences, Stip (Macedonia, The Former Yugoslav Republic of); Zunic, Zora S. [University of Belgrade, Institute of Nuclear Sciences ''Vinca'', Belgrade (Serbia); Ivanova, Kremena; Tsenova, Martina [National Center of Radiobiology and Radiation Protection, Sofia (Bulgaria); Ristova, Mimoza [University in Ss. Cyril and Methodius, Faculty of Natural Sciences and Mathematics, Institute of Physics, Skopje (Macedonia, The Former Yugoslav Republic of); Ajka, Sorsa [Croatian Geological Survey, Zagreb (Croatia); Bossew, Peter [German Federal Office for Radiation Protection, Berlin (Germany)

    2016-05-15

Subject of this study is an investigation of the variations of indoor radon concentration and ambient dose equivalent rate in outdoor and indoor environments of 40 dwellings, 31 elementary schools and five kindergartens. The buildings are located in three municipalities of two geologically different areas of the Republic of Macedonia. Indoor radon concentrations were measured by nuclear track detectors, deployed in the most occupied room of each building, between June 2013 and May 2014. During the deployment campaign, indoor and outdoor ambient dose equivalent rates were measured simultaneously at the same location. The measured values varied from 22 to 990 Bq/m³ for indoor radon concentrations, from 50 to 195 nSv/h for outdoor ambient dose equivalent rates, and from 38 to 184 nSv/h for indoor ambient dose equivalent rates. The geometric mean of the ratio of indoor to outdoor ambient dose equivalent rates was found to be 0.88, i.e. the outdoor ambient dose equivalent rates were on average higher than the indoor ones. All measured quantities can be reasonably well described by log-normal distributions. A detailed statistical analysis of factors which influence the measured quantities is reported. (orig.)
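For log-normally distributed quantities like these, the natural summary statistics are the geometric mean and geometric standard deviation; a minimal helper (not tied to the study's data) is:

```python
import math

def geometric_stats(values):
    """Geometric mean and geometric standard deviation -- the natural
    summary for log-normally distributed quantities such as indoor radon
    concentrations. Computed via the mean and sample SD of the logs."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((x - mu) ** 2 for x in logs) / (n - 1)
    return math.exp(mu), math.exp(math.sqrt(var))

gm, gsd = geometric_stats([10.0, 1000.0])  # GM is sqrt(10*1000) = 100
```

The ratio statistic quoted in the record (indoor/outdoor GM of 0.88) is exactly the geometric mean of the per-site rate ratios.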

  20. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1987-11-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or ''cap'' values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab
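The ICRP-26 effective dose equivalent referred to above is a weighted sum of organ dose equivalents; a sketch using the ICRP Publication 26 tissue weighting factors (treating the 0.30 remainder as a single compartment is a simplification, since ICRP-26 shares it among the five most-exposed remaining organs at 0.06 each):

```python
# ICRP Publication 26 tissue weighting factors w_T (they sum to 1.0).
ICRP26_WT = {
    "gonads": 0.25, "breast": 0.15, "red_bone_marrow": 0.12,
    "lung": 0.12, "thyroid": 0.03, "bone_surface": 0.03, "remainder": 0.30,
}

def effective_dose_equivalent(organ_dose_sv):
    """H_E = sum over tissues T of w_T * H_T (organ dose equivalents in Sv)."""
    return sum(ICRP26_WT[t] * h for t, h in organ_dose_sv.items())

# Sanity check: uniform whole-body irradiation gives H_E equal to the organ dose.
uniform = {t: 1.0 for t in ICRP26_WT}
assert abs(effective_dose_equivalent(uniform) - 1.0) < 1e-12
```

Because the weighting concentrates on a few organs, a non-uniform field (as from the phantom calculations above) can give an effective dose equivalent well below the maximal "cap" organ dose, which is the discrepancy the record highlights.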

  1. Effective dose equivalent

    International Nuclear Information System (INIS)

    Huyskens, C.J.; Passchier, W.F.

    1988-01-01

    The effective dose equivalent is a quantity used in the daily practice of radiation protection, as well as in radiation protection regulations, as a measure of health risk. This contribution sets out the assumptions upon which the quantity is based and the cases in which the effective dose equivalent can be used more or less appropriately. (H.W.)

  2. Determination of regional cerebral blood flow curves and parameters by computed γ camera

    International Nuclear Information System (INIS)

    Zhu Guohong

    1988-01-01

    Regional CBF curves and parameters were determined in 236 subjects with a Sigma 438/MCS 560 computed γ camera. Each subject was given 370 MBq of 99mTcO4- intravenously. Four CBF curves and three parameters were derived by the computer. The results from 39 normal subjects, 22 patients with cerebral embolism, 53 patients with cerebrovascular sclerosis, 56 patients with diseases of the cervical vertebrae, 10 patients with concussion and 5 patients with cerebral arteritis were analyzed.

  3. Inter-observer and intra-observer agreement on interpretation of uroflowmetry curves of kindergarten children.

    Science.gov (United States)

    Chang, Shang-Jen; Yang, Stephen S D

    2008-12-01

    To evaluate the inter-observer and intra-observer agreement on the interpretation of uroflowmetry curves of children. Healthy kindergarten children were enrolled for evaluation of uroflowmetry. Uroflowmetry curves were classified as bell-shaped, tower, plateau, staccato and interrupted. Only the bell-shaped curves were regarded as normal. Two urodynamists evaluated the curves independently after reviewing the definitions of the different types of uroflowmetry curve. The senior urodynamist evaluated the curves twice, 3 months apart. The final conclusion was made when consensus was reached. Agreement among observers was analyzed using kappa statistics. Of 190 uroflowmetry curves eligible for analysis, the intra-observer agreement in interpreting each type of curve and in interpreting normalcy vs. abnormality was good (kappa = 0.71 and 0.68, respectively). Very good inter-observer agreement (kappa = 0.81) on normalcy and good inter-observer agreement (kappa = 0.73) on types of uroflowmetry were observed. Poor inter-observer agreement existed on the classification of specific types of abnormal uroflowmetry curves (kappa = 0.07). Uroflowmetry is a good screening tool for normalcy in kindergarten children, but not a good tool for defining the specific type of abnormal curve.
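    The agreement statistics quoted above are kappa values, which correct the observed agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters (the ten curve classifications are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters classifying the same items (nominal categories)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    cats = set(ca) | set(cb)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in cats)        # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical classifications of 10 uroflowmetry curves by two observers:
a = ["bell", "bell", "tower", "plateau", "bell", "staccato", "bell", "bell", "tower", "bell"]
b = ["bell", "bell", "tower", "bell",    "bell", "staccato", "bell", "bell", "tower", "bell"]
print(round(cohens_kappa(a, b), 2))  # -> 0.81
```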

  4. PHARMACOKINETIC VARIATIONS OF OFLOXACIN IN NORMAL AND FEBRILE RABBITS

    Directory of Open Access Journals (Sweden)

    M. AHMAD, H. RAZA, G. MURTAZA AND N. AKHTAR

    2008-12-01

    The influence of experimentally Escherichia coli-induced fever (EEIF) on the pharmacokinetics of ofloxacin was evaluated. Ofloxacin was administered at 20 mg.kg-1 body weight intravenously to a group of eight healthy rabbits, and the results were compared to values in the same eight rabbits with EEIF. Pharmacokinetic parameters of ofloxacin in normal and febrile rabbits were determined using a two-compartment open kinetic model. Peak plasma level (Cmax) and area under the plasma concentration-time curve (AUC0-α) in normal and febrile rabbits did not differ (P>0.05). However, the area under the first moment of the plasma concentration-time curve (AUMC0-α) in febrile rabbits was significantly (P<0.05) higher than that in normal rabbits. Mean values for the elimination rate constant (Ke), elimination half-life (t1/2β) and apparent volume of distribution (Vd) were significantly (P<0.05) lower in febrile rabbits compared to normal rabbits, while mean residence time (MRT) and total body clearance (Cl) of ofloxacin did not show any significant difference between the normal and febrile rabbits. The clinical significance of the above results can be related to the changes in the volume of distribution and elimination half-life that illustrate an altered steady state in the febrile condition; hence, an adjustment of the dosage regimen is required in EEIF.

  5. Equivalence relations of AF-algebra extensions

    Indian Academy of Sciences (India)

    In this paper, we consider equivalence relations of C*-algebra extensions and describe the relationship between the isomorphism equivalence and the unitary equivalence. We also show that a certain group homomorphism is the obstruction for these equivalence relations to be the same.

  6. Deriving Snow-Cover Depletion Curves for Different Spatial Scales from Remote Sensing and Snow Telemetry Data

    Science.gov (United States)

    Fassnacht, Steven R.; Sexstone, Graham A.; Kashipazha, Amir H.; Lopez-Moreno, Juan Ignacio; Jasinski, Michael F.; Kampf, Stephanie K.; Von Thaden, Benjamin C.

    2015-01-01

    During the melting of a snowpack, snow water equivalent (SWE) can be correlated to snow-covered area (SCA) once snow-free areas appear, which is when SCA begins to decrease below 100%. This amount of SWE is called the threshold SWE. Daily SWE data from snow telemetry stations were related to SCA derived from Moderate Resolution Imaging Spectroradiometer (MODIS) images to produce snow-cover depletion curves. The snow depletion curves were created for an 80,000 sq km domain across southern Wyoming and northern Colorado encompassing 54 snow telemetry stations. Eight yearly snow depletion curves were compared, and it is shown that the slope of each is a function of the amount of snow received. Snow-cover depletion curves were also derived for all the individual stations, for which the threshold SWE could be estimated from peak SWE and the topography around each station. A station's peak SWE was much more important than the main topographic variables, which included location, elevation, slope, and modelled clear-sky solar radiation. The threshold SWE mostly illustrated inter-annual consistency.
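    The threshold-SWE concept lends itself to a numeric sketch: SCA holds at 100% while SWE exceeds the threshold, then declines as the snowpack is depleted. A toy depletion curve assuming a simple linear decline below the threshold (all values invented, not the paper's data):

```python
import numpy as np

# Toy snow-cover depletion curve: SCA stays at 100% while SWE exceeds the
# threshold SWE, then (here) declines linearly with SWE. All values invented.
swe = np.array([450, 400, 300, 220, 150, 90, 40, 10, 0], dtype=float)  # mm
threshold = 250.0  # mm of SWE at which snow-free patches first appear

sca = np.clip(swe / threshold, 0.0, 1.0) * 100.0  # percent snow-covered area
print(sca.tolist())
```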

  7. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
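    The curve-fitting comparison reported here can be illustrated by fitting linear, quadratic, and exponential models to exponentially declining frequencies and comparing coefficients of determination. A sketch on synthetic data (not the CES-D frequencies):

```python
import numpy as np

# Synthetic frequency data following an exact exponential decline (illustrative
# only): the exponential fit should beat linear and quadratic fits on R^2.
x = np.arange(0, 20, dtype=float)   # total depressive symptom score
y = 5000.0 * np.exp(-0.35 * x)      # frequency at each score

def r_squared(y_obs, y_fit):
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyval(np.polyfit(x, y, 1), x)                    # linear fit
quad = np.polyval(np.polyfit(x, y, 2), x)                   # quadratic fit
expo = np.exp(np.polyval(np.polyfit(x, np.log(y), 1), x))   # exponential, fitted in log space

print(r_squared(y, lin) < r_squared(y, quad) < r_squared(y, expo))  # True
```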

  8. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    Directory of Open Access Journals (Sweden)

    Shinichiro Tomitaka

    2016-10-01

    Background: Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods: Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results: The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion: The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an

  9. The energy-momentum operator in curved space-time

    International Nuclear Information System (INIS)

    Brown, M.R.; Ottewill, A.C.

    1983-01-01

    It is argued that the only meaningful geometrical measure of the energy-momentum of states of matter described by a free quantum field theory in a general curved space-time is that provided by a normal ordered energy-momentum operator. The finite expectation values of this operator are contrasted with the conventional renormalized expectation values and it is further argued that the use of renormalization theory is inappropriate in this context. (author)

  10. Mars seasonal polar caps as a test of the equivalence principle

    International Nuclear Information System (INIS)

    Rubincam, David Parry

    2011-01-01

    The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial (passive) to gravitational (active) masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders of magnitude weaker than laboratory tests and 7 orders of magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.

  11. Estimating the Lactation Curve on a.m./p.m. Milkings in Dairy Cows

    Directory of Open Access Journals (Sweden)

    Ludovic Toma Cziszter

    2013-10-01

    A pilot study was conducted in order to assess the effect of a.m./p.m. milkings on the shape of the lactation curve during a normal lactation. Data from a.m. and p.m. milkings of 86 Romanian Spotted cows were used. Cows calved during January, February and March 2011 and concluded their lactations by the end of February 2012. Results showed that there was a difference between morning and evening milkings regarding the shape of the lactation curve. The shape of the lactation curve for morning milking more closely resembled the shape of the lactation curve for total daily milk. Modelling the lactation curve with the incomplete gamma function led to a milk production estimate very close to the real production, although the model overestimated the yield in early lactation and underestimated it in middle lactation. If alternate milkings are to be used for milk yield estimation, it is preferable to measure the evening milking at the beginning.
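    The incomplete-gamma model mentioned here is commonly written as Wood's curve, y(t) = a·t^b·e^(-ct), which becomes linear in the parameters after taking logarithms. A minimal fitting sketch on synthetic, noiseless yields (the parameter values are assumed for illustration):

```python
import numpy as np

def wood(t, a, b, c):
    """Wood's incomplete-gamma lactation curve: y(t) = a * t**b * exp(-c*t)."""
    return a * t**b * np.exp(-c * t)

# Synthetic, noiseless daily yields (kg) generated from assumed parameters.
days = np.arange(5.0, 305.0, 10.0)
y = wood(days, a=15.0, b=0.25, c=0.004)

# ln y = ln a + b*ln t - c*t is linear in the parameters, so ordinary least
# squares recovers them directly from the log-transformed yields.
X = np.column_stack([np.ones_like(days), np.log(days), -days])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a_hat, b_hat, c_hat = np.exp(coef[0]), coef[1], coef[2]
print(round(a_hat, 3), round(b_hat, 3), round(c_hat, 5))  # -> 15.0 0.25 0.004
```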

  12. Existence of a common growth curve for silt-sized quartz OSL of loess from different continents

    International Nuclear Information System (INIS)

    Lai Zhongping; Brueckner, Helmut; Zoeller, Ludwig; Fuelling, Alexander

    2007-01-01

    Recent publications revealed different opinions regarding the existence of a common growth curve (CGC) for OSL of quartz. In the current study, 18 loess samples were collected from four continents (Asia, America, Africa, and Europe) in order to further examine this issue. Except for the three samples from Chile in South America, the 15 remaining samples display similar dose-response curves up to a regeneration dose of 200 Gy using the SAR protocol, suggesting the existence of a global CGC for loess from different continents. For samples with equivalent doses (De) from ∼10 to ∼170 Gy, the De values determined by the CGC are in good agreement with the De values obtained by the SAR protocol. The Chilean samples possess a growth curve that differs from the CGC, showing much lower saturation doses. We suggest that this may be due to contamination with heavy minerals.
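    Determining De from a CGC amounts to projecting the natural OSL signal onto the fitted growth curve. A minimal sketch assuming a saturating-exponential dose-response form, with invented parameters (not the paper's fitted values):

```python
import numpy as np

# A common growth curve (CGC) of the saturating-exponential form often used for
# quartz OSL dose response: I(D) = I_max * (1 - exp(-D/D0)). Parameters assumed.
I_MAX, D0 = 20.0, 120.0

def growth(dose_gy):
    """Sensitivity-corrected signal predicted for a given dose (Gy)."""
    return I_MAX * (1.0 - np.exp(-dose_gy / D0))

def equivalent_dose(natural_signal):
    """Invert the growth curve analytically to read off De in Gy."""
    return -D0 * np.log(1.0 - natural_signal / I_MAX)

De_true = 85.0                                      # assumed equivalent dose
print(round(equivalent_dose(growth(De_true)), 6))   # -> 85.0
```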

  13. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  14. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    Science.gov (United States)

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
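    A standard, spreadsheet-friendly technique for this problem (related to, but not necessarily identical with, the paper's Excel method) is string-length minimization: fold the data on each trial period and keep the period that makes the phased light curve smoothest. A sketch on synthetic data, where the 6.5 h period and the sampling are assumptions:

```python
import numpy as np

def string_length(phase, mag):
    """Total distance between consecutive points of the phase-folded curve."""
    order = np.argsort(phase)
    p, m = phase[order], mag[order]
    return float(np.sum(np.hypot(np.diff(p), np.diff(m))))

def best_period(t, mag, trial_periods):
    """Trial period whose folded light curve has the minimal string length."""
    lengths = [string_length((t / P) % 1.0, mag) for P in trial_periods]
    return trial_periods[int(np.argmin(lengths))]

# Fragmented synthetic light curve: sinusoid with an assumed 6.5 h period,
# irregularly sampled over 48 h with small photometric noise.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 48.0, 120))
mag = 0.3 * np.sin(2.0 * np.pi * t / 6.5) + 0.01 * rng.normal(size=t.size)

trials = np.arange(4.0, 10.0, 0.01)
print(round(best_period(t, mag, trials), 2))
```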

  15. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    Science.gov (United States)

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years). The distribution curves of mean AOV differed significantly between patients and controls (smaller AOV for larger vessels in patients), and patients with diabetic retinopathy showed a shift in the distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  16. Study of the variation of the E-I curves in the superconducting to normal transition of Bi-2212 textured ceramics by Pb addition

    Directory of Open Access Journals (Sweden)

    Sotelo, A.

    2006-06-01

    Vitreous cylinders with compositions Bi2-xPbxSr2CaCu2Oy (x = 0, 0.2, 0.4 and 0.6) were prepared and used as precursors to fabricate textured bars through a laser floating zone melting method (LFZ). The resulting textured cylindrical bars were annealed and then characterized electrically. The microstructure was determined and correlated with the measured electrical properties. The influence of Pb doping on the sharpness of the superconducting-to-normal transition in the E-I curves has been determined. The sharpest transitions were obtained for samples doped with 0.4 Pb.


  17. Some fundamental questions about R-curves

    International Nuclear Information System (INIS)

    Kolednik, O.

    1992-01-01

    With the help of two simple thought experiments it is demonstrated that there exist two physically different types of fracture toughness. The crack-growth toughness, which is identical to the Griffith crack growth resistance, R, is a measure of the non-reversible energy which is needed to produce an increment of new crack area. The size of R is reflected by the slopes of the R-curves commonly used. So an increasing J-Δa curve does not mean that the crack-growth resistance increases. The fracture initiation toughness, Ji, is a normalized total energy (related to the ligament area) which must be put into the specimen up to fracture initiation. Only for ideally brittle materials do R and Ji have equal sizes. For small-scale yielding a relationship exists between R and Ji, so a one-parameter description of fracture processes is applicable. For large-scale yielding R and Ji are not strictly related, and both parameters are necessary to describe the fracture process. (orig.)

  18. Light Curve Variations of AR Lacertae

    Directory of Open Access Journals (Sweden)

    Il-Seong Nha

    1991-12-01

    Sixteen unitary light curves of AR Lac in B and V were made at Yonsei University Observatory in the period 1980-1988. Some overview findings on the light variations are made. (1) The light variations outside eclipse follow none of the wave migration patterns reported by previous investigators. (2) The complicated shapes outside eclipse are much reduced in the light curves of 1983-1984. This suggests that, in the future, AR Lac has a chance to attain a normal state with no complicated interactions. (3) The depths of the primary and the secondary mid-eclipses are changing year to year. (4) The K0 star, the larger component, has brightened by 0.14 mag in V, while the G2 star has shown a fluctuation of about 0.05 mag in V. (5) The B-V values at primary mid-eclipse have no correlation with the depth variations. (6) Independently of the increase of maximum brightness, the B-V colors in the non-eclipsed phases changed slightly over the years.

  19. Information Leakage from Logically Equivalent Frames

    Science.gov (United States)

    Sher, Shlomi; McKenzie, Craig R. M.

    2006-01-01

    Framing effects are said to occur when equivalent frames lead to different choices. However, the equivalence in question has been incompletely conceptualized. In a new normative analysis of framing effects, we complete the conceptualization by introducing the notion of information equivalence. Information equivalence obtains when no…

  20. 21 CFR 26.9 - Equivalence determination.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Equivalence determination. 26.9 Section 26.9 Food... Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.9 Equivalence determination... document insufficient evidence of equivalence, lack of opportunity to assess equivalence or a determination...

  1. Trend analyses with river sediment rating curves

    Science.gov (United States)

    Warrick, Jonathan A.

    2015-01-01

    Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â(Q/QGM)^b], where QGM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters, â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
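    The discharge-normalized fit can be reproduced in a few lines: fit the power law in log space, then evaluate the curve at the geometric mean discharge to obtain â. A sketch on synthetic data, where the values a = 0.5 and b = 1.4 are assumed:

```python
import numpy as np

# Synthetic discharge (Q) and concentration (C) data following C = a*Q^b with
# log-normal scatter; a = 0.5 and b = 1.4 are assumed, illustrative values.
rng = np.random.default_rng(0)
Q = 10 ** rng.uniform(0.0, 3.0, 200)                          # three decades of discharge
C = 0.5 * Q ** 1.4 * 10 ** (0.05 * rng.normal(size=Q.size))   # scatter in log space

logQ, logC = np.log10(Q), np.log10(C)
b, log_a = np.polyfit(logQ, logC, 1)   # conventional fit: log C = log a + b log Q
a = 10 ** log_a

# Discharge-normalized form C = â*(Q/Q_GM)^b, with Q_GM the geometric mean of Q:
# â is the rating curve evaluated at Q = Q_GM, a more stable vertical offset.
Q_gm = 10 ** np.mean(logQ)
a_hat = a * Q_gm ** b

print(round(b, 2), round(a, 2), round(a_hat, 1))
```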

  2. Intercomparison of personnel dosimetry for thermal neutron dose equivalent in neutron and gamma-ray mixed fields

    International Nuclear Information System (INIS)

    Ogawa, Yoshihiro

    1985-01-01

    In order to consider the problems concerned with personnel dosimetry using film badges and TLDs, an intercomparison of personnel dosimetry, especially the dose equivalent responses of personnel dosimeters to thermal neutrons, was carried out in five different neutron and gamma-ray mixed fields at KUR and UTR-KINKI from the practical point of view. For the estimation of thermal neutron dose equivalent, it may be concluded that each personnel dosimeter performs well in terms of precision; that is, the standard deviations of the values measured by individual dosimeters were within 24%, and the dose equivalent responses to thermal neutrons were almost independent of cadmium ratio and gamma-ray contamination. However, the relative thermal neutron dose equivalent of individual dosimeters normalized to the ICRP recommended value varied considerably, and a difference of about 4 times was observed among the dosimeters. From the results obtained, it is suggested that the standardization of calibration factors and procedures is required from the practical point of view of radiation protection and safety. (author)

  3. Editorial: New operational dose equivalent quantities

    International Nuclear Information System (INIS)

    Harvey, J.R.

    1985-01-01

    The ICRU Report 39 entitled ''Determination of Dose Equivalents Resulting from External Radiation Sources'' is briefly discussed. Four new operational dose equivalent quantities have been recommended in ICRU 39. The 'ambient dose equivalent' and the 'directional dose equivalent' are applicable to environmental monitoring and the 'individual dose equivalent, penetrating' and the 'individual dose equivalent, superficial' are applicable to individual monitoring. The quantities should meet the needs of day-to-day operational practice, while being acceptable to those concerned with metrological precision, and at the same time be used to give effective control consistent with current perceptions of the risks associated with exposure to ionizing radiations. (U.K.)

  4. Greenhouse gas emission curves for advanced biofuel supply chains

    Science.gov (United States)

    Daioglou, Vassilis; Doelman, Jonathan C.; Stehfest, Elke; Müller, Christoph; Wicke, Birka; Faaij, Andre; van Vuuren, Detlef P.

    2017-12-01

    Most climate change mitigation scenarios that are consistent with the 1.5-2 °C target rely on a large-scale contribution from biomass, including advanced (second-generation) biofuels. However, land-based biofuel production has been associated with substantial land-use change emissions. Previous studies show a wide range of emission factors, often hiding the influence of spatial heterogeneity. Here we introduce a spatially explicit method for assessing the supply of advanced biofuels at different emission factors and present the results as emission curves. Dedicated crops grown on grasslands, savannahs and abandoned agricultural lands could provide 30 EJ of biofuel per year with emission factors less than 40 kg of CO2-equivalent (CO2e) emissions per GJ of biofuel (for an 85-year time horizon). This increases to 100 EJ of biofuel per year for emission factors less than 60 kg CO2e per GJ of biofuel. While these results are uncertain and depend on model assumptions (including time horizon, spatial resolution, technology assumptions and so on), emission curves improve our understanding of the relationship between biofuel supply and its potential contribution to climate change mitigation while accounting for spatial heterogeneity.
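    An emission curve of this kind is conceptually a cumulative supply function: each land unit contributes its biofuel potential at its own emission factor, and the curve reports how much supply is available below any given factor. A toy sketch (all cell values invented):

```python
# Emission-curve sketch: each land cell contributes its biofuel potential at its
# land-use-change emission factor. All cell values below are invented.
cells = [
    # (potential supply in EJ/yr, emission factor in kg CO2e per GJ)
    (12.0, 25.0), (8.0, 15.0), (20.0, 55.0), (15.0, 38.0), (30.0, 70.0), (15.0, 90.0),
]

def supply_below(cells, ef_limit):
    """Total biofuel supply from cells whose emission factor is <= ef_limit."""
    return sum(s for s, ef in cells if ef <= ef_limit)

print(supply_below(cells, 40.0))  # -> 35.0 (EJ/yr from the 15-, 25- and 38-kg cells)
print(supply_below(cells, 60.0))  # -> 55.0 (adds the 55-kg cell)
```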

  5. Nonparametric estimation of age-specific reference percentile curves with radial smoothing.

    Science.gov (United States)

    Wan, Xiaohai; Qu, Yongming; Huang, Yao; Zhang, Xiao; Song, Hanping; Jiang, Honghua

    2012-01-01

    Reference percentile curves represent the covariate-dependent distribution of a quantitative measurement and are often used to summarize and monitor dynamic processes such as human growth. We propose a new nonparametric method based on a radial smoothing (RS) technique to estimate age-specific reference percentile curves assuming the underlying distribution is relatively close to normal. We compared the RS method with both the LMS and the generalized additive models for location, scale and shape (GAMLSS) methods using simulated data and found that our method has smaller estimation error than the two existing methods. We also applied the new method to analyze height growth data from children being followed in a clinical observational study of growth hormone treatment, and compared the growth curves between those with growth disorders and the general population. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Directory of Open Access Journals (Sweden)

    Qingwen Li

    2015-01-01

    In tunnel and underground space engineering, a blasting wave attenuates in the host rock from a shock wave to a stress wave to an elastic seismic wave, and the host rock forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting load. In this paper, an accurate mathematical dynamic loading model was built, and the crushed and fractured zones were treated as the blasting vibration source, thereby deducting the portion of the energy expended in breaking the host rock. This complicated dynamic problem of segmented differential blasting was thus reduced to an equivalent elastic boundary problem by taking advantage of Saint-Venant's Theorem. Finally, a 3D model in the finite element software FLAC3D used the constitutive parameters, the uniformly distributed time-varying load, and the cylindrical attenuation law to predict the velocity and effective tensile stress curves for calculating safety criterion formulas for the surrounding rock and tunnel liner, which verified well against the in situ monitoring data.

  7. Growth curves of preschool children in the northeast of iran: a population based study using quantile regression approach.

    Science.gov (United States)

    Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad

    2013-01-14

    Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during the early important months of life. The objective of this study is to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology. The results are compared with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. A total of 70,737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004 for 20 days from those attending community clinics for routine health checks as part of a national survey. Anthropometric measurements were done by trained health staff using WHO methodology. A nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used to estimate the curves and normal values. The weight-for-age growth curves for boys and girls aged from 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to the ones obtained by the semi-parametric LMS method on the same data. Among all age groups from 0 to 5 years, the median weights of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. The differences between the growth patterns of children living in the northeast of Iran and international ones necessitate using local and regional growth charts, as international normal values may not properly recognize the populations at risk for growth problems among Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for estimating reference curves and normal values.
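    The local constant kernel estimator described above can be sketched directly: at each grid age, compute a kernel-weighted empirical quantile of the observations. A minimal sketch on simulated weight-for-age data (all numbers invented, not the survey's):

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of `values` under `weights` (inverse-CDF definition)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, q)]

def kernel_quantile_curve(x, y, grid, q, h):
    """Local-constant kernel estimate of the conditional q-quantile curve."""
    kernel = lambda x0: np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian weights
    return np.array([weighted_quantile(y, kernel(x0), q) for x0 in grid])

# Illustrative weight-for-age data: the median grows with age, the spread widens.
rng = np.random.default_rng(3)
age = rng.uniform(0.0, 60.0, 4000)                                   # months
weight = 3.5 + 0.25 * age + (0.5 + 0.02 * age) * rng.normal(size=age.size)

grid = np.linspace(5.0, 55.0, 11)
p50 = kernel_quantile_curve(age, weight, grid, 0.50, h=3.0)
p97 = kernel_quantile_curve(age, weight, grid, 0.97, h=3.0)
print(bool(np.all(p97 > p50)))  # True: the 97th percentile curve lies above the median
```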

  8. The conspicuous absence of normal graphite grains in the small magellanic cloud

    International Nuclear Information System (INIS)

    Bromage, G.E.; Nandy, K.

    1983-05-01

    The simplest dust model that accurately predicts the normal Galactic interstellar extinction also fits the normal SMC curve derived from visible and IUE observations. Only one parameter value differs: the usual graphite contribution is at least a factor of seven weaker in the SMC. Some possible explanations are discussed. (author)

  9. Mixed field dose equivalent measuring instruments

    International Nuclear Information System (INIS)

    Brackenbush, L.W.; McDonald, J.C.; Endres, G.W.R.; Quam, W.

    1985-01-01

    In the past, separate instruments have been used to monitor dose equivalent from neutrons and gamma rays. It has now been demonstrated that neutron and gamma dose can be measured simultaneously with a single instrument, the tissue equivalent proportional counter (TEPC). With appropriate algorithms, dose equivalent can also be determined from the TEPC. A simple "pocket rem meter" for measuring neutron dose equivalent has already been developed. Improved algorithms for determining dose equivalent in mixed fields are presented. (author)
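
The kind of algorithm alluded to can be illustrated with a minimal sketch: fold a binned TEPC dose distribution in lineal energy through a quality factor to get the dose-mean quality factor Qbar, from which H = Qbar * D. The ICRP 60 Q(L) function is used here, with lineal energy standing in for LET as single-detector TEPC algorithms commonly approximate; the spectrum values are invented.

```python
import numpy as np

def q_icrp60(L):
    """ICRP 60 quality factor Q(L) for unrestricted LET L in keV/um."""
    L = np.asarray(L, dtype=float)
    return np.where(L < 10.0, 1.0,
           np.where(L <= 100.0, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

def mean_quality_factor(y, d):
    """Dose-mean quality factor from a binned dose distribution d over
    lineal energy y (keV/um), approximating LET by y."""
    d = np.asarray(d, dtype=float)
    d = d / d.sum()                      # normalize dose fractions
    return float(np.sum(q_icrp60(y) * d))

y = np.array([2.0, 5.0, 50.0, 120.0])              # lineal energy bins
qbar_gamma = mean_quality_factor(y, [0.7, 0.3, 0.0, 0.0])   # -> 1.0
qbar_mixed = mean_quality_factor(y, [0.25, 0.25, 0.25, 0.25])
```

For the gamma-dominated (low-y) spectrum Qbar is 1, so the dose equivalent equals the absorbed dose; the mixed neutron-gamma spectrum raises Qbar well above 1.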

  10. Characterization of revenue equivalence

    NARCIS (Netherlands)

    Heydenreich, B.; Müller, R.; Uetz, Marc Jochen; Vohra, R.

    2009-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. We give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The characterization holds

  11. Equivalence Between Squirrel Cage and Sheet Rotor Induction Motor

    Science.gov (United States)

    Dwivedi, Ankita; Singh, S. K.; Srivastava, R. K.

    2016-06-01

    Due to topological changes in the dual stator induction motor and the high cost of its fabrication, it is convenient to replace the squirrel cage rotor with a composite sheet rotor. For an experimental machine, the inner and outer stator stampings are normally available, whereas the procurement of rotor stampings is quite cumbersome and is not always cost effective. In this paper, the equivalence between the sheet/solid rotor induction motor and the squirrel cage induction motor has been investigated using the layer theory of electrical machines, so as to enable one to utilize a sheet/solid rotor in dual-port experimental machines.

  12. Radiosensitivity of normal human epidermal cells in culture

    International Nuclear Information System (INIS)

    Dover, R.; Potten, C.S.

    1983-01-01

    Using an in vitro culture system, the authors have derived β-radiation survival curves over the dose range 0-8 Gy for the clonogenic cells of normal human epidermis. The culture system used allows the epidermal cells to stratify and form a multi-layered sheet of keratinizing cells. The cultures appear to be a very good model for epidermis in vivo. The survival curves show a population which is apparently more sensitive than murine epidermis in vivo. It remains unclear whether this is an intrinsic difference between the species or a consequence of the in vitro cultivation of the human cells. (author)

  13. Equivalent glycemic load (EGL: a method for quantifying the glycemic responses elicited by low carbohydrate foods

    Directory of Open Access Journals (Sweden)

    Spolar Matt

    2006-08-01

    Full Text Available Abstract Background Glycemic load (GL) is used to quantify the glycemic impact of high-carbohydrate (CHO) foods, but cannot be used for low-CHO foods. Therefore, we evaluated the accuracy of the equivalent glycemic load (EGL), a measure of the glycemic impact of low-CHO foods defined as the amount of CHO from white bread (WB) with the same glycemic impact as one serving of food. Methods Several randomized, cross-over trials were performed by a contract research organization using overnight-fasted healthy subjects drawn from a pool of 63 recruited from the general population by newspaper advertisement. Incremental blood-glucose response areas-under-the-curve (AUC) elicited by 0, 5, 10, 20, 35 and 50 g CHO portions of WB (WB-CHO) and 3, 5, 10 and 20 g glucose were measured. EGL values of the different doses of glucose and WB and 4 low-CHO foods were determined as EGL = (F - B)/M, where F is the AUC after the food, B the y-intercept and M the slope of the regression of AUC on grams WB-CHO. The dose-response curves of WB and glucose were used to derive an equation to estimate GL from EGL, and the resulting values were compared to GL calculated from the glucose dose-response curve. The accuracy of EGL was assessed by comparing the GL (estimated from EGL) values of the 4 doses of oral glucose with the amounts actually consumed. Results Over 0-50 g WB-CHO (n = 10) the dose-response curve was non-linear, but over the range 0-20 g the curve was indistinguishable from linear, with the AUC after 0, 5, 10 and 20 g WB-CHO (10 ± 1, 28 ± 2, 58 ± 5 and 100 ± 6 mmol × min/L) differing significantly from each other (n = 48). The difference between GL values estimated from EGL and those calculated from the dose-response curve was 0 g (95% confidence interval, ±0.5 g). The difference between the GL values of the 4 doses of glucose estimated from EGL and the amounts of glucose actually consumed was 0.2 g (95% confidence interval, ±1 g). Conclusion EGL, a measure of the glycemic impact of
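
The EGL calculation reduces to one linear regression over the linear range. A sketch using the four mean AUC values quoted in the abstract; the food AUC fed in at the end is just the 10 g calibration point, for illustration.

```python
import numpy as np

# Mean incremental AUCs (mmol.min/L) for 0, 5, 10 and 20 g portions of
# white-bread carbohydrate, taken from the abstract (the linear range).
wb_cho = np.array([0.0, 5.0, 10.0, 20.0])
auc = np.array([10.0, 28.0, 58.0, 100.0])

# Least-squares regression AUC = M * grams + B over the linear range.
M, B = np.polyfit(wb_cho, auc, 1)

def egl(food_auc):
    """EGL = (F - B) / M: grams of WB-CHO with the same glycemic impact
    as the tested food portion (F = AUC elicited by the food)."""
    return (food_auc - B) / M
```

Feeding the 10 g calibration mean (58 mmol.min/L) back through the fit returns roughly 10.7 g rather than exactly 10, reflecting the scatter of the four means around the fitted line.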

  14. The consequences of non-normality

    International Nuclear Information System (INIS)

    Hip, I.; Lippert, Th.; Neff, H.; Schilling, K.; Schroers, W.

    2002-01-01

    The non-normality of Wilson-type lattice Dirac operators has important consequences - the application of the usual concepts from textbook (hermitian) quantum mechanics should be reconsidered. This includes an appropriate definition of observables and the refinement of computational tools. We show that the truncated singular value expansion is the optimal approximation to the inverse operator D⁻¹ and we prove that, due to the γ5-hermiticity, it is equivalent to γ5 times the truncated eigenmode expansion of the hermitian Wilson-Dirac operator
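
The truncated singular value expansion of an inverse can be sketched generically (for an arbitrary invertible matrix, not the Wilson-Dirac operator itself): the smallest singular values of D carry the largest weights 1/s in D⁻¹, so a rank-k approximation to the inverse keeps exactly those modes.

```python
import numpy as np

def truncated_svd_inverse(A, k):
    """Rank-k approximation to A^{-1} from the SVD of A. Keeping the k
    smallest singular values of A is the truncated SVD of A^{-1}, i.e. the
    optimal rank-k approximation to the inverse."""
    U, s, Vt = np.linalg.svd(A)
    idx = np.argsort(s)[:k]              # k smallest singular values of A
    return (Vt[idx].T / s[idx]) @ U[:, idx].T

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 6.0 * np.eye(6)    # generic well-conditioned matrix
err_full = np.linalg.norm(truncated_svd_inverse(A, 6) - np.linalg.inv(A))
err_rank2 = np.linalg.norm(truncated_svd_inverse(A, 2) - np.linalg.inv(A))
```

At full rank the expansion reproduces the exact inverse; truncation trades accuracy for a low-rank representation.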

  15. Characterization of Revenue Equivalence

    NARCIS (Netherlands)

    Heydenreich, Birgit; Müller, Rudolf; Uetz, Marc Jochen; Vohra, Rakesh

    2008-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. In this paper we give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The

  16. SERIAL ULTRASOUND TO ESTIMATE FETAL GROWTH CURVES IN SOUTHERN TAMANDUA (TAMANDUA TETRADACTYLA).

    Science.gov (United States)

    Thompson, Rachel; Wolf, Tiffany M; Robertson, Heather; Colburn, Margarita Woc; Moreno, Alexis; Moresco, Anneke; Napier, Anne Elise; Nofs, Sally A

    2017-06-01

    From 2012 to 2015, 16 pregnancies were monitored by ultrasonography in nine tamanduas (Tamandua tetradactyla) housed in three zoological facilities. Sonographic measurements were recorded to establish fetal growth curves using thoracic and skull landmarks described for giant anteaters (Myrmecophaga tridactyla). All pregnancies resulted in the uncomplicated delivery of healthy offspring, and gestational development was therefore considered normal. These data may be used as a reference for normal fetal development, with potential for estimating parturition date in the absence of breeding data.

  17. Quantification of the equivalence principle

    International Nuclear Information System (INIS)

    Epstein, K.J.

    1978-01-01

    Quantitative relationships illustrate Einstein's equivalence principle, relating it to Newton's "fictitious" forces arising from the use of noninertial frames, and to the form of the relativistic time dilatation in local Lorentz frames. The equivalence principle can be interpreted as the equivalence of general covariance to local Lorentz covariance, in a manner which is characteristic of Riemannian and pseudo-Riemannian geometries

  18. Mapping of isoexposure curves for evaluation of equivalent environmental doses for radiodiagnostic mobile equipment; Mapeamento de curvas de isoexposicao para avaliacao de equivalente de dose ambiente para equipamentos moveis de radiodiagnostico

    Energy Technology Data Exchange (ETDEWEB)

    Bacelar, Alexandre, E-mail: abacelar@hcpa.ufrgs.b [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Hospital de Clinicas. Setor de Fisica Medica e Radioprotecao; Andrade, Jose Rodrigo Mendes, E-mail: jose.andrade@santacasa.tche.b [Irmandade da Santa Casa de Misericordia de Porto Alegre, RS (Brazil). Servico de Atencao a Saude e Qualidade de Vida; Fischer, Andreia Caroline Fischer da Silveira; Accurso, Andre; Hoff, Gabriela, E-mail: andreia.silveira.001@acad.pucrs.b, E-mail: andre.accurso@acad.pucrs.b [Pontificia Univ. Catolica do Rio Grande do Sul (PUC/RS), Porto Alegre, RS (Brazil). Grupo de Experimentacao e Simulacao Computacional em Fisica Medica

    2011-10-26

    This paper generates isoexposure curves in areas where mobile radiodiagnostic equipment is used, for the evaluation of isokerma maps and the ambient dose equivalent H*(d). One Shimadzu and two Siemens mobile units were used with a non-anthropomorphic scatterer. Exposure was measured over a 4.20 x 4.20 m grid in steps of 30 cm, at half the height of the scatterer. H*(d) was estimated for a worker present at all procedures over a period of 11 months, considering 3.55 mAs/examination and 44.5 procedures/month (adult ICU) and 3.16 mAs/examination and 20.1 procedures/month (pediatric ICU). Points were found where H*(d) exceeded the limit established for free areas within a radius of 30 cm from the central beam for the pediatric ICU and within 60 cm for the adult ICU. Points located 2.1 m from the center presented values below 25% of that limit

  19. The principle of equivalence reconsidered: assessing the relevance of the principle of equivalence in prison medicine.

    Science.gov (United States)

    Jotterand, Fabrice; Wangmo, Tenzin

    2014-01-01

    In this article we critically examine the principle of equivalence of care in prison medicine. First, we provide an overview of how the principle of equivalence is utilized in various national and international guidelines on health care provision to prisoners. Second, we outline some of the problems associated with its applications, and argue that the principle of equivalence should go beyond equivalence to access and include equivalence of outcomes. However, because of the particular context of the prison environment, third, we contend that the concept of "health" in equivalence of health outcomes needs conceptual clarity; otherwise, it fails to provide a threshold for healthy states among inmates. We accomplish this by examining common understandings of the concepts of health and disease. We conclude our article by showing why the conceptualization of diseases as clinical problems provides a helpful approach in the delivery of health care in prison.

  20. On the operator equivalents

    International Nuclear Information System (INIS)

    Grenet, G.; Kibler, M.

    1978-06-01

    A closed polynomial formula for the qth component of the diagonal operator equivalent of order k is derived in terms of angular momentum operators. The interest in various fields of molecular and solid state physics of using such a formula in connection with symmetry adapted operator equivalents is outlined

  1. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Science.gov (United States)

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  2. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Science.gov (United States)

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  3. Normalized inverse characterization of sound absorbing rigid porous media.

    Science.gov (United States)

    Zieliński, Tomasz G

    2015-06-01

    This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to obtain a robust identification procedure which fits the model-predicted impedance curves to the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced; however, they are not additional parameters, and for different, yet reasonable, assumptions of their values the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set against the rigid termination piston in an impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and a sample of polyurethane foam.
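
The structure of such an identification procedure (fit model-predicted impedance curves to measured ones over a small parameter vector) can be sketched with a hypothetical two-parameter stand-in model rather than the paper's porous-media model; the parameters (a, b), frequency grid and synthetic "measurement" below are all invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, f):
    """Stand-in rigid-backed-layer surface impedance with dimensionless
    parameters (a, b) -- NOT the paper's model, only the same fitting shape."""
    a, b = p
    return a / np.tanh(b * np.sqrt(1j * f))

f = np.linspace(100.0, 2000.0, 60)            # frequency grid (Hz)
measured = model((2.0, 0.03), f)              # synthetic "measurement"

def residuals(p):
    r = model(p, f) - measured
    return np.concatenate([r.real, r.imag])   # fit real and imaginary parts

fit = least_squares(residuals, x0=[1.5, 0.04])
```

Stacking real and imaginary parts of the complex impedance mismatch is the usual way to hand a complex-valued fit to a real least-squares solver; data from several air-gap configurations would simply be concatenated into the same residual vector.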

  4. Scintiscanning of arthritis and analysis of build-up curves

    International Nuclear Information System (INIS)

    Yamagishi, Tsuneo; Omori, Shigeo; Miyawaki, Haruo; Maniwa, Masato; Yoshizaki, Kenichi

    1975-01-01

    In the present study 40 knee joints with rheumatoid arthritis, 23 with osteoarthrosis deformans, 3 with non-specific synovitis, one with pyogenic arthritis and 4 normal knee joints were scanned. By analysis of build-up curves obtained immediately after intravenous injection of 99mTc-pertechnetate, the rate of accumulation of radioactivity (t1/2) in the affected joints was estimated for comparison with clinical findings. 1. Scintiscanning of rheumatoid arthritis, osteoarthrosis deformans, non-specific synovitis and pyogenic arthritis of the knee joint yielded a positive scan in all of these joint diseases. 2. In scintigrams of healthy knee joints there were no areas of RI accumulation and no right-to-left difference. 3. In some instances abnormal RI uptake was still seen on scintigrams of arthritic joints after normal clinical and laboratory findings had been achieved with therapy. 4. 99mTc-pertechnetate, a radionuclide with a short half-life, allows repeated scans and provides a useful radiologic means of evaluating therapeutic course and effectiveness. 5. Analysis of build-up curves revealed that the rate of RI accumulation was faster in rheumatoid arthritis than in osteoarthrosis deformans. (auth.)
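
A build-up curve analysis of this kind can be sketched with a hypothetical mono-exponential uptake model, from which the accumulation half-time t1/2 follows; the time grid, amplitude and rate constant below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def buildup(t, a_max, lam):
    """Hypothetical mono-exponential build-up of joint activity after
    injection: A(t) = A_max * (1 - exp(-lambda * t))."""
    return a_max * (1.0 - np.exp(-lam * t))

t = np.linspace(0.0, 20.0, 40)                   # minutes after injection
counts = buildup(t, 100.0, 0.3)                  # synthetic, noise-free data
(a_max, lam), _ = curve_fit(buildup, t, counts, p0=[80.0, 0.1])
t_half = np.log(2.0) / lam                       # accumulation half-time
```

A faster-accumulating joint (rheumatoid arthritis in the study) shows up as a larger fitted rate constant and hence a shorter t1/2.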

  5. Maximally Informative Stimuli and Tuning Curves for Sigmoidal Rate-Coding Neurons and Populations

    Science.gov (United States)

    McDonnell, Mark D.; Stocks, Nigel G.

    2008-08-01

    A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon's mutual information and Fisher information, and the optimality of the Jeffreys prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
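
The central construction, a tuning curve built as a monotone function of the stimulus CDF, can be demonstrated with a minimal sketch: passing the stimulus through its own empirical CDF equalizes it, which is precisely the uniform-stimulus situation in which maximum mutual information coincides with constant Fisher information. The Gaussian stimulus and the 50 Hz rate ceiling are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.normal(0.0, 1.0, 100_000)                 # Gaussian stimulus samples

xs = np.sort(s)
def cdf(x):
    """Empirical CDF of the stimulus."""
    return np.searchsorted(xs, x, side="right") / xs.size

rate = 50.0 * cdf(s)      # sigmoidal tuning curve: monotone function of the CDF
u = rate / 50.0           # normalized drive: uniform on (0, 1]
```

Because the drive u is (by construction) uniformly distributed, every firing-rate level is used equally often, regardless of the shape of the stimulus distribution.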

  6. Investigation of Equivalent Circuit for PEMFC Assessment

    International Nuclear Information System (INIS)

    Myong, Kwang Jae

    2011-01-01

    Chemical reactions occurring in a PEMFC are dominated by the physical conditions and interface properties, and the reactions are expressed in terms of impedance. The performance of a PEMFC can be simply diagnosed by examining the impedance because impedance characteristics can be expressed by an equivalent electrical circuit. In this study, the characteristics of a PEMFC are assessed using the AC impedance and various equivalent circuits such as a simple equivalent circuit, equivalent circuit with a CPE, equivalent circuit with two RCs, and equivalent circuit with two CPEs. It was found in this study that the characteristics of a PEMFC could be assessed using impedance and an equivalent circuit, and the accuracy was highest for an equivalent circuit with two CPEs
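
A minimal sketch of the best-performing model, an equivalent circuit with two CPEs (an ohmic resistance in series with two parallel R-CPE arcs); the parameter values below are invented for illustration, not fitted PEMFC data.

```python
import numpy as np

def z_r_cpe(omega, r, q, n):
    """Resistance in parallel with a constant-phase element (CPE):
    Z = R / (1 + R*Q*(j*omega)**n); n = 1 recovers an ideal capacitor."""
    return r / (1.0 + r * q * (1j * omega) ** n)

def z_cell(omega, r_ohm, arc1, arc2):
    """Equivalent circuit with two CPEs: ohmic resistance in series with
    two parallel R-CPE arcs (e.g. anode- and cathode-side processes)."""
    return r_ohm + z_r_cpe(omega, *arc1) + z_r_cpe(omega, *arc2)

# Illustrative parameters: (R, Q, n) for each arc; values are assumptions.
omega = 2.0 * np.pi * np.logspace(-1, 4, 200)     # angular frequency sweep
z = z_cell(omega, 0.01, (0.05, 0.2, 0.9), (0.10, 1.0, 0.8))
```

At high frequency both arcs short out and the impedance collapses onto the ohmic resistance; at low frequency the arc resistances add, which is how the Nyquist plot separates the individual processes.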

  7. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in the reliability definition complies with a normal distribution, the conjugate prior of its distribution parameters (μ, h) is a normal-gamma distribution. With the help of the maximum entropy and moment-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior on (μ, h). The desired estimates are obtained from either the prior or the posterior, which is formed by combining the prior with the sampling data. Computing methods are described and examples are presented as demonstrations
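
The conjugate normal-gamma update mentioned above has a standard closed form; a sketch, with illustrative hyperparameters and data (a nearly vague prior, so the posterior mean tracks the sample mean):

```python
import numpy as np

def normal_gamma_update(mu0, kappa0, alpha0, beta0, data):
    """Conjugate update for a Normal likelihood with unknown mean mu and
    precision h under a Normal-Gamma(mu0, kappa0, alpha0, beta0) prior."""
    x = np.asarray(data, dtype=float)
    n, xbar = x.size, x.mean()
    ss = float(np.sum((x - xbar) ** 2))
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + 0.5 * n
    beta_n = beta0 + 0.5 * ss + 0.5 * kappa0 * n * (xbar - mu0) ** 2 / kappa_n
    return mu_n, kappa_n, alpha_n, beta_n

# Illustrative: a nearly vague prior updated with four strength measurements.
post = normal_gamma_update(0.0, 1e-9, 1e-3, 1e-3, [4.8, 5.1, 5.3, 4.9])
```

Reliability quantities (e.g. the probability of exceeding a threshold) then follow from the posterior predictive distribution, which for a normal-gamma posterior is a Student-t.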

  8. Obtaining the conversion curve of CT numbers to electron density from the effective energy of the CT using the SEFM phantom

    International Nuclear Information System (INIS)

    Martin-Viera Cueto, J. A.; Garcia Pareja, S.; Benitez Villegas, E. M.; Moreno Saiz, E. M.; Bodineau Gil, C.; Caudepon Moreno, F.

    2011-01-01

    The objective of this work is to obtain the conversion curve of Hounsfield units (HU) versus electron density using a phantom containing different tissue-equivalent materials. This provides the effective energy of the CT beam and is used to characterize the linear absorption coefficients of the different materials that comprise the phantom.
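
Once HU and relative electron density are known for the phantom inserts, the conversion curve is typically a piecewise-linear interpolation through the calibration points. The points below are typical magnitudes (air through dense bone), not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration points (HU, relative electron density) for a few
# tissue-equivalent inserts -- illustrative values only.
hu_points = np.array([-1000.0, -100.0, 0.0, 300.0, 1200.0])
red_points = np.array([0.00, 0.93, 1.00, 1.14, 1.70])

def relative_electron_density(hu):
    """Piecewise-linear conversion from CT number to relative electron
    density, as used for dose calculation in treatment planning."""
    return np.interp(hu, hu_points, red_points)
```

Water-equivalent tissue (0 HU) maps to a relative electron density of 1 by construction, and intermediate HU values are interpolated between the neighbouring inserts.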

  9. Mars Seasonal Polar Caps as a Test of the Equivalence Principle

    Science.gov (United States)

    Rubincam, David Parry

    2011-01-01

    The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial to gravitational masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders of magnitude weaker than laboratory tests and 7 orders of magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.

  10. Normal modes of vibration in nickel

    Energy Technology Data Exchange (ETDEWEB)

    Birgeneau, R J [Yale Univ., New Haven, Connecticut (United States); Cordes, J [Cambridge Univ., Cambridge (United Kingdom); Dolling, G; Woods, A D B

    1964-07-01

    The frequency-wave-vector dispersion relation, ν(q), for the normal vibrations of a nickel single crystal at 296 K has been measured for the [ζ00], [ζζ0], [ζζζ] and [0ζ1] symmetric directions using inelastic neutron scattering. The results can be described in terms of the Born-von Karman theory of lattice dynamics with interactions out to fourth-nearest neighbors. The shapes of the dispersion curves are very similar to those of copper, the normal-mode frequencies in nickel being about 1.24 times the corresponding frequencies in copper. The fourth-neighbor model was used to calculate the frequency distribution function g(ν) and related thermodynamic properties. (author)
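
The Born-von Karman picture can be illustrated with the simplest possible analogue, a monatomic 1-D chain with a single nearest-neighbour force constant (the measured nickel curves, by contrast, required interactions out to fourth neighbours in three dimensions); all parameter values below are dimensionless placeholders.

```python
import numpy as np

def nu(q, a=1.0, K=1.0, m=1.0):
    """Dispersion nu(q) of a monatomic 1-D Born-von Karman chain with
    nearest-neighbour force constant K and lattice spacing a:
    omega(q) = sqrt(4K/m) * |sin(q*a/2)|, and nu = omega / (2*pi)."""
    return np.sqrt(4.0 * K / m) * np.abs(np.sin(0.5 * q * a)) / (2.0 * np.pi)

q = np.linspace(0.0, np.pi, 100)      # zone centre to zone boundary (a = 1)
curve = nu(q)
```

The frequency rises linearly from the zone centre (the acoustic sound-speed regime) and flattens at the zone boundary, the same qualitative shape as the measured acoustic branches.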

  11. Dosimetric characteristics of water equivalent for two solid water phantoms

    International Nuclear Information System (INIS)

    Wang Jianhua; Wang Xun; Ren Jiangping

    2011-01-01

    Objective: To investigate the water equivalence of two solid water phantoms. Methods: X-ray and electron beam depth-ionization curves were measured in water and in two solid water phantoms, RW3 and Virtual Water, and the water-equivalency correction factors for the two phantoms were compared. For electron beams, the range scaling factors and the fluence correction factors for the two phantoms were measured and calculated. Results: The average difference between the ionization measured in the solid water phantoms and in water was 0.42% and 0.16% for 6 MV X-rays (t=-6.15, P=0.001 and t=-1.65, P=0.419) and 0.21% and 0.31% for 10 MV X-rays (t=1.728, P=0.135 and t=-2.296, P=0.061), with 17.4% and 14.5% for 6 MeV electron beams (t=-1.37, P=0.208 and t=-1.47, P=0.179) and 7.0% and 6.0% for 15 MeV electron beams (t=-0.58, P=0.581 and t=-0.90, P=0.395). The water-equivalency correction factors for the two phantoms varied to different degrees (F=58.54, P=0.000 for 6 MV X-rays; F=0.211, P=0.662 for 10 MV X-rays; F=0.97, P=0.353 for 6 MeV electron beams; F=0.14, P=0.717 for 15 MeV electron beams); however, they were almost equal to 1 near the reference depths. The two solid water phantoms showed a similar trend of Cpl increasing (F=26.40, P=0.014) and hpl decreasing (F=7.45, P=0.072) with increasing energy. Conclusion: A solid water phantom should undergo a quality control test before clinical use. (authors)

  12. Linking fluorescence induction curve and biomass in herbicide screening.

    Science.gov (United States)

    Christensen, Martin G; Teicher, Harald B; Streibig, Jens C

    2003-12-01

    A suite of dose-response bioassays with white mustard (Sinapis alba L.) and sugar beet (Beta vulgaris L.) in the greenhouse and with three herbicides was used to analyse how the fluorescence induction curves (Kautsky curves) were affected by the herbicides. Bentazone, a photosystem II (PSII) inhibitor, completely blocked the normal fluorescence decay after the P-step. In contrast, fluorescence decay was still evident for flurochloridone, a PDS inhibitor, and glyphosate, an EPSP inhibitor, indicating that PSII inhibition was incomplete. From the numerous parameters that can be derived from the OJIP steps of the Kautsky curve, the relative change at the J-step [Fvj = (Fm - Fj)/Fm] was selected as a common response parameter for the herbicides and yielded consistent dose-response relationships. Four hours after treatment, the response of Fvj to the doses of bentazone and flurochloridone could be measured. For glyphosate, changes in the Kautsky curve could similarly be detected 4 h after treatment in sugar beet, but only after 24 h in S. alba. The best prediction of biomass in relation to Fvj was found for bentazone. The experiments were conducted between May and August 2002 and showed that the ambient temperature and solar radiation in the greenhouse could affect the dose-response relationships. If Kautsky curve parameters are to be used to predict the outcome of herbicide screening experiments in the greenhouse, where ambient radiation and temperature can only partly be controlled, it is imperative that the chosen fluorescence parameters can be used to predict accurately the resulting biomass used in classical bioassays.
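
The response parameter and the dose-response analysis can be sketched together: Fvj is a one-line computation, and the four-parameter log-logistic form is the standard bioassay dose-response model. The dose series and parameter values below are synthetic, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def fvj(fm, fj):
    """Relative fluorescence change at the J-step: Fvj = (Fm - Fj) / Fm."""
    return (fm - fj) / fm

def log_logistic(dose, lower, upper, ed50, slope):
    """Four-parameter log-logistic dose-response model, the standard
    form in herbicide bioassays."""
    return lower + (upper - lower) / (1.0 + (dose / ed50) ** slope)

# Synthetic Fvj responses over a hypothetical dose series.
dose = np.array([0.1, 1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
resp = log_logistic(dose, 0.10, 0.60, 20.0, 1.5)
params, _ = curve_fit(log_logistic, dose, resp, p0=[0.0, 0.5, 10.0, 1.0],
                      bounds=([0.0, 0.0, 1e-3, 0.1], [1.0, 1.0, 1e3, 10.0]))
```

The fitted ED50 summarizes herbicide potency; comparing the ED50 from early Fvj readings with the one from final biomass is how such a fluorescence parameter is validated as a predictor.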

  13. Logically automorphically equivalent knowledge bases

    OpenAIRE

    Aladova, Elena; Plotkin, Tatjana

    2017-01-01

    Knowledge base theory provides an important example of a field where applications of universal algebra and algebraic logic look very natural, and where their interaction with practical problems arising in computer science might be very productive. In this paper we study the equivalence problem for knowledge bases. Our interest is to find out how informational equivalence is related to the logical description of knowledge. Studying various equivalences of knowledge bases allows us to compare d...

  14. Stenting for curved lesions using a novel curved balloon: Preliminary experimental study.

    Science.gov (United States)

    Tomita, Hideshi; Higaki, Takashi; Kobayashi, Toshiki; Fujii, Takanari; Fujimoto, Kazuto

    2015-08-01

    Stenting may be a compelling approach to dilating curved lesions in congenital heart diseases. However, balloon-expandable stents, which are commonly used for congenital heart diseases, are usually deployed in a straight orientation. In this study, we evaluated the effect of stenting with a novel curved balloon considered to provide better conformability to the curved-angled lesion. In vitro experiments: A Palmaz Genesis® stent (Johnson & Johnson, Cordis Co, Bridgewater, NJ, USA) mounted on the Goku® curve (Tokai Medical Co, Nagoya, Japan) was dilated in vitro to observe directly the behavior of the stent and balloon assembly during expansion. Animal experiment: A short Express® Vascular SD stent (Boston Scientific Co, Marlborough, MA, USA) and a long Express® Vascular LD stent (Boston Scientific) mounted on the curved balloon were deployed in the curved vessel of a pig to observe the effect of stenting in vivo. In vitro experiments: Although the stent was dilated in a curved fashion, the stent and balloon assembly also rotated conjointly during expansion of its curved portion. In the primary stenting of the short stent, the stent was dilated with rotation of the curved portion. The excised stent conformed to the curved vessel. As the long stent could not be negotiated across the mid-portion with the balloon in expansion when it started curving, the mid-portion of the stent failed to expand fully. Furthermore, the balloon, which became entangled with the stent strut, could not be retrieved even after complete deflation. This novel curved balloon catheter might be used for implantation of the short stent in a curved lesion; however, it should not be used for primary stenting of the long stent. Post-dilation to conform the stent to the angled vessel would be safer than primary stenting irrespective of stent length. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  15. An Advantage of the Equivalent Velocity Spectroscopy for Femtsecond Pulse Radiolysis

    CERN Document Server

    Kondoh, Takafumi; Tagawa, Seiichi; Tomosada, Hiroshi; Yang Jin Feng; Yoshida, Yoichi

    2005-01-01

    For studies of electron-beam-induced ultra-fast reaction processes, a femtosecond (fs) pulse radiolysis system is under construction. To realize fs time resolution, fs electron and analyzing light pulses and a jitter compensation system are needed. An electron pulse of about 100 fs was generated by a photocathode RF gun linac and a magnetic pulse compressor. The synchronized Ti:Sapphire laser has a pulse width of about 160 fs. It is also important to avoid degradation of the time resolution caused by the velocity difference between the electron pulse and the analyzing light in the sample. In the 'equivalent velocity spectroscopy' method, the incident analyzing light is slanted toward the electron beam at an angle associated with the refractive index of the sample. Then, to overlap the light wave front and the electron pulse shape, the electron pulse shape is slanted toward the direction of travel. As a result of equivalent velocity spectroscopy of hydrated electrons using the slanted electron pulse shape, the optical absorption rise time was about 1.4 ps faster than normal electro...

  16. Use of the Master Curve methodology for real three dimensional cracks

    International Nuclear Information System (INIS)

    Wallin, Kim

    2007-01-01

    At VTT, development work has been in progress for 15 years to develop and validate testing and analysis methods applicable for fracture resistance determination from small material samples. The VTT approach is a holistic approach by which to determine static, dynamic and crack arrest fracture toughness properties either directly or by correlations from small material samples. The development work has evolved into a testing standard for fracture toughness testing in the transition region. The standard, known as the Master Curve standard, is in a way 'first of a kind', since it includes guidelines on how to properly treat the test data for use in structural integrity assessment. No standard, so far, has done this. The standard is based on the VTT approach, but presently the VTT approach goes beyond the standard. Key components in the standard are statistical expressions for describing the data scatter and for predicting a specimen's size (crack front length) effect, and an expression (Master Curve) for the fracture toughness temperature dependence. The standard, and the approach it is based upon, can be considered to represent the state of the art of small specimen fracture toughness characterization. Normally, the Master Curve parameters are determined using test specimens with 'straight' crack fronts and a comparatively uniform stress state along the crack front. This enables the use of a single KI value and a single constraint value to describe the whole specimen. For a real crack in a structure, this is usually not the case. Normally, both KI and constraint vary along the crack front, and in the case of a thermal shock even the temperature will vary along the crack front. A proper means of applying the Master Curve methodology for such cases is presented here
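
For reference, the Master Curve temperature dependence and the crack-front-length (size) adjustment take simple closed forms, as standardized in ASTM E1921; the sketch below implements those two formulas, with the temperatures used at the end being arbitrary examples.

```python
import numpy as np

def kjc_median(T, T0):
    """Master Curve median fracture toughness (MPa*sqrt(m)) for a 25 mm (1T)
    crack front at temperature T (deg C), with reference temperature T0:
    K_Jc(med) = 30 + 70 * exp(0.019 * (T - T0))."""
    return 30.0 + 70.0 * np.exp(0.019 * (np.asarray(T, dtype=float) - T0))

def size_adjust(kjc_source, b_source_mm, b_target_mm=25.0):
    """Weakest-link crack-front-length adjustment between lengths b_source
    and b_target, with K_min = 20 MPa*sqrt(m):
    K_target = 20 + (K_source - 20) * (b_source / b_target)**0.25."""
    return 20.0 + (kjc_source - 20.0) * (b_source_mm / b_target_mm) ** 0.25
```

At T = T0 the median toughness is 100 MPa*sqrt(m) by definition, and a value measured on a shorter crack front is adjusted downward when converted to the 1T reference length, reflecting the statistical size effect.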

  17. Use of the master curve methodology for real three dimensional cracks

    International Nuclear Information System (INIS)

    Wallin, K.; Rintamaa, R.

    2005-01-01

    At VTT, development work has been in progress for 15 years to develop and validate testing and analysis methods applicable for fracture resistance determination from small material samples. The VTT approach is a holistic approach by which to determine static, dynamic and crack arrest fracture toughness properties either directly or by correlations from small material samples. The development work has evolved into a testing standard for fracture toughness testing in the transition region. The standard, known as the Master Curve standard, is in a way 'first of a kind', since it includes guidelines on how to properly treat the test data for use in structural integrity assessment. No standard, so far, has done this. The standard is based on the VTT approach, but presently the VTT approach goes beyond the standard. Key components in the standard are statistical expressions for describing the data scatter and for predicting a specimen's size (crack front length) effect, and an expression (Master Curve) for the fracture toughness temperature dependence. The standard and the approach it is based upon can be considered to represent the state of the art of small specimen fracture toughness characterization. Normally, the Master Curve parameters are determined using test specimens with 'straight' crack fronts and a comparatively uniform stress state along the crack front. This enables the use of a single KI value and a single constraint value to describe the whole specimen. For a real crack in a structure, this is usually not the case. Normally, both KI and constraint vary along the crack front, and in the case of a thermal shock even the temperature will vary along the crack front. A proper means of applying the Master Curve methodology for such cases is presented here. (authors)

  18. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    Science.gov (United States)

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
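
The data-depth idea behind curve boxplots can be sketched with a simple pairwise band depth on discretized curves; the paper's actual definition and rendering pipeline are more elaborate, so this is an illustration only.

```python
from itertools import combinations
import numpy as np

def band_depth(curves):
    """Pairwise band depth for an ensemble of discretized curves.
    curves: array-like of shape (n_curves, n_points). Returns, for each
    curve, the fraction of curve pairs whose pointwise envelope fully
    contains it; the deepest curve plays the role of the boxplot median."""
    curves = np.asarray(curves, dtype=float)
    n = len(curves)
    pairs = list(combinations(range(n), 2))
    depth = np.zeros(n)
    for i in range(n):
        contained = 0
        for j, k in pairs:
            lo = np.minimum(curves[j], curves[k])
            hi = np.maximum(curves[j], curves[k])
            if np.all(lo <= curves[i]) and np.all(curves[i] <= hi):
                contained += 1
        depth[i] = contained / len(pairs)
    return depth
```

Sorting curves by depth yields the rank statistics (median, central 50% band, outliers) that generalize the whisker plot to curve ensembles.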

  19. JUMPING THE CURVE

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-01-01

    Full Text Available This paper explores the notion of jumping the curve, following from Handy's S-curve onto a new curve with new rules, policies and procedures. It claims that the curve does not generally lie in wait but has to be invented by leadership. The focus of this paper is the identification (mathematically and inferentially) of that point in time, known as the cusp in catastrophe theory, when it is time to change - pro-actively, pre-actively or reactively. These three scenarios are addressed separately and discussed in terms of the relevance of each.

  20. Equivalence of Gyn GEC-ESTRO guidelines for image guided cervical brachytherapy with EUD-based dose prescription

    International Nuclear Information System (INIS)

    Shaw, William; Rae, William ID; Alber, Markus L

    2013-01-01

    To establish a generalized equivalent uniform dose (gEUD)-based prescription method for Image Guided Brachytherapy (IGBT) that reproduces the Gyn GEC-ESTRO WG (GGE) prescription for cervix carcinoma patients on CT images with limited soft tissue resolution. The equivalence of two IGBT planning approaches was investigated in 20 patients who received external beam radiotherapy (EBT) and 5 concomitant high dose rate IGBT treatments. The GGE planning strategy based on the dose to the most exposed 2 cm³ (D2cc) was used to derive criteria for the gEUD-based planning of the bladder and rectum. The safety of the gEUD constraints in terms of GGE criteria was tested by maximizing dose to the gEUD constraints for individual fractions. gEUD constraints of 3.55 Gy for the rectum and 5.19 Gy for the bladder were derived. Rectum and bladder gEUD-maximized plans resulted in D2cc averages very similar to the initial GGE criteria. Average D2cc and gEUD values from the full treatment course were comparable for the two techniques within both sets of normal tissue constraints. The same was found for the tumor doses. The derived gEUD criteria for normal organs result in GGE-equivalent IGBT treatment plans. gEUD-based planning considers the entire dose distribution of an organ, in contrast to a single dose-volume-histogram point.
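
For context, the gEUD of a discretized organ dose distribution is a power mean of the bin doses. The sketch below assumes equal-volume dose bins and is illustrative only; it is not the planning system's implementation.

```python
import numpy as np

def geud(doses, a):
    """Generalized equivalent uniform dose for equal-volume dose bins.
    a > 1 emphasizes hot spots (serial organs such as rectum and bladder);
    a = 1 reduces to the mean dose."""
    d = np.asarray(doses, dtype=float)
    return float(np.mean(d ** a)) ** (1.0 / a)
```

Because every bin contributes, a gEUD constraint "sees" the whole dose-volume histogram rather than the single D2cc point, which is the contrast the abstract draws.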

  1. Validation of curve-fitting method for blood retention of 99mTc-GSA. Comparison with blood sampling method

    International Nuclear Information System (INIS)

    Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa

    1997-01-01

    We investigated a curve-fitting method for the rate of blood retention of 99mTc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent 99mTc-GSA scanning. After normalization of the y-intercept as 100 percent, a biexponential regression curve for the precordial time-activity curve provided the percent injected dose (%ID) of 99mTc-GSA in the blood without blood sampling. The discrepancy between %ID obtained by the curve-fitting method and that by multiple blood samples was minimal in normal volunteers: 3.1±2.1% (mean±standard deviation, n=77 samplings). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samplings). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13, vs. 31.9±2.8%, n=7). The %ID of 99mTc-GSA correlated with the plasma retention rate for indocyanine green (r=-0.869). The curve-fitting method thus agreed well with the blood sampling method for 99mTc-GSA and could be a substitute for it. (author)
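
The curve-fitting idea can be sketched as follows: once a biexponential has been fitted to the precordial time-activity curve, normalizing its y-intercept to 100 percent yields the %ID remaining in blood at any time without blood sampling. The parameter values below are hypothetical, and the fitting step itself is omitted.

```python
import math

def percent_id(t, A, alpha, B, beta):
    """%ID of tracer remaining in blood at time t (min) from a fitted
    biexponential y(t) = A*exp(-alpha*t) + B*exp(-beta*t), with the
    y-intercept y(0) = A + B normalized to 100 percent."""
    y0 = A + B
    y = A * math.exp(-alpha * t) + B * math.exp(-beta * t)
    return 100.0 * y / y0
```

Reading the value at t = 15 min off the normalized curve corresponds to the %ID(15 min) figure compared between cirrhotic patients and controls in the abstract.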

  2. Dual Smarandache Curves of a Timelike Curve lying on Unit dual Lorentzian Sphere

    OpenAIRE

    Kahraman, Tanju; Hüseyin Ugurlu, Hasan

    2016-01-01

    In this paper, we give a Darboux approximation for dual Smarandache curves of a timelike curve on the unit dual Lorentzian sphere. Firstly, we define the four types of dual Smarandache curves of a timelike curve lying on the unit dual Lorentzian sphere.

  3. Three-generation neutrino oscillations in curved spacetime

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yu-Hao, E-mail: yhzhang1994@gmail.com; Li, Xue-Qian, E-mail: lixq@nankai.edu.cn

    2016-10-15

    Three-generation MSW effect in curved spacetime is studied and a brief discussion on the gravitational correction to the neutrino self-energy is given. The modified mixing parameters and corresponding conversion probabilities of neutrinos after traveling through celestial objects of constant densities are obtained. The method to distinguish between the normal hierarchy and inverted hierarchy is discussed in this framework. Due to the gravitational redshift of energy, in some extreme situations, the resonance energy of neutrinos might be shifted noticeably and the gravitational effect on the self-energy of neutrino becomes significant at the vicinities of spacetime singularities.

  4. UV-radiation and skin cancer dose effect curves

    International Nuclear Information System (INIS)

    Henriksen, T.; Dahlback, A.; Larsen, S.H.

    1988-08-01

    Norwegian skin cancer data were used in an attempt to arrive at the dose-effect relationship for UV carcinogenesis. The Norwegian population is relatively homogeneous with regard to skin type and lives in a country where the annual effective UV dose varies by approximately 40 percent. Four different regions of the country, each with a breadth of 1° in latitude (approximately 111 km), were selected. The annual effective UV doses for these regions were calculated assuming normal ozone conditions throughout the year. The incidences of malignant melanoma and non-melanoma skin cancer (mainly basal cell carcinoma) in these regions were considered and compared to the annual UV doses. For both these types of cancer a quadratic dose-effect curve seems to be valid. Depletion of the ozone layer results in larger UV doses, which in turn may yield more skin cancer. The dose-effect curves suggest that the incidence rate will increase by an 'amplification factor' of approximately 2.
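
A quadratic dose-effect curve implies that the logarithmic sensitivity of incidence to dose, d(ln I)/d(ln D), equals 2, which is one way to read the "amplification factor": a 1% increase in annual UV dose yields roughly a 2% increase in incidence. The model and names below are illustrative.

```python
import math

def incidence(dose, k=1.0):
    """Quadratic dose-effect model for UV-induced skin cancer incidence."""
    return k * dose ** 2

def amplification_factor(dose, delta=1e-6):
    """Numerical d(ln I)/d(ln D); equals 2 for a quadratic dose-effect curve."""
    d1, d2 = dose, dose * (1 + delta)
    return (math.log(incidence(d2)) - math.log(incidence(d1))) / math.log(d2 / d1)
```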

  5. Stress analysis in curved composites due to thermal loading

    Science.gov (United States)

    Polk, Jared Cornelius

    of such a problem. It was ascertained and proven that the general, non-modified (original) version of classical lamination theory cannot be used for an analytical solution for a simply curved beam or any other structure that would require rotations of laminates out of their planes in space. Finite element analysis was used to ascertain stress variations in a simply curved beam. It was verified that these solutions reduce to the flat beam solutions as the radius of curvature of the beams tends to infinity. MATLAB was used to conduct the classical lamination theory numerical analysis. A MATLAB program was written to conduct the finite element analysis for the flat and curved beams, isotropic and composite. It does not require the incompatibility techniques used in the mechanics of isotropic materials for indeterminate structures that are equivalent to fixed-beam problems. Finally, it has the ability to enable the user to define and create unique elements not accessible in commercial software, and to modify finite element procedures to take advantage of new paradigms.

  6. Aerosol lung inhalation scintigraphy in normal subjects

    Energy Technology Data Exchange (ETDEWEB)

    Sui, Osamu; Shimazu, Hideki

    1985-03-01

    We previously reported basic and clinical evaluation of aerosol lung inhalation scintigraphy with 99mTc-millimicrosphere albumin (milli MISA) and concluded that aerosol inhalation scintigraphy with 99mTc-milli MISA was useful for routine examination. But central airway deposition of aerosol particles was found not only in patients with chronic obstructive pulmonary disease (COPD) but also in normal subjects. So we performed aerosol inhalation scintigraphy in normal subjects and evaluated their scintigrams. The subjects had normal values of FEV1.0% (more than 70%) in lung function tests, no abnormal findings on chest X-ray films and no symptoms or signs. The findings of their aerosol inhalation scintigrams were classified into 3 patterns; type I: homogeneous distribution without central airway deposit, type II: homogeneous distribution with central airway deposit, type III: inhomogeneous distribution. These patterns were compared with lung function tests. There was no significant difference between type I and type II in lung function tests. Type III differed from types I and II in its inhomogeneous distribution. This finding showed no correlation with %VC, FEV1.0%, MMF, V̇50 or V̇50/V̇25, but good correlation with V̇25 in the maximum forced expiratory flow-volume curve. The flow-volume curve is one of the sensitive methods for early detection of COPD, so the inhomogeneous distribution of type III is considered to be due to small airway dysfunction.

  7. 46 CFR 175.540 - Equivalents.

    Science.gov (United States)

    2010-10-01

    ... Safety Management (ISM) Code (IMO Resolution A.741(18)) for the purpose of determining that an equivalent... Organization (IMO) “Code of Safety for High Speed Craft” as an equivalent to compliance with applicable...

  8. Angle β of greater than 80° at the start of spirometry may identify high-quality flow volume curves.

    Science.gov (United States)

    Lian, Ningfang; Li, Li; Ren, Weiying; Jiang, Zhilong; Zhu, Lei

    2017-04-01

    The American Thoracic Society (ATS) and European Respiratory Society (ERS) emphasize a satisfactory start in maximal expiratory flow-volume (MEFV) curves and highlight subjective parameters: performance without hesitation and expiration with maximum force. We described a new parameter, angle β, for characterization of the start of the MEFV curve. Subjects completed the MEFV curve at least three times, and at least two curves met ATS/ERS quality criteria. Subjects were divided into normal, restrictive and obstructive groups according to pulmonary function test results. A tangent line was drawn from the start of the ascending limb of the MEFV curve to the x-axis, and the angle β between the tangent line and the x-axis was obtained. The relationships between the tangent of β, pulmonary function parameters (PFPs) and anthropometric data were assessed. MEFV curves with insufficient explosive effort at the start were considered poor-quality MEFV curves. In 998 subjects with high-quality spirometry, although PFPs varied among the three groups, the angle β and its tangent were similar (P > 0.05); the tangent of β did not correlate with PFPs or anthropometric measurements (P > 0.05), and the lower limit of normal (LLN) of the angle β was 80° in the group with high-quality spirometry (P < 0.05). Angle β derived from poor-quality MEFV curves was smaller than that from good-quality ones (P < 0.05). Angle β may function as a parameter to assess expiratory effort, and can be used to assess the quality of the MEFV curve start. © 2016 Asian Pacific Society of Respirology.
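
The geometric idea can be sketched as follows, taking the initial slope from the first two sampled points of the ascending limb and assuming a fixed axis scaling (the angle depends on how the flow and volume axes are scaled). Names are illustrative.

```python
import math

def angle_beta(volumes, flows):
    """Angle (degrees) between the tangent at the start of the MEFV
    ascending limb and the volume (x) axis, estimated from the first two
    sampled points; a steep, explosive start gives an angle near 90 deg."""
    slope = (flows[1] - flows[0]) / (volumes[1] - volumes[0])
    return math.degrees(math.atan(slope))
```

A start with little explosive effort rises slowly, so its estimated angle falls below the roughly 80° lower limit of normal reported in the abstract.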

  9. Wijsman Orlicz Asymptotically Ideal -Statistical Equivalent Sequences

    Directory of Open Access Journals (Sweden)

    Bipan Hazarika

    2013-01-01

    in the Wijsman sense and present some definitions which are the natural combination of the definitions of asymptotic equivalence, statistical equivalence, and -statistical equivalence of sequences in the Wijsman sense. Finally, we introduce the notion of Cesàro Orlicz asymptotically -equivalent sequences in the Wijsman sense and establish their relationship with other classes.

  10. The performance of low pressure tissue-equivalent chambers and a new method for parameterising the dose equivalent

    International Nuclear Information System (INIS)

    Eisen, Y.

    1986-01-01

    The performance of Rossi-type spherical tissue-equivalent chambers with equivalent diameters between 0.5 μm and 2 μm was tested experimentally using monoenergetic and polyenergetic neutron sources in the energy region of 10 keV to 14.5 MeV. In agreement with theoretical predictions both chambers failed to provide LET information at low neutron energies. A dose equivalent algorithm was derived that utilises the event distribution but does not attempt to correlate event size with LET. The algorithm was predicted theoretically and confirmed by experiment. The algorithm that was developed determines the neutron dose equivalent, from the data of the 0.5 μm chamber, to better than ±20% over the energy range of 30 keV to 14.5 MeV. The same algorithm also determines the dose equivalent from the data of the 2 μm chamber to better than ±20% over the energy range of 60 keV to 14.5 MeV. The efficiency of the chambers is 33 counts per μSv, or equivalently about 10 counts s⁻¹ per mSv h⁻¹. This efficiency enables the measurement of dose equivalent rates above 1 mSv h⁻¹ for an integration period of 3 s. Integrated dose equivalents can be measured as low as 1 μSv. (author)

  11. ECM using Edwards curves

    DEFF Research Database (Denmark)

    Bernstein, Daniel J.; Birkner, Peter; Lange, Tanja

    2013-01-01

    -arithmetic level are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use extended Edwards coordinates; (3) use signed-sliding-window addition-subtraction chains; (4) batch primes to increase the window size; (5) choose curves with small parameters and base points; (6) choose curves with large...
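
For context, the unified addition law that makes Edwards curves attractive at the curve-arithmetic level can be sketched over a small prime field. The parameters below are toy values for illustration; this is not the ECM implementation itself.

```python
def edwards_add(P, Q, d, p):
    """Unified addition on the Edwards curve x^2 + y^2 = 1 + d*x^2*y^2
    over F_p. The same formula handles doubling, and (0, 1) is the
    neutral element. Requires Python 3.8+ for pow(x, -1, p)."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, -1, p) % p
    return x3, y3
```

With d = 2 and p = 13, the point (1, 0) has order 4: doubling it gives (0, -1) and doubling again returns the neutral element (0, 1).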

  12. Numerical modeling of the effect of surface topology on the saturated pool nucleate boiling curve

    International Nuclear Information System (INIS)

    Unal, C.; Pasamehmetoglu, K.O.

    1993-01-01

    A numerical study of saturated pool nucleate boiling with an emphasis on the effect of surface topography is presented. The numerical model consisted of solving the three-dimensional transient heat conduction equation within the heater subjected to nucleate boiling over its upper surface. The surface topography model considered the distribution of cavity radii and cavity angles based on exponential and normal probability functions. Parametric results showed that the saturated nucleate boiling curve shifted left and became steeper with an increase in the mean cavity radius. The boiling curve was found to be sensitive to how many cavities were selected for each octagonal cell. A small variation in the statistical parameters, especially cavity radii for smooth surfaces, resulted in noticeable differences in wall superheat for a given heat flux. This result indicated that while the heat transfer coefficient increased with cavity radius, the cavity radius or height alone was not sufficient to characterize the boiling curve. It also suggested that statistical experimental data should consider large samples to characterize the surface topology. The boiling curve shifted to the right when the cavity angle was obtained using a normal distribution. This effect became less important as the number of cavities for each cell increased, because the probability of a potential cavity with a larger radius in each cell was increased. When the contact angle of the fluid decreased for a given mean cavity radius, the boiling curve shifted to the right. This shift was more pronounced at smaller mean cavity radii and decreased with increasing mean cavity radius.

  13. Equivalence in Bilingual Lexicography: Criticism and Suggestions*

    Directory of Open Access Journals (Sweden)

    Herbert Ernst Wiegand

    2011-10-01

    Full Text Available

    Abstract: A reminder of general problems in the formation of terminology, as illustrated by the German Äquivalenz (Eng. equivalence) and äquivalent (Eng. equivalent), is followed by a critical discussion of the concept of equivalence in contrastive lexicology. It is shown that especially the concept of partial equivalence is contradictory in its different manifestations. Consequently attempts are made to give a more precise indication of the concept of equivalence in metalexicography, with regard to the domain of the nominal lexicon. The problems of especially the metalexicographic concept of partial equivalence as well as that of divergence are fundamentally expounded. In conclusion the direction is indicated in which more appropriate metalexicographic versions of the concept of equivalence may be found.

    Keywords: EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE, SYNTAGM-EQUIVALENCE, ZERO EQUIVALENCE, CORRESPONDENCE

    Abstract (translated from German): Equivalence in bilingual lexicography: criticism and suggestions. After recalling general problems of concept formation, using the example of German Äquivalenz and äquivalent, concepts of equivalence in contrastive lexicology are first discussed critically. It is shown that the concept of partial equivalence in particular is contradictory in its various manifestations. More precise formulations of the concepts of equivalence in metalexicography, relating to the domain of the nominal lexicon, are then attempted. In particular, the metalexicographic concept of partial equivalence, as well as that of divergence, is fundamentally problematized. In conclusion, the direction in which more appropriate metalexicographic versions of the concept of equivalence may be found is indicated.

    Keywords (translated from German): EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE

  14. Two R curves for partially stabilized zirconia

    International Nuclear Information System (INIS)

    Rose, L.R.F.; Swain, M.V.

    1986-01-01

    The enhanced fracture toughness due to stress-induced transformation can be explained from two viewpoints: (1) the increase can be attributed to the need to supply a work of transformation, or (2) the transformation can be considered to result in internal stresses which oppose crack opening. Experimental results for magnesia-partially-stabilized zirconia are presented for the two experimental measures of toughness corresponding to these two viewpoints, namely (1) the specific work of fracture, R, and (2) the nominal stress intensity factor, K_R. It is observed that these two measures are not equivalent during the initial stage of R-curve behavior, prior to reaching steady-state cracking. The theoretical reason for this difference is discussed. In particular, it is noted that the usual definition of the crack extension force does not correspond to the experimentally measured work of fracture in the presence of stress-induced (or pre-existing) sources of internal stress.

  15. Nonequilibrium recombination after a curved shock wave

    Science.gov (United States)

    Wen, Chihyung; Hornung, Hans

    2010-02-01

    The effect of nonequilibrium recombination after a curved two-dimensional shock wave in a hypervelocity dissociating flow of an inviscid Lighthill-Freeman gas is considered. An analytical solution is obtained with the effective shock values derived by Hornung (1976) [5] and the assumption that the flow is ‘quasi-frozen’ after a thin dissociating layer near the shock. The solution gives the expression of dissociation fraction as a function of temperature on a streamline. A rule of thumb can then be provided to check the validity of binary scaling for experimental conditions and a tool to determine the limiting streamline that delineates the validity zone of binary scaling. The effects on the nonequilibrium chemical reaction of the large difference in free stream temperature between free-piston shock tunnel and equivalent flight conditions are discussed. Numerical examples are presented and the results are compared with solutions obtained with two-dimensional Euler equations using the code of Candler (1988) [10].

  16. Normal Hg uptake values in children under 4 years old

    International Nuclear Information System (INIS)

    Raynaud, C.

    1976-01-01

    At birth the child's kidney is anatomically and functionally immature, and the Hg uptake rate is only a quarter that of an adult. At 12 months this value is already three quarters of the adult value, and the final normal mature values are reached between 3 and 4 years. A curve of normal values for children under 4 years old is proposed, although, being based on only a small number of measurements, it must be regarded as provisional. [fr]

  17. Implementation of the Master Curve method in ProSACC

    Energy Technology Data Exchange (ETDEWEB)

    Feilitzen, Carl von; Sattari-Far, Iradj [Inspecta Technology AB, Stockholm (Sweden)

    2012-03-15

    Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology for evaluation of the fracture toughness in the transition region relieves the overconservatism that has been observed in using the ASME-KIC curve. One main advantage of using the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessments of components containing crack-like defects and for defect tolerance analysis. The program makes it possible to conduct assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al [1988]. The basic assumption in this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the
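
The two-parameter (Kr, Lr) check can be sketched as follows, assuming the commonly published R6 Option 1 failure assessment line; this is an illustration of the idea, not ProSACC's actual implementation.

```python
import math

def r6_option1(lr):
    """R6 Option 1 failure assessment line f(Lr)."""
    return (1.0 - 0.14 * lr ** 2) * (0.3 + 0.7 * math.exp(-0.65 * lr ** 6))

def acceptable(k_i, k_mat, load, limit_load, lr_max=1.0):
    """Two-parameter check: Kr = KI/Kmat, Lr = load/limit load.
    The assessment point must lie on or under the line, with Lr capped
    at a material-dependent cut-off (here taken as 1.0 for illustration)."""
    kr = k_i / k_mat
    lr = load / limit_load
    return lr <= lr_max and kr <= r6_option1(lr)
```

Because Kr divides by the fracture toughness, replacing a lower-bound ASME-KIC value with a Master Curve estimate moves the assessment point down and directly changes the outcome, which is why the abstract stresses the sensitivity to the applied toughness value.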

  18. Implementation of the Master Curve method in ProSACC

    International Nuclear Information System (INIS)

    Feilitzen, Carl von; Sattari-Far, Iradj

    2012-03-01

    Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology for evaluation of the fracture toughness in the transition region relieves the overconservatism that has been observed in using the ASME-KIC curve. One main advantage of using the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessments of components containing crack-like defects and for defect tolerance analysis. The program makes it possible to conduct assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al [1988]. The basic assumption in this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the

  19. Technological change in energy systems. Learning curves, logistic curves and input-output coefficients

    International Nuclear Information System (INIS)

    Pan, Haoran; Koehler, Jonathan

    2007-01-01

    Learning curves have recently been widely adopted in climate-economy models to incorporate endogenous change of energy technologies, replacing the conventional assumption of an autonomous energy efficiency improvement. However, there has been little consideration of the credibility of the learning curve. The current trend that many important energy and climate change policy analyses rely on the learning curve means that it is of great importance to critically examine the basis for learning curves. Here, we analyse the use of learning curves in energy technology, usually implemented as a simple power function. We find that the learning curve cannot separate the effects of price and technological change, cannot reflect continuous and qualitative change of both conventional and emerging energy technologies, cannot help to determine the time paths of technological investment, and misses the central role of R and D activity in driving technological change. We argue that a logistic curve of improving performance modified to include R and D activity as a driving variable can better describe the cost reductions in energy technologies. Furthermore, we demonstrate that the top-down Leontief technology can incorporate the bottom-up technologies that improve along either the learning curve or the logistic curve, through changing input-output coefficients. An application to UK wind power illustrates that the logistic curve fits the observed data better and implies greater potential for cost reduction than the learning curve does. (author)
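
The single-factor learning curve discussed here can be sketched in a few lines; the "progress ratio" per doubling of cumulative production follows directly from the power-law form (names illustrative).

```python
def learning_curve_cost(x, c0, b):
    """One-factor learning curve: unit cost C(x) = c0 * x^(-b) after
    cumulative production x (in units of the initial reference output)."""
    return c0 * x ** (-b)

def progress_ratio(b):
    """Fraction of cost remaining after each doubling of cumulative
    production, 2^(-b); the learning rate is 1 minus this ratio."""
    return 2.0 ** (-b)
```

The critique in the abstract is that this form conflates price effects with genuine technological change and has no explicit R&D driver, which is what motivates the modified logistic alternative.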

  20. Application of Fourier Analysis to the ventricular volume curve in a digital system using radioisotopic ventriculography. Study of the diastolic function

    International Nuclear Information System (INIS)

    Ricke, F.; Gonzalez, P.; Pruzzo, R.; Nagel, J.

    1987-01-01

    To assess diastolic and systolic ventricular function, a computerized method was developed using Fourier analysis on left ventricular time-activity curves. The raw ventricular curve obtained from radionuclide gated blood pool imaging was substituted by a four-harmonic curve. Valuable parameters were then calculated, especially the peak ejection rate, filling fraction and peak filling rate, which allowed clear-cut differentiation of normal subjects from patients with left ventricular hypertrophy. (author)
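
The harmonic truncation step can be sketched with an FFT: keep the mean and the first four harmonics of the cyclic time-activity curve and discard the rest. This is a minimal illustration; the clinical parameters (peak ejection rate, peak filling rate) would then be read off the fitted curve and its derivative.

```python
import numpy as np

def four_harmonic_fit(activity):
    """Replace a noisy left-ventricular time-activity curve (one cardiac
    cycle, uniformly sampled) by its mean plus first four Fourier
    harmonics, as in gated blood-pool analysis."""
    a = np.asarray(activity, dtype=float)
    spec = np.fft.rfft(a)
    spec[5:] = 0.0          # keep DC (index 0) and harmonics 1-4
    return np.fft.irfft(spec, n=len(a))
```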

  1. Radioligand assays - methods and applications. IV. Uniform regression of hyperbolic and linear radioimmunoassay calibration curves

    Energy Technology Data Exchange (ETDEWEB)

    Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)

    1980-10-01

    In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they tried to find a non-linear expression for their regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described, providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is excellent agreement between fitted and experimentally obtained calibration curves having different degrees of curvature.
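
Because the regression function is linear in its coefficients, it can be fitted by ordinary least squares on the basis (y, 1, 1/y). A minimal sketch (function name illustrative; the paper's procedure also produces statistical quality parameters not shown here):

```python
import numpy as np

def fit_ria_calibration(y, x):
    """Least-squares fit of the hyperbolic inverse regression
    x = a1*y + a0 + a_minus1/y used for RIA calibration curves.
    y: specifically bound radioactivity (must be nonzero),
    x: total antigen concentration. Returns (a1, a0, a_minus1)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    A = np.column_stack([y, np.ones_like(y), 1.0 / y])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return tuple(coef)
```

Setting a₋₁ = 0 recovers a straight calibration line, while a nonzero a₋₁ bends the curve hyperbolically, which is how one regression form covers different degrees of curvature.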

  2. Biological clearance and committed dose equivalent in pulmonary region from inhaled radioaerosols for lung scanning

    Energy Technology Data Exchange (ETDEWEB)

    Soni, P.S.; Sharma, S.M.; Raghunath, B.; Somasundaram, S.

    1987-01-01

    Biological clearance half-lives (T_b) of different 99mTc-labelled compounds from each lung have been determined, after administering the radioaerosol to normal subjects using the BARC dry aerosol generation and inhalation system. Based on these experimental clearance half-lives, the committed dose equivalent to the lungs has been computed using both the ICRP lung model and MIRD-11 values.

  3. Biological clearance and committed dose equivalent in pulmonary region from inhaled radioaerosols for lung scanning

    International Nuclear Information System (INIS)

    Soni, P.S.; Sharma, S.M.; Raghunath, B.; Somasundaram, S.

    1987-01-01

    Biological clearance half-lives (T_b) of different 99mTc-labelled compounds from each lung have been determined, after administering the radioaerosol to normal subjects using the BARC dry aerosol generation and inhalation system. Based on these experimental clearance half-lives, the committed dose equivalent to the lungs has been computed using both the ICRP lung model and MIRD-11 values. (author)
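
For context, a measured biological clearance half-life combines with the radionuclide's physical half-life into an effective half-life, which in turn scales the cumulated activity that MIRD-style dose estimates are built on. A minimal sketch assuming single-exponential clearance (not the ICRP lung model itself; names illustrative):

```python
import math

def effective_half_life(t_bio, t_phys):
    """Effective half-life combining biological clearance and physical
    decay: 1/Teff = 1/Tb + 1/Tp."""
    return t_bio * t_phys / (t_bio + t_phys)

def cumulated_activity(a0, t_bio, t_phys):
    """Cumulated activity for initial activity a0 under single-exponential
    clearance: A0 * Teff / ln 2 (time units follow the half-lives)."""
    return a0 * effective_half_life(t_bio, t_phys) / math.log(2)
```

For a short-lived tracer such as 99mTc, a very long biological half-life leaves the effective half-life close to the physical one, so the measured T_b matters most when clearance is fast.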

  4. SAPONIFICATION EQUIVALENT OF DASAMULA TAILA

    OpenAIRE

    Saxena, R. B.

    1994-01-01

    Saponification equivalent values of Dasamula taila are very useful for technical and analytical work. They give the mean molecular weight of the glycerides and acids present in Dasamula taila. Saponification equivalent values of Dasamula taila are reported in different packings.

  5. Saponification equivalent of dasamula taila.

    Science.gov (United States)

    Saxena, R B

    1994-07-01

    Saponification equivalent values of Dasamula taila are very useful for technical and analytical work. They give the mean molecular weight of the glycerides and acids present in Dasamula taila. Saponification equivalent values of Dasamula taila are reported in different packings.

  6. Some spectral equivalences between Schroedinger operators

    International Nuclear Information System (INIS)

    Dunning, C; Hibberd, K E; Links, J

    2008-01-01

    Spectral equivalences of the quasi-exactly solvable sectors of two classes of Schroedinger operators are established, using Gaudin-type Bethe ansatz equations. In some instances the results can be extended leading to full isospectrality. In this manner we obtain equivalences between PT-symmetric problems and Hermitian problems. We also find equivalences between some classes of Hermitian operators

  7. Gauge equivalence of the Gross Pitaevskii equation and the equivalent Heisenberg spin chain

    Science.gov (United States)

    Radha, R.; Kumar, V. Ramesh

    2007-11-01

    In this paper, we construct an equivalent spin chain for the Gross-Pitaevskii equation with quadratic potential and exponentially varying scattering lengths using gauge equivalence. We have then generated the soliton solutions for the spin components S3 and S-. We find that the spin solitons for S3 and S- can be compressed for exponentially growing eigenvalues while they broaden out for decaying eigenvalues.

  8. Diagnostic value of curved multiplanar reformatted images in multislice CT for the detection of resectable pancreatic ductal adenocarcinoma

    International Nuclear Information System (INIS)

    Fukushima, Hiromichi; Takada, Akira; Mori, Yoshimi; Suzuki, Kojiro; Sawaki, Akiko; Iwano, Shingo; Satake, Hiroko; Ota, Toyohiro; Ishigaki, Takeo; Itoh, Shigeki; Ikeda, Mitsuru

    2006-01-01

    The purpose of this study was to assess the usefulness of curved multiplanar reformatted (MPR) images obtained by multislice CT for the depiction of the main pancreatic duct (MPD) and detection of resectable pancreatic ductal adenocarcinoma. This study included 28 patients with pancreatic carcinoma (size range 12-40 mm) and 22 without. Curved MPR images with 0.5-mm continuous slices were generated along the long axis of the pancreas from pancreatic-phase images with a 0.5- or 1-mm slice thickness. Seven blinded readers independently interpreted three sets of images (axial images, curved MPR images, and both axial and curved MPR images) in scrolling mode. The depiction of the MPD and the diagnostic performance for the detection of carcinoma were statistically compared among these images. MPR images were significantly superior to axial images in depicting the MPD, and the use of both axial and MPR images resulted in further significant improvements. For the detection of carcinoma, MPR images were equivalent to axial images, and the diagnostic performance was significantly improved by the use of both axial and MPR images. High-resolution curved MPR images can improve the depiction of the MPD and the diagnostic performance for the detection of carcinoma compared with axial images alone. (orig.)

  9. Ignition Delay of Combustible Materials in Normoxic Equivalent Environments

    Science.gov (United States)

    McAllister, Sara; Fernandez-Pello, Carlos; Ruff, Gary; Urban, David

    2009-01-01

Material flammability is an important factor in determining the pressure and composition (fraction of oxygen and nitrogen) of the atmosphere in the habitable volume of exploration vehicles and habitats. The method chosen in this work to quantify the flammability of a material is by its ease of ignition. The ignition delay time was defined as the time it takes a combustible material to ignite after it has been exposed to an external heat flux. Previous work in the Forced Ignition and Spread Test (FIST) apparatus has shown that the ignition delay in the currently proposed space exploration atmosphere (approximately 58.6 kPa and 32% oxygen concentration) is reduced by 27% compared to the standard atmosphere used in the Space Shuttle and Space Station. In order to determine whether there is a safer environment in terms of material flammability, a series of piloted ignition delay tests using polymethylmethacrylate (PMMA) was conducted in the FIST apparatus to extend the work over a range of possible exploration atmospheres. The exploration atmospheres considered were the normoxic equivalents, i.e. reduced pressure conditions with a constant partial pressure of oxygen. The ignition delay time was seen to decrease as the pressure was reduced along the normoxic curve. The minimum ignition delay observed in the normoxic equivalent environments was nearly 30% lower than in standard atmospheric conditions. The ignition delay in the proposed exploration atmosphere is only slightly larger than this minimum. In terms of material flammability, normoxic environments with a higher pressure relative to the proposed pressure would be desired.
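
The normoxic-equivalent idea is simple partial-pressure arithmetic, sketched below. The function and constant names are illustrative; the 58.6 kPa / 32% figures are taken from the abstract.

```python
SEA_LEVEL_KPA = 101.325
O2_FRACTION_AIR = 0.209

def o2_partial_pressure(total_kpa, o2_fraction):
    """Partial pressure of oxygen in an O2/N2 atmosphere."""
    return total_kpa * o2_fraction

# Sea-level reference vs. the proposed exploration atmosphere (~58.6 kPa, 32% O2):
sea_level_po2 = o2_partial_pressure(SEA_LEVEL_KPA, O2_FRACTION_AIR)  # ~21.2 kPa
proposed_po2 = o2_partial_pressure(58.6, 0.32)                       # ~18.8 kPa
# A strictly normoxic equivalent holds pO2 constant, so the required O2
# fraction at a reduced total pressure is sea_level_po2 / total_kpa.
```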

  10. Measurement of activated rCBF by the 133Xe inhalation technique: a comparison of total versus partial curve analysis

    International Nuclear Information System (INIS)

    Leli, D.A.; Katholi, C.R.; Hazelrig, J.B.; Falgout, J.C.; Hannay, H.J.; Wilson, E.M.; Wills, E.L.; Halsey, J.H. Jr.

    1985-01-01

An initial assessment of the differential sensitivity of total versus partial curve analysis in estimating task-related focal changes in cortical blood flow measured by the 133Xe inhalation technique was accomplished by comparing the patterns during the performance of two sensorimotor tasks by normal subjects. The validity of these patterns was evaluated by comparing them to the activation patterns expected from activation studies with the intra-arterial technique and the patterns expected from neuropsychological research literature. Subjects were 10 young adult nonsmoking healthy male volunteers. They were administered two tasks having identical sensory and cognitive components but different response requirements (oral versus manual). The regional activation patterns produced by the tasks varied with the method of curve analysis. The activation produced by the two tasks was very similar to that predicted from the research literature only for total curve analysis. To the extent that the predictions are correct, these data suggest that the 133Xe inhalation technique is more sensitive to regional flow changes when flow parameters are estimated from the total head curve. The utility of the total head curve analysis will be strengthened if similar sensitivity is demonstrated in future studies assessing normal subjects and patients with neurological and psychiatric disorders.

  11. Comparison of the Pentacam equivalent keratometry reading and IOL Master keratometry measurement in intraocular lens power calculations.

    Science.gov (United States)

    Karunaratne, Nicholas

    2013-12-01

To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master, Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or SRKT formula (P = 0.47). The lowest mean absolute refraction error for Holladay 2 equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for SRKT equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of IOL Master and Pentacam equivalent keratometry reading, best agreement was with Holladay 2 and equivalent keratometry reading 4.5 mm, with mean of the difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements. However, the keratometry measurements of the IOL Master and Pentacam equivalent keratometry reading 4.5 mm may be
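
The "mean of the difference" and "95% limits of agreement" quoted in this record are the standard Bland-Altman statistics for method comparison; a minimal sketch (function name and the example data are illustrative, not the study's measurements):

```python
import math

def limits_of_agreement(diffs):
    """Bland-Altman analysis: mean paired difference and 95% limits of
    agreement, i.e. mean +/- 1.96 * SD of the differences."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

# Usage: diffs would be per-eye refraction-error differences (IOL Master minus
# Pentacam EKR) in dioptres.
```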

  12. The approximation of the normal distribution by means of chaotic expression

    International Nuclear Information System (INIS)

    Lawnik, M

    2014-01-01

The approximation of the normal distribution by means of a chaotic expression is achieved using the Weierstrass function where, for a certain set of parameters, the density of the derived recurrence gives a good approximation of the bell curve.

  13. Pinning of a curved flux line by macroscopic inclusions in a type II superconductor

    International Nuclear Information System (INIS)

    Shehata, L.N.; Saif, A.G.

    1983-08-01

The pinning force is calculated as a function of the distance between a curved (or straight) flux line and the centre of a macroscopic superconducting (or normal) ellipsoidal inclusion. When the ellipsoid tends to a spherical inclusion, the results agree with those previously obtained. (author)

  14. A study on lead equivalent

    International Nuclear Information System (INIS)

    Lin Guanxin

    1991-01-01

A study of the way in which the lead equivalent of lead glass changes with X-ray or γ-ray energy is described. The reason for this change is discussed and a new method of testing lead equivalent is suggested.

  15. Receiver operating characteristic (ROC) curves and the definition of threshold levels to diagnose coronary artery disease on electrocardiographic stress testing. Part I: The use of ROC curves in diagnostic medicine and electrocardiographic markers of ischaemia.

    Science.gov (United States)

    Barnabei, Luca; Marazìa, Stefania; De Caterina, Raffaele

    2007-11-01

    A common problem in diagnostic medicine, when performing a diagnostic test, is to obtain an accurate discrimination between 'normal' cases and cases with disease, owing to the overlapping distributions of these populations. In clinical practice, it is exceedingly rare that a chosen cut point will achieve perfect discrimination between normal cases and those with disease, and one has to select the best compromise between sensitivity and specificity by comparing the diagnostic performance of different tests or diagnostic criteria available. Receiver operating characteristic (or receiver operator characteristic, ROC) curves allow systematic and intuitively appealing descriptions of the diagnostic performance of a test and a comparison of the performance of different tests or diagnostic criteria. This review will analyse the basic principles underlying ROC curves and their specific application to the choice of optimal parameters on exercise electrocardiographic (ECG) stress testing. Part I will focus on theoretical description and analysis along with reviewing the common problems related to the diagnosis of myocardial ischaemia by means of exercise ECG stress testing. Part II will be devoted to applying ROC curves to available diagnostic criteria through the analysis of ECG stress test parameters.
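
The trade-off described above can be made concrete with a small sketch: sweep candidate cut points, record sensitivity and specificity at each, and pick the cut point maximising Youden's J. The function names and the sample scores are illustrative assumptions, not data from this review.

```python
def roc_points(scores_diseased, scores_normal, thresholds):
    """Sensitivity and specificity at each candidate cut point, assuming a
    higher score means 'more abnormal' (e.g. mm of ST-segment depression)."""
    points = []
    for t in thresholds:
        sens = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        spec = sum(s < t for s in scores_normal) / len(scores_normal)
        points.append((t, sens, spec))
    return points

def best_youden(points):
    """Cut point maximising Youden's J = sensitivity + specificity - 1."""
    return max(points, key=lambda p: p[1] + p[2] - 1.0)
```

Plotting sensitivity against 1 - specificity over all thresholds traces the ROC curve itself.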

  16. A novel method of calculating the energy deposition curve of nanosecond pulsed surface dielectric barrier discharge

    International Nuclear Information System (INIS)

    He, Kun; Wang, Xinying; Lu, Jiayu; Cui, Quansheng; Pang, Lei; Di, Dongxu; Zhang, Qiaogen

    2015-01-01

Obtaining the energy deposition curve is very important in the fields to which nanosecond pulse dielectric barrier discharges (NPDBDs) are applied. It helps the understanding of the discharge physics and fast gas heating. In this paper, an equivalent circuit model, composed of three capacitances, is introduced and a method of calculating the energy deposition curve is proposed for a nanosecond pulse surface dielectric barrier discharge (NPSDBD) plasma actuator. The capacitance C_d and the energy deposition curve E_R are determined by mathematically proving that the mapping from C_d to E_R is bijective and numerically searching for the one C_d that satisfies the requirement for E_R to be a monotonically non-decreasing function. It is found that the value of the capacitance C_d varies with the amplitude of the applied pulse voltage due to the change of discharge area and is dependent on the polarity of the applied voltage. The bijectiveness of the mapping from C_d to E_R in nanosecond pulse volumetric dielectric barrier discharge (NPVDBD) is demonstrated and the feasibility of the application of the new method to NPVDBD is validated. This preliminarily shows a high possibility of developing a unified approach to calculate the energy deposition curve in NPDBD. (paper)
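
The selection criterion described above can be sketched with a deliberately simplified one-capacitance version of the model (the paper's circuit has three capacitances); the function names, the rectangle-rule integration, and the sample waveform are illustrative assumptions.

```python
def deposited_energy(ts, vs, cs, c_d):
    """Energy deposition curve for a one-capacitance sketch of the model:
    E_R(t) = integral of V*I dt minus the capacitively stored energy
    0.5 * C_d * V(t)^2.  ts: times, vs: voltages, cs: currents."""
    e_in, e_r, prev_t = 0.0, [], ts[0]
    for t, v, i in zip(ts, vs, cs):
        e_in += v * i * (t - prev_t)  # rectangle-rule input energy
        prev_t = t
        e_r.append(e_in - 0.5 * c_d * v * v)
    return e_r

def is_non_decreasing(seq, tol=0.0):
    """The selection criterion for C_d: E_R must never decrease."""
    return all(b - a >= -tol for a, b in zip(seq, seq[1:]))
```

A search over candidate C_d values would keep the largest (or unique) value for which `is_non_decreasing` holds.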

  17. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    Science.gov (United States)

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated.

  18. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program

    International Nuclear Information System (INIS)

    Afouxenidis, D.; Polymeris, G. S.; Tsirliganis, N. C.; Kitis, G.

    2012-01-01

This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the Glow Curve Analysis Intercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated. (authors)
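
What Solver minimises in such a deconvolution is a sum of squared residuals between the measured curve and a sum of component peaks. A minimal sketch, using Gaussian-shaped stand-ins for the actual kinetic glow-peak expressions (all names and parameters are illustrative):

```python
import math

def peak(t, amp, center, width):
    """One glow-peak component (Gaussian stand-in for the kinetic expression)."""
    return amp * math.exp(-((t - center) ** 2) / (2 * width ** 2))

def model(t, params):
    """Composite glow curve: sum of components; params = [(amp, center, width), ...]."""
    return sum(peak(t, *p) for p in params)

def sum_sq_residuals(ts, ys, params):
    """The objective a Solver-style optimiser drives toward zero."""
    return sum((y - model(t, params)) ** 2 for t, y in zip(ts, ys))
```

In Excel the parameter cells play the role of `params` and Solver adjusts them to minimise the residual cell.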

  19. Cell survival of human tumor cells compared with normal fibroblasts following 60Co gamma irradiation

    International Nuclear Information System (INIS)

    Lloyd, E.L.; Henning, C.B.; Reynolds, S.D.; Holmblad, G.L.; Trier, J.E.

    1982-01-01

Three tumor cell lines, two of which were shown to be HeLa cells, were irradiated with 60Co gamma irradiation, together with two cell cultures of normal human diploid fibroblasts. Cell survival was studied in three different experiments over a dose range of 2 to 14 gray. All the tumor cell lines showed a very wide shoulder in the dose response curves, in contrast to the extremely narrow shoulder of the normal fibroblasts. In addition, the D_0 values for the tumor cell lines were somewhat greater. These two characteristics of the dose response curves resulted in up to 2 orders of magnitude less sensitivity for cell inactivation of HeLa cells when compared with normal cells at high doses (10 gray). Because of these large differences, the extrapolation of results from the irradiation of HeLa cells concerning the mechanisms of normal cell killing should be interpreted with great caution.
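
Shoulder width and slope differences of this kind are commonly described with the linear-quadratic survival model; a minimal sketch with purely illustrative (not fitted) parameters:

```python
import math

def surviving_fraction(dose_gy, alpha, beta):
    """Linear-quadratic cell-survival model: S = exp(-(alpha*D + beta*D^2)).
    A wide shoulder corresponds to a small alpha relative to beta."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

# Illustrative parameters only; at 10 Gy the broad-shouldered curve is far
# less sensitive than the steep one:
tumour_sf = surviving_fraction(10.0, 0.10, 0.02)      # broad shoulder
fibroblast_sf = surviving_fraction(10.0, 0.50, 0.02)  # steeper response
```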

  20. Analytical and numerical construction of equivalent cables.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R; Tucker, G

    2003-08-01

The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite called the equivalent cable. The equivalent cable is an unbranched model of a dendrite together with a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form for an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix for the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.
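
The "symmetrised tri-diagonal" canonical form relies on a standard linear-algebra fact: a tridiagonal matrix whose paired off-diagonal entries have positive products is similar, via a diagonal transform, to a symmetric tridiagonal matrix with the same diagonal and eigenvalues. This sketch shows only that symmetrisation step, not the paper's Laplace-transform construction; names are illustrative.

```python
import math

def symmetrise_tridiagonal(lower, diag, upper):
    """Diagonal similarity transform of a tridiagonal matrix.

    Requires lower[i] * upper[i] > 0 for each off-diagonal pair.  The similar
    symmetric matrix keeps the same diagonal, while each off-diagonal pair is
    replaced by its geometric mean; eigenvalues are preserved."""
    off = [math.sqrt(l * u) for l, u in zip(lower, upper)]
    return list(diag), off
```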

  1. Establishing Substantial Equivalence: Transcriptomics

    Science.gov (United States)

    Baudo, María Marcela; Powers, Stephen J.; Mitchell, Rowan A. C.; Shewry, Peter R.

    Regulatory authorities in Western Europe require transgenic crops to be substantially equivalent to conventionally bred forms if they are to be approved for commercial production. One way to establish substantial equivalence is to compare the transcript profiles of developing grain and other tissues of transgenic and conventionally bred lines, in order to identify any unintended effects of the transformation process. We present detailed protocols for transcriptomic comparisons of developing wheat grain and leaf material, and illustrate their use by reference to our own studies of lines transformed to express additional gluten protein genes controlled by their own endosperm-specific promoters. The results show that the transgenes present in these lines (which included those encoding marker genes) did not have any significant unpredicted effects on the expression of endogenous genes and that the transgenic plants were therefore substantially equivalent to the corresponding parental lines.

  2. On uncertainties in definition of dose equivalent

    International Nuclear Information System (INIS)

    Oda, Keiji

    1995-01-01

The author has always entertained the doubt that, in a neutron field, if the measured value of the absorbed dose with a tissue equivalent ionization chamber is 1.02±0.01 mGy, may the dose equivalent be taken as 10.2±0.1 mSv? Should it be 10.2 or 11? The author considers it is rather 10 or 20. Even if effort is exerted on the precision measurement of absorbed dose, if the coefficient by which it is multiplied is not precise, this is meaningless. [Absorbed dose] x [Radiation quality factor] = [Dose equivalent] seems peculiar. How accurately can dose equivalent be evaluated? The descriptions related to uncertainties in the publications of ICRU and ICRP are introduced, which concern the radiation quality factor, the accuracy of measuring dose equivalent and so on. Dose equivalent shows the criterion for the degree of risk, or it is considered only as a controlling quantity. The description in the ICRU report of 1973 related to dose equivalent and its unit is cited. It was concluded that dose equivalent can be considered only as the absorbed dose multiplied by a dimensionless factor. The author presented the questions. (K.I.)
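
The note's arithmetic can be made concrete: a precisely measured absorbed dose does not yield a precise dose equivalent when the quality factor itself is uncertain. The function name and the factor-of-two spread on Q are illustrative of the author's point, not recommended values.

```python
def dose_equivalent_msv(absorbed_dose_mgy, quality_factor):
    """H = D * Q: dose equivalent (mSv) from absorbed dose (mGy) and a
    dimensionless radiation quality factor."""
    return absorbed_dose_mgy * quality_factor

# D = 1.02 +/- 0.01 mGy is measured to ~1%, but if Q is only known to within
# a factor of ~2 (say between 10 and 20), the precision in D is moot:
h_low = dose_equivalent_msv(1.02, 10)    # 10.2 mSv
h_high = dose_equivalent_msv(1.02, 20)   # 20.4 mSv
```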

  3. Curved wall-jet burner for synthesizing titania and silica nanoparticles

    KAUST Repository

    Ismail, Mohamed

    2015-01-01

    A novel curved wall-jet (CWJ) burner was designed for flame synthesis, by injecting precursors through a center tube and by supplying fuel/air mixtures as an annular-inward jet for rapid mixing of the precursors in the reaction zone. Titanium dioxide (TiO2) and silicon dioxide (SiO2) nanoparticles were produced in ethylene (C2H4)/air premixed flames using titanium tetraisopropoxide (TTIP) and hexamethyldisiloxane (HMDSO) as the precursors, respectively. Particle image velocimetry measurements confirmed that the precursors can be injected into the flames without appreciably affecting flow structure. The nanoparticles were characterized using X-ray diffraction, Raman spectroscopy, the Brunauer-Emmett-Teller (BET) method, and high-resolution transmission electron microscopy. In the case of TiO2, the phase of nanoparticles could be controlled by adjusting the equivalence ratio, while the particle size was dependent on the precursor loading rate and the flame temperature. The synthesized TiO2 nanoparticles exhibited high crystallinity and the anatase phase was dominant at high equivalence ratios (φ > 1.3). In the case of SiO2, the particle size could be controlled from 11 to 18 nm by adjusting the precursor loading rate. © 2014 The Combustion Institute. Published by Elsevier Inc. All rights reserved.

  4. Is It Time to Change Our Reference Curve for Femur Length? Using the Z-Score to Select the Best Chart in a Chinese Population

    Science.gov (United States)

    Yang, Huixia; Wei, Yumei; Su, Rina; Wang, Chen; Meng, Wenying; Wang, Yongqing; Shang, Lixin; Cai, Zhenyu; Ji, Liping; Wang, Yunfeng; Sun, Ying; Liu, Jiaxiu; Wei, Li; Sun, Yufeng; Zhang, Xueying; Luo, Tianxia; Chen, Haixia; Yu, Lijun

    2016-01-01

Objective To use Z-scores to compare different charts of femur length (FL) applied to our population with the aim of identifying the most appropriate chart. Methods A retrospective study was conducted in Beijing. Fifteen hospitals in Beijing were chosen as clusters using a systemic cluster sampling method, in which 15,194 pregnant women delivered from June 20th to November 30th, 2013. The measurements of FL in the second and third trimester were recorded, as well as the last measurement obtained before delivery. Based on the inclusion and exclusion criteria, we identified FL measurements from 19996 ultrasounds from 7194 patients between 11 and 42 weeks gestation. The FL data were then transformed into Z-scores that were calculated using three series of reference equations obtained from three reports: Leung TN, Pang MW et al (2008); Chitty LS, Altman DG et al (1994); and Papageorghiou AT et al (2014). Each Z-score distribution was presented as the mean and standard deviation (SD). Skewness and kurtosis were compared with the standard normal distribution using the Kolmogorov-Smirnov test. The histogram of their distributions was superimposed on the non-skewed standard normal curve (mean = 0, SD = 1) to provide a direct visual impression. Finally, the sensitivity and specificity of each reference chart for identifying fetuses below the 5th or above the 95th percentile (based on the observed distribution of Z-scores) were calculated. The Youden index was also listed. A scatter diagram with the 5th, 50th, and 95th percentile curves calculated from and superimposed on each reference chart was presented to provide a visual impression. Results The three Z-score distribution curves appeared to be normal, but none of them matched the expected standard normal distribution. In our study, the Papageorghiou reference curve provided the best results, with a sensitivity of 100% for identifying fetuses with measurements below the 5th or above the 95th percentile, and specificities of 99.9% and 81.5%, respectively. 
Conclusions It
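
The Z-score transform used above is straightforward once a chart supplies a gestational-age-specific mean and SD; a minimal sketch (function names and the 1.645 normal-quantile cut-off are illustrative of the method, not the study's code):

```python
def z_score(measured_mm, ref_mean_mm, ref_sd_mm):
    """Z = (observed FL - chart mean) / chart SD at the same gestational age."""
    return (measured_mm - ref_mean_mm) / ref_sd_mm

def outside_central_90(z):
    """Approximate 'below 5th or above 95th percentile' flag for a normal
    reference: |Z| > 1.645."""
    return abs(z) > 1.645
```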

  5. Equivalent linear and nonlinear site response analysis for design and risk assessment of safety-related nuclear structures

    International Nuclear Information System (INIS)

    Bolisetti, Chandrakanth; Whittaker, Andrew S.; Mason, H. Benjamin; Almufti, Ibrahim; Willford, Michael

    2014-01-01

    Highlights: • Performed equivalent linear and nonlinear site response analyses using industry-standard numerical programs. • Considered a wide range of sites and input ground motions. • Noted the practical issues encountered while using these programs. • Examined differences between the responses calculated from different programs. • Results of biaxial and uniaxial analyses are compared. - Abstract: Site response analysis is a precursor to soil-structure interaction analysis, which is an essential component in the seismic analysis of safety-related nuclear structures. Output from site response analysis provides input to soil-structure interaction analysis. Current practice in calculating site response for safety-related nuclear applications mainly involves the equivalent linear method in the frequency-domain. Nonlinear time-domain methods are used by some for the assessment of buildings, bridges and petrochemical facilities. Several commercial programs have been developed for site response analysis but none of them have been formally validated for large strains and high frequencies, which are crucial for the performance assessment of safety-related nuclear structures. This study sheds light on the applicability of some industry-standard equivalent linear (SHAKE) and nonlinear (DEEPSOIL and LS-DYNA) programs across a broad range of frequencies, earthquake shaking intensities, and sites ranging from stiff sand to hard rock, all with a focus on application to safety-related nuclear structures. Results show that the equivalent linear method is unable to reproduce the high frequency acceleration response, resulting in almost constant spectral accelerations in the short period range. Analysis using LS-DYNA occasionally results in some unrealistic high frequency acceleration ‘noise’, which can be removed by smoothing the piece-wise linear backbone curve. Analysis using DEEPSOIL results in abrupt variations in the peak strains of consecutive soil layers

  6. Effects of tidal distortion on binary-star velocity curves and ellipsoidal variation

    International Nuclear Information System (INIS)

    Wilson, R.E.; Sofia, S.

    1976-01-01

    Radial velocity curves for the more massive components of binaries with extreme mass ratios can show a large distortion due to tides, as first recognized by Sterne. Binaries in which the effect is large should be rare because nearly all such binaries would be in the rapid phase of mass transfer. However, the optical counterparts of some X-ray binaries may show the effect, which would then serve as a new means of extracting considerable information from the observations. The essential parts of the computational procedure are given. Light curves for ellipsoidal variables with extreme mass ratios were also computed, and were found to be less sinusoidal than those with normal mass ratios

  7. Fabrication of multi-focal microlens array on curved surface for wide-angle camera module

    Science.gov (United States)

    Pan, Jun-Gu; Su, Guo-Dung J.

    2017-08-01

In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, whereas our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm and the diagonal full field of view is about 100 degrees. In order to make the critical microlens array, we used inkjet printing to control the surface shape of each microlens to achieve different focal lengths, and used a replication method to form the curved hexagonal microlens array.

  8. The effect of earthworm coprolites on the soil water retention curve

    Science.gov (United States)

    Smagin, A. V.; Prusak, A. V.

    2008-06-01

The effect of earthworm coprolites on the water retention curves in soils of different geneses and textures was investigated by the method of equilibrium centrifuging. Coprolites sampled in the field were compared with the surrounding soil. The effect of earthworms on a soddy-podzolic light loamy soil (from Moscow oblast) was comprehensively analyzed in the course of a special model experiment in a laboratory. This experiment was necessary because it was difficult to separate the coprolites from the soil, in which additional coprolites could appear under natural conditions. In all the variants of the experiment, the differences between the water retention curves of the coprolites and the surrounding soil (or control substrates unaffected by earthworms) were statistically significant. The development of coprolites favored a considerable increase (up to 20 wt.% and more) of the soil water retention capacity at equivalent water potentials within the range from 0 to -1000 kPa. In most cases, the soil water retention capacity increased within the entire range of the soil moisture contents. This could be explained by the fact that strongly swelling hygroscopic plant remains (detritus) were included in the coprolites and by the formation of a specific highly porous aggregate structure.
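
Water retention curves of the kind compared here are often summarised with the van Genuchten parameterisation; the sketch below is a generic illustration of that form, not the model used in this study, and all parameter values are hypothetical.

```python
def van_genuchten_theta(psi_kpa, theta_r, theta_s, alpha, n):
    """Volumetric water content theta(psi) in the van Genuchten form (one
    common parameterisation); psi is the matric potential magnitude in kPa,
    theta_r/theta_s are residual/saturated water contents."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(psi_kpa)) ** n) ** m
```

Fitting such a curve to the coprolite and bulk-soil data would make the retention increase quantifiable over the whole 0 to -1000 kPa range.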

  9. Analyzing Multiple-Choice Questions by Model Analysis and Item Response Curves

    Science.gov (United States)

    Wattanakasiwich, P.; Ananta, S.

    2010-07-01

In physics education research, the main goal is to improve physics teaching so that most students understand physics conceptually and are able to apply concepts in solving problems. Therefore, many multiple-choice instruments have been developed to probe students' conceptual understanding of various topics. Two techniques, model analysis and item response curves, were used to analyze students' responses from the Force and Motion Conceptual Evaluation (FMCE). For this study, FMCE data from more than 1000 students at Chiang Mai University were collected over the past three years. With model analysis, we can obtain students' alternative knowledge and the probabilities that students use such knowledge in a range of equivalent contexts. The model analysis consists of two algorithms: concentration factor and model estimation. This paper only presents results from using the model estimation algorithm to obtain a model plot. The plot helps to identify whether a class model state is in the misconception region or not. An item response curve (IRC), derived from item response theory, is a plot of the percentage of students selecting a particular choice versus their total score. Pros and cons of both techniques are compared and discussed.
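
An item response curve is easy to tabulate from raw responses: bin students by total score and compute, per bin, the fraction choosing each answer. A minimal sketch (function name and the tiny data set are illustrative):

```python
from collections import defaultdict

def item_response_curves(total_scores, choices):
    """For each total-score bin, the fraction of students selecting each
    answer choice; plotting these fractions against score gives the IRC."""
    counts = defaultdict(lambda: defaultdict(int))
    for score, choice in zip(total_scores, choices):
        counts[score][choice] += 1
    return {score: {c: n / sum(by_choice.values()) for c, n in by_choice.items()}
            for score, by_choice in counts.items()}
```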

  10. Evaluation of left ventricular diastolic function by appreciating the shape of time activity curve

    International Nuclear Information System (INIS)

    Nishimura, Tohru; Taya, Makoto; Shimoyama, Katsuya; Sasaki, Akira; Mizuno, Haruyoshi; Tahara, Yorio; Ono, Akifumi; Ishikawa, Kyozo

    1993-01-01

To determine left ventricular diastolic function (LVDF), the shape of the time activity curve and its primary differential curve, as acquired by Tc-99m radionuclide angiography, were visually assessed. The study population consisted of 1647 patients with heart disease, such as hypertension, ischemic heart disease, cardiomyopathy and valvular disease. Fifty-six other patients served as controls. The LVDF was divided into 4 degrees: 0=normal, I=slight disturbance, II=moderate disturbance, and III=severe disturbance. LVDF variables, including time to peak filling (TPF), TPF/time to end-systole, peak filling rate (PFR), PFR/t, 1/3 filling fraction (1/3 FR), and 1/3 FR/t, were calculated from the time activity curve. There was no definitive correlation between each variable and age or heart rate. Regarding these LVDF variables, except for 1/3 FR, there was no significant difference between group 0 of the heart disease patients and the control group. Among groups 0-III of heart disease patients, there were significant differences in LVDF variables. Visual assessment concurred with left ventricular ejection fraction, PFR/end-diastolic curve, and filling rate/end-diastolic curve. Visual assessment using the time activity curve was considered useful in the semiquantitative determination of early diastolic function. (N.K.)

  11. Equivalent Simplification Method of Micro-Grid

    OpenAIRE

    Cai Changchun; Cao Xiangqin

    2013-01-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into a distribution network. The method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation and wind generation are reduced to equivalents and connected in parallel at the point of common coupling. A micro-grid system is built, and three-phase and single-phase grounded faults are per...

  12. Aspects of quantum field theory in curved space-time

    International Nuclear Information System (INIS)

    Fulling, S.A.

    1989-01-01

    The theory of quantum fields on curved spacetimes has attracted great attention since the discovery, by Stephen Hawking, of black-hole evaporation. It remains an important subject for the understanding of such contemporary topics as inflationary cosmology, quantum gravity and superstring theory. The topics covered include normal-mode expansions for a general elliptic operator, Fock space, the Casimir effect, the Klein 'paradox', particle definition and particle creation in expanding universes, asymptotic expansion of Green's functions and heat kernels, and renormalization of the stress tensor. (author)

  13. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve....

  14. Equivalence relations and the reinforcement contingency.

    Science.gov (United States)

    Sidman, M

    2000-07-01

    Where do equivalence relations come from? One possible answer is that they arise directly from the reinforcement contingency. That is to say, a reinforcement contingency produces two types of outcome: (a) 2-, 3-, 4-, 5-, or n-term units of analysis that are known, respectively, as operant reinforcement, simple discrimination, conditional discrimination, second-order conditional discrimination, and so on; and (b) equivalence relations that consist of ordered pairs of all positive elements that participate in the contingency. This conception of the origin of equivalence relations leads to a number of new and verifiable ways of conceptualizing equivalence relations and, more generally, the stimulus control of operant behavior. The theory is also capable of experimental disproof.
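    The claim that a contingency yields equivalence relations consisting of ordered pairs of participating elements can be illustrated computationally: a small union-find sketch that closes a set of hypothetical stimulus pairs under reflexivity, symmetry and transitivity (the stimulus names are invented for illustration).

```python
def equivalence_classes(pairs):
    """Merge ordered pairs into equivalence classes (the reflexive,
    symmetric, transitive closure) using union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in pairs:
        union(a, b)

    classes = {}
    for x in parent:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# Hypothetical conditional discriminations: sample -> comparison pairs
pairs = [("A1", "B1"), ("B1", "C1"), ("A2", "B2")]
classes = equivalence_classes(pairs)
```

    Here the trained pairs (A1,B1) and (B1,C1) yield the derived class {A1, B1, C1}, mirroring how untrained relations such as (A1,C1) emerge from the trained ones.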

  15. Equivalence Principle, Higgs Boson and Cosmology

    Directory of Open Access Journals (Sweden)

    Mauro Francaviglia

    2013-05-01

    Full Text Available We discuss here possible tests for Palatini f(R)-theories together with their implications for different formulations of the Equivalence Principle. We shall show that Palatini f(R)-theories obey the Weak Equivalence Principle and violate the Strong Equivalence Principle. The violations of the Strong Equivalence Principle vanish in vacuum (and purely electromagnetic) solutions as well as on short time scales with respect to the age of the universe. However, we suggest that a framework based on Palatini f(R)-theories is more general than standard General Relativity (GR) and it sheds light on the interpretation of data and results in a way which is more model independent than standard GR itself.

  16. Parameterization of electrical equivalent circuits for pem fuel cells; Parametrierung elektrischer Aequivalentschaltbilder von PEM Brennstoffzellen

    Energy Technology Data Exchange (ETDEWEB)

    Haubrock, J.

    2007-12-13

    Fuel cells are a very promising technology for energy conversion. For optimization purposes, useful simulation tools are needed. Such tools should simulate the static and dynamic electrical behaviour, and the models should be parameterizable from measurement results that can be obtained easily. In this dissertation, a model for simulating a PEM fuel cell is developed. The model is parameterized from V-I curve measurements and from the current step response. It is based on electrical equivalent circuits, and it is shown that it can simulate the dynamic behaviour of a PEM fuel cell stack. The simulation results are compared with measurement results. (orig.)

  17. Symmetries of dynamically equivalent theories

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D.M.; Tyutin, I.V. [Sao Paulo Univ., SP (Brazil). Inst. de Fisica; Lebedev Physics Institute, Moscow (Russian Federation)

    2006-03-15

    A natural and very important development of constrained system theory is a detailed study of the relation between the constraint structure in the Hamiltonian formulation and specific features of the theory in the Lagrangian formulation, especially the relation between the constraint structure and the symmetries of the Lagrangian action. An important preliminary step in this direction is a strict demonstration, and this is the aim of the present article, that the symmetry structures of the Hamiltonian action and of the Lagrangian action are the same. Once this is proved, it is sufficient to consider the symmetry structure of the Hamiltonian action. The latter problem is, in some sense, simpler because the Hamiltonian action is a first-order action. At the same time, the study of the symmetry of the Hamiltonian action naturally involves Hamiltonian constraints as basic objects. One can see that the Lagrangian and Hamiltonian actions are dynamically equivalent. This is why, in the present article, we consider from the very beginning a more general problem: how the symmetry structures of dynamically equivalent actions are related. First, we present some necessary notions and relations concerning infinitesimal symmetries in general, as well as a strict definition of dynamically equivalent actions. Finally, we demonstrate that there exists an isomorphism between classes of equivalent symmetries of dynamically equivalent actions. (author)

  18. Effects of the normalizing time and temperature on the impact properties of ASTM A-516 grade 70 steel

    International Nuclear Information System (INIS)

    Carneiro, T.; Cescon, T.

    1982-01-01

    The influence of normalizing time and temperature, as well as plate thickness, on the impact properties of ASTM A-516 grade 70 steel is studied. Results show that different normalizing conditions may lead to equivalent microstructures with different impact properties. Normalizing conditions that cause a low cooling rate through the critical zone produce a banded microstructure with inferior impact properties. (Author) [pt

  19. Bragg Curve Spectroscopy

    International Nuclear Information System (INIS)

    Gruhn, C.R.

    1981-05-01

    An alternative utilization is presented for the gaseous ionization chamber in the detection of energetic heavy ions, called Bragg Curve Spectroscopy (BCS). Conceptually, BCS involves using the maximum information available from the Bragg curve of the stopping heavy ion (HI) to identify the particle and measure its energy. A detector has been designed that measures the Bragg curve with high precision. From the Bragg curve are determined the range from the length of the track, the total energy from the integral of the specific ionization over the track, the dE/dx from the specific ionization at the beginning of the track, and the Bragg peak from the maximum of the specific ionization of the HI. This last signal measures the atomic number, Z, of the HI unambiguously.

  20. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed
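    The effective dose equivalent determination mentioned above is, in essence, a weighted sum of organ dose equivalents. A minimal sketch, using the ICRP-26 tissue weighting factors and hypothetical organ doses (not the phantom results of the paper):

```python
# ICRP-26 tissue weighting factors (they sum to 1.0)
W_T = {
    "gonads": 0.25, "breast": 0.15, "red_bone_marrow": 0.12,
    "lung": 0.12, "thyroid": 0.03, "bone_surface": 0.03,
    "remainder": 0.30,
}

def effective_dose_equivalent(organ_dose_mSv):
    """H_E = sum over tissues T of w_T * H_T."""
    return sum(W_T[t] * h for t, h in organ_dose_mSv.items())

# Hypothetical organ dose equivalents (mSv) from a fluence calculation
doses = {"gonads": 1.0, "lung": 2.0, "thyroid": 0.5, "remainder": 1.2}
h_e = effective_dose_equivalent(doses)  # 0.25 + 0.24 + 0.015 + 0.36
```

    Organs not listed simply contribute nothing; in a real calculation every weighted tissue would receive a dose from the fluence-to-dose conversion.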

  1. Normalizing treatment influence on the forged steel SAE 8620 fracture properties

    Directory of Open Access Journals (Sweden)

    Paulo de Tarso Vida Gomes

    2005-03-01

    Full Text Available In a PWR nuclear power plant, the reactor pressure vessel (RPV) contains the fuel assemblies and reactor vessel internals and keeps the coolant at high temperature and high pressure during normal operation. The RPV integrity must be assured all along its useful life to protect the general public against significant radiation release. One of the critical issues relative to RPV structural integrity is the evaluation of the pressurized thermal shock (PTS) accident. To better understand the effects of this kind of event, a PTS experiment has been planned using an RPV prototype. Characterization of the fracture behavior of the RPV material in the ductile-brittle transition region represents one of the most important aspects of the structural assessment of RPVs under PTS. This work presents the results of fracture toughness tests carried out to characterize the behavior of the RPV prototype material. The test data include Charpy energy curves, T0 reference temperatures for definition of master curves, and fracture surfaces observed with an electron microscope. The results are given for the vessel steel in the as-received and normalized conditions. In this way, the influence of the normalizing treatment on the fracture properties of the steel could be evaluated.

  2. Patterns of pulmonary maturation in normal and abnormal pregnancy.

    Science.gov (United States)

    Goldkrand, J W; Slattery, D S

    1979-03-01

    Fetal pulmonary maturation may be a variable event depending on various feto-maternal environmental and biochemical influences. The patterns of maturation were studied in 211 amniotic fluid samples from 123 patients (normal 55; diabetes 23; Rh sensitization 19; preeclampsia 26). The phenomenon of globule formation from the amniotic fluid lipid extract and its relation to pulmonary maturity was utilized for this analysis. Validation of this technique is presented. A normal curve was constructed from 22 to 42 weeks' gestation and compared to the abnormal pregnancies. Patients with class A, B, and C diabetes and Rh-sensitized pregnancies had delayed pulmonary maturation. Patients with class D diabetes and preeclampsia paralleled the normal course of maturation. A discussion of these results and their possible causes is presented.

  3. Learning Curve? Which One?

    Directory of Open Access Journals (Sweden)

    Paulo Prochno

    2004-07-01

    Full Text Available Learning curves have been studied for a long time. These studies provided strong support for the hypothesis that, as organizations produce more of a product, unit costs of production decrease at a decreasing rate (see Argote, 1999 for a comprehensive review of learning curve studies). But the organizational mechanisms that lead to these results are still underexplored. We know some drivers of learning curves (ADLER; CLARK, 1991; LAPRE et al., 2000), but we still lack a more detailed view of the organizational processes behind those curves. Through an ethnographic study, I bring a comprehensive account of the first year of operations of a new automotive plant, describing what was taking place in the assembly area during the most relevant shifts of the learning curve. The emphasis is on how learning occurs in that setting. My analysis suggests that the overall learning curve is in fact the result of an integration process that puts together several individual ongoing learning curves in different areas throughout the organization. In the end, I propose a model to understand the evolution of these learning processes and their supporting organizational mechanisms.
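    The "unit costs decrease at a decreasing rate" hypothesis is usually formalized as a log-linear learning curve, in which every doubling of cumulative output multiplies unit cost by a fixed learning rate. A minimal sketch with invented numbers:

```python
import math

def unit_cost(first_unit_cost, n, learning_rate=0.8):
    """Classic log-linear learning curve: cost of the n-th unit is
    c1 * n**b, where b = log2(learning_rate) < 1 (an 80% curve means
    each doubling of cumulative output cuts unit cost to 80%)."""
    b = math.log(learning_rate, 2)
    return first_unit_cost * n ** b

c1 = unit_cost(100.0, 1)   # first unit costs 100
c2 = unit_cost(100.0, 2)   # one doubling: 80% of 100
c4 = unit_cost(100.0, 4)   # two doublings: 80% of 80
```

    The organizational question raised in the abstract is what processes produce this aggregate regularity, which the curve itself does not explain.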

  4. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity modulated radiotherapy with photons and protons rely on numerical dose optimisation. The evaluation of normal tissue dose distributions that deviate significantly from the common clinical routine and also the mathematical expression of desirable properties of a dose distribution is difficult. In essence, a dose evaluation model for normal tissues has to express the tissue specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and an efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)

  5. Identification of geometric faces in hand-sketched 3D objects containing curved lines

    Science.gov (United States)

    El-Sayed, Ahmed M.; Wahdan, A. A.; Youssif, Aliaa A. A.

    2017-07-01

    The reconstruction of 3D objects from 2D line drawings is regarded as one of the key topics in the field of computer vision. Ongoing research focuses mainly on the reconstruction of 3D objects that are mapped only from 2D straight lines and that are symmetric in nature. Commonly, this approach produces only basic and simple shapes that are mostly flat or rather polygonized, which is normally attributed to an inability to handle curves. To overcome these limitations, a technique capable of handling non-symmetric drawings that encompass curves is considered. This paper discusses a novel technique that can be used to reconstruct 3D objects containing curved lines. In addition, it highlights an application developed in accordance with the suggested technique that can convert a freehand sketch to a 3D shape using a mobile phone.

  6. Roc curves for continuous data

    CERN Document Server

    Krzanowski, Wojtek J

    2009-01-01

    Since ROC curves have become ubiquitous in many application areas, the various advances have been scattered across disparate articles and texts. ROC Curves for Continuous Data is the first book solely devoted to the subject, bringing together all the relevant material to provide a clear understanding of how to analyze ROC curves. Covering the fundamental theory of ROC curves, the book first discusses the relationship between the ROC curve and numerous performance measures and then extends the theory into practice by describing how ROC curves are estimated. Further building on the theory, the authors prese

  7. Distribution functions for the linear region of the S-N curve

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Christian; Waechter, Michael; Masendorf, Rainer; Esderts, Alfons [TU Clausthal, Clausthal-Zellerfeld (Germany). Inst. for Plant Engineering and Fatigue Analysis

    2017-08-01

    This study establishes a database containing the results of fatigue tests from the linear region of the S-N curve using sources from the literature. Each set of test results originates from testing metallic components on a single load level. Eighty-nine test series with sample sizes of 14 ≤ n ≤ 500 are included in the database, resulting in a sum of 6,086 individual test results. The test series are tested in terms of the type of distribution function (log-normal or 2-parameter Weibull) using the Shapiro-Wilk test, the Anderson-Darling test and probability plots. The majority of the tested individual test results follows a log-normal distribution.
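    The distribution checks mentioned above can be reproduced with scipy on log-transformed fatigue lives; the sample below is synthetic, not drawn from the study's database.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic fatigue lives (cycles) on one load level, log-normally distributed
lives = rng.lognormal(mean=12.0, sigma=0.4, size=100)

# Test log10(life) for normality, as for a log-normal S-N scatter model
log_lives = np.log10(lives)
w_stat, p_shapiro = stats.shapiro(log_lives)      # H0: sample is normal
ad = stats.anderson(log_lives, dist="norm")       # statistic vs. critical values
normality_not_rejected = p_shapiro > 0.05
```

    A 2-parameter Weibull candidate would be checked the same way, e.g. by testing `log(lives)` against a Gumbel (`dist="gumbel_l"`) in `stats.anderson`.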

  8. The one-dimensional normalised generalised equivalence theory (NGET) for generating equivalent diffusion theory group constants for PWR reflector regions

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-01-01

    An equivalent diffusion theory PWR reflector model is presented, which has as its basis Smith's generalisation of Koebke's Equivalence Theory. This method is an adaptation, in one-dimensional slab geometry, of the Generalised Equivalence Theory (GET). Since the method involves the renormalisation of the GET discontinuity factors at nodal interfaces, it is called the Normalised Generalised Equivalence Theory (NGET) method. The advantages of the NGET method for modelling the ex-core nodes of a PWR are summarized. 23 refs

  9. Large-strain time-temperature equivalence in high density polyethylene for prediction of extreme deformation and damage

    Directory of Open Access Journals (Sweden)

    Gray G.T.

    2012-08-01

    Full Text Available Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain-rate to an equivalent response at a depressed temperature and nominal strain-rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain-rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain-rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain-rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate isothermal response at high strain-rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of Dynamic-Tensile-Extrusion of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die, and after substantial drawing appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic-Tensile-Extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.

  10. Box-Cox transformation for resolving Peelle's Pertinent Puzzle in curve fitting

    International Nuclear Information System (INIS)

    Oh, Soo-Youl

    2003-01-01

    Incorporating the Box-Cox transformation into a least-squares method is presented as one resolution of an anomaly known as Peelle's Pertinent Puzzle. The transformation is a strategy for making non-normally distributed data resemble normal data. A procedure is proposed: transform the measured raw data with an optimized Box-Cox transformation parameter, fit the transformed data using a usual curve fitting method, then inverse-transform the fitted results to final estimates. The generalized least-squares method utilized in GMA is adopted as the curve fitting tool for the test of the proposed procedure. In the procedure, covariance matrices are correspondingly transformed and inverse-transformed with the aid of the error propagation law. In addition to a sensible answer to the Peelle's problem itself, the procedure resulted in reasonable estimates of 6Li(n,t) cross sections in the several to 800 keV energy region. Meanwhile, comparisons of the present procedure with that of Chiba and Smith show that both procedures yield estimates very close to each other, both for the sample evaluation on 6Li(n,t) above and for the Peelle's problem. The two procedures, however, are conceptually very different, and further discussion would be needed to reach a consensus on this issue of resolving the Puzzle. It is also pointed out that the transformation is applicable not only to a least-squares method but also to other parameter estimation methods such as a usual Bayesian approach formulated with an assumption of normality of the probability density function. (author)
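    The transform, fit, inverse-transform procedure can be sketched with scipy; the data here are synthetic, and the "fit" is a simple mean in the transformed space rather than the GMA generalized least-squares fit with transformed covariances.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Synthetic positive, skewed "measurements" of one quantity
rng = np.random.default_rng(42)
raw = rng.lognormal(mean=1.0, sigma=0.5, size=200)

# 1. Transform with an optimized Box-Cox parameter (lambda by max. likelihood)
transformed, lmbda = stats.boxcox(raw)

# 2. "Fit" in the transformed, approximately normal space
fit_transformed = transformed.mean()

# 3. Inverse-transform the fitted result back to the original scale
estimate = inv_boxcox(fit_transformed, lmbda)
```

    In the paper's full procedure, step 2 is a generalized least-squares fit and the covariance matrices are propagated through the transformation by the error propagation law.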

  11. Use of the t-distribution to construct seismic hazard curves for seismic probabilistic safety assessments

    Energy Technology Data Exchange (ETDEWEB)

    Yee, Eric [KEPCO International Nuclear Graduate School, Dept. of Nuclear Power Plant Engineering, Ulsan (Korea, Republic of)

    2017-03-15

    Seismic probabilistic safety assessments are used to help understand the impact potential seismic events can have on the operation of a nuclear power plant. An important component to seismic probabilistic safety assessment is the seismic hazard curve which shows the frequency of seismic events. However, these hazard curves are estimated assuming a normal distribution of the seismic events. This may not be a strong assumption given the number of recorded events at each source-to-site distance. The use of a normal distribution makes the calculations significantly easier but may underestimate or overestimate the more rare events, which is of concern to nuclear power plants. This paper shows a preliminary exploration into the effect of using a distribution that perhaps more represents the distribution of events, such as the t-distribution to describe data. The integration of a probability distribution with potentially larger tails basically pushes the hazard curves outward, suggesting a different range of frequencies for use in seismic probabilistic safety assessments. Therefore the use of a more realistic distribution results in an increase in the frequency calculations suggesting rare events are less rare than thought in terms of seismic probabilistic safety assessment. However, the opposite was observed with the ground motion prediction equation considered.

  12. Use of the t-distribution to construct seismic hazard curves for seismic probabilistic safety assessments

    International Nuclear Information System (INIS)

    Yee, Eric

    2017-01-01

    Seismic probabilistic safety assessments are used to help understand the impact potential seismic events can have on the operation of a nuclear power plant. An important component to seismic probabilistic safety assessment is the seismic hazard curve which shows the frequency of seismic events. However, these hazard curves are estimated assuming a normal distribution of the seismic events. This may not be a strong assumption given the number of recorded events at each source-to-site distance. The use of a normal distribution makes the calculations significantly easier but may underestimate or overestimate the more rare events, which is of concern to nuclear power plants. This paper shows a preliminary exploration into the effect of using a distribution that perhaps more represents the distribution of events, such as the t-distribution to describe data. The integration of a probability distribution with potentially larger tails basically pushes the hazard curves outward, suggesting a different range of frequencies for use in seismic probabilistic safety assessments. Therefore the use of a more realistic distribution results in an increase in the frequency calculations suggesting rare events are less rare than thought in terms of seismic probabilistic safety assessment. However, the opposite was observed with the ground motion prediction equation considered
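    The effect of the heavier t-distribution tails, which pushes the hazard curve outward, can be seen directly by comparing exceedance probabilities; a minimal scipy sketch (the 3-sigma level and 5 degrees of freedom are arbitrary choices for illustration):

```python
from scipy import stats

# Exceedance probability of a 3-sigma event under each distributional assumption
p_norm = stats.norm.sf(3.0)       # standard normal upper tail
p_t = stats.t.sf(3.0, df=5)       # Student's t with 5 degrees of freedom

# The t tail assigns more probability to the same rare event, so the
# corresponding annual exceedance frequencies in the hazard curve increase.
ratio = p_t / p_norm
```

    The smaller the degrees of freedom (i.e. the fewer recorded events per source-to-site distance), the larger this ratio becomes, which is exactly the "rare events are less rare than thought" observation in the abstract.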

  13. Radionuclide Angiocardiographic Evaluation of Left-to-Right Cardiac Shunts: Analysis of Time-Active Curves

    International Nuclear Information System (INIS)

    Kim, Ok Hwa; Bahk, Yong Whee; Kim, Chi Kyung

    1987-01-01

    The noninvasive nature of radionuclide angiocardiography provides a useful approach for the evaluation of left-to-right cardiac shunts (LRCS). While qualitative information can be obtained by inspection of serial radionuclide angiocardiograms, quantitative information can be obtained by the analysis of time-activity curves using an advanced computer system. The count-ratio method and the pulmonary-to-systemic flow ratio (QP/QS) by the gamma variate fit method were used to evaluate the accuracy of detection and localization of LRCS. One hundred and ten time-activity curves were analyzed. There were 46 LRCS (atrial septal defects 11, ventricular septal defects 22, patent ductus arteriosus 13) and 64 normal subjects. By computer analysis of the time-activity curves of the right atrium, right ventricle and the lungs separately, the count ratios, modified by adding the mean cardiac transit time, were calculated at each anatomic site. In normal subjects the mean count ratios in the right atrium, ventricle and lungs were 0.24 on average. In atrial septal defects, the count ratios were high in the right atrium, ventricle and lungs, whereas in ventricular septal defects the count ratios were higher only in the right ventricle and lungs. Patent ductus arteriosus showed normal count ratios in the heart, but high count ratios were obtained in the lungs. Thus, the count-ratio method could separate normal subjects from those with intracardiac or extracardiac shunts, and moreover, with this method the localization of the shunt level was possible in LRCS. Another method that could differentiate intracardiac shunts from extracardiac shunts was measuring QP/QS in the left and right lungs. In patent ductus arteriosus, the left lung QP/QS was higher than that of the right lung, whereas in atrial septal defects and ventricular septal defects the QP/QS ratios were equal in both lungs. From this study, it was found that by measuring QP/QS separately in the lungs

  14. The definition of the individual dose equivalent

    International Nuclear Information System (INIS)

    Ehrlich, Margarete

    1986-01-01

    A brief note examines the choice of the present definition of the individual dose equivalent, the new operational dosimetry quantity for external exposure. The consequences of the use of the individual dose equivalent and the danger facing the individual dose equivalent, as currently defined, are briefly discussed. (UK)

  15. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the point-cloud normal vectors determined from local planar fits. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial points are calculated according to the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum feature.
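    The normal-vector step (kd-tree neighbourhood plus local planar fit) can be sketched with numpy and scipy. This is a generic PCA-based normal estimator, not the authors' exact algorithm; the point cloud below is a synthetic plane, so the expected normal is known.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_normals(points, k=8):
    """Estimate a unit normal per point as the eigenvector of the
    smallest eigenvalue of the local neighbourhood covariance,
    i.e. the normal of the best-fitting local plane."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        normals[i] = eigvecs[:, 0]               # smallest-eigenvalue vector
    return normals

# Synthetic cloud on the plane z = 0: every normal should be +/-(0, 0, 1)
xy = np.random.default_rng(0).uniform(0.0, 1.0, size=(50, 2))
pts = np.c_[xy, np.zeros(50)]
n = point_normals(pts)
```

    Datum points would then be selected by tracking where these normals agree with the expected local plane orientation, before the B-spline fit.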

  16. Tornado-Shaped Curves

    Science.gov (United States)

    Martínez, Sol Sáez; de la Rosa, Félix Martínez; Rojas, Sergio

    2017-01-01

    In Advanced Calculus, our students wonder if it is possible to graphically represent a tornado by means of a three-dimensional curve. In this paper, we show it is possible by providing the parametric equations of such tornado-shaped curves.
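    A conical spiral gives the flavour of such tornado-shaped curves; the parametrization below is a generic illustration, not necessarily the equations given in the paper.

```python
import numpy as np

def tornado_curve(t, turns=8, height=1.0):
    """Conical spiral: the radius grows with height, tracing a funnel.
    t runs over [0, 1] from the tip of the funnel to its widest point."""
    z = height * t
    r = t ** 1.5                      # radius widens toward the top
    theta = 2 * np.pi * turns * t
    return r * np.cos(theta), r * np.sin(theta), z

t = np.linspace(0.0, 1.0, 500)
x, y, z = tornado_curve(t)            # ready for a 3D line plot
```

    Varying the exponent on `r` or the number of turns changes how sharply the funnel flares, which is the kind of exploration the students' question invites.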

  17. Aspects of quantum field theory in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Fulling, S.A. (Texas A and M Univ., College Station, TX (USA). Dept. of Mathematics)

    1989-01-01

    The theory of quantum fields on curved spacetimes has attracted great attention since the discovery, by Stephen Hawking, of black-hole evaporation. It remains an important subject for the understanding of such contemporary topics as inflationary cosmology, quantum gravity and superstring theory. The topics covered include normal-mode expansions for a general elliptic operator, Fock space, the Casimir effect, the Klein 'paradox', particle definition and particle creation in expanding universes, asymptotic expansion of Green's functions and heat kernels, and renormalization of the stress tensor. (author).

  18. A Lévy HJM Multiple-Curve Model with Application to CVA Computation

    DEFF Research Database (Denmark)

    Crépey, Stéphane; Grbac, Zorana; Ngor, Nathalie

    2015-01-01

    , the calibration to OTM swaptions guaranteeing that the model correctly captures volatility smile effects and the calibration to co-terminal ATM swaptions ensuring an appropriate term structure of the volatility in the model. To account for counterparty risk and funding issues, we use the calibrated multiple......-curve model as an underlying model for CVA computation. We follow a reduced-form methodology through which the problem of pricing the counterparty risk and funding costs can be reduced to a pre-default Markovian BSDE, or an equivalent semi-linear PDE. As an illustration, we study the case of a basis swap...... and a related swaption, for which we compute the counterparty risk and funding adjustments...

  19. European column buckling curves and finite element modelling including high strength steels

    DEFF Research Database (Denmark)

    Jönsson, Jeppe; Stan, Tudor-Cristian

    2017-01-01

    Eurocode allows for finite element modelling of plated steel structures; however, the information in the code on how to perform the analysis or what assumptions to make is quite sparse. The present paper investigates the deterministic modelling of flexural column buckling using plane shell elements...... imperfections may be very conservative if considered by finite element analysis as described in the current Eurocode code. A suggestion is given for a slightly modified imperfection formula within the Ayrton-Perry formulation leading to adequate inclusion of modern high grade steels within the original four buckling curves. It is also suggested that finite element or frame analysis may be performed with equivalent column bow imperfections extracted directly from the Ayrton-Perry formulation....

  20. Approximating the imbibition and absorption behavior of a distribution of matrix blocks by an equivalent spherical block

    International Nuclear Information System (INIS)

    Zimmerman, R.W.; Bodvarsson, G.S.

    1994-03-01

    A theoretical study is presented of the effect of matrix block shape and matrix block size distribution on liquid imbibition and solute absorption in a fractured rock mass. It is shown that the behavior of an individual irregularly-shaped matrix block can be modeled with reasonable accuracy by using the results for a spherical matrix block, if one uses an effective radius a = 3V/A, where V is the volume of the block and A is its surface area. In the early-time regime of matrix imbibition, it is shown that a collection of blocks of different sizes can be modeled by a single equivalent block, with an equivalent radius of ⟨a⁻¹⟩⁻¹, where the average is taken on a volumetrically-weighted basis. In an intermediate time regime, it is shown for the case where the radii are normally distributed that the equivalent radius is reasonably well approximated by the mean radius ⟨a⟩. In the long-time limit, where no equivalent radius can be rigorously defined, an asymptotic expression is derived for the cumulative diffusion as a function of the mean and the standard deviation of the radius distribution function
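    The effective-radius rules quoted in the abstract are straightforward to compute; a sketch using a rectangular block as the irregular shape and a volumetrically weighted harmonic mean for the early-time equivalent radius:

```python
def equivalent_sphere_radius(volume, area):
    """Single-block rule from the abstract: a = 3V/A. For a true sphere
    this recovers its radius, since 3*(4/3 pi r^3)/(4 pi r^2) = r."""
    return 3.0 * volume / area

def early_time_equivalent_radius(radii, volumes):
    """Early-time rule: volumetrically weighted harmonic mean <a^-1>^-1."""
    total_v = sum(volumes)
    inv_mean = sum(v / a for a, v in zip(radii, volumes)) / total_v
    return 1.0 / inv_mean

# 1 x 1 x 2 rectangular block: V = 2, A = 2*(1 + 2 + 2) = 10, so a = 0.6
a_block = equivalent_sphere_radius(2.0, 10.0)

# Two equal-volume blocks with effective radii 1 and 2
a_eq = early_time_equivalent_radius([1.0, 2.0], [1.0, 1.0])
```

    The harmonic mean weights small blocks more heavily, which matches the physics: small blocks dominate the fast early-time imbibition.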

  1. A normalized model for the half-bridge series resonant converter

    Science.gov (United States)

    King, R.; Stuart, T. A.

    1981-01-01

    Closed-form steady-state equations are derived for the half-bridge series resonant converter with a rectified (dc) load. Normalized curves for various currents and voltages are then plotted as a function of the circuit parameters. Experimental results based on a 10-kHz converter are presented for comparison with the calculations.

  2. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and the log-series model and can also fit deeply concave rank abundance curves. The model is based - in an unconventional way
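For orientation, the geometric-series end of the family the authors link can be written down directly. The sketch below implements the classical geometric-series (Motomura) model, not the authors' combined curve-fit model: each successive species preempts a fraction k of the remaining resource.

```python
def geometric_series_abundances(total, k, n_species):
    """Expected rank abundances under the geometric-series model.

    Species at 0-based rank i receives a share proportional to k*(1-k)**i;
    the normalization makes the abundances sum to `total`.
    """
    norm = 1.0 - (1.0 - k) ** n_species
    return [total * k * (1.0 - k) ** i / norm for i in range(n_species)]

ranks = geometric_series_abundances(total=1000.0, k=0.4, n_species=8)
# Plotted on a log-abundance scale, these ranked abundances fall on a
# straight line: the 'linear' extreme of the rank abundance curves above.
```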

  3. Equivalent Josephson junctions

    International Nuclear Information System (INIS)

    Boyadzhiev, T.L.; Semerdzhieva, E.G.; Shukrinov, Yu.M.; Fiziko-Tekhnicheskij Inst., Dushanbe

    2008-01-01

    The magnetic field dependences of the critical current are numerically constructed for a long Josephson junction with shunt- or resistor-type microscopic inhomogeneities and compared to the critical curve of a junction with exponentially varying width. The numerical results show that it is possible to replace the distributed inhomogeneity of a long Josephson junction by an inhomogeneity localized at one of its ends, which has certain technological advantages. It is also shown that the critical curves of junctions with exponentially varying width and inhomogeneities localized at the ends are unaffected by the mixed fluxon-antifluxon distributions of the magnetic flux.

  4. Tibiotalocalcaneal arthrodesis with a curved, interlocking, intramedullary nail.

    Science.gov (United States)

    Budnar, Vijaya M; Hepple, Steve; Harries, William G; Livingstone, James A; Winson, Ian

    2010-12-01

    Tibiotalocalcaneal fusion with a straight rod has a risk of damaging the lateral plantar neurovascular structures and may interfere with maintaining a normal heel valgus position. We report the results of a prospective study of tibiotalocalcaneal (TTC) arthrodesis with a short, anatomically curved, interlocking, intramedullary nail. Forty-five arthrodeses in 42 patients, performed between Jan 2003 and Oct 2008, were prospectively followed. The mean followup was 48 (range, 10 to 74) months. The main indications for the procedure were failed ankle arthrodesis with progressive subtalar arthritis, failed ankle arthroplasty, and complex hindfoot deformity. The outcome was measured by a combination of pre- and postoperative clinical examination, AOFAS hindfoot scores, SF-12 scores, and radiological assessment. Union rate was 89% (40/45). Eighty-two percent (37/45) reported improvement in pain and 73% (33/45) had improved foot function. Satisfactory hindfoot alignment was achieved in 84% (38/45). Postoperatively there was a mean improvement in the AOFAS score of 37. Complications included a below-knee amputation for persistent deep infection, five nonunions, and three delayed unions. Four nails, six proximal locking screws, and six distal locking screws were removed for various causes. Other complications included two perioperative fractures, four superficial wound infections, and one case of lateral plantar nerve irritation. With a short, anatomically curved intramedullary nail, we had a high rate of tibiotalocalcaneal fusion with minimal plantar neurovascular complications. We believe a short, curved intramedullary nail, with its more lateral entry point, helped maintain hindfoot alignment.

  5. 77 FR 32632 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Science.gov (United States)

    2012-06-01

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION... accordance with 40 CFR Part 53, three new equivalent methods: One for measuring concentrations of nitrogen... INFORMATION: In accordance with regulations at 40 CFR Part 53, the EPA evaluates various methods for...

  6. Equivalent circuit parameters of nickel/metal hydride batteries from sparse impedance measurements

    Science.gov (United States)

    Nelatury, Sudarshan Rao; Singh, Pritpal

    In a recent communication, a method was proposed for extracting the equivalent circuit parameters of a lead-acid battery from sparse (only three) impedance spectroscopy observations at three different frequencies. It was based on an equivalent circuit consisting of a bulk resistance, a reaction resistance and a constant phase element (CPE). Such a circuit is a very appropriate model of a lead-acid cell at high state of charge (SOC). This paper is a sequel to it and presents an application of the method to nickel/metal hydride (Ni/MH) batteries, which at high SOC are represented by the same circuit configuration. When the SOC of a Ni/MH battery under interrogation goes low, however, the EIS curve has a positive slope at the low-frequency end and our technique yields complex values for the otherwise real circuit parameters, suggesting the need for additional elements in the equivalent circuit and a definite relationship between parameter consistency and SOC. To improve the previous algorithm, so that it works reasonably well at both high and low SOCs, we propose three more measurements: two at very low frequencies to include the Warburg response and one at a high frequency to model the series inductance, in addition to the three in the mid-frequency band, for six measurements in total. In most of today's instrumentation, it is the user who must choose the circuit configuration and the number of frequencies at which impedance should be measured, and the accompanying software performs data fitting by complex nonlinear least squares. The proposed method has an SOC-based decision-making capability built into it, both to choose the circuit configuration and to estimate the values of the circuit elements.
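As a hedged illustration of the kind of model involved (parameter names and the Warburg and inductance forms below are generic textbook choices, not necessarily the authors' exact formulation), the six-element impedance can be evaluated as a series inductance, a bulk resistance, a charge-transfer resistance in parallel with a CPE, and a Warburg term:

```python
import math

def battery_impedance(freq_hz, r_bulk, r_ct, q_cpe, n_cpe,
                      l_series=0.0, sigma_w=0.0):
    """Complex impedance of a generic battery equivalent circuit.

    Z(w) = jwL + R_bulk + R_ct/(1 + R_ct*Q*(jw)**n) + sigma*(1 - j)/sqrt(w)
    """
    w = 2.0 * math.pi * freq_hz
    jw = 1j * w
    z_faradaic = r_ct / (1.0 + r_ct * q_cpe * jw ** n_cpe)  # R_ct || CPE
    z_warburg = sigma_w * (1.0 - 1j) / math.sqrt(w) if sigma_w else 0.0
    return jw * l_series + r_bulk + z_faradaic + z_warburg

# High frequency: the CPE shorts out R_ct, leaving roughly R_bulk.
z_hi = battery_impedance(1e6, r_bulk=0.010, r_ct=0.050, q_cpe=1.0, n_cpe=0.8)
# Very low frequency: the CPE blocks, leaving roughly R_bulk + R_ct.
z_lo = battery_impedance(1e-6, r_bulk=0.010, r_ct=0.050, q_cpe=1.0, n_cpe=0.8)
```

With measured Z at six frequencies, the six parameters can then be recovered by nonlinear least squares against this expression.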

  7. In-Vehicle Dynamic Curve-Speed Warnings at High-Risk Rural Curves

    Science.gov (United States)

    2018-03-01

    Lane-departure crashes at horizontal curves represent a significant portion of fatal crashes on rural Minnesota roads. Because of this, solutions are needed to aid drivers in identifying upcoming curves and inform them of a safe speed at which they s...

  8. Beyond Language Equivalence on Visibly Pushdown Automata

    DEFF Research Database (Denmark)

    Srba, Jiri

    2009-01-01

    We study (bi)simulation-like preorder/equivalence checking on the class of visibly pushdown automata and its natural subclasses visibly BPA (Basic Process Algebra) and visibly one-counter automata. We describe generic methods for proving complexity upper and lower bounds for a number of studied...... preorders and equivalences like simulation, completed simulation, ready simulation, 2-nested simulation preorders/equivalences and bisimulation equivalence. Our main results are that all the mentioned equivalences and preorders are EXPTIME-complete on visibly pushdown automata, PSPACE-complete on visibly...... one-counter automata and P-complete on visibly BPA. Our PSPACE lower bound for visibly one-counter automata improves also the previously known DP-hardness results for ordinary one-counter automata and one-counter nets. Finally, we study regularity checking problems for visibly pushdown automata...

  9. Individual external monitoring system for gamma and X ray evaluation of the individual dose equivalent 'HP(10)', utilizing a photographic dosimetry technique

    International Nuclear Information System (INIS)

    Santoro, Christiana; Filho, Joao Antonio

    2008-01-01

    Full text: Individual monitoring evaluates external sources of ionizing radiation X, γ, β and n, to which workers are occupationally exposed, to ensure safe and acceptable radiological conditions in their places of employment. The dose received by workers should comply with the limits authorized by national regulatory organs. Nowadays, there are two radiometric unit systems, based on resolutions of the National Nuclear Energy Commission (NNEC) and the International Commission on Radiation Units and Measurements (ICRU): in the conventional (NNEC) system, the doses received by workers are evaluated through the individual dose Hx, where dosemeters used on the surface of the thorax are calibrated in terms of air kerma; in the recent system (ICRU), the doses are evaluated through the individual dose equivalent Hp(d), where dosemeters are calibrated in terms of dose on a phantom. The recent system improves the method of evaluation by taking into account the scattering effect and absorption of radiation in the human body. This work adapts a photographic dosimetry service to the recent ICRU publications, for evaluation of individual monitoring, in terms of the individual dose equivalent Hp(10) of strongly penetrating radiation. For this, a methodology based on linear programming and determination of calibration curves is used for the radiation qualities, wide (W) and narrow (N) spectra, as described by the International Organization for Standardization (ISO 4037-1, 1995). These calibration curves offer better accuracy in the determination of doses and energy, which will improve the quality of the service given to society. The results show that the values of individual dose equivalent, evaluated at intervals of 0.2 to 200 mSv, have lower significant uncertainties (10%) than those recommended by the ICRP 75 for individual monitoring; therefore, the dose evaluation system developed here attends the new recommendations proposed by the International Commissions. From what has been

  10. The equivalence problem for LL- and LR-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus; Gecsec, F.

    It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular

  11. TOP-DRAWER, Histograms, Scatterplots, Curve-Smoothing

    International Nuclear Information System (INIS)

    Chaffee, R.B.

    1988-01-01

    Description of program or function: TOP DRAWER produces histograms, scatterplots, data points with error bars and plotted symbols, and curves passing through data points, with elaborate titles. It also does smoothing and calculates frequency distributions. There is little facility, however, for arithmetic manipulation. Because of its restricted applicability, TOP DRAWER can be controlled by a relatively simple set of commands, and this control is further simplified by the choice of reasonable default values for all parameters. Despite this emphasis on simplicity, TOP DRAWER plots are of exceptional quality and are suitable for publication. Input is normally from card-image records, although a set of subroutines is provided to accommodate FORTRAN calls. The program contains switches which can be set to generate code suitable for execution on IBM, DEC VAX, and PRIME computers.

  12. Certificateless short sequential and broadcast multisignature schemes using elliptic curve bilinear pairings

    Directory of Open Access Journals (Sweden)

    SK Hafizul Islam

    2014-01-01

    Full Text Available Several certificateless short signature and multisignature schemes based on traditional public key infrastructure (PKI) or identity-based cryptosystems (IBC) have been proposed in the literature; however, no certificateless short sequential (or serial) multisignature (CL-SSMS) or short broadcast (or parallel) multisignature (CL-SBMS) schemes have been proposed. In this paper, we propose two such new CL-SSMS and CL-SBMS schemes based on elliptic curve bilinear pairing. Like any certificateless public key cryptosystem (CL-PKC), the proposed schemes are free from the public key certificate management burden and the private key escrow problem found in PKI- and IBC-based cryptosystems, respectively. In addition, the requirements of the expected security level and a fixed-length signature with constant verification time have been achieved in our schemes. The schemes are communication efficient, as the length of the multisignature is equivalent to a single elliptic curve point, and thus they constitute the shortest possible multisignature scheme. The proposed schemes are therefore suitable for communication systems with resource-constrained devices, such as PDAs, mobile phones, RFID chips, and sensors, where communication bandwidth, battery life, computing power and storage space are limited.

  13. Equivalent damage of loads on pavements

    CSIR Research Space (South Africa)

    Prozzi, JA

    2009-05-26

    Full Text Available This report describes a new methodology for the determination of Equivalent Damage Factors (EDFs) of vehicles with multiple axle and wheel configurations on pavements. The basic premise of this new procedure is that "equivalent pavement response...

  14. Predicting glucose intolerance with normal fasting plasma glucose by the components of the metabolic syndrome

    International Nuclear Information System (INIS)

    Pei, D.; Lin, J.; Kuo, S.; Wu, D.; Li, J.; Hsieh, C.; Wu, C.; Hung, Y.; Kuo, K.

    2007-01-01

    Surprisingly, it is estimated that about half of type 2 diabetics remain undetected. The possible causes may be partly attributable to people with normal fasting plasma glucose (FPG) but abnormal postprandial hyperglycemia. We attempted to develop an effective predictive model using the metabolic syndrome (MeS) components as parameters to identify such persons. All participants received a standard 75 g oral glucose tolerance test, which showed that 106 had normal glucose tolerance, 61 had impaired glucose tolerance and 6 had diabetes with isolated postchallenge hyperglycemia. We tested five models which included various MeS components. Model 0: FPG; Model 1 (clinical history model): family history (FH), FPG, age and sex; Model 2 (MeS model): Model 1 plus triglycerides, high-density lipoprotein cholesterol, body mass index, systolic blood pressure and diastolic blood pressure; Model 3: Model 2 plus fasting plasma insulin (FPI); Model 4: Model 3 plus homeostasis model assessment of insulin resistance. A receiver-operating characteristic (ROC) curve was used to determine the predictive discrimination of these models. The area under the ROC curve of Model 0 was significantly larger than the area under the diagonal reference line. All the other four models had a larger area under the ROC curve than Model 0. Considering the simplicity and lower cost of Model 2, it would be the best model to use. Nevertheless, Model 3 had the largest area under the ROC curve. We demonstrated that Models 2 and 3 have a significantly better predictive discrimination to identify persons with normal FPG at high risk for glucose intolerance. (author)
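The model comparison above rests on the area under the ROC curve, which can be computed without constructing the curve at all via the rank-sum (Mann-Whitney) identity: the AUC equals the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch (illustrative, not the authors' software):

```python
def roc_auc(scores, labels):
    """AUC as P(score of a positive > score of a negative), ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A risk score that perfectly separates glucose-intolerant (1) from
# normal (0) subjects has AUC 1.0; a useless score sits at 0.5.
print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```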

  15. Local normality properties of some infrared representations

    International Nuclear Information System (INIS)

    Doplicher, S.; Spera, M.

    1983-01-01

    We consider the positive energy representations of the algebra of quasilocal observables for the free massless Majorana field described in preceding papers. We show that by an appropriate choice of the (partially) occupied one-particle modes we can find irreducible, type II∞ or IIIλ representations in this class which are unitarily equivalent to the vacuum representation when restricted to any forward light cone and disjoint from it when restricted to any backward light cone, or conversely. We give an elementary explicit proof of local normality of each representation in the above class. (orig.)

  16. Assessment of Corneal Epithelial Thickness in Asymmetric Keratoconic Eyes and Normal Eyes Using Fourier Domain Optical Coherence Tomography

    Directory of Open Access Journals (Sweden)

    S. Catalan

    2016-01-01

    Full Text Available Purpose. To compare the characteristics of asymmetric keratoconic eyes and normal eyes by Fourier domain optical coherence tomography (OCT) corneal mapping. Methods. Retrospective corneal and epithelial thickness OCT data for 74 patients were compared in three groups of eyes: keratoconic (n=22) and normal fellow eyes (n=22) in patients with asymmetric keratoconus, and normal eyes (n=104) in healthy subjects. Areas under the curve (AUC) of receiver operator characteristic (ROC) curves for each variable were compared across groups to indicate their discrimination capacity. Results. Three variables were found to differ significantly between fellow eyes and normal eyes (all p<0.05): minimum corneal thickness, thinnest corneal point, and central corneal thickness. These variables combined showed a high discrimination power to differentiate fellow eyes from normal eyes, indicated by an AUC of 0.840 (95% CI: 0.762–0.918). Conclusions. Our findings indicate that topographically normal fellow eyes in patients with very asymmetric keratoconus differ from the eyes of healthy individuals in terms of their corneal epithelial and pachymetry maps. This type of information could be useful for an early diagnosis of keratoconus in topographically normal eyes.

  17. End effect Keff bias curve for actinide-only burnup credit casks

    International Nuclear Information System (INIS)

    Kang, C.H.; Lancaster, D.B.

    1997-01-01

    A conservative end effect keff bias curve for actinide-only burnup credit for spent fuel casks is presented in this paper. The keff bias values can be added to the uniform axial burnup analysis to conservatively bound the actinide-only end effect. A normalized axial burnup distribution for the standard Westinghouse 17 x 17 assembly design is used for calculating keff. The calculated end effect is a strong function of burnup, and increases as cask size decreases. The presence of poison plates increases the end effect. The bias curve presented is based on the most limiting cask configuration of a single PWR assembly with completely black poison plates. Therefore, axially uniform criticality calculations with application of the proposed keff bias could eliminate the need for axially burnup-dependent analyses. 7 refs., 1 fig

  18. Development of a tissue-engineered human oral mucosa equivalent based on an acellular allogeneic dermal matrix: a preliminary report of clinical application to burn wounds.

    Science.gov (United States)

    Iida, Takuya; Takami, Yoshihiro; Yamaguchi, Ryo; Shimazaki, Shuji; Harii, Kiyonori

    2005-01-01

    Tissue-engineered skin equivalents composed of epidermal and dermal components have been widely investigated for coverage of full-thickness skin defects. We developed a tissue-engineered oral mucosa equivalent based on an acellular allogeneic dermal matrix and investigated its characteristics. We also tried and assessed its preliminary clinical application. Human oral mucosal keratinocytes were separated from a piece of oral mucosa and cultured in a chemically-defined medium. The keratinocytes were seeded on to the acellular allogeneic dermal matrix and cultured. Histologically, the mucosa equivalent had a well-stratified epithelial layer. Immunohistochemical study showed that it was similar to normal oral mucosa. We applied this equivalent in one case with an extensive burn wound. The equivalent was transplanted three weeks after the harvest of the patient's oral mucosa and about 30% of the graft finally survived. We conclude that this new oral mucosa equivalent could become a therapeutic option for the treatment of extensive burns.

  19. Assembling of (βLPH) beta-lypothrophine radioimmunoassay. Plasma levels standardization in normal individuals and patients with hypophysis and adrenals diseases

    International Nuclear Information System (INIS)

    Castro, Margaret de.

    1988-01-01

    The present study investigates the extraction and radioimmunoassay (RIA) conditions of plasma βLPH. It was extracted by the activated silicic acid method, with a mean extraction efficiency of 31.6% and a mean intra-extraction variation coefficient of 8.1%. Radioiodination was performed by the chloramine-T method and 125I-βLPH was purified by gel chromatography on Sephadex G100. Estimated specific activity ranged from 100 to 192.8 μCi/μg, with a mean incorporation percentage of 66.6%. The titer of the first antibody was 1:50,000/100 μl. The assay was performed under non-equilibrium conditions, with a pre-incubation period of 24 hours and incubation of 4 hours. Mean immunoreactivity (Bo/Total) was 21.1%, with a mean Blank/Total ratio of 2.3%. Sensitivity, expressed as the mean minimum detectable dose, was 40 pg/tube, equivalent to 56 pg/ml plasma. Intra-assay variation coefficients were 6.5%, 3.8% and 6.8%, respectively, at B/Bo levels of 0.8, 0.6 and 0.4 of the standard curve. At B/Bo equal to 0.5, the inter-assay variation coefficient was 20.9%. Replicates of 14 plasma samples showed a correlation coefficient of r = 0.99 (p < 0.05). Parallelism was found between the standard curve and the curve obtained with different volumes of an extract with a high βLPH value. The method was controlled biologically by the correlation between plasma βLPH levels and known pathological states, and by clinical functional studies. Twenty-seven normal individuals, 10 patients with Cushing's disease due to a tumor of the hypophysis, 4 patients with Cushing's syndrome due to an adrenal tumor, 10 patients with Addison's disease, and 8 patients with hypopituitarism were studied. (author). 119 refs., 28 figs., 2 tabs
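The standard-curve step can be illustrated with the classical logit-log linearization used in RIA data reduction: logit(B/Bo) is regressed on log(dose), and unknown doses are read back by inverting the fit. This is a generic sketch (the thesis's actual curve-fitting procedure is not specified in the abstract, and the dose values below are hypothetical):

```python
import math

def logit(y):
    return math.log(y / (1.0 - y))

def fit_standard_curve(doses, b_over_b0):
    """Least-squares fit of logit(B/Bo) = a + b*log(dose)."""
    xs = [math.log(d) for d in doses]
    ys = [logit(y) for y in b_over_b0]
    n = float(len(xs))
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

def dose_from_binding(b_over_b0, a, b):
    """Invert the fitted curve to recover the dose for a measured B/Bo."""
    return math.exp((logit(b_over_b0) - a) / b)

# Synthetic calibration points generated from a known curve (a=2, b=-1):
doses = [10.0, 40.0, 160.0, 640.0]  # pg/tube, hypothetical
b_b0 = [1.0 / (1.0 + math.exp(-(2.0 - math.log(d)))) for d in doses]
a_fit, b_fit = fit_standard_curve(doses, b_b0)
```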

  20. Uncertainty estimation with bias-correction for flow series based on rating curve

    Science.gov (United States)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed on the basis of a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available, even though it has a significant impact on hydrological modelling. In this paper, we identify the discrepancy in the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which takes the same power-function form as the traditional rating curve. In order to obtain the uncertainty estimation, we propose a further two-sided Box-Cox transformation to stabilize the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to gauging stations on the Flinders and Gilbert rivers in north-west Queensland, Australia.
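A minimal sketch of the underlying machinery (illustrative only; the cease-to-flow stage h0 is assumed known here, and the paper additionally bias-corrects the curve and Box-Cox-transforms the residuals): the power-law rating curve Q = a(h - h0)^b is fitted by linear least squares in log-log space.

```python
import math

def fit_rating_curve(stages, discharges, h0=0.0):
    """Fit Q = a * (h - h0)**b by least squares on log-transformed data."""
    xs = [math.log(h - h0) for h in stages]
    ys = [math.log(q) for q in discharges]
    n = float(len(xs))
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

def box_cox(y, lam):
    """One-parameter Box-Cox transform used to normalize residuals."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

# Synthetic gaugings drawn from a known curve Q = 5 * h**1.5:
stages = [0.5, 1.0, 2.0, 4.0]
flows = [5.0 * h ** 1.5 for h in stages]
a_fit, b_fit = fit_rating_curve(stages, flows)
```

With real gaugings the residuals of this fit are not exactly normal, which is where the Box-Cox step of the paper comes in.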

  1. The importance of the radioiodine accumulation curve of the thyroid in diagnostics

    International Nuclear Information System (INIS)

    Policzer, M.

    1979-01-01

    The 131I-based examination has been carried out since 1956. The first communication, based on 500 cases, appeared in 1957, reporting mainly on technical details and on the normal curve obtained in the region of Budapest. Between 1968 and 1975, 15,500 examinations were performed, among which 1157 cases of Basedow's disease and 157 cases of autonomous adenoma were discovered. The shape of the accumulation curve is analyzed for different clinical types of hyperthyroidism. In the case of Basedow's disease, the iodine metabolism pointed to hyperthyroidism in 42.9% of the cases, to vegetative dystonia in 48.8% and to euthyroid function in 6.4%. An increase of the iodine uptake was observed in 88.9% of cases and of iodine storage in 66.6%, while faster iodine release occurred in only 44.9% of the cases. In cases of autonomous adenoma the frequency of a euthyroid-type accumulation curve was higher. Thus, it is recommended to determine iodine accumulation 1, 2, 6, 24 and 48 hours after administration of the isotope. (L.E.)

  2. The curve shortening problem

    CERN Document Server

    Chou, Kai-Seng

    2001-01-01

    Although research in curve shortening flow has been very active for nearly 20 years, the results of those efforts have remained scattered throughout the literature. For the first time, The Curve Shortening Problem collects and illuminates those results in a comprehensive, rigorous, and self-contained account of the fundamental results. The authors present a complete treatment of the Gage-Hamilton theorem, a clear, detailed exposition of Grayson's convexity theorem, a systematic discussion of invariant solutions, applications to the existence of simple closed geodesics on a surface, and a new, almost-convexity theorem for the generalized curve shortening problem. Many questions regarding curve shortening remain outstanding. With its careful exposition and complete guide to the literature, The Curve Shortening Problem provides not only an outstanding starting point for graduate students and new investigators, but a superb reference that presents intriguing new results for those already active in the field.

  3. Analysis of each branch current of serial solar cells by using an equivalent circuit model

    International Nuclear Information System (INIS)

    Yi Shi-Guang; Zhang Wan-Hui; Ai Bin; Song Jing-Wei; Shen Hui

    2014-01-01

    In this paper, based on the equivalent single-diode circuit model of the solar cell, an equivalent circuit diagram for two serial solar cells is drawn. Its current and voltage equations are derived from Kirchhoff's current and voltage laws. First, parameters are obtained from the I-V (current-voltage) curves of typical monocrystalline silicon solar cells (125 mm × 125 mm). Then, regarding the photo-generated current, shunt resistance and series resistance of the first solar cell, and the load resistance, as the variables, the properties of the shunt currents (Ish1 and Ish2), diode currents (ID1 and ID2), and load current (IL) of the whole two-cell series string are numerically analyzed in these four cases for the first time, and the corresponding physical explanations are given. We find that these parameters have different influences on the internal currents of the solar cells. Our results will provide a reference for developing higher-efficiency solar cell modules and contribute to a better understanding of the causes of efficiency loss in solar cell modules. (interdisciplinary physics and related areas of science and technology)
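The single-diode branch currents can be reproduced with a short numerical sketch. All parameter values below are illustrative, not the cells measured in the paper; the implicit cell equation I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh is solved for I by Newton iteration:

```python
import math

def cell_current(v, i_ph=8.0, i_0=1e-9, n=1.3, r_s=0.005, r_sh=10.0, t_k=298.15):
    """Terminal current of a single-diode solar cell model at voltage v."""
    v_t = 1.380649e-23 * t_k / 1.602176634e-19  # thermal voltage kT/q
    i = i_ph                                    # start from the photocurrent
    for _ in range(200):
        e = math.exp((v + i * r_s) / (n * v_t))
        f = i_ph - i_0 * (e - 1.0) - (v + i * r_s) / r_sh - i
        df = -i_0 * e * r_s / (n * v_t) - r_s / r_sh - 1.0
        step = f / df
        i -= step
        if abs(step) < 1e-12:
            break
    return i

# Short-circuit current is slightly below the photocurrent because of the
# shunt branch; current drops steeply as v approaches the open-circuit voltage.
i_sc = cell_current(0.0)
i_mid = cell_current(0.7)
```

The diode and shunt branch currents follow from the same solution: ID = I0*(exp((V + I*Rs)/(n*Vt)) - 1) and Ish = (V + I*Rs)/Rsh.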

  4. Automated curved planar reformation of 3D spine images

    International Nuclear Information System (INIS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks

  5. Construction of a human corneal stromal equivalent with non-transfected human corneal stromal cells and acellular porcine corneal stromata.

    Science.gov (United States)

    Diao, Jin-Mei; Pang, Xin; Qiu, Yue; Miao, Ying; Yu, Miao-Miao; Fan, Ting-Jun

    2015-03-01

    A tissue-engineered human corneal stroma (TE-HCS) has been developed as a promising equivalent to the native corneal stroma for replacement therapy. However, there is still a crucial need to improve the current approaches to render the TE-HCS equivalent more favorable for clinical applications. In the present study, we constructed a TE-HCS by incubating non-transfected human corneal stromal (HCS) cells in an acellular porcine corneal stromata (aPCS) scaffold in DMEM/F12 (1:1) medium supplemented with 20% fetal bovine serum at 37 °C with 5% CO2 in vitro. After 3 days of incubation, the constructed TE-HCS had a tensile strength suitable for transplantation, and a transparency comparable to that of native cornea. The TE-HCS had a normal histological structure, containing regularly aligned collagen fibers and differentiated HCS cells with positive expression of marker and functional proteins, mimicking a native HCS. After transplantation into rabbit models, the TE-HCS reconstructed normal corneal stroma in vivo and functioned well in maintaining corneal clarity and thickness, indicating that the completely biological TE-HCS could be used as an HCS equivalent. The constructed TE-HCS has promising potential in regenerative medicine and in the treatment of diseases caused by corneal stromal disorders. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. 7 CFR 1030.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1030.54 Section 1030.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1030.54 Equivalent price. See § 1000.54. ...

  7. Note on the End Game in Homotopy Zero Curve Tracking

    OpenAIRE

    Sosonkina, Masha; Watson, Layne T.; Stewart, David E.

    1995-01-01

    Homotopy algorithms to solve a nonlinear system of equations f(x)=0 involve tracking the zero curve of a homotopy map p(a,theta,x) from theta=0 until theta=1. When the algorithm nears or crosses the hyperplane theta=1, an "end game" phase is begun to compute the solution x(bar) satisfying p(a,theta,x(bar))=f(x(bar))=0. This note compares several end game strategies, including the one implemented in the normal flow code FIXPNF in the homotopy software package HOMPACK.

  8. Applicability of the fracture toughness master curve to irradiated reactor pressure vessel steels

    International Nuclear Information System (INIS)

    Sokolov, M.A.; McCabe, D.E.; Alexander, D.J.; Nanstad, R.K.

    1997-01-01

    The current methodology for determination of fracture toughness of irradiated reactor pressure vessel (RPV) steels is based on the upward temperature shift of the American Society of Mechanical Engineers (ASME) KIc curve, from either measurement of Charpy impact surveillance specimens or predictive calculations based on a database of Charpy impact tests from RPV surveillance programs. Currently, the provisions for determination of the upward temperature shift of the curve due to irradiation are based on the Charpy V-notch (CVN) 41-J shift, and the shape of the fracture toughness curve is assumed not to change as a consequence of irradiation. The ASME curve is a function of test temperature (T) normalized to a reference nil-ductility temperature, RTNDT, namely, T-RTNDT. That curve was constructed as the lower boundary to the available KIc database and, therefore, does not consider probability matters. Moreover, to achieve valid fracture toughness data in the temperature range where the rate of fracture toughness increase with temperature is rapidly increasing, very large test specimens were needed to maintain plane-strain, linear-elastic conditions. Such large specimens are impractical for fracture toughness testing of each RPV steel, but the evolution of elastic-plastic fracture mechanics has led to the use of relatively small test specimens to achieve acceptable cleavage fracture toughness measurements, KJc, in the transition temperature range. Accompanying this evolution is the employment of the Weibull distribution function to model the scatter of fracture toughness values in the transition range. Thus, a probabilistic-based bound for a given data population can be made. Further, it has been demonstrated by Wallin that the probabilistic-based estimates of median fracture toughness of ferritic steels tend to form transition curves of the same shape, the so-called 'master curve', normalized to one common specimen size, namely the 1T [i.e., 1.0-in
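Wallin's master curve has a fixed analytic shape, later standardized in ASTM E1921; only the reference temperature T0 is material dependent. A sketch of the median curve and the 1T size adjustment (formulas per E1921; the sample temperatures below are illustrative):

```python
import math

def median_kjc_1t(temp_c, t0_c):
    """Median 1T-size cleavage fracture toughness, MPa*sqrt(m):
    K_Jc(med) = 30 + 70 * exp(0.019 * (T - T0))."""
    return 30.0 + 70.0 * math.exp(0.019 * (temp_c - t0_c))

def size_adjust_to_1t(k_jc, thickness_mm):
    """Convert a measured K_Jc to the 1T (25.4 mm) reference thickness:
    K(1T) = 20 + (K - 20) * (B / 25.4)**(1/4)."""
    return 20.0 + (k_jc - 20.0) * (thickness_mm / 25.4) ** 0.25

# At T = T0 the median toughness is 100 MPa*sqrt(m) by construction;
# irradiation shifts T0 upward without changing the curve shape.
print(median_kjc_1t(temp_c=-50.0, t0_c=-50.0))  # 100.0
```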

  9. Equivalence in Ventilation and Indoor Air Quality

    Energy Technology Data Exchange (ETDEWEB)

    Sherman, Max; Walker, Iain; Logue, Jennifer

    2011-08-01

    We ventilate buildings to provide acceptable indoor air quality (IAQ). Ventilation standards (such as American Society of Heating, Refrigerating, and Air-Conditioning Engineers [ASHRAE] Standard 62) specify minimum ventilation rates without taking into account the impact of those rates on IAQ. Innovative ventilation management is often a desirable element of reducing energy consumption or improving IAQ or comfort. Variable ventilation is one innovative strategy. To use variable ventilation in a way that meets standards, it is necessary to have a method for determining equivalence in terms of either ventilation or indoor air quality. This study develops methods to calculate either equivalent ventilation or equivalent IAQ. We demonstrate that equivalent ventilation can be used as the basis for dynamic ventilation control, reducing peak load and infiltration of outdoor contaminants. We also show that equivalent IAQ could allow some contaminants to exceed current standards if other contaminants are more stringently controlled.
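The equivalence concept above can be illustrated with a toy single-zone model (a sketch only; this is not the ASHRAE 62.2 procedure, and all rates and values are invented): for a contaminant emitted at constant rate S into volume V, integrate dC/dt = S/V - A(t)·C under a repeating hourly air-change schedule A(t) and compare the time-averaged concentration, a proxy for occupant dose, against a constant-rate baseline.

```python
# Toy single-zone dose comparison (illustrative only, not ASHRAE 62.2):
# a variable schedule is "equivalent" to a constant one if it yields the
# same time-averaged contaminant concentration (a proxy for dose).

def mean_concentration(ach_schedule, source=1.0, volume=1.0, dt=0.01):
    """Explicit-Euler integration of dC/dt = S/V - A(t)*C over many
    repeats of an hourly air-change schedule; returns the mean C."""
    c, total, n = 0.0, 0.0, 0
    for _ in range(200):                      # repeat until effectively periodic
        for ach in ach_schedule:              # one schedule entry = one hour
            for _ in range(int(1.0 / dt)):
                c += dt * (source / volume - ach * c)
                total += c
                n += 1
    return total / n

constant = mean_concentration([0.5] * 24)                 # steady 0.5 ACH
variable = mean_concentration([0.25] * 12 + [1.0] * 12)   # night setback, day boost
```

Although the variable schedule here has a higher average air-change rate (0.625 vs. 0.5 ACH), its time-averaged concentration comes out higher, which is why equivalence must be defined on exposure rather than on the average ventilation rate alone.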

  10. Computer modeling the boron compound factor in normal brain tissue

    International Nuclear Information System (INIS)

    Gavin, P.R.; Huiskamp, R.; Wheeler, F.J.; Griebenow, M.L.

    1993-01-01

    The macroscopic distribution of borocaptate sodium (Na 2 B 12 H 11 SH or BSH) in normal tissues has been determined and can be accurately predicted from the blood concentration. The compound para-borono-phenylalanine (p-BPA) has also been studied in dogs and normal tissue distribution has been determined. The total physical dose required to reach a biological isoeffect appears to increase directly as the proportion of boron capture dose increases. This effect, together with knowledge of the macrodistribution, led to estimates of the influence of the microdistribution of the BSH compound. This paper reports a computer model that was used to predict the compound factor for BSH and p-BPA and, hence, the equivalent radiation in normal tissues. The compound factor would need to be calculated for other compounds with different distributions. This information is needed to design appropriate normal tissue tolerance studies for different organ systems and/or different boron compounds

  11. Tools to identify linear combination of prognostic factors which maximizes area under receiver operator curve.

    Science.gov (United States)

    Todor, Nicolae; Todor, Irina; Săplăcan, Gavril

    2014-01-01

    The linear combination of variables is an attractive method in many medical analyses targeting a score to classify patients. For ROC curves, the most popular problem is to identify the linear combination which maximizes the area under the curve (AUC). This problem is completely solved when normality assumptions are met. With no normality assumption, search algorithms are avoided because it is accepted that AUC must be evaluated n^d times, where n is the number of distinct observations and d is the number of variables. For d = 2, using particularities of the AUC formula, we describe an algorithm which lowers the number of AUC evaluations from n^2 to n(n-1) + 1. For d > 2 our proposed solution is an approximate method that considers equidistant points on the unit sphere in R^d at which AUC is evaluated. The algorithms were applied to data from our lab to predict response to treatment by a set of molecular markers in cervical cancer patients. To evaluate the strength of our algorithms, a simulation was added. When normality does not hold, the presented algorithms are feasible. For many variables computation time increases but remains acceptable.
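For the d = 2 case, the idea of scanning candidate directions can be sketched as follows (an illustrative angle grid, not the paper's n(n-1)+1 enumeration; all data invented). Each direction w(t) = (cos t, sin t) induces scores whose AUC is the Mann-Whitney statistic:

```python
import math

def auc(pos_scores, neg_scores):
    """Mann-Whitney AUC: P(score_pos > score_neg), ties counted 1/2."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            wins += 1.0 if p > q else (0.5 if p == q else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

def best_direction(pos, neg, n_angles=360):
    """Scan directions on the unit circle; sign flips handled via 1 - AUC."""
    best_auc, best_w = 0.0, None
    for k in range(n_angles):
        t = math.pi * k / n_angles            # [0, pi) suffices up to sign
        w = (math.cos(t), math.sin(t))
        ps = [w[0] * x + w[1] * y for x, y in pos]
        ns = [w[0] * x + w[1] * y for x, y in neg]
        a = auc(ps, ns)
        a = max(a, 1.0 - a)                   # reversing w reverses the AUC
        if a > best_auc:
            best_auc, best_w = a, w
    return best_auc, best_w
```

For d > 2, the same scan generalizes to (approximately) equidistant points on the unit sphere in R^d, which is the approximate method the abstract describes.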

  12. How Effectively Do People Remember Voice Disordered Speech? An Investigation of the Serial-Position Curve

    Directory of Open Access Journals (Sweden)

    Scott R. Schroeder

    2018-01-01

    We examined how well typical adult listeners remember the speech of a person with a voice disorder (relative to that of a person without a voice disorder). Participants (n = 40) listened to two lists of words (one list uttered in a disordered voice and the other list uttered in a normal voice). After each list, participants completed a free recall test, in which they tried to remember as many words as they could. While the total number of words recalled did not differ between the disordered voice condition and the normal voice condition, an investigation of the serial-position curve revealed a difference. In the normal voice condition, a parabolic (i.e., u-shaped) serial-position curve was observed, with a significant primacy effect (i.e., the beginning of the list was remembered better than the middle) and a significant recency effect (i.e., the end of the list was remembered better than the middle). In contrast, in the disordered voice condition, while there was a significant recency effect, no primacy effect was present. Thus, the increased ability to remember the first words uttered by a speaker (relative to subsequent words) may disappear when the speaker has a voice disorder. Explanations and implications of this finding are discussed.

  13. Orientifold Planar Equivalence: The Chiral Condensate

    DEFF Research Database (Denmark)

    Armoni, Adi; Lucini, Biagio; Patella, Agostino

    2008-01-01

    The recently introduced orientifold planar equivalence is a promising tool for solving non-perturbative problems in QCD. One of the predictions of orientifold planar equivalence is that the chiral condensates of a theory with $N_f$ flavours of Dirac fermions in the symmetric (or antisymmetric...

  14. 7 CFR 1005.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1005.54 Section 1005.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1005.54 Equivalent price. See § 1000.54. Uniform Prices ...

  15. 7 CFR 1126.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1126.54 Section 1126.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1126.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  16. 7 CFR 1001.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1001.54 Section 1001.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1001.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  17. 7 CFR 1032.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1032.54 Section 1032.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1032.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  18. 7 CFR 1033.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1033.54 Section 1033.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1033.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  19. 7 CFR 1131.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1131.54 Section 1131.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1131.54 Equivalent price. See § 1000.54. Uniform Prices ...

  20. 7 CFR 1006.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1006.54 Section 1006.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1006.54 Equivalent price. See § 1000.54. Uniform Prices ...

  1. 7 CFR 1007.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1007.54 Section 1007.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1007.54 Equivalent price. See § 1000.54. Uniform Prices ...

  2. Uroflowmetry in neurologically normal children with voiding disorders

    DEFF Research Database (Denmark)

    Jensen, K M; Nielsen, K.K.; Kristensen, E S

    1985-01-01

    of neurological deficits underwent a complete diagnostic program including intravenous urography, voiding cystography and cystoscopy as well as spontaneous uroflowmetry, cystometry-EMG and pressure-flow-EMG study. The incidence of dyssynergia was 22%. However, neither the flow curve pattern nor single flow variables were able to identify children with dyssynergia. Consequently, uroflowmetry seems inefficient in screening for dyssynergia in neurologically normal children with voiding disorders in the absence of anatomical bladder outlet obstruction.

  3. Analysis of the static pressure volume curve of the lung in experimentally induced pulmonary damage by CT-densitometry

    International Nuclear Information System (INIS)

    David, M.; Karmrodt, J.; Herwelling, A.; Bletz, C.; David, S.; Heussel, C.P.; Markstaller, K.

    2005-01-01

    Purpose: To study quantitative changes of lung density distributions when recording in- and expiratory static pressure-volume curves by single-slice computed tomography (CT). Materials and Methods: Static in- and expiratory pressure-volume curves (0 to 1000 ml, increments of 100 ml) were obtained in random order in 10 pigs after induction of lung damage by saline lavage. Simultaneously, CT acquisitions (slice thickness 1 mm, temporal increment 2 s) were performed in a single slice (3 cm below the carina). In each CT image, lung segmentation and planimetry of defined density ranges were performed. The lung density ranges were defined as: hyperinflated (-1024 to -910 HU), normally aerated (-910 to -600 HU), poorly aerated (-600 to -300 HU), and non-aerated (-300 to 200 HU) lung. Fractional areas of the defined density ranges, in percentage of total lung area, were compared to the recorded volume increments and airway pressures (atmospheric pressure, lower inflection point (LIP), LIP*0.5, LIP*1.5, peak airway pressure) of in- and expiratory pressure-volume curves. Results: Quantitative analysis of the defined density ranges showed no differences between in- and expiratory pressure-volume curves. The amount of poorly aerated lung decreased and that of normally aerated lung increased steadily as airway pressure and volume were increased during inspiratory pressure-volume curves, and vice versa during expiratory pressure-volume loops. Conclusion: Recruitment and derecruitment of lung atelectasis during registration of static in- and expiratory pressure-volume loops occurred steadily, but not in a stepwise manner. CT was shown to be an appropriate method to analyse this recruitment process. (orig.)

  4. The Complexity of Identifying Large Equivalence Classes

    DEFF Research Database (Denmark)

    Skyum, Sven; Frandsen, Gudmund Skovbjerg; Miltersen, Peter Bro

    1999-01-01

    We prove that at least (3k-4)/(k(2k-3)) · (n choose 2) - O(k) equivalence tests and no more than (2/k) · (n choose 2) + O(n) equivalence tests are needed in the worst case to identify the equivalence classes with at least k members in a set of n elements. The upper bound is an improvement by a factor 2 compared to known res...

  5. Association of Body Composition with Curve Severity in Children and Adolescents with Idiopathic Scoliosis (IS)

    Directory of Open Access Journals (Sweden)

    Edyta Matusik

    2016-01-01

    The link between scoliotic deformity and body composition assessed with bioelectrical impedance analysis (BIA) has not been well researched. The objective of this study was to correlate the extent of scoliotic-curve severity with the anthropometrical status of patients with idiopathic scoliosis (IS) based on standard anthropometric measurements and BIA. The study encompassed 279 IS patients (224 girls/55 boys), aged 14.21 ± 2.75 years. Scoliotic curve severity assessed by Cobb's angle was categorized as moderate (10°-39°) or severe (≥40°). Corrected height, weight, waist and hip circumferences were measured, and body mass index (BMI), corrected height z-score, BMI z-score, waist/height ratio (WHtR) and waist/hip ratio (WHR) were calculated for the entire group. Body composition parameters, fat mass (FAT), fat-free mass (FFM) and predicted muscle mass (PMM), were determined using a bioelectrical impedance analyzer. The mean Cobb angle was 19.96° ± 7.92° in the moderate group and 52.36° ± 12.54° in the severe group. The corrected body heights, body weights and BMIs were significantly higher in the severe IS group than in the moderate group (p < 0.05). Significantly higher FAT and lower FFM and PMM were observed in the severe IS group (p < 0.05). The corrected heights and weights were significantly higher in patients with severe IS and normal weight (p < 0.01). Normal-weight and overweight patients with severe IS had significantly higher adiposity levels, assessed by FAT, FFM and PMM for normal weight and by BMI, BMI z-score, WHtR, FAT and PMM for overweight, respectively. Overweight IS patients were significantly younger and taller than underweight and normal-weight patients. Scoliotic curve severity is significantly related to the degree of adiposity in IS patients. BMI z-score, WHtR and BIA seem to be useful tools for determining baseline anthropometric characteristics of IS children.

  6. Premixed Flames Under Microgravity and Normal Gravity Conditions

    Science.gov (United States)

    Krikunova, Anastasia I.; Son, Eduard E.

    2018-03-01

    Premixed conical CH4-air flames were studied experimentally and numerically under normal gravity, reversed gravity and microgravity conditions. Low-gravity experiments were performed in a drop tower. A classical Bunsen-type burner was used to investigate the influence of gravity on the combustion process. The mixture equivalence ratio was varied from 0.8 to 1.3. A wide range of flow velocities allowed the study of both laminar and weakly turbulized flames. High-speed recording of flame chemiluminescence was used as the diagnostic. The investigations were performed at atmospheric pressure. As results, normalized flame height and laminar flame speed were measured, and features of flame instabilities were shown. Low- and high-frequency flame instabilities (oscillations) have various origins, such as velocity fluctuations, preferential-diffusion instability, and hydrodynamic and Rayleigh-Taylor instabilities, which were explored and demonstrated.
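For a Bunsen-type cone, laminar flame speed is commonly estimated from the cone half-angle via the textbook relation S_L = U sin α; a minimal sketch of that relation with invented dimensions (not the authors' data or method):

```python
import math

def laminar_flame_speed(mean_flow_velocity, burner_radius, flame_height):
    """Ideal-cone estimate for a Bunsen flame: S_L = U * sin(alpha),
    with sin(alpha) = r / sqrt(r**2 + h**2) for cone half-angle alpha."""
    sin_alpha = burner_radius / math.hypot(burner_radius, flame_height)
    return mean_flow_velocity * sin_alpha

# invented example: U = 2.0 m/s, r = 5 mm, h = 25 mm
s_l = laminar_flame_speed(2.0, 0.005, 0.025)
```

A taller cone at the same flow velocity implies a smaller half-angle and hence a slower flame, which is the basis of the normalized-flame-height measurements mentioned above.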

  7. Part 5: Receiver Operating Characteristic Curve and Area under the Curve

    Directory of Open Access Journals (Sweden)

    Saeed Safari

    2016-04-01

    Multiple diagnostic tools are used by emergency physicians every day. In addition, new tools are evaluated to obtain more accurate methods and to reduce the time or cost of conventional ones. In the previous parts of this educational series, we described diagnostic performance characteristics of diagnostic tests, including sensitivity, specificity, positive and negative predictive values, and likelihood ratios. The receiver operating characteristic (ROC) curve is a graphical presentation of screening characteristics. The ROC curve is used to determine the best cutoff point and to compare two or more tests or observers by measuring the area under the curve (AUC). In this part of our educational series, we explain the ROC curve and two methods to determine the best cutoff value.
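As a concrete illustration of these concepts (a hedged sketch with invented scores; the Youden index shown is only one of several cutoff methods), the empirical ROC, its AUC by the trapezoidal rule, and a best cutoff can be computed as:

```python
# Empirical ROC from raw test scores; higher score = more likely diseased.

def roc_points(diseased, healthy):
    """(FPR, sensitivity) pairs as the cutoff sweeps from high to low."""
    cuts = sorted(set(diseased) | set(healthy), reverse=True)
    pts = [(0.0, 0.0)]
    for c in cuts:
        se = sum(s >= c for s in diseased) / len(diseased)   # sensitivity
        fpr = sum(s >= c for s in healthy) / len(healthy)    # 1 - specificity
        pts.append((fpr, se))
    return pts

def auc_trapezoid(pts):
    """Area under the empirical ROC by the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def best_cutoff(diseased, healthy):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    cuts = sorted(set(diseased) | set(healthy))
    def j(c):
        se = sum(s >= c for s in diseased) / len(diseased)
        sp = sum(s < c for s in healthy) / len(healthy)
        return se + sp - 1
    return max(cuts, key=j)
```

The other common cutoff method, choosing the point closest to the top-left corner (0, 1), can be added by minimizing sqrt(FPR² + (1 - Se)²) over the same candidate cutoffs.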

  8. Ayurvedic medicine offers a good alternative to glucosamine and celecoxib in the treatment of symptomatic knee osteoarthritis: a randomized, double-blind, controlled equivalence drug trial.

    Science.gov (United States)

    Chopra, Arvind; Saluja, Manjit; Tillu, Girish; Sarmukkaddam, Sanjeev; Venugopalan, Anuradha; Narsimulu, Gumdal; Handa, Rohini; Sumantran, Venil; Raut, Ashwinikumar; Bichile, Lata; Joshi, Kalpana; Patwardhan, Bhushan

    2013-08-01

    To demonstrate clinical equivalence between two standardized Ayurveda (India) formulations (SGCG and SGC), glucosamine and celecoxib (NSAID). Ayurvedic formulations (extracts of Tinospora cordifolia, Zingiber officinale, Emblica officinalis, Boswellia serrata), glucosamine sulphate (2 g daily) and celecoxib (200 mg daily) were evaluated in a randomized, double-blind, parallel-efficacy, four-arm, multicentre equivalence drug trial of 24 weeks duration. A total of 440 eligible patients suffering from symptomatic knee OA were enrolled and monitored as per protocol. Primary efficacy variables were active body weight-bearing pain (visual analogue scale) and modified WOMAC pain and functional difficulty Likert score (for knee and hip); the corresponding a priori equivalence ranges were ±1.5 cm, ±2.5 and ±8.5. Differences between the intervention arms for mean changes in primary efficacy variables were within the equivalence range by intent-to-treat and per protocol analysis. Twenty-six patients showed asymptomatic increased serum glutamic pyruvic transaminase (SGPT) with otherwise normal liver function; seven patients (Ayurvedic intervention) were withdrawn and SGPT normalized after stopping the drug. Other adverse events were mild and did not differ by intervention. Overall, 28% of patients withdrew from the study. In this 6-month controlled study of knee OA, Ayurvedic formulations (especially SGCG) significantly reduced knee pain and improved knee function and were equivalent to glucosamine and celecoxib. The unexpected SGPT rise requires further safety assessment. Clinical Drug Trial Registry-India, www.ctri.nic.in, CTRI/2008/091/000063.
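The trial's equivalence logic, declaring equivalence when the confidence interval of the between-arm mean difference lies entirely within the a priori range, can be sketched as follows (a conservative 95%-CI normal approximation with invented numbers, not the trial's data or its exact statistical plan):

```python
# Equivalence-range check (illustrative; real trials use TOST and
# prespecified intent-to-treat / per-protocol analysis sets).
import math
import statistics

def equivalent(arm_a, arm_b, margin):
    """True if the 95% CI of the mean difference lies within +/- margin."""
    na, nb = len(arm_a), len(arm_b)
    diff = statistics.fmean(arm_a) - statistics.fmean(arm_b)
    se = math.sqrt(statistics.variance(arm_a) / na + statistics.variance(arm_b) / nb)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se   # normal approx., large n
    return -margin < lo and hi < margin
```

With the trial's ±1.5 cm VAS pain margin, two arms with nearly identical mean improvements would pass, while a large between-arm difference would fail, regardless of whether either difference is "statistically significant" in the superiority sense.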

  9. [Chinese neonatal birth weight curve for different gestational age].

    Science.gov (United States)

    Zhu, Li; Zhang, Rong; Zhang, Shulian; Shi, Wenjing; Yan, Weili; Wang, Xiaoli; Lyu, Qin; Liu, Ling; Zhou, Qin; Qiu, Quanfang; Li, Xiaoying; He, Haiying; Wang, Jimei; Li, Ruichun; Lu, Jiarong; Yin, Zhaoqing; Su, Ping; Lin, Xinzhu; Guo, Fang; Zhang, Hui; Li, Shujun; Xin, Hua; Han, Yanqing; Wang, Hongyun; Chen, Dongmei; Li, Zhankui; Wang, Huiqin; Qiu, Yinping; Liu, Huayan; Yang, Jie; Yang, Xiaoli; Li, Mingxia; Li, Wenjing; Han, Shuping; Cao, Bei; Yi, Bin; Zhang, Yihui; Chen, Chao

    2015-02-01

    Since 1986, the reference of birth weight for gestational age has not been updated. The aim of this study was to set up a Chinese neonatal network to investigate the current situation of birth weight in China, especially preterm birth weight, and to develop a new reference and curve for birth weight for gestational age. A nationwide neonatology network was established in China. The survey was carried out in 63 hospitals of 23 provinces, municipalities and autonomous regions. We continuously collected information on live births in participating hospitals during the study period of 2011-2014. Data describing birth weight and gestational age were collected prospectively. Each newborn's birth weight was measured with an electronic scale within 2 hours after birth, with the baby undressed. The evaluation of gestational age was based on the combination of the mother's last menstrual period, first-trimester ultrasound and a gestational age scoring system. The growth curve was fitted using the LMSP method, implemented in the GAMLSS 1.9-4 package in R 2.11.1. A total of 159 334 newborn infants were enrolled in this study: 84 447 males and 74 907 females. The mean birth weight was (3 232 ± 555) g; the mean birth weight of male newborns was (3 271 ± 576) g and that of female newborns (3 188 ± 528) g. Testing of the variables' distributions suggested that gestational age and birth weight did not follow normal distributions; the optimal distribution for them was the BCT distribution. Q-Q plots and worm plots indicated that the fitted curve described the distribution well. Male and female neonatal birth weight curves were developed using the same method. The GAMLSS method was used to establish nationwide neonatal birth weight curves, the first update of the birth weight reference in 28 years.

  10. Single- and two-phase flow simulation based on equivalent pore network extracted from micro-CT images of sandstone core.

    Science.gov (United States)

    Song, Rui; Liu, Jianjun; Cui, Mengmeng

    2016-01-01

    Due to the intricate structure of porous rocks, relationships between porosity or saturation and the petrophysical transport properties classically used for reservoir evaluation and recovery strategies are either very complex or nonexistent. Thus, the pore network model extracted from the natural porous medium is emphasized as a breakthrough for predicting fluid transport properties in complex micro pore structures. This paper presents a modified method of extracting the equivalent pore network model from three-dimensional micro computed tomography images based on the maximum ball algorithm. The partitioning of pores and throats is improved to avoid tremendous memory usage when extracting the equivalent pore network model. The porosity calculated from the extracted pore network model agrees well with that of the original sandstone sample. Instead of Poiseuille's law used in the original work, the Lattice-Boltzmann method is employed to simulate single- and two-phase flow in the extracted pore network. Good agreement is obtained between the simulated and experimental relative permeability-saturation curves.

  11. 7 CFR 1124.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1124.54 Section 1124.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Class Prices § 1124.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  12. EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  13. A calculational method of photon dose equivalent based on the revised technical standards of radiological protection law

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Suzuki, Tomoo

    1991-03-01

    The effective conversion factors for photons from 0.03 to 10 MeV were calculated to convert the absorbed dose in air to the 1 cm, 3 mm, and 70 μm depth dose equivalents behind iron, lead, concrete, and water shields up to 30 mfp thickness. The effective conversion factor changes slightly with the thickness of the shields and becomes nearly constant at 5 to 10 mfp. The difference in the effective conversion factor was less than 2% between plane-normal and point-isotropic geometries. It is suggested that the present method, which makes the existing database of exposure buildup factors useful, would be very effective compared to a new evaluation of dose equivalent buildup factors. 5 refs., 7 figs., 22 tabs

  14. A non-parametric conditional bivariate reference region with an application to height/weight measurements on normal girls

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2009-01-01

    A conceptually simple two-dimensional conditional reference curve is described. The curve gives a decision basis for determining whether a bivariate response from an individual is "normal" or "abnormal" when taking into account that a third (conditioning) variable may influence the bivariate response. The reference curve is not only characterized analytically but also by geometric properties that are easily communicated to medical doctors - the users of such curves. The reference curve estimator is completely non-parametric, so no distributional assumptions are needed about the two-dimensional response. An example that will serve to motivate and illustrate the reference is the study of the height/weight distribution of 7-8-year-old Danish school girls born in 1930, 1950, or 1970.

  15. Quantum equivalence principle without mass superselection

    International Nuclear Information System (INIS)

    Hernandez-Coronado, H.; Okon, E.

    2013-01-01

    The standard argument for the validity of Einstein's equivalence principle in a non-relativistic quantum context involves the application of a mass superselection rule. The objective of this work is to show that, contrary to widespread opinion, the compatibility between the equivalence principle and quantum mechanics does not depend on the introduction of such a restriction. For this purpose, we develop a formalism based on the extended Galileo group, which allows for a consistent handling of superpositions of different masses, and show that, within such scheme, mass superpositions behave as they should in order to obey the equivalence principle. - Highlights: • We propose a formalism for consistently handling, within a non-relativistic quantum context, superpositions of states with different masses. • The formalism utilizes the extended Galileo group, in which mass is a generator. • The proposed formalism allows for the equivalence principle to be satisfied without the need of imposing a mass superselection rule

  16. Signature Curves Statistics of DNA Supercoils

    OpenAIRE

    Shakiban, Cheri; Lloyd, Peter

    2004-01-01

    In this paper we describe the Euclidean signature curves for two dimensional closed curves in the plane and their generalization to closed space curves. The focus will be on discrete numerical methods for approximating such curves. Further we will apply these numerical methods to plot the signature curves related to three-dimensional simulated DNA supercoils. Our primary focus will be on statistical analysis of the data generated for the signature curves of the supercoils. We will try to esta...

  17. Method of construction spatial transition curve

    Directory of Open Access Journals (Sweden)

    S.V. Didanov

    2013-04-01

    Purpose. The movement of rail transport (rolling stock speed, traffic safety, etc.) largely depends on the quality of the track. A special role here is played by the transition curve, which ensures a smooth transition from a linear to a circular section of road. The article deals with modeling of a spatial transition curve based on a parabolic distribution of curvature and torsion. This is a continuation of research conducted by the authors regarding the spatial modeling of curved contours. Methodology. Construction of the spatial transition curve uses numerical methods for solving nonlinear integral equations, where the initial data are the coordinates of the starting and ending points of the future curve, and the inclination of the tangent and the deviation of the curve from the tangent plane at these points. The solutions of the system for the numerical method are the partial derivatives of the equations with respect to the unknown parameters of the law of change of torsion and the length of the transition curve. Findings. The parametric equations of the spatial transition curve are calculated by finding the unknown coefficients of the parabolic distribution of curvature and torsion, as well as the spatial length of the transition curve. Originality. A method for constructing the spatial transition curve is devised and, based on it, software for geometric modeling of spatial transition curves of railway track with specified deviations of the curve from the tangent plane. Practical value. The resulting curve can be applied in any sector of the economy where it is necessary to ensure a smooth transition from a linear to a circular section of a curved spatial bypass. Examples include the transition curve in the construction of a railway line, road, pipe, profile, a flat section of the working blades of a turbine or compressor, a ship, a plane, a car, etc.

  18. Photoelectic BV Light Curves of Algol and the Interpretations of the Light Curves

    Directory of Open Access Journals (Sweden)

    Ho-Il Kim

    1985-06-01

    Standardized B and V photoelectric light curves of Algol were made from observations obtained during 1982-84 with the 40-cm and 61-cm reflectors of Yonsei University Observatory. These light curves show asymmetry between the ascending and descending shoulders. The ascending shoulder is 0.02 mag brighter than the descending shoulder in the V light curve and 0.03 mag in the B light curve. These asymmetric light curves are interpreted as the result of an inhomogeneous energy distribution on the surface of one star of the eclipsing pair rather than of a gaseous stream flowing from the K0IV to the B8V star. The 180-year periodicity, the so-called great inequality, is most likely the result proposed by Kim et al. (1983) that abrupt and discrete mass losses of the cooler component may be the cause of this orbital change. The amount of mass loss deduced from these discrete period changes turned out to be of the order of 10^(-6)-10^(-5) M_sun.

  19. Probabilistic Rainfall Intensity-Duration-Frequency Curves for the October 2015 Flooding in South Carolina

    Science.gov (United States)

    Phillips, R.; Samadi, S. Z.; Meadows, M.

    2017-12-01

    The potential for the intensity of extreme rainfall to increase with climate change nonstationarity has emerged as a prevailing issue for the design of engineering infrastructure, underscoring the need to better characterize the statistical assumptions underlying hydrological frequency analysis. The focus of this study is on developing probabilistic rainfall intensity-duration-frequency (IDF) curves for the major catchments in South Carolina (SC), where the October 02-05, 2015 floods caused infrastructure damage and the loss of several lives. Several probability distributions, including the Weibull, generalized extreme value (GEV), generalized Pareto (GP), Gumbel, Fréchet, normal, and log-normal functions, were fitted to the short-duration (i.e., 24-hr) intense rainfall. Analysis suggests that the GEV distribution provided the most adequate fit to the rainfall records. Rainfall frequency analysis indicated return periods above 500 years for urban drainage systems, with a maximum return period of approximately 2,744 years, whereas rainfall magnitude was much lower in rural catchments. Further, the return levels (i.e., for 2-, 20-, 50-, 100-, 500-, and 1000-year events) computed by the Monte Carlo method were consistently higher than those of the NOAA design IDF curves. Given the potential increase in the magnitude of intense rainfall, current IDF curves can substantially underestimate the frequency of extremes, indicating the susceptibility of storm drainage and flood control structures in SC that were designed under assumptions of a stationary climate.
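The return-level computation behind such IDF points can be sketched with the Gumbel distribution (a special case of the GEV that the study found best-fitting) and a simple method-of-moments fit; the annual-maximum series below is invented, not the South Carolina record:

```python
# Gumbel return levels from annual maxima (illustrative sketch only).
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel fit: scale from stdev, location from mean."""
    beta = statistics.stdev(annual_maxima) * math.sqrt(6) / math.pi
    mu = statistics.fmean(annual_maxima) - EULER_GAMMA * beta
    return mu, beta

def return_level(mu, beta, t_years):
    """x_T with F(x_T) = 1 - 1/T for the Gumbel CDF exp(-exp(-(x-mu)/beta))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / t_years))

maxima = [95, 110, 88, 130, 102, 99, 145, 120, 108, 92]   # invented 24-h maxima (mm)
mu, beta = gumbel_fit(maxima)
x100 = return_level(mu, beta, 100)                        # 100-year 24-h depth
```

Fitting the full three-parameter GEV, as the study did, additionally estimates a shape parameter that controls how heavy the upper tail is; with a heavy-tailed fit the long-return-period levels grow even faster than the Gumbel sketch suggests.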

  20. Arterial pressure measurement: Is the envelope curve of the oscillometric method influenced by arterial stiffness?

    International Nuclear Information System (INIS)

    Gelido, G; Angiletta, S; Pujalte, A; Quiroga, P; Cornes, P; Craiem, D

    2007-01-01

    Measurement of peripheral arterial pressure using the oscillometric method is commonly performed by professionals as well as by patients in their homes. This non-invasive automatic method is fast and efficient, and the required equipment is affordable. The measurement method consists of obtaining parameters from a calibrated decreasing curve that is modulated by heart beats, which appear when arterial pressure reaches the cuff pressure. Diastolic, mean and systolic pressures are obtained by calculating particular instants from the heart-beat envelope curve. In this article we analyze the envelope of this amplified curve to find out whether its morphology is related to arterial stiffness in patients. We found, in 33 volunteers, that the envelope waveform width correlates with systolic pressure (r=0.4, p<0.05), with pulse pressure (r=0.6, p<0.05) and with pulse pressure normalized to systolic pressure (r=0.6, p<0.05). We believe that the morphology of the heart-beat envelope curve obtained with the oscillometric method for peripheral pressure measurement depends on arterial stiffness and can be used to enhance pressure measurements.

  1. A Journey Between Two Curves

    Directory of Open Access Journals (Sweden)

    Sergey A. Cherkis

    2007-03-01

    Full Text Available A typical solution of an integrable system is described in terms of a holomorphic curve and a line bundle over it. The curve provides the action variables while the time evolution is a linear flow on the curve's Jacobian. Even though the system of Nahm equations is closely related to the Hitchin system, the curves appearing in these two cases have very different nature. The former can be described in terms of some classical scattering problem while the latter provides a solution to some Seiberg-Witten gauge theory. This note identifies the setup in which one can formulate the question of relating the two curves.

  2. Asymptotic and numerical prediction of current-voltage curves for an organic bilayer solar cell under varying illumination and comparison to the Shockley equivalent circuit

    KAUST Repository

    Foster, J. M.; Kirkpatrick, J.; Richardson, G.

    2013-01-01

    In this study, a drift-diffusion model is used to derive the current-voltage curves of an organic bilayer solar cell consisting of slabs of electron acceptor and electron donor materials sandwiched together between current collectors. A simplified

  3. Tension Behaviour on the Connection of the Cold-Formed Cut-Curved Steel Channel Section

    Science.gov (United States)

    Sani, Mohd Syahrul Hisyam Mohd; Muftah, Fadhluhartini; Fakri Muda, Mohd; Siang Tan, Cher

    2017-08-01

    Cold-formed steel (CFS) is utilised as a non-structural and structural element in construction, especially in residential houses and the roof truss systems of small buildings. CFS offers many advantages along with some disadvantages, such as buckling, which must be prevented in roof truss production. CFS is used as the top chord of the roof truss system, which is normally a slender section strongly susceptible to buckling failure and instability of the structure, so a curved section is produced for the top chord to address the compression member of the roof truss. However, design and production information about the CFS curved channel section is lacking. In this study, the CFS is bent using a cut-curved method because of its ease of production, without the need for skilled labour or high-cost machinery. The tension behaviour of the strengthening method of the cut-curved section, which could be regarded as a connection of the cut-curved section, was tested and analysed. Seven types of connection were selected. From the testing and observation, the specimen with a full weld along the cut section plus a flange element plate with two self-drilling screws (F7A) was noted to have the highest ultimate load. Finally, three alternative methods of connection for the CFS cut-curved section are proposed that could serve as a reference for contractors and further design.

  4. Problems of Equivalence in Shona- English Bilingual Dictionaries

    African Journals Online (AJOL)

    rbr

    translation equivalents in Shona-English dictionaries where lexicographers will be dealing with divergent languages and cultures, traditional practices of lexicography and the absence of reliable ... ideal in translation is to achieve structural and semantic equivalence. Absolute equivalence between any two ...

  5. 10 CFR 474.3 - Petroleum-equivalent fuel economy calculation.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Petroleum-equivalent fuel economy calculation. 474.3..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION § 474.3 Petroleum-equivalent fuel economy calculation. (a) The petroleum-equivalent fuel economy for an electric vehicle is...

  6. Receiver operating characteristic (ROC) curves and the definition of threshold levels to diagnose coronary artery disease on electrocardiographic stress testing. Part II: the use of ROC curves in the choice of electrocardiographic stress test markers of ischaemia.

    Science.gov (United States)

    Marazìa, Stefania; Barnabei, Luca; De Caterina, Raffaele

    2008-01-01

    A common problem in diagnostic medicine, when performing a diagnostic test, is to obtain an accurate discrimination between 'normal' cases and cases with disease, owing to the overlapping distributions of these populations. In clinical practice, it is exceedingly rare that a chosen cut point will achieve perfect discrimination between normal cases and those with disease, and one has to select the best compromise between sensitivity and specificity by comparing the diagnostic performance of different tests or diagnostic criteria available. Receiver operating characteristic (or receiver operator characteristic, ROC) curves allow systematic and intuitively appealing descriptions of the diagnostic performance of a test and a comparison of the performance of different tests or diagnostic criteria. This review will analyse the basic principles underlying ROC curves and their specific application to the choice of optimal parameters on exercise electrocardiographic stress testing. Part II will be devoted to the comparative analysis of various parameters derived from exercise stress testing for the diagnosis of underlying coronary artery disease.
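As a minimal illustration of how an ROC curve is built, the sketch below sweeps a decision threshold over hypothetical test scores for diseased and normal cases, tabulating sensitivity (true-positive rate) against 1 - specificity (false-positive rate), and integrates the area under the curve by the trapezoid rule. The scores and function names are invented for illustration.

```python
def roc_points(scores_pos, scores_neg):
    """Sweep a decision threshold over all observed scores and return
    (false-positive-rate, true-positive-rate) pairs."""
    thresholds = sorted(set(scores_pos + scores_neg), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(s >= t for s in scores_pos) / len(scores_pos)  # sensitivity
        fpr = sum(s >= t for s in scores_neg) / len(scores_neg)  # 1 - specificity
        pts.append((fpr, tpr))
    return pts

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical stress-test scores: higher = more suggestive of disease
diseased = [0.9, 0.8, 0.75, 0.6]
normal_cases = [0.7, 0.5, 0.4, 0.3]
points = roc_points(diseased, normal_cases)
```

Each choice of cut point corresponds to one point on the curve, which is exactly the sensitivity/specificity compromise the abstract describes.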

  7. Bond yield curve construction

    Directory of Open Access Journals (Sweden)

    Kožul Nataša

    2014-01-01

    Full Text Available In the broadest sense, the yield curve indicates the market's view of the evolution of interest rates over time. However, given that the cost of borrowing is closely linked to creditworthiness (ability to repay), different yield curves will apply to different currencies, market sectors, or even individual issuers. As government borrowing is indicative of the interest rate levels available to other market players in a particular country, and considering that bond issuance still remains the dominant form of sovereign debt, this paper describes yield curve construction using bonds. The relationship between zero-coupon yield, par yield and yield to maturity is given, and their usage in determining curve discount factors is described. Their usage in deriving forward rates and pricing related derivative instruments is also discussed.
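The derivation of discount factors mentioned above can be sketched with a standard bootstrap from par yields. Assuming annual-pay par bonds (an assumption for this sketch; the paper treats the general relationships), the n-year discount factor follows from the par condition 1 = c_n * (d_1 + ... + d_n) + d_n:

```python
def bootstrap_discount_factors(par_yields):
    """Bootstrap discount factors from annual-pay par bond yields.
    par_yields[i] is the par coupon (decimal) of the (i+1)-year bond.
    For an n-year par bond: 1 = c_n * sum(d_1..d_n) + d_n."""
    factors = []
    for c in par_yields:
        annuity = sum(factors)                 # sum of earlier discount factors
        d = (1.0 - c * annuity) / (1.0 + c)    # solve the par condition for d_n
        factors.append(d)
    return factors

# Hypothetical par curve: 1y 2%, 2y 2.5%, 3y 3%
dfs = bootstrap_discount_factors([0.02, 0.025, 0.03])
```

The resulting factors reprice each input bond exactly at par, which is the defining property of the bootstrap.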

  8. Using Light Curves to Characterize Size and Shape of Pseudo-Debris

    Science.gov (United States)

    Rodriquez, Heather M.; Abercromby, Kira J.; Jarvis, Kandy S.; Barker, Edwin

    2006-01-01

    Photometric measurements were collected for a new study aimed at estimating orbital debris sizes based on object brightness. To obtain a size from optical measurements, the current practice is to assume an albedo and use a normalized magnitude to calculate optical size. However, assuming a single albedo value may not be valid for all objects or orbit types; material type and orientation can mask an object's true optical cross-section. This experiment used a CCD camera to record data, a 300 W xenon ozone-free collimated light source to simulate solar illumination, and a robotic arm with five degrees of freedom to move the piece of simulated debris through various orientations. The pseudo-debris pieces used in this experiment originate from the European Space Operations Centre's ESOC2 ground test explosion of a mock satellite. A uniformly illuminated white ping-pong ball was used as a zero-magnitude reference. Each debris piece was then moved through specific orientations and rotations to generate a light curve. This paper discusses the results of five different object-based light curves as measured through an x-rotation. Intensity measurements, from which each light curve was generated, were recorded in five-degree increments from zero to 180 degrees. Comparing light curves of different shaped and sized pieces against their characteristic length establishes the start of a database from which an optical size estimation model will be derived in the future.

  9. Course design via Equivalency Theory supports equivalent student grades and satisfaction in online and face-to-face psychology classes

    Directory of Open Access Journals (Sweden)

    David eGarratt-Reed

    2016-05-01

    Full Text Available There has been a recent rapid growth in the number of psychology courses offered online through institutions of higher education. The American Psychological Association (APA) has highlighted the importance of ensuring the effectiveness of online psychology courses. Despite this, there have been inconsistent findings regarding student grades, satisfaction, and retention in online psychology units. Equivalency Theory posits that online and classroom-based learners will attain equivalent learning outcomes when equivalent learning experiences are provided. We present a case study of an online introductory psychology unit designed to provide equivalent learning experiences to the pre-existing face-to-face version of the unit. Academic performance, student feedback, and retention data from 866 Australian undergraduate psychology students were examined to assess whether the online unit produced comparable outcomes to the ‘traditional’ unit delivered face-to-face. Student grades did not significantly differ between modes of delivery, except for a group-work based assessment where online students performed more poorly. Student satisfaction was generally high in both modes of the unit, with group-work the key source of dissatisfaction in the online unit. The results provide partial support for Equivalency Theory. The group-work based assessment did not provide an equivalent learning experience for students in the online unit, highlighting the need for further research to determine effective methods of engaging students in online group activities. Consistent with previous research, retention rates were significantly lower in the online unit, indicating the need to develop effective strategies to increase online retention rates. While this study demonstrates successes in presenting online students with an equivalent learning experience, we recommend that future research investigates means of successfully facilitating collaborative group-work assessment.

  10. Curve Digitizer – A software for multiple curves digitizing

    Directory of Open Access Journals (Sweden)

    Florentin ŞPERLEA

    2010-06-01

    Full Text Available The Curve Digitizer is software that extracts data from an image file representing a graphic and returns them as pairs of numbers which can then be used for further analysis and applications. Numbers can be read on a computer screen, stored in files, or copied on paper. The final result is a data set that can be used with other tools such as MS Excel. Curve Digitizer provides a useful tool for any researcher or engineer interested in quantifying the data displayed graphically. The image file can be obtained by scanning a document.

  11. Measurement of the first Townsend ionization coefficient in a methane-based tissue-equivalent gas

    Energy Technology Data Exchange (ETDEWEB)

    Petri, A.R. [Instituto de Pesquisas Energéticas e Nucleares, Cidade Universitária, 05508-000 São Paulo (Brazil); Gonçalves, J.A.C. [Instituto de Pesquisas Energéticas e Nucleares, Cidade Universitária, 05508-000 São Paulo (Brazil); Departamento de Física, Pontifícia Universidade Católica de São Paulo, 01303-050 São Paulo (Brazil); Mangiarotti, A. [Instituto de Física - Universidade de São Paulo, Cidade Universitária, 05508-080 São Paulo (Brazil); Botelho, S. [Instituto de Pesquisas Energéticas e Nucleares, Cidade Universitária, 05508-000 São Paulo (Brazil); Bueno, C.C., E-mail: ccbueno@ipen.br [Instituto de Pesquisas Energéticas e Nucleares, Cidade Universitária, 05508-000 São Paulo (Brazil)

    2017-03-21

    Tissue-equivalent gases (TEGs), often made of a hydrocarbon, nitrogen, and carbon dioxide, have been employed in microdosimetry for decades. However, data on the first Townsend ionization coefficient (α) in such mixtures are scarce, regardless of the chosen hydrocarbon. In this context, measurements of α in a methane-based tissue-equivalent gas (CH{sub 4} – 64.4%, CO{sub 2} – 32.4%, and N{sub 2} – 3.2%) were performed in a uniform field configuration for density-normalized electric fields (E/N) up to 290 Td. The setup adopted in our previous works was improved for operation at low pressures. The modifications introduced in the apparatus and the experimental technique were validated by comparing our results for the first Townsend ionization coefficient in nitrogen, carbon dioxide, and methane with those from the literature and Magboltz simulations. The behavior of α in the methane-based TEG was consistent with that observed for pure methane. All the experimental results are included in tabular form.

  12. Development of a molecular dynamic based cohesive zone model for prediction of an equivalent material behavior for Al/Al2O3 composite

    Energy Technology Data Exchange (ETDEWEB)

    Sazgar, A. [Center of Excellence in Design, Robotics and Automation, Department of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Movahhedy, M.R., E-mail: movahhed@sharif.edu [Center of Excellence in Design, Robotics and Automation, Department of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Mahnama, M. [School of Mechanical Engineering, University of Tehran, Tehran (Iran, Islamic Republic of); Sohrabpour, S. [Center of Excellence in Design, Robotics and Automation, Department of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)

    2017-01-02

    The interfacial behavior of composites is often simulated using a cohesive zone model (CZM). In this approach, a traction-separation (T-S) relation between the matrix and reinforcement particles, which is often obtained from experimental results, is employed. However, since the determination of this relation from experimental results is difficult, the molecular dynamics (MD) simulation may be used as a virtual environment to obtain this relation. In this study, MD simulations under the normal and shear loadings are used to obtain the interface behavior of Al/Al2O3 composite material and to derive the T-S relation. For better agreement with Al/Al2O3 interfacial behavior, the exponential form of the T-S relation suggested by Needleman [1] is modified to account for thermal effects. The MD results are employed to develop a parameterized cohesive zone model which is implemented in a finite element model of the matrix-particle interactions. Stress-strain curves obtained from simulations under different loading conditions and volume fractions show a close correlation with experimental results. Finally, by studying the effects of strain rate and volume fraction of particles in Al(6061-T6)/Al2O3 composite, an equivalent homogeneous model is introduced which can predict the overall behavior of the composite.
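A Needleman-type exponential traction-separation law of the kind the abstract refers to can be sketched as follows. The thermally modified form and the MD-fitted parameters used by the authors are not reproduced here; the functional form below is a common exponential variant, and the parameter values are illustrative assumptions.

```python
import math

def exponential_traction(delta, sigma_max, delta_c):
    """Exponential traction-separation law (Needleman-type):
    traction rises, peaks at sigma_max when the opening delta equals
    the characteristic separation delta_c, then decays toward zero."""
    return sigma_max * math.e * (delta / delta_c) * math.exp(-delta / delta_c)

# Hypothetical interface parameters for illustration (peak traction, opening)
sigma_max, delta_c = 2.5, 0.4
openings = [i * 0.05 for i in range(41)]
tractions = [exponential_traction(d, sigma_max, delta_c) for d in openings]
```

In a finite element implementation, such a law supplies the cohesive stiffness at each interface integration point; calibrating sigma_max and delta_c against MD pull-off simulations is the step the paper automates.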

  13. Behavioural equivalence for infinite systems - Partially decidable!

    DEFF Research Database (Denmark)

    Sunesen, Kim; Nielsen, Mogens

    1996-01-01

    languages with two generalizations based on traditional approaches capturing non-interleaving behaviour, pomsets representing global causal dependency, and locality representing spatial distribution of events. We first study equivalences on Basic Parallel Processes, BPP, a process calculus equivalent...... of processes between BPP and TCSP, not only are the two equivalences different, but one (locality) is decidable whereas the other (pomsets) is not. The decidability result for locality is proved by a reduction to the reachability problem for Petri nets....

  14. Relations of equivalence of conditioned radioactive waste

    International Nuclear Information System (INIS)

    Kumer, L.; Szeless, A.; Oszuszky, F.

    1982-01-01

    Compensation to the operator of a waste management center for the wastes remaining in its custody, to be provided by the agent that caused the waste, may be assured by establishing a financial valuation (equivalence) of the wastes. Technically and logically, such an equivalence between wastes (or, specifically, between different waste categories) and a financial valuation is reasonable. In this paper, the possibilities for establishing such equivalences are developed, and their suitability for waste management concepts is quantitatively expressed.

  15. Equivalences of real submanifolds in complex space.

    OpenAIRE

    ZAITSEV, DMITRI

    2001-01-01

    We show that for any real-analytic submanifold M in C^N there is a proper real-analytic subvariety V contained in M such that for any p ∈ M \ V, any real-analytic submanifold M′ in C^N, and any p′ ∈ M′, the germs of the submanifolds M and M′ at p and p′, respectively, are formally equivalent if and only if they are biholomorphically equivalent. More general results for k-equivalences are also stated and proved.

  16. Estimating reaction rate constants: comparison between traditional curve fitting and curve resolution

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.

    2000-01-01

    A traditional curve fitting (TCF) algorithm is compared with a classical curve resolution (CCR) approach for estimating reaction rate constants from spectral data obtained over the course of a chemical reaction. In the TCF algorithm, reaction rate constants are estimated from the absorbance-versus-time data.
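For a single first-order reaction, the TCF idea of fitting rate constants to absorbance-versus-time data reduces to a line fit on a log scale. The sketch below, using synthetic noiseless data, illustrates the principle rather than the authors' actual algorithm:

```python
import math

def fit_first_order_rate(times, absorbances):
    """Estimate a first-order rate constant k from absorbance decay
    A(t) = A0 * exp(-k t) by least squares on ln A versus t."""
    logs = [math.log(a) for a in absorbances]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope  # slope of ln A vs t is -k

# Synthetic noiseless data generated with k = 0.30, for illustration
times = [0, 1, 2, 3, 4, 5]
absorb = [1.0 * math.exp(-0.30 * t) for t in times]
k_est = fit_first_order_rate(times, absorb)
```

With real multi-wavelength spectra, the fit is done over the full absorbance matrix, which is where TCF and CCR diverge in how they handle the unknown pure-component spectra.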

  17. A catalog of special plane curves

    CERN Document Server

    Lawrence, J Dennis

    2014-01-01

    Among the largest, finest collections available-illustrated not only once for each curve, but also for various values of any parameters present. Covers general properties of curves and types of derived curves. Curves illustrated by a CalComp digital incremental plotter. 12 illustrations.

  18. Intersection numbers of spectral curves

    CERN Document Server

    Eynard, B.

    2011-01-01

    We compute the symplectic invariants of an arbitrary spectral curve with only one branchpoint in terms of integrals of characteristic classes in the moduli space of curves. Our formula associates to any spectral curve a characteristic class, which is determined by the Laplace transform of the spectral curve. This is a hint to the key role of the Laplace transform in mirror symmetry. When the spectral curve is y=\sqrt{x}, the formula gives the Kontsevich--Witten intersection numbers; when the spectral curve is chosen to be the Lambert function \exp{x}=y\exp{-y}, the formula gives the ELSV formula for Hurwitz numbers; and when one chooses the mirror of C^3 with framing f, i.e. \exp{-x}=\exp{-yf}(1-\exp{-y}), the formula gives the Marino-Vafa formula, i.e. the generating function of Gromov-Witten invariants of C^3. In some sense this formula generalizes the ELSV formula, the Marino-Vafa formula, and the Mumford formula.

  19. A Cp-theory problem book functional equivalencies

    CERN Document Server

    Tkachuk, Vladimir V

    2016-01-01

    This fourth volume in Vladimir Tkachuk's series on Cp-theory gives reasonably complete coverage of the theory of functional equivalencies through 500 carefully selected problems and exercises. By systematically introducing each of the major topics of Cp-theory, the book is intended to bring a dedicated reader from basic topological principles to the frontiers of modern research. The book presents complete and up-to-date information on the preservation of topological properties by homeomorphisms of function spaces. An exhaustive theory of t-equivalent, u-equivalent and l-equivalent spaces is developed from scratch. The reader will also find introductions to the theory of uniform spaces, the theory of locally convex spaces, as well as the theory of inverse systems and dimension theory. Moreover, Kolmogorov's solution of Hilbert's Problem 13 is included, as it is needed for the presentation of the theory of l-equivalent spaces. This volume contains the most important classical re...

  20. Equivalence relations for the 9972-9975 SARP

    International Nuclear Information System (INIS)

    Niemer, K.A.; Frost, R.L.

    1994-10-01

    Equivalence relations required to determine mass limits for mixtures of nuclides for the Safety Analysis Report for Packaging (SARP) of the Savannah River Site 9972, 9973, 9974, and 9975 shipping casks were calculated. The systems analyzed included aqueous spheres, homogeneous metal spheres, and metal ball-and-shell configurations, all surrounded by an effectively infinite stainless steel or water reflector. Comparison of the equivalence calculations with the rule-of-fractions showed conservative agreement for aqueous solutions, both conservative and non-conservative agreement for the metal homogeneous sphere systems, and non-conservative agreement for the majority of metal ball-and-shell systems. Equivalence factors for the aqueous solutions and homogeneous metal spheres were calculated. The equivalence factors for the non-conservative metal homogeneous sphere systems were adjusted so that they were conservative. No equivalence factors were calculated for the ball-and-shell systems, since the SARP assumes that only homogeneous or uniformly distributed material will be shipped in the 9972-9975 shipping casks, and an unnecessarily conservative critical mass may result if the ball-and-shell configurations are included.

  1. Equivalence of Szegedy's and coined quantum walks

    Science.gov (United States)

    Wong, Thomas G.

    2017-09-01

    Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.

  2. Use of linear regression for the processing of curves of differential potentiometric titration of a binary mixture of heterovalent ions using precipitation reactions

    International Nuclear Information System (INIS)

    Mar'yanov, B.M.; Zarubin, A.G.; Shumar, S.V.

    2003-01-01

    A method is proposed for the computer processing of curves of differential potentiometric titration of a binary mixture of heterovalent ions using precipitation reactions. The method is based on the transformation of the titration curve into segment-line characteristics, whose parameters (within the accuracy of the least-squares method) determine the sequence of the equivalence points and the solubility products of the resulting precipitates. The method is applied to the titration of Ag(I)-Cd(II), Hg(II)-Te(IV), and Cd(II)-Te(IV) mixtures by a sodium diethyldithiocarbamate solution with membrane sulfide and glassy carbon indicator electrodes. For 4 to 11 mg of the analyte in 50 ml of the solution, the RSD varies from 1 to 9%.

  3. 7 CFR 1000.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1000.54 Section 1000.54 Agriculture... Prices § 1000.54 Equivalent price. If for any reason a price or pricing constituent required for computing the prices described in § 1000.50 is not available, the market administrator shall use a price or...

  4. A Bayesian equivalency test for two independent binomial proportions.

    Science.gov (United States)

    Kawasaki, Yohei; Shimokawa, Asanao; Yamada, Hiroshi; Miyaoka, Etsuo

    2016-01-01

    In clinical trials, it is often necessary to perform an equivalence study. An equivalence study requires actively demonstrating equivalence between two different drugs or treatments. Since equivalence cannot be asserted merely because a superiority test fails to reject it, statistical methods known as equivalency tests have been suggested. These equivalency tests are based on the frequentist framework; however, there are few such methods in the Bayesian framework. Hence, this article proposes a new index that suggests the equivalency of binomial proportions, constructed within the Bayesian framework. In this study, we provide two methods for calculating the index and compare the probabilities calculated by the two methods. Moreover, we apply this index to the results of actual clinical trials to demonstrate its utility.
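The proposed index itself is not reproduced here, but its general Bayesian ingredient, a posterior probability that two binomial proportions lie within an equivalence margin, can be sketched with conjugate Beta posteriors and Monte Carlo sampling. The priors, margin, and trial counts below are illustrative assumptions:

```python
import random

def equivalence_probability(x1, n1, x2, n2, margin, draws=20000, seed=1):
    """Monte Carlo estimate of P(|p1 - p2| < margin | data) under
    independent uniform Beta(1, 1) priors, via Beta posterior sampling."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        p1 = rng.betavariate(x1 + 1, n1 - x1 + 1)  # posterior draw, arm 1
        p2 = rng.betavariate(x2 + 1, n2 - x2 + 1)  # posterior draw, arm 2
        if abs(p1 - p2) < margin:
            hits += 1
    return hits / draws

# Hypothetical trial: 45/100 responders vs 48/100, with a 10% margin
prob = equivalence_probability(45, 100, 48, 100, 0.10)
```

A high posterior probability directly quantifies support for equivalence, which is the kind of statement a frequentist equivalency test can only make indirectly through two one-sided tests.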

  5. Momentum-subtraction renormalization techniques in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.

    1987-10-01

    Momentum-subtraction techniques, specifically BPHZ and Zimmermann's Normal Product algorithm, are introduced as useful tools in the study of quantum field theories in the presence of background fields. In a model of a self-interacting massive scalar field, conformally coupled to a general asymptotically-flat curved space-time with a trivial topology, momentum-subtractions are shown to respect invariance under general coordinate transformations. As an illustration, general expressions for the trace anomalies are derived, and checked by explicit evaluation of the purely gravitational contributions in the free field theory limit. Furthermore, the trace of the renormalized energy-momentum tensor is shown to vanish at the Gell-Mann Low eigenvalue as it should.

  6. Momentum-subtraction renormalization techniques in curved space-time

    International Nuclear Information System (INIS)

    Foda, O.

    1987-01-01

    Momentum-subtraction techniques, specifically BPHZ and Zimmermann's Normal Product algorithm, are introduced as useful tools in the study of quantum field theories in the presence of background fields. In a model of a self-interacting massive scalar field, conformally coupled to a general asymptotically-flat curved space-time with a trivial topology, momentum-subtractions are shown to respect invariance under general coordinate transformations. As an illustration, general expressions for the trace anomalies are derived, and checked by explicit evaluation of the purely gravitational contributions in the free field theory limit. Furthermore, the trace of the renormalized energy-momentum tensor is shown to vanish at the Gell-Mann Low eigenvalue as it should

  7. The ultraviolet interstellar extinction curve in the Pleiades

    Science.gov (United States)

    Witt, A. N.; Bohlin, R. C.; Stecher, T. P.

    1981-01-01

    The wavelength dependence of ultraviolet extinction in the Pleiades dust clouds has been determined from IUE observations of HD 23512, the brightest heavily reddened member of the Pleiades cluster. There is evidence for an anomalously weak absorption bump at 2200 A, followed by an extinction rise in the far ultraviolet with an essentially normal slope. A relatively weak absorption band at 2200 A and a weak diffuse absorption band at 4430 A seem to be common characteristics of dust present in dense clouds. Evidence is presented which suggests that the extinction characteristics found for HD 23512 are typical for a class of extinction curves observed in several cases in the Galaxy and in the LMC.

  8. Assessment of the Radiation-Equivalent of Chemotherapy Contributions in 1-Phase Radio-chemotherapy Treatment of Muscle-Invasive Bladder Cancer

    International Nuclear Information System (INIS)

    Plataniotis, George A.; Dale, Roger G.

    2014-01-01

    Purpose: To estimate the radiation equivalent of the chemotherapy contribution to observed complete response rates in published results of 1-phase radio-chemotherapy of muscle-invasive bladder cancer. Methods and Materials: A standard logistic dose–response curve was fitted to data from radiation therapy-alone trials and then used as the platform from which to quantify the chemotherapy contribution in 1-phase radio-chemotherapy trials. Two possible mechanisms of chemotherapy effect were assumed (1) a fixed radiation-independent contribution to local control; or (2) a fixed degree of chemotherapy-induced radiosensitization. A combination of both mechanisms was also considered. Results: The respective best-fit values of the independent chemotherapy-induced complete response (CCR) and radiosensitization (s) coefficients were 0.40 (95% confidence interval −0.07 to 0.87) and 1.30 (95% confidence interval 0.86-1.70). Independent chemotherapy effect was slightly favored by the analysis, and the derived CCR value was consistent with reports of pathologic complete response rates seen in neoadjuvant chemotherapy-alone treatments of muscle-invasive bladder cancer. The radiation equivalent of the CCR was 36.3 Gy. Conclusion: Although the data points in the analyzed radio-chemotherapy studies are widely dispersed (largely on account of the diverse range of chemotherapy schedules used), it is nonetheless possible to fit plausible-looking response curves. The methodology used here is based on a standard technique for analyzing dose-response in radiation therapy-alone studies and is capable of application to other mixed-modality treatment combinations involving radiation therapy
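The paper's general recipe, fitting a logistic dose-response to radiation-alone data and then reading the chemotherapy contribution off the same curve as an equivalent dose, can be sketched as follows. The parameter values are hypothetical, not the fitted values from the study:

```python
import math

def logistic_response(dose, d50, k):
    """Standard logistic dose-response: complete-response probability."""
    return 1.0 / (1.0 + math.exp((d50 - dose) / k))

def equivalent_dose(prob, d50, k):
    """Invert the logistic curve: dose giving response probability prob."""
    return d50 - k * math.log(1.0 / prob - 1.0)

# Hypothetical RT-alone curve parameters, for illustration (Gy)
d50, k = 55.0, 8.0
base = logistic_response(60.0, d50, k)            # RT-alone response at 60 Gy
# Radiation equivalent of an added absolute response gain of 0.15
delta_gy = equivalent_dose(base + 0.15, d50, k) - 60.0
```

Mapping an observed response gain back through the inverted curve is what turns a chemotherapy contribution into a dose in Gy, such as the 36.3 Gy figure quoted in the abstract.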

  9. The equivalence theorem

    International Nuclear Information System (INIS)

    Veltman, H.

    1990-01-01

    The equivalence theorem states that, at an energy E much larger than the vector-boson mass M, the leading order of the amplitude with longitudinally polarized vector bosons on mass shell is given by the amplitude in which these vector bosons are replaced by the corresponding Higgs ghosts. We prove the equivalence theorem and show its validity in every order in perturbation theory. We first derive the renormalized Ward identities by using the diagrammatic method. Only the Feynman-'t Hooft gauge is discussed. The last step of the proof includes the power-counting method evaluated in the large-Higgs-boson-mass limit, needed to estimate the leading energy behavior of the amplitudes involved. We derive expressions for the amplitudes involving longitudinally polarized vector bosons for all orders in perturbation theory. The fermion mass has not been neglected and everything is evaluated in the region m_f ∼ M ≪ E ≪ m_Higgs.

  10. Elliptic curves for applications (Tutorial)

    NARCIS (Netherlands)

    Lange, T.; Bernstein, D.J.; Chatterjee, S.

    2011-01-01

    More than 25 years ago, elliptic curves over finite fields were suggested as a group in which the Discrete Logarithm Problem (DLP) can be hard. Since then many researchers have scrutinized the security of the DLP on elliptic curves with the result that for suitably chosen curves only exponential

  11. Normalized rare earth elements in water, sediments, and wine: identifying sources and environmental redox conditions

    Science.gov (United States)

    Piper, David Z.; Bau, Michael

    2013-01-01

    The concentrations of the rare earth elements (REE) in surface waters and sediments, when normalized on an element-by-element basis to one of several rock standards and plotted versus atomic number, yield curves that reveal their partitioning between different sediment fractions and the sources of those fractions, for example, between terrestrial-derived lithogenous debris and seawater-derived biogenous detritus and hydrogenous metal oxides. The REE of ancient sediments support their partitioning into these same fractions and further contribute to the identification of the redox geochemistry of the sea water in which the sediments accumulated. The normalized curves of the REE that have been examined in several South American wine varietals can be interpreted to reflect the lithology of the bedrock on which the vines may have been grown, suggesting limited fractionation during soil development.
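
    The normalization and anomaly arithmetic is simple to sketch. The shale standard values below are approximate PAAS concentrations, and the sample pattern is invented to show a negative Ce anomaly, the kind of feature used as a redox indicator.

```python
import numpy as np

# Approximate PAAS shale REE concentrations (ppm), La through Lu
# (illustrative round values, not authoritative).
paas = np.array([38.2, 79.6, 8.83, 33.9, 5.55, 1.08, 4.66,
                 0.774, 4.68, 0.991, 2.85, 0.405, 2.82, 0.433])

# Invented sample: a flat 1% of shale abundances, except Ce depleted,
# as in oxic seawater-derived fractions.
sample = paas * 0.010
sample[1] *= 0.3

normalized = sample / paas  # plot this versus atomic number (La=57 .. Lu=71)

# Ce anomaly: Ce relative to the geometric mean of its La and Pr neighbors;
# values below 1 indicate Ce removal under oxidizing conditions.
ce_anomaly = normalized[1] / np.sqrt(normalized[0] * normalized[2])
```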

  12. Differential geometry and topology of curves

    CERN Document Server

    Animov, Yu

    2001-01-01

    Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem in conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.

  13. Models of genus one curves

    OpenAIRE

    Sadek, Mohammad

    2010-01-01

    In this thesis we give insight into the minimisation problem of genus one curves defined by equations other than Weierstrass equations. We are interested in genus one curves given as double covers of P1, plane cubics, or complete intersections of two quadrics in P3. By minimising such a curve we mean making the invariants associated to its defining equations as small as possible using a suitable change of coordinates. We study the non-uniqueness of minimisations of the genus one curves des...

  14. Optimization of conventional rule curves coupled with hedging rules for reservoir operation

    DEFF Research Database (Denmark)

    Taghian, Mehrdad; Rosbjerg, Dan; Haghighi, Ali

    2014-01-01

    As a common approach to reservoir operating policies, water levels at the end of each time interval should be kept at or above the rule curve. In this study, the policy is captured using rationing of the target yield to reduce the intensity of severe water shortages. For this purpose, a hybrid model is developed to optimize simultaneously both the conventional rule curve and the hedging rule. In the compound model, a simple genetic algorithm is coupled with a simulation program, including an inner linear programming algorithm. In this way, operational policies are imposed by priority concepts ... to achieve the optimal water allocation and the target storage levels for reservoirs. As a case study, a multipurpose, multireservoir system in southern Iran is selected. The results show that the model has good performance in extracting the optimum policy for reservoir operation under both normal ...

  15. The crime kuznets curve

    OpenAIRE

    Buonanno, Paolo; Fergusson, Leopoldo; Vargas, Juan Fernando

    2014-01-01

    We document the existence of a Crime Kuznets Curve in US states since the 1970s. As income levels have risen, crime has followed an inverted U-shaped pattern, first increasing and then dropping. The Crime Kuznets Curve is not explained by income inequality. In fact, we show that during the sample period inequality has risen monotonically with income, ruling out the traditional Kuznets Curve. Our finding is robust to adding a large set of controls that are used in the literature to explain the...

  16. Magnetic measurements on human erythrocytes: Normal, beta thalassemia major, and sickle

    Science.gov (United States)

    Sakhnini, Lama

    2003-05-01

    In this article magnetic measurements were made on human erythrocytes at different hemoglobin states (normal and reduced hemoglobin). Different blood samples: normal, beta thalassemia major, and sickle were studied. Beta thalassemia major and sickle samples were taken from patients receiving lifelong blood transfusion treatment. All samples examined exhibited diamagnetic behavior. Beta thalassemia major and sickle samples showed higher diamagnetic susceptibilities than that for the normal, which was attributed to the increase of membrane to hemoglobin volume ratio of the abnormal cells. Magnetic measurements showed that the erythrocytes in the reduced state showed less diamagnetic response in comparison with erythrocytes in the normal state. Analysis of the paramagnetic component of magnetization curves gave an effective magnetic moment of μeff=7.6 μB per reduced hemoglobin molecule. The same procedure was applied to sickle and beta thalassemia major samples and values for μeff were found to be comparable to that of the normal erythrocytes.

  17. Correlation between 2D and 3D flow curve modelling of DP steels using a microstructure-based RVE approach

    International Nuclear Information System (INIS)

    Ramazani, A.; Mukherjee, K.; Quade, H.; Prahl, U.; Bleck, W.

    2013-01-01

    A microstructure-based approach by means of representative volume elements (RVEs) is employed to evaluate the flow curve of DP steels using virtual tensile tests. Microstructures with different martensite fractions and morphologies are studied in two- and three-dimensional approaches. Micro sections of DP microstructures with various amounts of martensite have been converted to 2D RVEs, while 3D RVEs were constructed statistically with randomly distributed phases. A dislocation-based model is used to describe the flow curve of each ferrite and martensite phase separately as a function of carbon partitioning and microstructural features. Numerical tensile tests of RVE were carried out using the ABAQUS/Standard code to predict the flow behaviour of DP steels. It is observed that 2D plane strain modelling gives an underpredicted flow curve for DP steels, while the 3D modelling gives a quantitatively reasonable description of flow curve in comparison to the experimental data. In this work, a von Mises stress correlation factor σ_3D/σ_2D has been identified to compare the predicted flow curves of these two dimensionalities showing a third order polynomial relation with respect to martensite fraction and a second order polynomial relation with respect to equivalent plastic strain, respectively. The quantification of this polynomial correlation factor is performed based on laboratory-annealed DP600 chemistry with varying martensite content and it is validated for industrially produced DP qualities with various chemistry, strength level and martensite fraction.
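
    A correction factor of the reported form is easy to encode once fitted; the polynomial coefficients below are placeholders, since the abstract does not give the fitted values.

```python
import numpy as np

# Hypothetical illustration: the correction factor sigma_3D/sigma_2D is
# a 3rd-order polynomial in martensite fraction v_m and a 2nd-order
# polynomial in equivalent plastic strain eps. All coefficients here are
# invented for demonstration only.
def correction_factor(v_m, eps,
                      a=(1.10, -0.9, 1.5, -0.8),   # cubic in v_m (assumed)
                      b=(1.0, -0.15, 0.05)):        # quadratic in eps (assumed)
    f_vm = a[0] + a[1] * v_m + a[2] * v_m**2 + a[3] * v_m**3
    f_eps = b[0] + b[1] * eps + b[2] * eps**2
    return f_vm * f_eps

def flow_curve_3d_from_2d(sigma_2d, v_m, eps):
    """Scale a 2D plane-strain flow stress to its 3D equivalent."""
    return sigma_2d * correction_factor(v_m, eps)
```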

  18. Experimental determination of the yield stress curve of the scotch pine wood materials

    Science.gov (United States)

    Günay, Ezgi; Aygün, Cevdet; Kaya, Şükrü Tayfun

    2013-12-01

    Yield stress curve is determined for the pine wood specimens by conducting a series of tests. In this work, pinewood is modeled as a composite material with transversely isotropic fibers. Annual rings (wood grain) of the wood specimens are taken as the major fiber directions, with which the strain gauge directions are aligned. For this purpose, three types of tests are arranged: tensile, compression and torsion loading tests. All of the tests are categorized with respect to fiber orientations and their corresponding loading conditions. Each test within these categories is conducted separately. Tensile and compression tests are conducted in accordance with standards of the Turkish Standards Institution (TSE), whereas torsion tests are conducted in accordance with Standards Australia. Specimens are machined from wood of Scotch pine, which is widely used in boat building and in other structural engineering applications. It is determined that this species behaves more flexibly than the others. Strain gauges are installed on the specimen surfaces in such a way that loading measurements are performed along directions either parallel or perpendicular to the fiber directions. During the test and analysis phase of the yield stress curve, the orientation of the strain gauge directions with respect to the fiber directions is taken into account. Diagrams of normal stress vs. normal strain or shear stress vs. shear strain are plotted for each test. In each plot, the yield stress is determined by selecting the point on the diagram whose tangent has a slope 5% less than the slope of the elastic portion of the diagram. The geometric locus of these selected points constitutes a single yield stress curve in the σ1-σ2 principal plane. The resulting yield stress curve is plotted as an approximate ellipse which resembles the Tsai-Hill failure criterion. The results attained in this work compare well with results readily available in the literature.
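
    The 5%-reduced-slope rule for picking the yield point can be sketched as follows; the bilinear synthetic stress-strain data and the 20% elastic-fit window are assumptions for illustration.

```python
import numpy as np

def yield_point_5pct(strain, stress, elastic_frac=0.2):
    """Locate the yield point as the first point whose local tangent slope
    has dropped 5% below the initial elastic slope."""
    n_el = max(2, int(len(strain) * elastic_frac))
    e_mod = np.polyfit(strain[:n_el], stress[:n_el], 1)[0]  # elastic slope
    slopes = np.gradient(stress, strain)                    # local tangents
    below = np.where(slopes <= 0.95 * e_mod)[0]
    i = below[0] if below.size else len(strain) - 1
    return strain[i], stress[i]

# Synthetic stress-strain data (MPa): linear up to 0.002 strain, then softening.
strain = np.linspace(0.0, 0.01, 200)
stress = np.where(strain < 0.002, 10000.0 * strain,
                  20.0 + 2000.0 * (strain - 0.002))
eps_y, sig_y = yield_point_5pct(strain, stress)
```

    Repeating this for each fiber-orientation/loading combination yields one (σ1, σ2) point per test, and the locus of those points traces the yield ellipse.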

  19. Hot spot in eclipsing dwarf nova IY Ursae Majoris during quiescence and normal outburst

    OpenAIRE

    Bakowska, K.; Olech, A.

    2016-01-01

    We present the analysis of hot spot brightness in light curves of the eclipsing dwarf nova IY Ursae Majoris during its normal outburst in March 2013 and in quiescence in April 2012 and in October 2015. Examination of four reconstructed light curves of the hot spot eclipses showed directly that the brightness of the hot spot changed significantly only during the outburst. The brightness of the hot spot, before and after the outburst, was on the same level. Hereby, based on the behaviour of the...

  20. A study on the jet pump characteristic curve in boiling water reactor

    International Nuclear Information System (INIS)

    Liao, L.Y.

    1990-01-01

    The jet pump models of RELAP5/MOD2, RETRAN-02/MOD3, and RELAP4/MOD3 are compared. From the investigation of the momentum equations, it is found that the normal quadrant jet pump models of these codes are essentially the same. In this paper, it is found that the relationship between the flow ratio, M, and the head ratio, N, is uniquely determined for a given jet pump geometry provided that the wall friction and gravitational head are neglected. In other words, under the given assumptions the M - N characteristic curve will not change with power level, recirculation pump speed and loop flow rate. The effect of the gravitational head on the M - N curve has been found to be significant for low flow conditions. As a result, a guideline has been given to the definition of the specific energy (or the head ratio). Sensitivity studies on the key parameters have been performed. It is found that the generic M - N curve should not be used for a jet pump which does not have the same nozzle to throat area ratio as that of the generic jet pump

  1. MM98.57 Quantification of Combined Strain Paths

    DEFF Research Database (Denmark)

    Nielsen, Morten Sturgård; Wanheim, Tarras

    1998-01-01

    ... is to describe the total strain history as a curve in the 6-dimensional shear strain, normal strain space. In order to be able to use these experimental data for calculation, the development of this strain curve must be transformed into a set of scalar relations that may be used for predicting the yield surface ... at a given point in a new strain history. A simple example of this concept is to take the length of the strain curve as the describing scalar relation, e.g., to use the equivalent strain as the parameter for describing the yield stress. This paper focuses on the strain curve concept and the possibilities to convert ... this curve into useful scalar relations from experimental data. The strain history for plane strain, when assuming volume constancy, may be plotted in a shear strain, normal strain diagram, which has the property of showing both the rotation of principal deformation axes during the deformation and the amount ...

  2. ROBUST DECLINE CURVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Sutawanir Darwis

    2012-05-01

    Empirical decline curve analysis of oil production data gives reasonable answers in hyperbolic type curve situations; however, the methodology has limitations in fitting real historical production data in the presence of unusual observations due to the effect of treatments applied to the well in order to increase production capacity. The development of robust least squares offers new possibilities for better fitting production data using decline curve analysis by down-weighting the unusual observations. This paper proposes a robust least squares fitting (lmRobMM) approach to estimate the decline rate of daily production data and compares the results with reservoir simulation results. For the case study, we use the oil production data at TBA Field, West Java. The results demonstrate that the approach is suitable for decline curve fitting and offers new insight into decline curve analysis in the presence of unusual observations.
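
    The down-weighting idea can be illustrated with SciPy's robust loss functions standing in for the paper's lmRobMM estimator; the exponential decline parameters and the well-treatment spike are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 36.0, 37)                    # months on production
q_true = 1000.0 * np.exp(-0.05 * t)               # exponential decline
q_obs = q_true + rng.normal(0.0, 10.0, t.size)
q_obs[10] += 400.0                                # treatment spike (outlier)

def residuals(params, t, q):
    qi, d = params
    return qi * np.exp(-d * t) - q

# Robust fit: the soft_l1 loss down-weights the unusual observation,
# so the estimated decline rate is not dragged toward the spike.
fit = least_squares(residuals, x0=[800.0, 0.1], loss='soft_l1',
                    f_scale=20.0, args=(t, q_obs))
qi_hat, d_hat = fit.x
```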

  3. Development of an experimental method for the determination of the dose equivalent indices for low - and medium energy X- and gamma rays

    International Nuclear Information System (INIS)

    Silva Estrada, J.J. da.

    1980-01-01

    An experimental method was developed to measure Dose Equivalent Indices for low and medium energy X-rays. A sphere was constructed to simulate the human body in accordance with ICRU Report 19, but using plexiglass instead of tissue equivalent material of density 1 g·cm⁻³. Experimentally it was demonstrated that for the purpose of applied radiation protection both materials are equivalent in spite of the 18% higher density of plexiglass. CaF₂:Mn and LiF:Mg might be utilized to determine the absorbed dose distribution within the sphere. Measurements indicate that the effective energy can be determined with an accuracy better than 15% for the energy range under consideration. Depth dose curves measured with an ionization chamber compared with those of LiF:Mg showed an agreement better than 12%, and in the case of CaF₂:Mn better than 11%, for all irradiation conditions used. Conversion factors in units of rad·R⁻¹ measured with TLD and compared with those obtained from the literature based upon Monte Carlo calculation showed an agreement better than 23% for CaF₂:Mn and 19% for LiF:Mg. It is concluded from these experiments that the system plexiglass sphere-TLD dosimeters might be used to measure Dose Equivalent Indices for low and medium energy photons. (Author) [pt

  4. The acrophysis: a unifying concept for enchondral bone growth and its disorders. I. Normal growth

    International Nuclear Information System (INIS)

    Oestreich, Alan E.

    2003-01-01

    In order to discuss and illustrate the common effects on normal and abnormal enchondral bone at the physes and at all other growth plates of the developing child, the term ''acrophysis'' is proposed. Acrophyses include the growth plates of secondary growth centers including carpals and tarsals and apophyses, and the growth plates at the non-physeal ends of small tubular bones. The last layer of development of both physes and acrophysis is the cartilaginous zone of provisional calcification (ZPC). The enchondral bone abutting the ZPC shares similar properties at physes and acrophyses, including the relatively lucent metaphyseal bands of many normal infants at several weeks of age. The bone-in-bone pattern of the normal vertebral bodies and bands of demineralization of the tarsal bones just under the ZPC are the equivalent of those bands. The growth arrest/recovery lines of metaphyses similarly have equivalent lines in growth centers and other acrophyseal sites. Nearly the same effects can also be anticipated from the relatively similar growth plate at the cartilaginous cap of benign exostoses (''paraphysis''). The companion article will explore abnormalities at acrophyseal sites, including metabolic bone disease and dysplasias. (orig.)

  5. Anatomical Origin of Abnormal Somatosensory-Evoked Potential (SEP) in Adolescent Idiopathic Scoliosis With Different Curve Severity and Correlation With Cerebellar Tonsillar Level Determined by MRI.

    Science.gov (United States)

    Chau, Wai Wang; Chu, Winnie C W; Lam, Tsz Ping; Ng, Bobby K W; Fu, Linda L K; Cheng, Jack C Y

    2016-05-01

    A prospective cohort study. The aim of this study was to compare the somatosensory-evoked potential (SEP) findings of adolescent idiopathic scoliosis (AIS) subjects of different curve severity with age- and gender-matched controls and to evaluate any correlation between the site of the SEP abnormality with cerebellar tonsillar level measured by magnetic resonance imaging (MRI). Our previous studies showed that a higher percentage of SEP abnormality and cerebellar tonsillar ectopia was present in AIS patients than in normal controls. However, the relationship between the anatomical site of the neurophysiological abnormality and the severity in AIS patients has not been defined. SEP measurement was conducted on 91 Chinese AIS girls with major right thoracic curve of different curve severity (mild, moderate, severe) and 49 matched normal controls. Waveform characteristics (latency and amplitude) were compared among groups. Specific location of SEP abnormality was identified from tibial to cortical levels. Cerebellar tonsillar ectopia was defined by the previously established reference line between basion and opisthion on MRI. Significant prolonged P37 latency was found on the right side between severe AIS patients and normal controls, while increased inter-side P37 latency difference was found between severe versus moderate, and severe versus normal controls. Cerebellar tonsillar ectopia was detected in 27.3% of severe group, 5.8% to 6.7% in mild and moderate group, but none in normal controls. Abnormal SEP occurred superior to C5 region in all surgical (severe) patients, of whom 58% had cerebellar tonsillar ectopia. AIS patients showed significant prolonged latency and increased latency difference on the side of major curvature. The incidence of SEP abnormality increased with curve severity and occurred above the C5 level. The findings suggested that there was a subgroup of progressive AIS with subclinical neurophysiological dysfunction, associated with underlying

  6. Variability in dose-equivalent assessments for inhaled U3O8 concentrations

    International Nuclear Information System (INIS)

    Hewson, G.; Blyth, D.I.

    1985-01-01

    A potentially significant radiological hazard exists in the packaging area of uranium mills through the inhalation of airborne uranium octoxide (U₃O₈). The Radiation Protection (Mining and Milling) Code (1980) requires the measurement and assessment of quarterly, annual and cumulative dose equivalents for employees working in these areas. Owing to differences between the abovementioned Code and ICRP 30, and to differing assumptions of particle size and dust concentration distributions, confusion exists within Australia regarding the methods which can be used to make the required assessments. Exposure data were collected during routine monitoring at an operating mill facility and were interpreted using different methods and a range of assumptions. Results indicated the dust at this facility is characterised by an AMAD greater than 10 μm, and dust concentrations were distributed lognormally. Assumptions of a normal distribution may result in an overestimate of the dose equivalent. The importance of particle size in dose assessments using ICRP 30 techniques was highlighted. Information was masked when employee data were grouped to provide work category dose assessments. The use of ICRP 30 methods was recommended to provide uniformity throughout Australia

  7. Visible bands of ammonia: band strengths, curves of growth, and the spatial distribution of ammonia on Jupiter

    International Nuclear Information System (INIS)

    Lutz, B.L.; Owen, T.

    1980-01-01

    We report room-temperature laboratory studies of the 5520 Å (6ν₁) and 6475 Å (5ν₁) bands of self-broadened ammonia at column densities ranging from 1.7 to 435.7 meter-amagats (m-am). Detailed equivalent-width measurements at 24 different pressure-pathlength combinations, corresponding to four pressures between 44 and 689 torr and pathlengths between 32 and 512 m, are used to determine curves of growth and integrated band strengths. The band strengths for the 6ν₁ and 5ν₁ overtones are S = 0.096 ± 0.005 cm⁻¹ (m-am)⁻¹ at 5520 Å and S = 0.63 ± 0.03 cm⁻¹ (m-am)⁻¹ at 6475 Å, respectively. Using these band strengths and curves of growth, we analyze new spatially resolved spectra of Jupiter showing a nonhomogeneous distribution of ammonia in the Jovian atmosphere. The observed variations in the CH₄/NH₃ mixing ratio are interpreted as evidence of altitude-dependent depletion of ammonia in the atmosphere
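
    An equivalent width is the integrated fractional absorption across the band; a minimal sketch with a synthetic Gaussian feature at the 6475 Å band position (depth, width, and column density all invented):

```python
import numpy as np

def equivalent_width(wavelength, flux, continuum):
    """W = integral of (1 - F/F_c) d(lambda), by the trapezoid rule."""
    y = 1.0 - flux / continuum
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wavelength))

# Synthetic Gaussian absorption feature at 6475 A (depth/width invented).
wl = np.linspace(6465.0, 6485.0, 2001)
flux = 1.0 - 0.05 * np.exp(-0.5 * ((wl - 6475.0) / 1.5) ** 2)

w = equivalent_width(wl, flux, 1.0)   # analytically 0.05 * 1.5 * sqrt(2*pi) A

# On the linear part of the curve of growth, W = S * N, so a band strength
# estimate follows from W at a known column density N (here 10 m-am, assumed).
s_est = w / 10.0
```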

  8. NEW CONCEPTS AND TEST METHODS OF CURVE PROFILE AREA DENSITY IN SURFACE: ESTIMATION OF AREAL DENSITY ON CURVED SPATIAL SURFACE

    OpenAIRE

    Hong Shen

    2011-01-01

    The concepts of curve profile, curve intercept, curve intercept density, curve profile area density, intersection density in containing intersection (or intersection density relied on intersection reference), curve profile intersection density in surface (or curve intercept intersection density relied on intersection of containing curve), and curve profile area density in surface (AS) were defined. AS expressed the amount of curve profile area of Y phase in the unit containing surface area, S...

  9. Presheaves of Superselection Structures in Curved Spacetimes

    Science.gov (United States)

    Vasselli, Ezio

    2015-04-01

    We show that superselection structures on curved spacetimes that are expected to describe quantum charges affected by the underlying geometry are categories of sections of presheaves of symmetric tensor categories. When an embedding functor is given, the superselection structure is a Tannaka-type dual of a locally constant group bundle, which hence becomes a natural candidate for the role of the gauge group. Indeed, we show that any locally constant group bundle (with suitable structure group) acts on a net of C* algebras fulfilling normal commutation relations on an arbitrary spacetime. We also give examples of gerbes of C* algebras, defined by Wightman fields and constructed using projective representations of the fundamental group of the spacetime, which we propose as solutions for the problem that existence and uniqueness of the embedding functor are not guaranteed.

  10. Quantifying the Combined Effect of Radiation Therapy and Hyperthermia in Terms of Equivalent Dose Distributions

    International Nuclear Information System (INIS)

    Kok, H. Petra; Crezee, Johannes; Franken, Nicolaas A.P.; Stalpers, Lukas J.A.; Barendsen, Gerrit W.; Bel, Arjan

    2014-01-01

    Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy⁻¹) and β (Gy⁻²) of the linear–quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear–quadratic model. Results: The planned tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from different treatment
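
    Voxel by voxel, the conversion amounts to solving the linear-quadratic equality for the dose without hyperthermia. The α, β, fractionation, and sensitization numbers below are invented stand-ins, not the paper's values.

```python
import numpy as np

def equivalent_dose(d_per_fx, n_fx, alpha_t, beta_t, alpha0, beta0):
    """Convert a physical dose given with hyperthermia (alpha_t, beta_t)
    into the dose without hyperthermia (alpha0, beta0) producing the same
    linear-quadratic cell kill; the per-fraction quadratic is solved exactly."""
    e_fx = alpha_t * d_per_fx + beta_t * d_per_fx**2   # effect per fraction
    # Solve beta0*d^2 + alpha0*d - e_fx = 0 for the equivalent fraction dose.
    d_eq = (-alpha0 + np.sqrt(alpha0**2 + 4.0 * beta0 * e_fx)) / (2.0 * beta0)
    return n_fx * d_eq

# Illustrative numbers (assumed, not from the paper): alpha rises with T.
alpha0, beta0 = 0.15, 0.05        # Gy^-1, Gy^-2 without hyperthermia
alpha_t = alpha0 * 1.3            # 30% radiosensitization at ~41.5 C (assumed)
eqd = equivalent_dose(d_per_fx=2.0, n_fx=35,
                      alpha_t=alpha_t, beta_t=beta0,
                      alpha0=alpha0, beta0=beta0)   # > the 70 Gy physical dose
```

    With per-voxel temperatures, the same function applied element-wise over the dose and α(T) arrays yields the equivalent dose distribution.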

  11. M-curves and symmetric products

    Indian Academy of Sciences (India)

    Indranil Biswas

    2017-08-03

    ... is bounded above by g + 1, where g is the genus of X [11]. Curves which have exactly the maximum number (i.e., genus + 1) of components of the real part are called M-curves. Classifying real algebraic curves up to homeomorphism is straightforward; however, classifying even planar non-singular real ...

  12. General Dynamic Equivalent Modeling of Microgrid Based on Physical Background

    Directory of Open Access Journals (Sweden)

    Changchun Cai

    2015-11-01

    Microgrid is a new power system concept consisting of small-scale distributed energy resources, storage devices, and loads. It is necessary to employ a simplified model of a microgrid in the simulation of a distribution network integrating large-scale microgrids. Based on the detailed model of the components, an equivalent model of the microgrid is proposed in this paper. The equivalent model comprises two parts: an equivalent machine component and an equivalent static component. The equivalent machine component describes the dynamics of the synchronous generator, asynchronous wind turbine, and induction motor; the equivalent static component describes the dynamics of the photovoltaic, storage, and static load. The trajectory sensitivities of the equivalent model parameters with respect to the output variables are analyzed. The key parameters that play important roles in the dynamics of the output variables of the equivalent model are identified and included in further parameter estimation. Particle Swarm Optimization (PSO) is improved for the parameter estimation of the equivalent model. Simulations are performed under different microgrid operation conditions to evaluate the effectiveness of the equivalent model of the microgrid.
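
    A minimal particle swarm of the kind used for the parameter estimation can be sketched against a toy first-order trajectory standing in for the microgrid response; the swarm settings and the "measured" gain and time constant are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "measured" trajectory standing in for the detailed microgrid model:
# a first-order response with gain K = 2.0 and time constant tau = 0.5.
t = np.linspace(0.0, 5.0, 100)
measured = 2.0 * (1.0 - np.exp(-t / 0.5))

def model(params):
    k, tau = params
    return k * (1.0 - np.exp(-t / tau))

def cost(params):
    # Trajectory mismatch between equivalent model and "measurement".
    return float(np.sum((model(params) - measured) ** 2))

# Minimal particle swarm over the two key (trajectory-sensitive) parameters.
n_particles, n_iter = 30, 200
pos = rng.uniform([0.1, 0.1], [5.0, 5.0], (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 5.0)
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[pbest_cost.argmin()].copy()
```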

  13. 49 CFR 391.33 - Equivalent of road test.

    Science.gov (United States)

    2010-10-01

    § 391.33 Equivalent of road test. (a) In place of, and as equivalent to, the road test required by § 391.31, a person who seeks to drive a ...

  14. Foundations of gravitation theory: the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1978-01-01

    A new framework is presented within which to discuss the principle of equivalence and its experimental tests. The framework incorporates a special structure imposed on the equivalence principle by the principle of energy conservation. This structure includes relations among the conceptual components of the equivalence principle as well as quantitative relations among the outcomes of its experimental tests. One of the most striking new results obtained through use of this framework is a connection between the breakdown of local Lorentz invariance and the breakdown of the principle that all bodies fall with the same acceleration in a gravitational field. An extensive discussion of experimental tests of the equivalence principle and their significance is also presented. Within the above framework, theory-independent analyses of a broad range of equivalence principle tests are possible. Gravitational redshift experiments, Doppler-shift experiments, the Turner-Hill and Hughes-Drever experiments, and a number of solar-system tests of gravitation theories are analyzed. Application of the techniques of theoretical nuclear physics to the quantitative interpretation of equivalence principle tests using laboratory materials of different composition yields a number of important results. It is found that current Eötvös experiments significantly demonstrate the compatibility of the weak interactions with the equivalence principle. It is also shown that the Hughes-Drever experiment is the most precise test of local Lorentz invariance yet performed. The work leads to a strong, tightly knit empirical basis for the principle of equivalence, the central pillar of the foundations of gravitation theory

  15. Mathematical simulation of biologically equivalent doses for LDR-HDR

    International Nuclear Information System (INIS)

    Slosarek, K.; Zajusz, A.

    1996-01-01

    Based on the LQ model, examples of biologically equivalent doses for LDR, HDR, and external beams were calculated. The biologically equivalent doses for LDR were calculated by appending to the LQ model a correction for the repair time of sublethal radiation damage. For radiation delivered continuously at a low dose rate, the influence of changes in sublethal damage repair time on biologically equivalent doses was analysed. For fractionated treatment at a high dose rate, the biologically equivalent doses were calculated by adding to the LQ model a term for accelerated repopulation. Examples of total biologically equivalent dose calculation for combined LDR-HDR-teletherapy irradiation are presented, using different parameters for the repair time of sublethal damage and for accelerated repopulation. The calculations performed show that the same biologically equivalent doses can be obtained for different parameters of cell kinetics changes during radiation treatment. They also show that, in biologically equivalent dose calculations for different radiotherapy schedules, ignorance of cell kinetics parameters can lead to relevant errors

  16. The Impact of Grading on a Curve: Assessing the Results of Kulick and Wright's Simulation Analysis

    Science.gov (United States)

    Bailey, Gary L.; Steed, Ronald C.

    2012-01-01

    Kulick and Wright concluded, based on theoretical mathematical simulations of hypothetical student exam scores, that assigning exam grades to students based on the relative position of their exam performance scores within a normal curve may be unfair, given the role that randomness plays in any given student's performance on any given exam.…
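
    The role of randomness can be reproduced with a small simulation (all distributions and grade cutoffs invented): the same students, given two statistically identical exams, often land in different grade bands when grades are assigned purely by rank.

```python
import numpy as np

rng = np.random.default_rng(42)
n_students = 200
ability = rng.normal(70.0, 8.0, n_students)   # hypothetical true ability

def curved_grades(scores, cutoffs=(0.10, 0.30, 0.70, 0.90)):
    """Assign grades 0..4 (F..A) purely by rank position in the class."""
    pct = scores.argsort().argsort() / (len(scores) - 1.0)
    return np.searchsorted(cutoffs, pct, side='right')

# Same students, two exams differing only in random performance noise.
exam1 = ability + rng.normal(0.0, 5.0, n_students)
exam2 = ability + rng.normal(0.0, 5.0, n_students)
changed = np.mean(curved_grades(exam1) != curved_grades(exam2))
```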

  17. Titration Curves: Fact and Fiction.

    Science.gov (United States)

    Chamberlain, John

    1997-01-01

    Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…

  18. Uniform surface-to-line integral reduction of physical optics for curved surfaces by modified edge representation with higher-order correction

    Science.gov (United States)

    Lyu, Pengfei; Ando, Makoto

    2017-09-01

    The modified edge representation is one of the equivalent edge current approximation methods for calculating physical optics surface radiation integrals in diffraction analysis. Stokes' theorem is used in deriving the modified edge representation from physical optics for the planar scatterer case, which implies that the surface integral is rigorously reduced to the line integral of the modified edge representation equivalent edge currents, defined in terms of the local shape of the edge. By contrast, for curved surfaces the results of the radiation integrals depend upon the global shape of the scatterer. The physical optics surface integral consists of two components, from the inner stationary phase point and from the edge. The modified edge representation is defined independently of the orientation of the actual edge, and therefore it is available not only at the edge but also at arbitrary points on the scatterer, except at the stationary phase point, where the modified edge representation equivalent edge currents become infinite. If a stationary phase point exists inside the illuminated region, the physical optics surface integration is reduced to two kinds of modified edge representation line integrations: one along the edge, and one infinitesimally small integration around the inner stationary phase point; the former and the latter give the diffraction and reflection components, respectively. The accuracy of the latter for curved surfaces has been discussed and published. This paper focuses on the errors of the former and discusses their correction. It has been numerically observed that the modified edge representation works well for physical optics diffraction from flat and concave surfaces; for a convex scatterer, errors appear especially for observers near the reflection shadow boundary when the frequency is low. This paper gives the explicit expression of the higher-order correction for the modified edge representation.

  19. Convective Heat Transfer Scaling of Ignition Delay and Burning Rate with Heat Flux and Stretch Rate in the Equivalent Low Stretch Apparatus

    Science.gov (United States)

    Olson, Sandra

    2011-01-01

    To better evaluate the buoyant contributions to the convective cooling (or heating) inherent in normal-gravity material flammability test methods, we derive a convective heat transfer correlation that can be used to account for forced convective stretch effects on the net radiant heat flux for both ignition delay time and burning rate. The Equivalent Low Stretch Apparatus (ELSA) uses an inverted cone heater to minimize buoyant effects while at the same time providing a forced stagnation flow on the sample, which ignites and burns as a ceiling fire. Ignition delay and burning rate data are correlated with incident heat flux and convective heat transfer and compared to results from other test methods and fuel geometries, using similarity to determine the equivalent stretch rates and thus the convective cooling (or heating) rates for those geometries. With this correlation methodology, buoyant effects inherent in normal-gravity material flammability test methods can be estimated, to better apply the test results to the low-stretch environments relevant to spacecraft material selection.

  20. Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers

    Directory of Open Access Journals (Sweden)

    N. I. Tananaev

    2015-03-01

    Full Text Available Published suspended sediment data for Arctic rivers is scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as an optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.

  1. Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers

    Science.gov (United States)

    Tananaev, N. I.

    2015-03-01

    Published suspended sediment data for Arctic rivers is scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as an optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.
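
    A minimal sketch of the fitting approach the study found optimal, using SciPy's Levenberg-Marquardt backend on synthetic data; the discharge/concentration values below are invented for illustration, not the study's rivers:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic discharge (Q, m^3/s) vs suspended sediment concentration
# (SSC, mg/L) with multiplicative lognormal noise; values illustrative.
rng = np.random.default_rng(0)
Q = np.linspace(50.0, 2000.0, 40)
a_true, b_true = 0.02, 1.4
SSC = a_true * Q**b_true * rng.lognormal(0.0, 0.2, Q.size)

def power_model(q, a, b):
    """Sediment rating curve: SSC = a * Q**b."""
    return a * q**b

# method="lm" selects the Levenberg-Marquardt algorithm, which the
# study identified as the optimal parameter-evaluation choice.
(a_hat, b_hat), _ = curve_fit(power_model, Q, SSC, p0=(1.0, 1.0), method="lm")
```

    Fitting the power model directly in linear space, as here, sidesteps the bias that a log-transformed linear fit would suffer when the data are not log-normally distributed.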

  2. Inverse Diffusion Curves Using Shape Optimization.

    Science.gov (United States)

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  3. Normalization of energy-dependent gamma survey data.

    Science.gov (United States)

    Whicker, Randy; Chambers, Douglas

    2015-05-01

    Instruments and methods for normalization of energy-dependent gamma radiation survey data to a less energy-dependent basis of measurement are evaluated based on relevant field data collected at 15 different sites across the western United States along with a site in Mongolia. Normalization performance is assessed relative to measurements with a high-pressure ionization chamber (HPIC) due to its "flat" energy response and accurate measurement of the true exposure rate from both cosmic and terrestrial radiation. While analytically ideal for normalization applications, cost and practicality disadvantages have increased demand for alternatives to the HPIC. Regression analysis on paired measurements between energy-dependent sodium iodide (NaI) scintillation detectors (5-cm by 5-cm crystal dimensions) and the HPIC revealed highly consistent relationships among sites not previously impacted by radiological contamination (natural sites). A resulting generalized data normalization factor based on the average sensitivity of NaI detectors to naturally occurring terrestrial radiation (0.56 nGy h⁻¹ HPIC per nGy h⁻¹ NaI), combined with the calculated site-specific estimate of cosmic radiation, produced reasonably accurate predictions of HPIC readings at natural sites. Normalization against two potential alternative instruments (a tissue-equivalent plastic scintillator and an energy-compensated NaI detector) did not perform better than the sensitivity adjustment approach at natural sites. Each approach produced unreliable estimates of HPIC readings at radiologically impacted sites, though normalization against the plastic scintillator or energy-compensated NaI detector can address incompatibilities between different energy-dependent instruments with respect to estimation of soil radionuclide levels. The appropriate data normalization method depends on the nature of the site, expected duration of the project, survey objectives, and considerations of cost and practicality.
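
    Applied to a reading, the generalized normalization reduces to one line; the function interface is hypothetical, with the 0.56 sensitivity factor taken from the text and the cosmic component supplied per site:

```python
def estimate_hpic(nai_terrestrial, cosmic, k=0.56):
    """Estimate the HPIC-equivalent dose rate (nGy/h) from a 5x5 cm NaI
    terrestrial reading using the generalized sensitivity factor k
    (0.56 nGy/h HPIC per nGy/h NaI), plus the site-specific cosmic
    component. Interface and variable names are illustrative."""
    return k * nai_terrestrial + cosmic
```

    Because the cosmic term is added after scaling, the factor corrects only the energy-dependent terrestrial response, matching the way the paper separates the two contributions.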

  4. Spaces of homotopy self-equivalences a survey

    CERN Document Server

    Rutter, John W

    1997-01-01

    This survey covers groups of homotopy self-equivalence classes of topological spaces, and the homotopy type of spaces of homotopy self-equivalences. For manifolds, the full group of equivalences and the mapping class group are compared, as are the corresponding spaces. Included are methods of calculation, numerous calculations, finite generation results, Whitehead torsion and other areas. Some 330 references are given. The book assumes familiarity with cell complexes, homology and homotopy. Graduate students and established researchers can use it for learning, for reference, and to determine the current state of knowledge.

  5. Calculations of a wideband metamaterial absorber using equivalent medium theory

    Science.gov (United States)

    Huang, Xiaojun; Yang, Helin; Wang, Danqi; Yu, Shengqing; Lou, Yanchao; Guo, Ling

    2016-08-01

    Metamaterial absorbers (MMAs) have drawn increasing attention in many areas because they can absorb electromagnetic (EM) waves with near-unity absorptivity. We demonstrate the design, simulation, experiment and calculation of a wideband MMA based on a double-square-loop (DSL) array loaded with chip resistors. For a normally incident EM wave, the simulated results show that the full width at half maximum of the absorption band is about 9.1 GHz, and the relative bandwidth is 87.1%. Experimental results are in agreement with the simulations. More importantly, equivalent medium theory (EMT) is utilized to calculate the absorption of the DSL MMA, and the calculated absorption based on EMT agrees with the simulated and measured results. The method based on EMT provides a new way to analyse the mechanism of MMAs.

  6. Retrograde curves of solidus and solubility

    International Nuclear Information System (INIS)

    Vasil'ev, M.V.

    1979-01-01

    The investigation was concerned with constitutional diagrams of the eutectic type with a ''retrograde solidus'' and a ''retrograde solubility curve'', which must be considered as diagrams with a degenerate monotectic transformation. The solidus and the solubility curves form a retrograde curve with a common retrograde point representing the solubility maximum. The two branches of the retrograde curve can be described with the aid of two similar equations. Corresponding equations are presented for the Cd-Zn system, and the possibility of predicting the run of the solubility curve is shown

  7. [Customized and non-customized French intrauterine growth curves. II - Comparison with existing curves and benefits of customization].

    Science.gov (United States)

    Ego, A; Prunet, C; Blondel, B; Kaminski, M; Goffinet, F; Zeitlin, J

    2016-02-01

    Our aim is to compare the new French EPOPé intrauterine growth curves, developed to address the guidelines 2013 of the French College of Obstetricians and Gynecologists, with reference curves currently used in France, and to evaluate the consequences of their adjustment for fetal sex and maternal characteristics. Eight intrauterine and birthweight curves, used in France were compared to the EPOPé curves using data from the French Perinatal Survey 2010. The influence of adjustment on the rate of SGA births and the characteristics of these births was analysed. Due to their birthweight values and distribution, the selected intrauterine curves are less suitable for births in France than the new curves. Birthweight curves led to low rates of SGA births from 4.3 to 8.5% compared to 10.0% with the EPOPé curves. The adjustment for maternal and fetal characteristics avoids the over-representation of girls among SGA births, and reclassifies 4% of births. Among births reclassified as SGA, the frequency of medical and obstetrical risk factors for growth restriction, smoking (≥10 cigarettes/day), and neonatal transfer is higher than among non-SGA births (P<0.01). The EPOPé curves are more suitable for French births than currently used curves, and their adjustment improves the identification of mothers and babies at risk of growth restriction and poor perinatal outcomes. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  8. Equivalent nozzle in thermomechanical problems

    International Nuclear Information System (INIS)

    Cesari, F.

    1977-01-01

    When analyzing nuclear vessels, it is most important to study the behavior of the nozzle cylinder-cylinder intersection. For the elastic field, this analysis in three dimensions is quite easy using the method of finite elements. The same analysis in the non-linear field becomes difficult for designs in 3-D. It is therefore necessary to resolve a nozzle in two dimensions equivalent to a 3-D nozzle. The purpose of the present work is to find an equivalent nozzle both with a mechanical and thermal load. This has been achieved by the analysis in three dimensions of a nozzle and a nozzle cylinder-sphere intersection, of a different radius. The equivalent nozzle will be a nozzle with a sphere radius in a given ratio to the radius of a cylinder; thus, the maximum equivalent stress is the same in both 2-D and 3-D. The nozzle examined derived from the intersection of a cylindrical vessel of radius R=191.4 mm and thickness T=6.7 mm with a cylindrical nozzle of radius r=24.675 mm and thickness t=1.350 mm, for which the experimental results for an internal pressure load are known. The structure was subdivided into 96 finite, three-dimensional and isoparametric elements with 60 degrees of freedom and 661 total nodes. Both the analysis with a mechanical load as well as the analysis with a thermal load were carried out on this structure according to the Bersafe system. The thermal load consisted of a transient typical of an accident occurring in a sodium-cooled fast reactor, with a peak temperature (540 °C) for the sodium inside the vessel with an insulating argon temperature constant at 525 °C. The maximum value of the equivalent tension was found in the internal area at the union towards the vessel side. The analysis of the nozzle in 2-D consists in schematizing the structure as a cylinder-sphere intersection, where the sphere has a given relation to the

  9. Bifurcation structure of parameter plane for a family of unimodal piecewise smooth maps: Border-collision bifurcation curves

    International Nuclear Information System (INIS)

    Sushko, Iryna; Agliari, Anna; Gardini, Laura

    2006-01-01

    We study the structure of the 2D bifurcation diagram for a two-parameter family of piecewise smooth unimodal maps f with one break point. Analysing the parameters of the normal form for the border-collision bifurcation of an attracting n-cycle of the map f, we describe the possible kinds of dynamics associated with such a bifurcation. Emergence and role of border-collision bifurcation curves in the 2D bifurcation plane are studied. Particular attention is paid also to the curves of homoclinic bifurcations giving rise to the band merging of pieces of cyclic chaotic intervals

  10. Extended analysis of cooling curves

    International Nuclear Information System (INIS)

    Djurdjevic, M.B.; Kierkus, W.T.; Liliac, R.E.; Sokolowski, J.H.

    2002-01-01

    Thermal Analysis (TA) is the measurement of changes in a physical property of a material that is heated through a phase transformation temperature range. The temperature changes in the material are recorded as a function of the heating or cooling time in such a manner that allows for the detection of phase transformations. In order to increase accuracy, characteristic points on the cooling curve have been identified using the first derivative curve plotted versus time. In this paper, an alternative approach to the analysis of the cooling curve has been proposed. The first derivative curve has been plotted versus temperature and all characteristic points have been identified with the same accuracy achieved using the traditional method. The new cooling curve analysis also enables the Dendrite Coherency Point (DCP) to be detected using only one thermocouple. (author)
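
    The proposed analysis, plotting the first-derivative curve against temperature rather than time, can be sketched on a synthetic cooling curve; the curve shape and the arrest temperature below are invented for illustration:

```python
import numpy as np

# Synthetic cooling curve with a thermal arrest (latent-heat plateau)
# superimposed around t = 60 s; shape and values are illustrative only.
t = np.linspace(0.0, 300.0, 3001)                  # time, s
T = 300.0 + 400.0 * np.exp(-t / 120.0)             # smooth Newtonian cooling
T += 20.0 * np.exp(-((t - 60.0) / 10.0) ** 2)      # crude arrest bump

dTdt = np.gradient(T, t)    # first-derivative curve

# Traditional TA inspects dT/dt versus time; the paper's alternative
# plots the same derivative versus temperature, so transformation
# points appear at fixed temperatures regardless of cooling rate.
pairs = sorted(zip(T, dTdt))   # (temperature, derivative), ordered by T
```

    The characteristic points (local extrema of the derivative) are the same in either plot; ordering by temperature simply makes them directly comparable with the phase diagram.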

  11. Modelling of creep curves of Ni3Ge single crystals

    Science.gov (United States)

    Starenchenko, V. A.; Starenchenko, S. V.; Pantyukhova, O. D.; Solov'eva, Yu V.

    2015-01-01

    In this paper a creep model for alloys with the L12 superstructure is presented. The creep model is based on the idea of a superposition of mechanisms connected with different elementary deformation processes. Some of them are specific to the ordered L12 structure (anomalous mechanisms), others are typical of pure metals with the fcc structure (normal mechanisms): the accumulation of thermal APBs by the intersection of moving dislocations; the formation of APB tubes; the multiplication of superdislocations; the movement of single dislocations; the accumulation of point defects, such as vacancies and interstitial atoms; the accumulation of APBs at the climb of edge dislocations. The model takes into account the experimental observations of the wetting of antiphase boundaries and the emergence of the disordered phase within the ordered phase. Calculations of the creep curves are performed under different conditions. The model describes different kinds of creep curves and demonstrates the important role of deformation superlocalisation, which leads to inverse creep. The experimental and theoretical results agree rather well.

  12. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning, they can be used in the statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived by combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in the case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction sizes of 1.5 Gy and of 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and of the α/β value has also been simulated: considering α/β=1.6 Gy or α/β=4.1 Gy for kidney (clinical nephritis), the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.)
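
    A minimal sketch of the combination described above, assuming the standard Lyman sigmoid and an LQ-based normalized total dose; the parameter names are generic, not the authors' notation:

```python
import math

def ntd(total_dose, frac_dose, alpha_beta, ref_frac=2.0):
    """Normalized total dose (Gy): the dose in ref_frac-Gy fractions
    that is LQ-equivalent to total_dose delivered in frac_dose-Gy
    fractions."""
    return total_dose * (frac_dose + alpha_beta) / (ref_frac + alpha_beta)

def lyman_ntcp(dose, td50, m):
    """Lyman model: NTCP = Phi((D - TD50) / (m * TD50)), where Phi is
    the standard normal CDF; td50 and m are tissue-specific fit
    parameters (here for a uniformly irradiated reference volume)."""
    u = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
```

    Feeding NTD rather than the physical dose into the sigmoid is what shifts the complication-probability curves when the fraction size changes from 2 Gy to 1.5 Gy or 3 Gy; the histogram reduction step (not shown) collapses an inhomogeneous dose-volume histogram to the equivalent uniform dose that enters the same formula.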

  13. Gompertz: A Scilab Program for Estimating Gompertz Curve Using Gauss-Newton Method of Least Squares

    Directory of Open Access Journals (Sweden)

    Surajit Ghosh Dastidar

    2006-04-01

    Full Text Available A computer program for estimating the Gompertz curve using the Gauss-Newton method of least squares is described in detail. It is based on the estimation technique proposed in Reddy (1985). The program is developed using Scilab (version 3.1.1), a freely available scientific software package that can be downloaded from http://www.scilab.org/. Data are fed into the program from an external disk file, which should be in Microsoft Excel format. The output contains the sample size, tolerance limit, a list of initial as well as final estimates of the parameters, standard errors, values of the Gauss-Normal equations (namely GN1, GN2 and GN3), number of iterations, variance (σ²), Durbin-Watson statistic, goodness-of-fit measures such as R², D value, covariance matrix and residuals. It also displays a graphical output of the estimated curve vis-à-vis the observed curve. It is an improved version of the program proposed in Dastidar (2005).

  14. Gompertz: A Scilab Program for Estimating Gompertz Curve Using Gauss-Newton Method of Least Squares

    Directory of Open Access Journals (Sweden)

    Surajit Ghosh Dastidar

    2006-04-01

    Full Text Available A computer program for estimating the Gompertz curve using the Gauss-Newton method of least squares is described in detail. It is based on the estimation technique proposed in Reddy (1985). The program is developed using Scilab (version 3.1.1), a freely available scientific software package that can be downloaded from http://www.scilab.org/. Data are fed into the program from an external disk file, which should be in Microsoft Excel format. The output contains the sample size, tolerance limit, a list of initial as well as final estimates of the parameters, standard errors, values of the Gauss-Normal equations (namely GN1, GN2 and GN3), number of iterations, variance (σ²), Durbin-Watson statistic, goodness-of-fit measures such as R², D value, covariance matrix and residuals. It also displays a graphical output of the estimated curve vis-à-vis the observed curve. It is an improved version of the program proposed in Dastidar (2005).
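
    The Gauss-Newton iteration for the three-parameter Gompertz curve y = a·exp(−b·exp(−c·t)) can be sketched as follows; this is a generic implementation of the method, not a transcription of the Scilab program:

```python
import numpy as np

def gompertz(t, a, b, c):
    """Gompertz growth curve y = a * exp(-b * exp(-c * t))."""
    return a * np.exp(-b * np.exp(-c * t))

def jacobian(t, a, b, c):
    """Analytic partial derivatives of the Gompertz curve w.r.t. a, b, c."""
    e = np.exp(-c * t)
    g = np.exp(-b * e)
    return np.column_stack([g, -a * e * g, a * b * t * e * g])

def gauss_newton(t, y, p0, n_iter=50, tol=1e-10):
    """Plain (undamped) Gauss-Newton least squares: solve the normal
    equations via lstsq and update until the step is tiny. Needs a
    reasonable starting guess p0 = [a, b, c]."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - gompertz(t, *p)           # residuals
        J = jacobian(t, *p)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p += step
        if np.linalg.norm(step) < tol:
            break
    return p
```

    Being undamped, this iteration can diverge from a poor starting guess, which is why the program reports the number of iterations and a tolerance limit along with the final estimates.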

  15. Normal freezing of ideal ternary systems of the pseudobinary type

    Science.gov (United States)

    Li, C. H.

    1972-01-01

    Perfect liquid mixing but no solid diffusion is assumed in normal freezing. In addition, the molar compositions of the freezing solid and remaining liquid, respectively, follow the solidus and liquidus curves of the constitutional diagram. For the linear case, in which both the liquidus and solidus are perfectly straight lines, the normal freezing equation giving the fraction solidified at each melt temperature and the solute concentration profile in the frozen solid was determined as early as 1902, and has since been repeatedly published. Corresponding equations for quadratic, cubic or higher-degree liquidus and solidus lines have also been obtained. The equation of normal freezing for ideal ternary liquid solutions solidified into ideal solid solutions of the pseudobinary type is given. Sample computations with the use of this new equation were made and are given for the Ga-Al-As system.
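
    For reference, the classical linear-case result mentioned above (the equation known since 1902) reduces to a one-liner; this is the binary linear case only, not the ternary pseudobinary generalization derived in the paper:

```python
def scheil_cs(c0, k, fs):
    """Solute concentration in the solid forming at solid fraction fs,
    for normal freezing with straight liquidus and solidus lines:
    Cs = k * C0 * (1 - fs)**(k - 1), where k is the (constant)
    partition coefficient and C0 the initial melt concentration."""
    return k * c0 * (1.0 - fs) ** (k - 1.0)
```

    For k < 1 the first solid to freeze is purer than the melt (Cs = k·C0 at fs = 0) and the concentration rises as solidification proceeds, which is the solute profile the abstract refers to.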

  16. Mapping of iso exposure curves generated by conventional mobile radiodiagnostic equipment and dose in hospitalized patients

    International Nuclear Information System (INIS)

    Hoff, Gabriela; Fischer, Andreia Caroline Fischer da Silveira; Accurso, Andre; Andrade, Jose Rodrigo Mendes; Bacelar, Alexandre

    2011-01-01

    This paper set out to map iso-exposure curves in areas where mobile radiographic equipment is used. A Shimadzu mobile unit and two Siemens units were selected, and a non-anthropomorphic scatterer was used. Exposure was measured over a 4.20 × 4.20 m mesh in 30 cm steps, at half the height of the phantom, using the radiographic techniques 100 kVp and 63 mAs (Shimadzu) and 96 kVp and 40 mAs (Siemens). For estimation of the ambient equivalent dose over 12 months, the following workloads were considered: 3.55 mAs/examination and 44.5 procedures/month (adults); and 3.16 mAs/examination and 20.1 procedures/month (paediatrics). Only the values at a distance of 60 cm exceeded the maximum ambient equivalent dose limit defined for a free area (0.5 mSv/year). The points collected at 2.1 m from the centre of the primary beam were always below 12% of that limit, showing this to be a safe distance for hospitalized patients

  17. On the stress–energy tensor of quantum fields in curved spacetimes—comparison of different regularization schemes and symmetry of the Hadamard/Seeley–DeWitt coefficients

    International Nuclear Information System (INIS)

    Hack, Thomas-Paul; Moretti, Valter

    2012-01-01

    We review a few rigorous and partly unpublished results on the regularization of the stress–energy in quantum field theory on curved spacetimes: (1) the symmetry of the Hadamard/Seeley–DeWitt coefficients in smooth Riemannian and Lorentzian spacetimes, (2) the equivalence of the local ζ-function and the Hadamard-point-splitting procedure in smooth static spacetimes and (3) the equivalence of the DeWitt–Schwinger- and the Hadamard-point-splitting procedure in smooth Riemannian and Lorentzian spacetimes. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker’s 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  18. Tempo curves considered harmful

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1993-01-01

    In the literature of musicology, computer music research and the psychology of music, timing or tempo measurements are mostly presented in the form of continuous curves. The notion of these tempo curves is dangerous, despite its widespread use, because it lulls its users into the false impression

  19. Quantum mechanics and the equivalence principle

    International Nuclear Information System (INIS)

    Davies, P C W

    2004-01-01

    A quantum particle moving in a gravitational field may penetrate the classically forbidden region of the gravitational potential. This raises the question of whether the time of flight of a quantum particle in a gravitational field might deviate systematically from that of a classical particle due to tunnelling delay, representing a violation of the weak equivalence principle. I investigate this using a model quantum clock to measure the time of flight of a quantum particle in a uniform gravitational field, and show that a violation of the equivalence principle does not occur when the measurement is made far from the turning point of the classical trajectory. The results are then confirmed using the so-called dwell time definition of quantum tunnelling. I conclude with some remarks about the strong equivalence principle in quantum mechanics

  20. 99mTc-DTPA Pulmonary Clearance in Normals

    International Nuclear Information System (INIS)

    Chung, Soo Kyo; Yang, Woo Jin; Sohn, Hyung Sun; Shinn, Kyung Sub; Bahk, Yong Whee

    1994-01-01

    Pulmonary clearance of 99mTc-DTPA (PCD) has been used for the measurement of pulmonary epithelial permeability. It has been reported to be increased not only in a variety of pulmonary diseases, including ARDS and interstitial fibrosis, and in smokers, but also in normal subjects on positive end-expiratory pressure ventilation or after exercise. It was also noted that a decrease of pulmonary blood flow due to pulmonary arterial obstruction results in delayed PCD. The normal range of PCD varies between institutes. We prospectively measured PCD in 17 normal subjects (5 males and 12 females), consisting of staff and trainees in the department of radiology of Kangnam St. Mary's Hospital, using the original Bark Nebulizer (India). Age ranged from 32 to 43 years. 370 MBq of 99mTc-DTPA was inhaled in the supine position, and supine posterior images were subsequently obtained at 1 min/frame, in a 64 × 64 matrix and word mode, for 30 min. Regions of interest were set on each lung, the whole lungs, and the upper, middle and lower thirds of the right lung, respectively. A best-fit regression curve was obtained by the least-squares method from the initial 7 min after peak activity on each curve, and the time for half clearance of the maximum activity (t1/2) was calculated. Mean t1/2 was 51 ± 11.2 min for the whole lung. There was no significant difference between the t1/2 of the right and left lungs. Initial uptake was higher in the lower third, and t1/2 was shorter in the lower third than in the upper third (P<0.05). We reviewed several reports on PCD and compared our data with the others. In this study, the faster clearance in the lower third may be due to the position in which imaging was performed or the environment the subjects belong to, and further investigation is under way.
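
    The half-clearance calculation described above (a least-squares fit to the minutes following peak activity) can be sketched as a log-linear fit; this is a generic implementation, not the authors' exact software:

```python
import math

def half_time(times, counts):
    """Estimate t1/2 (in the units of `times`) by ordinary least squares
    on ln(counts) = ln(A) - k*t, assuming monoexponential clearance.
    Inputs are activity samples taken after the peak."""
    n = len(times)
    ys = [math.log(c) for c in counts]
    tm, ym = sum(times) / n, sum(ys) / n
    k = -sum((t - tm) * (y - ym) for t, y in zip(times, ys)) / \
        sum((t - tm) ** 2 for t in times)
    return math.log(2.0) / k
```

    Applying this to the seven post-peak frames of a region-of-interest time-activity curve gives the regional t1/2 values that the study compares between lung thirds.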