WorldWideScience

Sample records for normal curve equivalent

  1. Principal normal indicatrices of closed space curves

    DEFF Research Database (Denmark)

    Røgen, Peter

    1999-01-01

    A theorem due to J. Weiner, which is also proven by B. Solomon, implies that a principal normal indicatrix of a closed space curve with nonvanishing curvature has integrated geodesic curvature zero and contains no subarc with integrated geodesic curvature pi. We prove that the inverse problem alw...

  2. According to Jim: The Flawed Normal Curve of Intelligence

    Science.gov (United States)

    Gallagher, James J.

    2008-01-01

    In this article, the author talks about the normal curve of intelligence which he thinks is flawed and contends that wrong conclusions have been drawn based on this spurious normal curve. An example is that of racial and ethnic differences wherein some authors maintain that some ethnic and racial groups are clearly superior to others based on…

  3. Power curve report - with turbulence intensity normalization

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Wagner, Rozenn; Vesth, Allan

    ... additional shear and turbulence intensity filters are applied to the measured data. Secondly, the method for normalization to a given reference turbulence intensity level (as described in Annex M of the draft of IEC 61400-12-1 Ed.2 [3]) is applied. The measurements have been performed using DTU...

  4. On the projective normality of Artin-Schreier curves

    Directory of Open Access Journals (Sweden)

    Alberto Ravagnani

    2013-11-01

    Full Text Available In this paper we study the projective normality of certain Artin-Schreier curves Y_f defined over a field F of characteristic p by the equations y^q + y = f(x), q being a power of p and f in F[x] being a polynomial in x of degree m, with (m, p) = 1. Many Y_f curves are singular and so, to be precise, here we study the projective normality of appropriate projective models of their normalization.

  5. Sketching Curves for Normal Distributions--Geometric Connections

    Science.gov (United States)

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…
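
The missing geometric knowledge this abstract mentions comes down to two facts about the curve: its peak height is 1/(σ√(2π)) at x = μ, and its concavity changes exactly at μ ± σ. A brief sketch verifying both numerically (the parameter values are arbitrary examples, not from the paper):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the N(mu, sigma^2) distribution."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def second_derivative(f, x, h=1e-5):
    """Central-difference estimate of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

mu, sigma = 10.0, 2.0          # example parameters for a sketch
f = lambda x: normal_pdf(x, mu, sigma)

# Peak height to draw first: 1/(sigma*sqrt(2*pi)), attained at x = mu.
peak = 1.0 / (sigma * math.sqrt(2.0 * math.pi))

# The curve changes concavity exactly at mu +/- sigma, which is what makes
# a freehand sketch look right: concave down inside, concave up outside.
inside = second_derivative(f, mu + 0.9 * sigma)    # f'' < 0 here
outside = second_derivative(f, mu + 1.1 * sigma)   # f'' > 0 here
```

Marking μ ± σ as the concavity-change points before drawing is the geometric connection most imprecise sketches miss.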

  6. Modelling growth curves of Nigerian indigenous normal feather ...

    African Journals Online (AJOL)

    This study was conducted to predict the growth curve parameters using Bayesian Gompertz and logistic models and to compare the two growth functions in describing body weight changes across age in the Nigerian indigenous normal feather chicken. Each chick was wing-tagged at day old and body weights were ...

  7. Incorporating Measurement Non-Equivalence in a Cross-Study Latent Growth Curve Analysis.

    Science.gov (United States)

    Flora, David B; Curran, Patrick J; Hussong, Andrea M; Edwards, Michael C

    2008-10-01

    A large literature emphasizes the importance of testing for measurement equivalence in scales that may be used as observed variables in structural equation modeling applications. When the same construct is measured across more than one developmental period, as in a longitudinal study, it can be especially critical to establish measurement equivalence, or invariance, across the developmental periods. Similarly, when data from more than one study are combined into a single analysis, it is again important to assess measurement equivalence across the data sources. Yet, how to incorporate non-equivalence when it is discovered is not well described for applied researchers. Here, we present an item response theory approach that can be used to create scale scores from measures while explicitly accounting for non-equivalence. We demonstrate these methods in the context of a latent curve analysis in which data from two separate studies are combined to create a single longitudinal model spanning several developmental periods.

  8. Eyewitness identification: Bayesian information gain, base-rate effect equivalency curves, and reasonable suspicion.

    Science.gov (United States)

    Wells, Gary L; Yang, Yueran; Smalarz, Laura

    2015-04-01

    We provide a novel Bayesian treatment of the eyewitness identification problem as it relates to various system variables, such as instruction effects, lineup presentation format, lineup-filler similarity, lineup administrator influence, and show-ups versus lineups. We describe why eyewitness identification is a natural Bayesian problem and how numerous important observations require careful consideration of base rates. Moreover, we argue that the base rate in eyewitness identification should be construed as a system variable (under the control of the justice system). We then use prior-by-posterior curves and information-gain curves to examine data obtained from a large number of published experiments. Next, we show how information-gain curves are moderated by system variables and by witness confidence and we note how information-gain curves reveal that lineups are consistently more proficient at incriminating the guilty than they are at exonerating the innocent. We then introduce a new type of analysis that we developed called base rate effect-equivalency (BREE) curves. BREE curves display how much change in the base rate is required to match the impact of any given system variable. The results indicate that even relatively modest changes to the base rate can have more impact on the reliability of eyewitness identification evidence than do the traditional system variables that have received so much attention in the literature. We note how this Bayesian analysis of eyewitness identification has implications for the question of whether there ought to be a reasonable-suspicion criterion for placing a person into the jeopardy of an identification procedure. (c) 2015 APA, all rights reserved).
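
The base-rate reasoning described above reduces to Bayes' rule over culprit-present and culprit-absent lineups. A minimal sketch with hypothetical identification rates (the function and the numbers are illustrative, not the paper's data):

```python
def posterior_guilt(base_rate, hit_rate, false_id_rate):
    """P(suspect guilty | suspect identified) by Bayes' rule, where base_rate
    is P(culprit present in lineup), hit_rate is P(ID | culprit present) and
    false_id_rate is P(ID of innocent suspect | culprit absent)."""
    p_id = base_rate * hit_rate + (1.0 - base_rate) * false_id_rate
    return base_rate * hit_rate / p_id

# Hypothetical rates: the suspect is picked in 50% of culprit-present
# lineups and 10% of culprit-absent ones.
hit, fa = 0.50, 0.10
gains = {base: posterior_guilt(base, hit, fa) - base
         for base in (0.1, 0.5, 0.9)}   # prior-to-posterior information gain
```

The prior-by-posterior and information-gain curves in the paper trace exactly this prior-to-posterior map as the base rate and the system variables vary.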

  9. Universal Survival Curve and Single Fraction Equivalent Dose: Useful Tools in Understanding Potency of Ablative Radiotherapy

    International Nuclear Information System (INIS)

    Park, Clint; Papiez, Lech; Zhang Shichuan; Story, Michael; Timmerman, Robert D.

    2008-01-01

    Purpose: Overprediction of the potency and toxicity of high-dose ablative radiotherapy such as stereotactic body radiotherapy (SBRT) by the linear quadratic (LQ) model has led many clinicians to hesitate to adopt this efficacious and well-tolerated therapeutic option. The aim of this study was to offer an alternative method of analyzing the effect of SBRT by constructing a universal survival curve (USC) that provides superior approximation of the experimentally measured survival curves in the ablative, high-dose range without losing the strengths of the LQ model around the shoulder. Methods and Materials: The USC was constructed by hybridizing two classic radiobiologic models: the LQ model and the multitarget model. We have assumed that the LQ model gives a good description for conventionally fractionated radiotherapy (CFRT) for doses up to the shoulder. For ablative doses beyond the shoulder, the survival curve is better described as a straight line, as predicted by the multitarget model. The USC smoothly interpolates from the parabola predicted by the LQ model to the terminal asymptote of the multitarget model in the high-dose region. From the USC, we derived two equivalence functions, the biologically effective dose and the single fraction equivalent dose, for both CFRT and SBRT. Results: The validity of the USC was tested by using previously published parameters of the LQ and multitarget models for non-small-cell lung cancer cell lines. A comparison of the goodness-of-fit of the LQ and USC models was made to a high-dose survival curve of the H460 non-small-cell lung cancer cell line. Conclusion: The USC can be used to compare the dose fractionation schemes of both CFRT and SBRT. The USC provides an empirically and clinically well-justified rationale for SBRT while preserving the strengths of the LQ model for CFRT.
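
The hybrid construction can be sketched directly from the abstract: log survival follows the LQ parabola up to a transition dose D_T and the multitarget straight line beyond it, with value and slope matched at D_T. The parameter values below are hypothetical, not the paper's fitted NSCLC values:

```python
# Hypothetical radiobiological parameters (illustrative only).
alpha, beta = 0.33, 0.033   # LQ model: ln S(D) = -(alpha*D + beta*D^2)
D0 = 1.25                   # multitarget terminal slope: ln S = -(D - Dq)/D0

# C^1 smoothness fixes the transition dose D_T (slope match) ...
DT = (1.0 / D0 - alpha) / (2.0 * beta)
# ... and the extrapolation intercept Dq (value match at D_T).
Dq = DT - D0 * (alpha * DT + beta * DT * DT)

def log_survival_usc(D):
    """Universal survival curve: LQ shoulder below DT, linear tail above."""
    if D <= DT:
        return -(alpha * D + beta * D * D)
    return -(D - Dq) / D0
```

With these illustrative numbers the transition lands near D_T ≈ 7 Gy, i.e. in the ablative range, which is the regime where the pure LQ parabola overpredicts cell kill.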

  10. A measurable Lawson criterion and hydro-equivalent curves for inertial confinement fusion

    International Nuclear Information System (INIS)

    Zhou, C. D.; Betti, R.

    2008-01-01

    It is shown that the ignition condition (Lawson criterion) for inertial confinement fusion (ICF) can be cast in a form dependent on the only two parameters of the compressed fuel assembly that can be measured with existing techniques: the hot spot ion temperature (T_i^h) and the total areal density (ρR_tot), which includes the cold shell contribution. A marginal ignition curve is derived in the (ρR_tot, T_i^h) plane and current implosion experiments are compared with the ignition curve. On this plane, hydrodynamic equivalent curves show how a given implosion would perform with respect to the ignition condition when scaled up in the laser-driver energy. For ⟨T_i^h⟩_n > 3 keV, the marginal ignition condition takes the approximate form ⟨T_i^h⟩_n^2.6 · ⟨ρR_tot⟩_n > 50 keV^2.6 · g/cm^2, where ⟨ρR_tot⟩_n and ⟨T_i^h⟩_n are the burn-averaged total areal density and hot spot ion temperature, respectively. Both quantities are calculated without accounting for the alpha-particle energy deposition. Such a criterion can be used to determine how surrogate D2 and subignited DT target implosions perform with respect to the one-dimensional ignition threshold.
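
Taking the criterion in the form quoted above, a one-line ignition metric can be evaluated for any measured pair (⟨ρR_tot⟩_n, ⟨T_i^h⟩_n). The data points below are hypothetical, not measurements from the paper:

```python
def ignition_parameter(rhoR_tot, T_hot):
    """Normalized Lawson-like parameter: exceeds 1 once
    rhoR_tot * T_hot**2.6 exceeds 50 keV^2.6 g/cm^2.
    rhoR_tot in g/cm^2, burn-averaged hot-spot T_hot in keV."""
    return rhoR_tot * T_hot**2.6 / 50.0

# Hypothetical implosion data points (illustrative only):
chi_cold = ignition_parameter(0.5, 2.0)   # well below the marginal curve
chi_hot = ignition_parameter(1.5, 5.0)    # above it
```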

  11. Equivalent dose determination in foraminifera: analytical description of the CO2⁻ signal dose-response curve

    International Nuclear Information System (INIS)

    Hoffmann, D.; Woda, C.; Mangini, A.

    2003-01-01

    The dose response of the CO2⁻ signal (g = 2.0006) in foraminifera with ages between 19 and 300 ka is investigated. The sum of two exponential saturation functions is an adequate function to describe the dose-response curve up to an additional dose of 8000 Gy. It yields excellent dating results but requires artificial doses of at least 5000 Gy. For small additional doses of about 500 Gy, the single exponential saturation function can be used to calculate a reliable equivalent dose D_E, although it does not describe the dose response at higher doses. The CO2⁻ signal dose response indicates that the signal has two components, of which one is less stable than the other.
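
The growth-curve algebra in the abstract is simple enough to sketch: a two-component saturating signal, and an equivalent dose D_E recovered by inverting the single-exponential approximation. All parameter values here are hypothetical:

```python
import math

def sat_two(D, A1, D1, A2, D2):
    """Sum of two exponential saturation functions, the form found adequate
    for the dose-response curve up to ~8000 Gy of additional dose."""
    return A1 * (1.0 - math.exp(-D / D1)) + A2 * (1.0 - math.exp(-D / D2))

def sat_one(D, A, D0):
    """Single exponential saturation, usable for small additional doses."""
    return A * (1.0 - math.exp(-D / D0))

def equivalent_dose(intensity, A, D0, tol=1e-9):
    """Invert sat_one for D_E by bisection (sat_one is monotone in D)."""
    lo, hi = 0.0, 50.0 * D0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sat_one(mid, A, D0) < intensity:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```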

  12. DUAL TIMELIKE NORMAL AND DUAL TIMELIKE SPHERICAL CURVES IN DUAL MINKOWSKI SPACE

    OpenAIRE

    ÖNDER, Mehmet

    2009-01-01

    Abstract: In this paper, we give characterizations of dual timelike normal and dual timelike spherical curves in the dual Minkowski 3-space and we show that every dual timelike normal curve is also a dual timelike spherical curve. Keywords: Normal curves, Dual Minkowski 3-Space, Dual Timelike curves. Mathematics Subject Classifications (2000): 53C50, 53C40.

  13. Optimization of equivalent uniform dose using the L-curve criterion

    International Nuclear Information System (INIS)

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R

    2007-01-01

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning

  14. Optimization of equivalent uniform dose using the L-curve criterion

    Energy Technology Data Exchange (ETDEWEB)

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)

    2007-09-21

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.

  15. Optimization of equivalent uniform dose using the L-curve criterion.

    Science.gov (United States)

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R

    2007-10-07

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
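
The regularization trade-off these three records describe can be reproduced on a toy problem: as the regularization weight grows, the data-fit residual rises while the solution norm falls, and the corner of the resulting log-log curve is the L-curve choice. The 2×2 system below is a stand-in for the ill-posed inverse-planning problem, not the paper's beamlet matrix:

```python
import numpy as np

# Ill-conditioned toy system A x = b with slightly inconsistent data.
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.1])

def tikhonov(lam):
    """x(lam) = argmin ||A x - b||^2 + lam^2 ||x||^2 (variational
    regularization in its simplest form)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

lams = np.logspace(-6, 1, 50)
residuals = [float(np.linalg.norm(A @ tikhonov(l) - b)) for l in lams]
norms = [float(np.linalg.norm(tikhonov(l))) for l in lams]
# Plotted on log-log axes, (residual, norm) traces an "L"; the corner of
# maximum curvature balances fidelity to the data against smoothness.
```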

  16. Mapping of isoexposure curves for evaluation of equivalent environmental doses for radiodiagnostic mobile equipment

    International Nuclear Information System (INIS)

    Bacelar, Alexandre; Andrade, Jose Rodrigo Mendes; Fischer, Andreia Caroline Fischer da Silveira; Accurso, Andre; Hoff, Gabriela

    2011-01-01

    This paper generates isoexposure curves in areas where mobile radiodiagnostic equipment is used, for evaluation of the isokerma map and the ambient dose equivalent H*(d). A Shimadzu mobile unit and two Siemens units were used, with a non-anthropomorphic scatterer. The exposure was measured on a 4.20 x 4.20 square-meter mesh in steps of 30 cm, at half height from the scatterer. H*(d) was estimated for a worker present in all procedures over a period of 11 months, assuming 3.55 mAs/examination and 44.5 procedures/month (adult ICU) and 3.16 mAs/examination and 20.1 procedures/month (pediatric ICU). Points were observed where H*(d) exceeded the limit established for a free area within a radius of 30 cm from the central beam of radiation in the pediatric ICU case and 60 cm in the adult ICU case. Points located 2.1 m from the center presented values lower than 25% of those limits.

  17. ACL graft can replicate the normal ligament's tension curve

    NARCIS (Netherlands)

    Arnold, MP; Verdonschot, N; van Kampen, A

    2005-01-01

    The anatomical femoral insertion of the normal anterior cruciate ligament (ACL) lies on the deep portion of the lateral wall of the intercondylar fossa. Following the deep bone-cartilage border, it stretches from 11 o'clock high in the notch all the way down to its lowest border at 8 o'clock. ...

  18. Computerized tomography and head growth curve in infantile macrocephaly with normal psychomotor development

    International Nuclear Information System (INIS)

    Eda, Isematsu; Kitahara, Tadashi; Takashima, Sachio; Takeshita, Kenzo

    1982-01-01

    Macrocephaly was defined as a head measuring larger than the 98th percentile. We evaluated CT findings and head growth curves in 25 infants with large heads. Ten (40%) of the 25 infants were normal developmentally and neurologically. Five (20%) were mentally retarded. The other 10 infants (40%) included hydrocephalus (4 cases), malformation syndrome (3 cases), brain tumor (1 case), metabolic disorder (1 case) and degenerative disorder (1 case). Their head growth curves were classified as types (I), (II) and (III): type (I), head growth excessive relative to 2 SDs above normal; type (II), head growth curve gradually approaching 2 SDs above normal; type (III), head growth curve parallel to 2 SDs above normal. The ten infants with macrocephaly and normal psychomotor development, all male, were studied clinically and radiologically in detail. Their CT pictures showed normal or various abnormal findings: ventricular dilatations, wide frontal and temporal subdural spaces, wide interhemispheric fissures, wide cerebral sulci, and large sylvian fissures. CT findings in 2 of them, which became normal after repeated CT examinations, resembled benign subdural collection; in one, the findings were those of external hydrocephalus. Head growth curves were obtained for 8 of them: six showed type (II) and two showed type (III); the remaining 2 cases could not be followed up. We conclude that CT findings in infants with macrocephaly and normal psychomotor development are normal or show various abnormalities (ventricular dilatations, benign subdural collection, external hydrocephalus), and that their head growth curves are at least not excessive. Infants with mental retardation showed CT findings and head growth curves similar to those of infants with normal psychomotor development; it was difficult to distinguish normal from mentally retarded infants by either CT findings or head growth curves. (author)

  19. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    Full Text Available MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfying quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.

  20. The Hartshorne-Rao module of curves on rational normal scrolls

    Directory of Open Access Journals (Sweden)

    Roberta Di Gennaro

    2000-09-01

    Full Text Available We study the Hartshorne-Rao module of curves lying on a rational normal scroll S_e of invariant e ≥ 0 in P^{e+3}. We calculate the Rao function and characterize the aCM curves on S_e. Finally, we give an algorithm to check whether a curve is aCM or not and, in the second case, to calculate the Rao function.

  1. On the possible "normalization" of experimental curves of 230Th vertical distribution in abyssal oceanic sediments

    International Nuclear Information System (INIS)

    Kuznetsov, Yu.V.; Al'terman, Eh.I.; Lisitsyn, A.P. (AN SSSR, Moscow. Inst. Okeanologii)

    1981-01-01

    The possibilities of the method of normalizing experimental ionium (230Th) curves for dating abyssal sediments and establishing their accumulation rates are studied. The method is based on the correlation between extrema of the ionium curves and variations of the Fe, Mn, C_org and P contents in abyssal oceanic sediments. It has been found that the above method can be successfully applied to correct 230Th vertical distribution data obtained by low-background γ-spectrometry. The method gives the most reliable results when the vertical distribution curves of the elements that concentrate 230Th vary in step with one another. Normalization of the experimental ionium curves in many cases makes it possible to establish the age stratification of the sediment.

  2. High-fructose corn syrup and sucrose have equivalent effects on energy-regulating hormones at normal human consumption levels.

    Science.gov (United States)

    Yu, Zhiping; Lowndes, Joshua; Rippe, James

    2013-12-01

    Intake of high-fructose corn syrup (HFCS) has been suggested to contribute to the increased prevalence of obesity, whereas a number of studies and organizations have reported metabolic equivalence between HFCS and sucrose. We hypothesized that HFCS and sucrose would have similar effects on energy-regulating hormones and metabolic substrates at normal levels of human consumption and that these values would not change over a 10-week, free-living period at these consumption levels. This was a randomized, prospective, double-blind, parallel group study in which 138 adult men and women consumed 10 weeks of low-fat milk sweetened with either HFCS or sucrose at levels of the 25th, 50th, and 90th percentile population consumption of fructose (the equivalent of 40, 90, or 150 g of sugar per day in a 2000-kcal diet). Before and after the 10-week intervention, 24-hour blood samples were collected. The area under the curve (AUC) for glucose, insulin, leptin, active ghrelin, triglyceride, and uric acid was measured. There were no group differences at baseline or posttesting for all outcomes (interaction, P > .05). The AUC response of glucose, active ghrelin, and uric acid did not change between baseline and posttesting (P > .05), whereas the AUC response of insulin (P < .05), leptin (P < .001), and triglyceride (P < .01) increased over the course of the intervention when the 6 groups were averaged. We conclude that there are no differences in the metabolic effects of HFCS and sucrose when compared at low, medium, and high levels of consumption. © 2013 Elsevier Inc. All rights reserved.
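
The AUC outcome measure used here is ordinarily computed with the trapezoidal rule over the 24-hour sampling grid. A minimal sketch with hypothetical sample times and values (not the study's data):

```python
def auc_trapezoid(times, values):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum(0.5 * (values[i] + values[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# Hypothetical 24-h sampling grid (hours) and glucose readings (mg/dL):
t = [0, 4, 8, 12, 16, 20, 24]
glucose = [90, 110, 95, 120, 100, 92, 90]
auc = auc_trapezoid(t, glucose)   # units: mg/dL * h
```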

  3. Analysis of normalized-characteristic curves and determination of the granulometric state of dissolved uranium dioxides

    International Nuclear Information System (INIS)

    Melichar, F.; Neumann, L.

    1977-01-01

    Methods are presented for the analysis of normalized-characteristic curves, which make it possible to determine the granulometric composition of a dissolved polydispersion - the cumulative mass distribution of particles - as a function of the relative particle size. If the size of the largest particle in the dissolved polydispersion is known, these methods allow the determination of the dependence of the cumulative mass ratios of particles on their absolute sizes. In the inverse method of the geometrical model for determining the granulometric composition of a dissolved polydispersion, the polydispersion is represented by a finite number of monodispersions. An accurate analysis of normalized-characteristic equations leads to the Akselrud dissolution model. In contrast to the other two methods, the latter allows the determination of the granulometric composition for an arbitrary number of particle sizes. The method of the granulometric atlas estimates the granulometric composition of a dissolved polydispersion by comparing a normalized-characteristic curve of unknown granulometric composition with an atlas of normalized-characteristic curves for selected granulometric spectra of polydispersions. (author)

  4. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility : A Model Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

    First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. We then deduce an expansion of the implied normal volatility in terms of the time-value of a European call option. Next, we formulate an equivalence between the implied normal volatility and the lognormal implied volatility for any strike and any model, which generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.
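
The normal (Bachelier) pricing formula behind "implied normal volatility" is standard: C = (F − K)Φ(d) + σ_N√T φ(d) with d = (F − K)/(σ_N√T). Since the price is strictly increasing in σ_N, the implied normal volatility can be recovered by bisection; the expansion and equivalence results in the paper refine this brute-force inversion. A sketch:

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bachelier_call(F, K, T, sigma_n):
    """Forward price of a call in the normal (Bachelier) model."""
    if sigma_n <= 0.0:
        return max(F - K, 0.0)
    d = (F - K) / (sigma_n * math.sqrt(T))
    return (F - K) * norm_cdf(d) + sigma_n * math.sqrt(T) * norm_pdf(d)

def implied_normal_vol(price, F, K, T, tol=1e-10):
    """Invert the monotone price-in-vol map by bisection."""
    lo, hi = 0.0, 10.0 * max(abs(F - K), F, 1.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bachelier_call(F, K, T, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At the money (F = K) the formula collapses to C = σ_N√(T/2π), the usual quick sanity check.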

  5. Equivalent distributed capacitance model of oxide traps on frequency dispersion of C-V curve for MOS capacitors

    Science.gov (United States)

    Lu, Han-Han; Xu, Jing-Ping; Liu, Lu; Lai, Pui-To; Tang, Wing-Man

    2016-11-01

    An equivalent distributed capacitance model is established by considering only the gate oxide-trap capacitance to explain the frequency dispersion in the C-V curve of MOS capacitors measured for a frequency range from 1 kHz to 1 MHz. The proposed model is based on the Fermi-Dirac statistics and the charging/discharging effects of the oxide traps induced by a small ac signal. The validity of the proposed model is confirmed by the good agreement between the simulated results and experimental data. Simulations indicate that the capacitance dispersion of an MOS capacitor under accumulation and near flatband is mainly caused by traps adjacent to the oxide/semiconductor interface, with negligible effects from the traps far from the interface, and the relevant distance from the interface at which the traps can still contribute to the gate capacitance is also discussed. In addition, by excluding the negligible effect of oxide-trap conductance, the model avoids the use of imaginary numbers and complex calculations, and thus is simple and intuitive. Project supported by the National Natural Science Foundation of China (Grant Nos. 61176100 and 61274112), the University Development Fund of the University of Hong Kong, China (Grant No. 00600009), and the Hong Kong Polytechnic University, China (Grant No. 1-ZVB1).

  6. Equivalent distributed capacitance model of oxide traps on frequency dispersion of C–V curve for MOS capacitors

    International Nuclear Information System (INIS)

    Lu Han-Han; Xu Jing-Ping; Liu Lu; Lai Pui-To; Tang Wing-Man

    2016-01-01

    An equivalent distributed capacitance model is established by considering only the gate oxide-trap capacitance to explain the frequency dispersion in the C–V curve of MOS capacitors measured for a frequency range from 1 kHz to 1 MHz. The proposed model is based on the Fermi–Dirac statistics and the charging/discharging effects of the oxide traps induced by a small ac signal. The validity of the proposed model is confirmed by the good agreement between the simulated results and experimental data. Simulations indicate that the capacitance dispersion of an MOS capacitor under accumulation and near flatband is mainly caused by traps adjacent to the oxide/semiconductor interface, with negligible effects from the traps far from the interface, and the relevant distance from the interface at which the traps can still contribute to the gate capacitance is also discussed. In addition, by excluding the negligible effect of oxide-trap conductance, the model avoids the use of imaginary numbers and complex calculations, and thus is simple and intuitive. (paper)

  7. Design of elliptic curve cryptoprocessors over GF(2^163) using the Gaussian normal basis

    Directory of Open Access Journals (Sweden)

    Paulo Cesar Realpe

    2014-05-01

    Full Text Available This paper presents the efficient hardware implementation of cryptoprocessors that carry out the scalar multiplication kP over the finite field GF(2^163) using two digit-level multipliers. The finite field arithmetic operations were implemented using a Gaussian normal basis (GNB) representation, and the scalar multiplication kP was implemented using the Lopez-Dahab algorithm, the 2-NAF halve-and-add algorithm and the w-tNAF method for Koblitz curves. The processors were designed using VHDL description, synthesized on the Stratix-IV FPGA using Quartus II 12.0 and verified using SignalTAP II and Matlab. The simulation results show that the cryptoprocessors present very good performance in carrying out the scalar multiplication kP. In this case, the computation times of the multiplication kP using Lopez-Dahab, 2-NAF halve-and-add and 16-tNAF for Koblitz curves were 13.37 µs, 16.90 µs and 5.05 µs, respectively.
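
The scalar multiplication kP that these cryptoprocessors accelerate is, at its core, the double-and-add loop. The sketch below runs over a tiny prime-field curve purely for illustration; the paper's designs work over the binary field GF(2^163) in a Gaussian normal basis, where squaring becomes a cyclic shift in hardware:

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17) (illustrative only).
P_MOD, A_COEF = 17, 2
INF = None  # point at infinity (group identity)

def ec_add(P, Q):
    """Affine point addition/doubling on the toy curve."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                      # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + A_COEF) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right double-and-add computation of kP."""
    R = INF
    for bit in bin(k)[2:]:
        R = ec_add(R, R)        # always double
        if bit == "1":
            R = ec_add(R, P)    # conditionally add
    return R
```

The NAF and τ-NAF recodings benchmarked in the paper reduce the number of conditional adds in exactly this loop.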

  8. A note on families of fragility curves

    International Nuclear Information System (INIS)

    Kaplan, S.; Bier, V.M.; Bley, D.C.

    1989-01-01

    In the quantitative assessment of seismic risk, uncertainty in the fragility of a structural component is usually expressed by putting forth a family of fragility curves, with probability serving as the parameter of the family. Commonly, a lognormal shape is used both for the individual curves and for the expression of uncertainty over the family. A so-called composite single curve can also be drawn and used for purposes of approximation. This composite curve is often regarded as equivalent to the mean curve of the family. The equality seems intuitively reasonable but, according to the authors, has never been proven. This paper proves the equivalence mathematically. Moreover, the authors show that this equivalence between fragility curves is itself equivalent to an identity property of the standard normal probability curve. Thus, in the course of proving the fragility-curve hypothesis, the authors have also proved a rather obscure, but interesting and perhaps previously unrecognized, property of the standard normal curve.
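
The equivalence this note proves can be checked numerically: averaging the lognormal fragility family over the uncertainty variable reproduces a single lognormal whose log-standard-deviation is the root-sum-square of the randomness and uncertainty components. The parameter values below are illustrative:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta_R, beta_U = 0.3, 0.4   # randomness / uncertainty log-stds (illustrative)
mu = math.log(0.8)          # log of the median capacity

def mean_curve(ln_a, n=4000):
    """Mean of the fragility family: integrate the conditional curve
    Phi((ln_a - mu - beta_U*z)/beta_R) over z ~ N(0,1), trapezoidal rule."""
    lo, hi = -8.0, 8.0
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        z = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        weight = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        total += w * Phi((ln_a - mu - beta_U * z) / beta_R) * weight
    return total * h

def composite_curve(ln_a):
    """Single composite lognormal with beta_C = sqrt(beta_R^2 + beta_U^2)."""
    return Phi((ln_a - mu) / math.hypot(beta_R, beta_U))
```

The underlying identity is ∫ Φ((t − β_U z)/β_R) φ(z) dz = Φ(t/√(β_R² + β_U²)), which appears to be the "identity property of the standard normal probability curve" the abstract refers to.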

  9. Delay Discounting Rates Are Temporally Stable in an Equivalent Present Value Procedure Using Theoretical and Area under the Curve Analyses

    Science.gov (United States)

    Harrison, Justin; McKay, Ryan

    2012-01-01

    Temporal discounting rates have become a popular dependent variable in social science research. While choice procedures are commonly employed to measure discounting rates, equivalent present value (EPV) procedures may be more sensitive to experimental manipulation. However, their use has been impeded by the absence of test-retest reliability data.…

  10. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    Science.gov (United States)

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
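The two quantities tied together in this record can be sketched directly: the Lyman NTCP for a uniformly irradiated organ is the standard normal CDF evaluated at t = (D − TD50)/(m·TD50), and the Niemierko generalized EUD reduces a non-uniform dose distribution to the uniform dose with the same effect. Parameter values below are hypothetical, not the paper's fitted data.

```python
# Lyman NTCP for uniform irradiation, and the Niemierko generalized EUD.
import math

def lyman_ntcp(dose, td50, m):
    """Lyman NTCP for a uniformly irradiated organ: Phi((D - TD50)/(m*TD50))."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def niemierko_eud(doses, volumes, n):
    """Generalized EUD: (sum_i v_i * d_i^(1/n))^n with fractional volumes v_i."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
```

Feeding the EUD of a non-uniform distribution back into the uniform-dose NTCP formula is exactly the "NTCP as a function of EUD" representation the abstract describes.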

  11. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD)

    International Nuclear Information System (INIS)

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-01

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within ∼0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD50, and conversely m and TD50 are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d_ref, n, v_eff and the Niemierko equivalent uniform dose (EUD), where d_ref and v_eff are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.

  12. Asymptotic and numerical prediction of current-voltage curves for an organic bilayer solar cell under varying illumination and comparison to the Shockley equivalent circuit

    KAUST Repository

    Foster, J. M.

    2013-01-01

    In this study, a drift-diffusion model is used to derive the current-voltage curves of an organic bilayer solar cell consisting of slabs of electron acceptor and electron donor materials sandwiched together between current collectors. A simplified version of the standard drift-diffusion equations is employed in which minority carrier densities are neglected. This is justified by the large disparities in electron affinity and ionisation potential between the two materials. The resulting equations are solved (via both asymptotic and numerical techniques) in conjunction with (i) Ohmic boundary conditions on the contacts and (ii) an internal boundary condition, imposed on the interface between the two materials, that accounts for charge pair generation (resulting from the dissociation of excitons) and charge pair recombination. Current-voltage curves are calculated from the solution to this model as a function of the strength of the solar charge generation. In the physically relevant power generating regime, it is shown that these current-voltage curves are well-approximated by a Shockley equivalent circuit model. Furthermore, since our drift-diffusion model is predictive, it can be used to directly calculate equivalent circuit parameters from the material parameters of the device. © 2013 AIP Publishing LLC.
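In the power-generating regime discussed here, the Shockley equivalent circuit reduces (neglecting series and shunt resistances) to J(V) = J_ph − J_0(exp(V/(n·V_T)) − 1). A sketch with hypothetical parameter values, not the paper's extracted ones:

```python
# Ideal Shockley equivalent circuit (no series/shunt resistance), the model
# the drift-diffusion current-voltage curves are compared against.
import math

V_T = 0.0257   # thermal voltage at ~298 K, volts

def shockley_current(v, j_ph, j0, n_ideality):
    """J(V) = J_ph - J0*(exp(V/(n*V_T)) - 1): photocurrent minus diode current."""
    return j_ph - j0 * (math.exp(v / (n_ideality * V_T)) - 1.0)

def open_circuit_voltage(j_ph, j0, n_ideality):
    """Voc solving J(Voc) = 0: n*V_T*ln(1 + J_ph/J0)."""
    return n_ideality * V_T * math.log(1.0 + j_ph / j0)
```

Because the drift-diffusion model is predictive, the paper's point is that J_ph, J_0 and n in this circuit can be computed from material parameters rather than fitted.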

  13. On the variability of the salting-out curves of proteins of normal human plasma and serum

    NARCIS (Netherlands)

    Steyn-Parvé, Elizabeth P.; Hout, A.J. van den

    1953-01-01

    Salting-out curves of proteins of normal human plasma reflect the influence of a number of other factors besides the protein composition: the manner of obtaining the blood, the nature of the anti-coagulant used, the non-protein components of the plasma. Diagrams of serum and plasma obtained from

  14. Equivalent intraperitoneal doses of ibuprofen supplemented in drinking water or in diet: a behavioral and biochemical assay using antinociceptive and thromboxane inhibitory dose–response curves in mice

    Directory of Open Access Journals (Sweden)

    Raghda A.M. Salama

    2016-07-01

    Full Text Available Background. Ibuprofen is used chronically in different animal models of inflammation by administration in drinking water or in diet, due to its short half-life. Though this practice has been used for years, ibuprofen doses were never assayed against parenteral dose–response curves. This study aims at identifying the equivalent intraperitoneal (i.p.) doses of ibuprofen when it is administered in drinking water or in diet. Methods. Bioassays were performed using the formalin test and an incisional pain model for antinociceptive efficacy, and serum TXB2 for eicosanoid inhibitory activity. The dose–response curve of i.p. administered ibuprofen was constructed for each test using 50, 75, 100 and 200 mg/kg body weight (b.w.). The dose–response curves were constructed from phase 2a of the formalin test (the most sensitive phase to COX inhibitory agents), the area under the 'change in mechanical threshold'-time curve in the incisional pain model, and serum TXB2 levels. The assayed ibuprofen concentrations administered in drinking water were 0.2, 0.35 and 0.6 mg/ml, and those administered in diet were 82, 263 and 375 mg/kg diet. Results. The 3 concentrations applied in drinking water lay between 73.6 and 85.5 mg/kg b.w., i.p., in the formalin test; between 58.9 and 77.8 mg/kg b.w., i.p., in the incisional pain model; and between 71.8 and 125.8 mg/kg b.w., i.p., for serum TXB2 levels. The 3 concentrations administered in diet lay between 67.6 and 83.8 mg/kg b.w., i.p., in the formalin test; between 52.7 and 68.6 mg/kg b.w., i.p., in the incisional pain model; and between 63.6 and 92.5 mg/kg b.w., i.p., for serum TXB2 levels. Discussion. The increments in pharmacological effect of different doses of continuously administered ibuprofen in drinking water or diet do not parallel those of i.p. administered ibuprofen. It is therefore difficult to assume the equivalent parenteral daily doses based on mathematical calculations.
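The bioassay logic, reading an oral or dietary response off the parenteral calibration curve to find the equivalent i.p. dose, amounts to an inverse interpolation. A sketch with invented calibration numbers (not the paper's data):

```python
# Inverse read-off of an i.p. dose-response curve: given the effect produced
# by an oral/dietary regimen, find the i.p. dose producing the same effect.
import numpy as np

ip_doses = np.array([50.0, 75.0, 100.0, 200.0])    # mg/kg b.w., i.p. (hypothetical)
ip_effect = np.array([22.0, 41.0, 55.0, 78.0])     # % inhibition, monotonic

def equivalent_ip_dose(observed_effect):
    """Linear interpolation on log-dose inverts the calibration curve."""
    return float(np.exp(np.interp(observed_effect, ip_effect, np.log(ip_doses))))
```

An observed effect between two calibration points maps to a dose between the corresponding calibration doses, which is exactly how the ranges quoted in the Results were obtained.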

  15. Normalization of flow-mediated dilation to shear stress area under the curve eliminates the impact of variable hyperemic stimulus

    Directory of Open Access Journals (Sweden)

    Mickleborough Timothy D

    2008-09-01

    Full Text Available Abstract Background: Normalization of brachial artery flow-mediated dilation (FMD) to individual shear stress area under the curve (peak FMD:SSAUC ratio) has recently been proposed as an approach to control for the large inter-subject variability in reactive hyperemia-induced shear stress; however, the adoption of this approach among researchers has been slow. The present study was designed to further examine the efficacy of FMD normalization to shear stress in reducing measurement variability. Methods: Five different magnitudes of reactive hyperemia-induced shear stress were applied to 20 healthy, physically active young adults (25.3 ± 0.6 yrs; 10 men, 10 women) by manipulating forearm cuff occlusion duration: 1, 2, 3, 4, and 5 min, in a randomized order. A venous blood draw was performed for determination of baseline whole blood viscosity and hematocrit. The magnitude of occlusion-induced forearm ischemia was quantified by dual-wavelength near-infrared spectrometry (NIRS). Brachial artery diameters and velocities were obtained via high-resolution ultrasound. The SSAUC was individually calculated for the duration of time-to-peak dilation. Results: One-way repeated measures ANOVA demonstrated distinct magnitudes of occlusion-induced ischemia (volume and peak), hyperemic shear stress, and peak FMD responses (all p < 0.05), but a uniform peak FMD:SSAUC ratio (p = 0.785). Conclusion: Our data confirm that normalization of FMD to SSAUC eliminates the influences of variable shear stress and solidifies the utility of the FMD:SSAUC ratio as an index of endothelial function.
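The normalization itself is simple arithmetic: integrate shear stress over time from cuff release to peak dilation (trapezoidal rule) and divide the peak %FMD by that area. A sketch with invented waveform values:

```python
# Normalizing peak FMD to the shear stress area under the curve (SSAUC).
import numpy as np

def fmd_ssauc_ratio(times, shear_stress, peak_fmd_percent):
    """Peak %FMD divided by the trapezoidal SSAUC over the same interval."""
    ss_auc = float(np.sum(0.5 * (shear_stress[1:] + shear_stress[:-1]) * np.diff(times)))
    return peak_fmd_percent / ss_auc

# Hypothetical post-occlusion recording up to time-to-peak dilation:
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])      # s
ss = np.array([80.0, 65.0, 52.0, 44.0, 38.0, 34.0, 31.0])    # dyn/cm^2
```

Dividing by the individually measured SSAUC is what removes the stimulus-magnitude dependence the study demonstrates.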

  16. Radiobiological equivalent of low/high dose rate brachytherapy and evaluation of tumor and normal responses to the dose.

    Science.gov (United States)

    Manimaran, S

    2007-06-01

    The aim of this study was to compare the biological equivalent of low-dose-rate (LDR) and high-dose-rate (HDR) brachytherapy in terms of the more recent linear quadratic (LQ) model, which leads to theoretical estimation of biological equivalence. One of the key features of the LQ model is that it allows a more systematic radiobiological comparison between different types of treatment, because the main parameters alpha/beta and μ are tissue-specific. Such comparisons also allow assessment of the likely change in the therapeutic ratio when switching between LDR and HDR treatments. The main application of LQ methodology, driven by the increasing availability of remote afterloading units, has been to design fractionated HDR treatments that can replace existing LDR techniques. In this study, with LDR treatments (39 Gy in 48 h) equivalent to 11 fractions of HDR irradiation at the experimental level, there are increasing reports of reproducible animal models that may be used to investigate the biological basis of brachytherapy and to help confirm theoretical predictions. This is a timely development owing to the nonavailability of sufficient retrospective patient data for analysis. It appears that HDR brachytherapy is likely to be a viable alternative to LDR only if it is delivered without a prohibitively large number of fractions (e.g., fewer than 11). With increased scientific understanding and technological capability, the prospect of a dose equivalent to HDR brachytherapy will allow greater utilization of the concepts discussed in this article.
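The LQ comparison can be made concrete. For fractionated HDR, BED = n·d·(1 + d/(α/β)); for continuous LDR at dose rate R over time T, BED = R·T·(1 + (2R/(μ·(α/β)))·g) with the protraction factor g = 1 − (1 − e^(−μT))/(μT). The sketch below uses the record's 39 Gy in 48 h LDR schedule with hypothetical tissue parameters (α/β = 10 Gy, μ = 1.4 h⁻¹), which are assumptions, not the paper's values.

```python
# LQ-model BED comparison: continuous LDR 39 Gy / 48 h vs 11 HDR fractions.
import math

def bed_hdr(n_fractions, dose_per_fraction, alpha_beta):
    """Acute fractionated BED: n*d*(1 + d/(alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def bed_ldr(total_dose, duration_h, alpha_beta, mu):
    """Continuous LDR BED with mono-exponential sublethal-damage repair."""
    rate = total_dose / duration_h
    g = 1.0 - (1.0 - math.exp(-mu * duration_h)) / (mu * duration_h)
    return total_dose * (1.0 + (2.0 * rate / (mu * alpha_beta)) * g)

ALPHA_BETA, MU = 10.0, 1.4             # hypothetical tissue parameters
ldr_bed = bed_ldr(39.0, 48.0, ALPHA_BETA, MU)
# HDR dose per fraction matching that BED in 11 fractions:
# solve 11*d*(1 + d/ab) = BED (positive root of the quadratic).
d_match = 0.5 * ALPHA_BETA * (-1.0 + math.sqrt(1.0 + 4.0 * ldr_bed / (11.0 * ALPHA_BETA)))
```

With these assumed parameters the matched HDR scheme needs roughly 3 Gy per fraction, illustrating why fraction number is the lever the abstract emphasizes.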

  17. Normalization of flow-mediated dilation to shear stress area under the curve eliminates the impact of variable hyperemic stimulus.

    Science.gov (United States)

    Padilla, Jaume; Johnson, Blair D; Newcomer, Sean C; Wilhite, Daniel P; Mickleborough, Timothy D; Fly, Alyce D; Mather, Kieren J; Wallace, Janet P

    2008-09-04

    Normalization of brachial artery flow-mediated dilation (FMD) to individual shear stress area under the curve (peak FMD:SSAUC ratio) has recently been proposed as an approach to control for the large inter-subject variability in reactive hyperemia-induced shear stress; however, the adoption of this approach among researchers has been slow. The present study was designed to further examine the efficacy of FMD normalization to shear stress in reducing measurement variability. Five different magnitudes of reactive hyperemia-induced shear stress were applied to 20 healthy, physically active young adults (25.3 +/- 0.6 yrs; 10 men, 10 women) by manipulating forearm cuff occlusion duration: 1, 2, 3, 4, and 5 min, in a randomized order. A venous blood draw was performed for determination of baseline whole blood viscosity and hematocrit. The magnitude of occlusion-induced forearm ischemia was quantified by dual-wavelength near-infrared spectrometry (NIRS). Brachial artery diameters and velocities were obtained via high-resolution ultrasound. The SSAUC was individually calculated for the duration of time-to-peak dilation. One-way repeated measures ANOVA demonstrated distinct magnitudes of occlusion-induced ischemia (volume and peak), hyperemic shear stress, and peak FMD responses (all p < 0.05), but a uniform peak FMD:SSAUC ratio (p = 0.785). These data confirm that normalization of FMD to SSAUC eliminates the influence of variable shear stress and solidifies the utility of the FMD:SSAUC ratio as an index of endothelial function.

  18. Comparison of global gene expression profiles of microdissected human foetal Leydig cells with their normal and hyperplastic adult equivalents

    DEFF Research Database (Denmark)

    Lottrup, Grete; Belling, Kirstine González-Izarzugaza; Leffers, Henrik

    2017-01-01

    the normally clustered and hyperplastic ALCs. WHAT IS KNOWN ALREADY: LCs are the primary androgen producing cells in males throughout development and appear in chronologically distinct populations; FLCs, neonatal LCs and ALCs. ALCs are responsible for progression through puberty and for maintenance...... of reproductive functions in adulthood. In patients with reproductive problems, such as infertility or testicular cancer, and especially in men with high gonadotrophin levels, LC function is often impaired, and LCs may cluster abnormally into hyperplastic micronodules (defined as clusters of > 15 LCs in a cross...... with reproductive disorders possibly reflect subtle changes in the expression of many genes rather than regulatory changes of single genes or pathways. The study provides new insights into the development and maturation of human LCs by the identification of a number of potential functional markers for FLC and ALC....

  19. Asymptotic and numerical prediction of current-voltage curves for an organic bilayer solar cell under varying illumination and comparison to the Shockley equivalent circuit

    KAUST Repository

    Foster, J. M.; Kirkpatrick, J.; Richardson, G.

    2013-01-01

    In this study, a drift-diffusion model is used to derive the current-voltage curves of an organic bilayer solar cell consisting of slabs of electron acceptor and electron donor materials sandwiched together between current collectors. A simplified

  20. Low-loss, compact, and fabrication-tolerant Si-wire 90° waveguide bend using clothoid and normal curves for large scale photonic integrated circuits.

    Science.gov (United States)

    Fujisawa, Takeshi; Makino, Shuntaro; Sato, Takanori; Saitoh, Kunimasa

    2017-04-17

    An ultimately low-loss 90° waveguide bend composed of clothoid and normal curves is proposed for dense optical interconnect photonic integrated circuits. By using clothoid curves at the input and output of the 90° waveguide bend, straight and bent waveguides are smoothly connected without increasing the footprint. We found that there is an optimum ratio of clothoid curves in the bend, and that the bending loss can be significantly reduced compared with a normal bend. A 90% reduction of the bending loss for a bending radius of 4 μm is experimentally demonstrated, with excellent agreement between theory and experiment. The performance is compared with the waveguide bend with offset, and the proposed bend is superior to the waveguide bend with offset in terms of fabrication tolerance.
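A clothoid (Euler spiral) is the curve whose curvature grows linearly with arc length, which is what lets it splice a straight waveguide (κ = 0) onto a circular arc (κ = 1/R) without a curvature jump. A unit-speed numerical sketch; the scale factors are arbitrary, not the paper's design values:

```python
# Integrate a unit-speed clothoid: heading theta(s) = s^2/2, so the
# curvature kappa(s) = d(theta)/ds = s grows linearly along the arc.
import numpy as np

def clothoid(s_max, num=10_001):
    """Return arc length, x, y and heading along a unit-speed clothoid."""
    s = np.linspace(0.0, s_max, num)
    theta = 0.5 * s ** 2
    ds = s[1] - s[0]
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    # Cumulative trapezoidal integration of the unit tangent vector:
    x = np.concatenate(([0.0], np.cumsum(0.5 * (cos_t[1:] + cos_t[:-1]) * ds)))
    y = np.concatenate(([0.0], np.cumsum(0.5 * (sin_t[1:] + sin_t[:-1]) * ds)))
    return s, x, y, theta
```

The linear curvature ramp is the property exploited at the bend's input and output; the "normal curve" (circular arc) carries the middle of the 90° turn.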

  1. Contrast-enhanced transrectal ultrasound for prediction of prostate cancer aggressiveness: The role of normal peripheral zone time-intensity curves.

    Science.gov (United States)

    Huang, Hui; Zhu, Zheng-Qiu; Zhou, Zheng-Guo; Chen, Ling-Shan; Zhao, Ming; Zhang, Yang; Li, Hong-Bo; Yin, Li-Ping

    2016-12-08

    To assess the role of time-intensity curves (TICs) of the normal peripheral zone (PZ) in the identification of biopsy-proven prostate nodules using contrast-enhanced transrectal ultrasound (CETRUS). This study included 132 patients with 134 prostate PZ nodules. Arrival time (AT), peak intensity (PI), mean transit time (MTT), area under the curve (AUC), time from peak to one half (TPH), wash-in slope (WIS) and time to peak (TTP) were analyzed using multivariate linear logistic regression and receiver operating characteristic (ROC) curves to assess whether combining nodule TICs with normal PZ TICs improved the prediction of prostate cancer (PCa) aggressiveness. The PI, AUC (p < 0.001 for both), MTT and TPH (p = 0.011 and 0.040 respectively) values of the malignant nodules were significantly higher than those of the benign nodules. Incorporating the PI and AUC values (both, p < 0.001) of the normal PZ TIC, but not the MTT and TPH values (p = 0.076 and 0.159 respectively), significantly improved the AUC for prediction of malignancy (PI: 0.784-0.923; AUC: 0.758-0.891) and assessment of cancer aggressiveness (p < 0.001). Thus, all these findings indicate that incorporating normal PZ TICs with nodule TICs in CETRUS readings can improve the diagnostic accuracy for PCa and cancer aggressiveness assessment.
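The TIC parameters analyzed here (AT, PI, TTP, AUC) can be extracted from a sampled intensity curve in a few lines; the synthetic bolus below is invented for illustration, and the 10%-of-peak arrival criterion is an assumption, not the study's definition.

```python
# Extracting simple time-intensity-curve (TIC) features from a sampled curve.
import numpy as np

def tic_features(times, intensity, arrival_frac=0.1):
    """Arrival time (first sample rising past a fraction of peak), peak
    intensity above baseline, time to peak, and baseline-subtracted AUC."""
    base = intensity[0]
    peak_idx = int(np.argmax(intensity))
    peak_rise = intensity[peak_idx] - base
    arrival_idx = int(np.argmax(intensity - base > arrival_frac * peak_rise))
    auc = float(np.sum(0.5 * ((intensity[1:] - base) + (intensity[:-1] - base)) * np.diff(times)))
    return {"AT": times[arrival_idx], "PI": peak_rise,
            "TTP": times[peak_idx] - times[arrival_idx], "AUC": auc}

t = np.linspace(0.0, 60.0, 61)                                   # s
curve = 5.0 + 20.0 * np.exp(-0.5 * ((t - 18.0) / 6.0) ** 2)      # synthetic bolus
```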

  2. Contractibility of curves

    Directory of Open Access Journals (Sweden)

    Janusz Charatonik

    1991-11-01

    Full Text Available Results concerning contractibility of curves (equivalently: of dendroids) are collected and discussed in the paper. Interrelations between various conditions which are either sufficient or necessary for a curve to be contractible are studied.

  3. Maturation of the auditory system in clinically normal puppies as reflected by the brain stem auditory-evoked potential wave V latency-intensity curve and rarefaction-condensation differential potentials.

    Science.gov (United States)

    Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F

    2000-11-01

    To evaluate auditory maturation in puppies. Ten clinically normal Beagle puppies. Puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory-evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level to wave V threshold, using steps of 10 dB. Responses were added, providing an equivalent to alternate polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine thresholds of RCDP and wave V. The slope of the low-intensity segment of the wave V latency-intensity curve was calculated. The intensity range at which RCDP could not be recorded (ie, pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of RCDP. RESULTS: The slope of the wave V latency-intensity curve low-intensity segment evolved with age, changing from (mean +/- SD) -90.8 +/- 41.6 to -27.8 +/- 4.1 μs/dB. Similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 +/- 7.5 to 20.5 +/- 6.4 dB. Changes in slope of the latency-intensity curve with age suggest enlargement of the audible range of frequencies toward high frequencies up to the third week after birth. Decrease in the pre-RCDP range may indicate an increase of the audible range of frequencies toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.
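The slope of the low-intensity segment of the wave V latency-intensity curve is a simple least-squares fit of latency against stimulus level; the latency values below are invented, chosen only to land near the mature ~-30 μs/dB figure reported above.

```python
# Least-squares slope of a wave V latency-intensity curve segment.
import numpy as np

def latency_intensity_slope(intensities_db, latencies_us):
    """Slope in microseconds per dB from a first-degree polynomial fit."""
    slope, _intercept = np.polyfit(intensities_db, latencies_us, 1)
    return float(slope)

levels = np.array([20.0, 30.0, 40.0])            # dB normal hearing level
latencies = np.array([4400.0, 4100.0, 3800.0])   # microseconds (hypothetical)
```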

  4. Measurement of extrapolation curves for the beta radiation secondary standard Nr. 86, calibrated in terms of absorbed dose rate to tissue equivalent by the Physikalisch-Technische Bundesanstalt

    International Nuclear Information System (INIS)

    Alvarez R, J.T.

    1988-10-01

    The objective of this report is to present the results of measuring, with a variable-electrode extrapolation chamber (CE), the absorbed dose rate to tissue equivalent delivered by the group of sources of the beta radiation secondary standard Nr. 86 (PSB), and to compare these results, together with the uncertainties associated with the measurement process, with those given in the calibration certificates that accompany the PSB, issued by the primary laboratory Physikalisch-Technische Bundesanstalt (PTB) of the F.R.G. (Author)

  5. Analysis and comparison of immune reactivity in guinea-pigs immunized with equivalent numbers of normal or radiation-attenuated cercariae of Schistosoma mansoni

    International Nuclear Information System (INIS)

    Rogers, M.V.; McLaren, D.J.

    1987-01-01

    Guinea-pigs immunized with equivalent numbers of normal or radiation-attenuated cercariae of Schistosoma mansoni develop close to complete resistance to reinfection at weeks 12 and 4.5 respectively. We here analyse and compare the immune responses induced by the two populations of cercariae. Both radiation-attenuated and normal parasites of S. mansoni elicited an extensive germinal centre response in guinea-pigs by week 4.5 post-immunization. The anti-parasite antibody titre and cytotoxic activity of serum from 4.5-week-vaccinated or 4.5-week-infected guinea-pigs were approximately equal, but sera from 12-week-infected individuals had high titres of anti-parasite antibody, which promoted significant larvicidal activity in vitro. In all cases, larvicidal activity was mediated by the IgG2 fraction of the immune serum. Lymphocyte transformation tests conducted on splenic lymphocytes from 4.5-week-vaccinated guinea-pigs revealed maximal stimulation against cercarial, 2-week and 3-week worm antigens, whereas spleen cells from 4.5-week-infected guinea-pigs were maximally stimulated by cercarial and 6-week worm antigens. The splenic lymphocyte responses of 12-week-infected animals were dramatic against antigens prepared from all life-stages of the parasite. (author)

  6. Limbal Fibroblasts Maintain Normal Phenotype in 3D RAFT Tissue Equivalents Suggesting Potential for Safe Clinical Use in Treatment of Ocular Surface Failure.

    Science.gov (United States)

    Massie, Isobel; Dale, Sarah B; Daniels, Julie T

    2015-06-01

    Limbal epithelial stem cell deficiency can cause blindness, but transplantation of these cells on a carrier such as human amniotic membrane can restore vision. Unfortunately, clinical graft manufacture using amnion can be inconsistent. Therefore, we have developed an alternative substrate, Real Architecture for 3D Tissue (RAFT), which supports human limbal epithelial cell (hLE) expansion. Epithelial organization is improved when human limbal fibroblasts (hLF) are incorporated into the RAFT tissue equivalent (TE). However, hLF have the potential to transdifferentiate into a pro-scarring cell type, which would be incompatible with therapeutic transplantation. The aim of this work was to assess the scarring phenotype of hLF in hLE+ and hLE- RAFT TEs and in non-airlifted and airlifted RAFT TEs. Diseased fibroblasts (dFib) isolated from the fibrotic conjunctivae of ocular mucous membrane pemphigoid (Oc-MMP) patients were used as a pro-scarring positive control against which hLF were compared using surrogate scarring parameters: matrix metalloproteinase (MMP) activity, de novo collagen synthesis, α-smooth muscle actin (α-SMA) expression, and transforming growth factor-β (TGF-β) secretion. Normal hLF and dFib maintained different phenotypes in RAFT TE. MMP-2 and -9 activity, de novo collagen synthesis, and α-SMA expression were all increased in dFib cf. normal hLF RAFT TEs, although TGF-β1 secretion did not differ between normal hLF and dFib RAFT TEs. Normal hLF do not progress toward a scarring-like phenotype during culture in RAFT TEs and, therefore, may be safe to include in therapeutic RAFT TE, where they can support hLE, although in vivo work is required to confirm this. dFib RAFT TEs (used in this study as a positive control) may be useful toward the development of an ex vivo disease model of Oc-MMP.

  7. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. I. The normal modes

    International Nuclear Information System (INIS)

    Comandi, G.L.; Chiofalo, M.L.; Toncelli, R.; Bramanti, D.; Polacco, E.; Nobili, A.M.

    2006-01-01

    Recent theoretical work suggests that violation of the equivalence principle might be revealed in a measurement of the fractional differential acceleration η between two test bodies (of different compositions) falling in the gravitational field of a source mass, if the measurement is made to the level of η ≅ 10^-13 or better. This being within the reach of ground-based experiments gives them a new impetus. However, while slowly rotating torsion balances in ground laboratories are close to reaching this level, only an experiment performed in a low orbit around the Earth is likely to provide a much better accuracy. We report on the progress made with the 'Galileo Galilei on the ground' (GGG) experiment, which aims to compete with torsion balances using an instrument design also capable of being converted into a much higher sensitivity space test. In the present and following articles (Part I and Part II), we demonstrate that the dynamical response of the GGG differential accelerometer set into supercritical rotation, in particular its normal modes (Part I) and rejection of common-mode effects (Part II), can be predicted by means of a simple but effective model that embodies all the relevant physics. Analytical solutions are obtained under special limits, which provide the theoretical understanding. A simulation environment is set up, obtaining a quantitative agreement with the available experimental data on the frequencies of the normal modes and on the whirling behavior. This is a needed and reliable tool for controlling and separating perturbative effects from the expected signal, as well as for planning the optimization of the apparatus.
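Normal-mode prediction of the kind reported reduces, in the simplest mechanical models, to the eigenvalue problem K v = ω² M v. A two-degree-of-freedom sketch with hypothetical spring-mass values (not the GGG rotor parameters):

```python
# Normal modes of two equal masses coupled by three springs (wall-mass-
# mass-wall chain): solve K v = w^2 M v for the mode frequencies.
import numpy as np

m, k = 1.0, 1.0
M = np.diag([m, m])
K = np.array([[2.0 * k, -k],
              [-k, 2.0 * k]])

# With M = m*I the generalized eigenproblem reduces to an ordinary one:
omega = np.sqrt(np.sort(np.linalg.eigvalsh(K)) / m)
```

The closed-form modes are √(k/m) (masses moving in phase) and √(3k/m) (out of phase); the same eigenvalue machinery, with the rotor's full mass and stiffness matrices, yields the GGG mode frequencies compared against experiment.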

  8. SU-F-T-02: Estimation of Radiobiological Doses (BED and EQD2) of Single Fraction Electronic Brachytherapy That Equivalent to I-125 Eye Plaque: By Using Linear-Quadratic and Universal Survival Curve Models

    International Nuclear Information System (INIS)

    Kim, Y; Waldron, T; Pennington, E

    2016-01-01

    Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single fraction equivalent doses (SFED) of the tumor that are equivalent to 85 Gy of I125-BT for 20 patients. Corresponding organs-at-risk (OAR) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The % doses of OARs over tumor doses were assumed to be the same as for a single fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) by using both L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and the lens were on average 118 ± 46% (p 14 Gy). Conclusion: The estimated single fraction doses were feasible to be delivered within 1 hour using a high dose rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112 ∼ 118% higher than when using the I125-BT technique. Continued exploration of alternative dose rate or fractionation schedules should be followed.
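The L-Q conversions described, from a reference BED to a single-fraction equivalent dose and to EQD2, follow from D(1 + D/(α/β)) = BED and EQD2 = BED/(1 + 2/(α/β)). A sketch using the record's retina/macula α/β with a hypothetical target BED:

```python
# LQ-model conversions: SFED and EQD2 from a biologically effective dose.
import math

def sfed_from_bed(bed, alpha_beta):
    """Single dose D solving D*(1 + D/(alpha/beta)) = BED (positive root)."""
    return 0.5 * alpha_beta * (-1.0 + math.sqrt(1.0 + 4.0 * bed / alpha_beta))

def eqd2_from_bed(bed, alpha_beta):
    """Equivalent dose in 2 Gy fractions: BED / (1 + 2/(alpha/beta))."""
    return bed / (1.0 + 2.0 / alpha_beta)

AB_RETINA = 2.58   # Gy, alpha/beta listed in the record for retina/macula
```

The USC alternative mentioned in the record replaces the L-Q survival curve with a linear tail above a transition dose, which lowers the predicted BED of large single fractions; that variant is not reproduced here.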

  9. Evidence of non-coincidence of normalized sigmoidal curves of two different structural properties for two-state protein folding/unfolding

    International Nuclear Information System (INIS)

    Rahaman, Hamidur; Khan, Md. Khurshid Alam; Hassan, Md. Imtaiyaz; Islam, Asimul; Moosavi-Movahedi, Ali Akbar; Ahmad, Faizan

    2013-01-01

    Highlights: ► Non-coincidence of normalized sigmoidal curves of two different structural properties is consistent with two-state protein folding/unfolding. ► DSC measurements of denaturation show a two-state behavior of g-cyt-c at pH 6.0. ► Urea-induced denaturation of g-cyt-c is a variable two-state process at pH 6.0. ► GdmCl-induced denaturation of g-cyt-c is a fixed two-state process at pH 6.0. -- Abstract: In practice, the observation of non-coincidence of normalized sigmoidal transition curves measured by two different structural properties constitutes a proof of the existence of thermodynamically stable intermediate(s) on the folding ↔ unfolding pathway of a protein. Here we give the first experimental evidence that this non-coincidence is also observed for a two-state protein denaturation. Proof of this evidence comes from our studies of denaturation of goat cytochrome-c (g-cyt-c) at pH 6.0. These studies involve differential scanning calorimetry (DSC) measurements in the absence of urea and measurements of urea-induced denaturation curves monitored by observing changes in absorbance at 405, 530, and 695 nm and circular dichroism (CD) at 222, 405, and 416 nm. DSC measurements showed that denaturation of the protein is a two-state process, for calorimetric and van't Hoff enthalpy changes are, within experimental errors, identical. Normalization of urea-induced denaturation curves monitored by optical properties leads to noncoincident sigmoidal curves. Heat-induced transition of g-cyt-c in the presence of different urea concentrations was monitored by CD at 222 nm and absorption at 405 nm. It was observed that these two different structural probes gave not only identical values of Tm (transition temperature), ΔHm (change in enthalpy at Tm) and ΔCp (constant-pressure heat capacity change), but these thermodynamic parameters in the absence of urea are also in agreement with those obtained from DSC measurements.
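The normalized transition curves discussed here come from a two-state sigmoid; with the linear extrapolation model, ΔG([urea]) = m·(Cm − [urea]), so the fraction unfolded is a logistic function of denaturant concentration. A sketch with hypothetical parameter values (not the g-cyt-c fits):

```python
# Two-state denaturation sigmoid from the linear extrapolation model.
import math

R_KJ = 8.314e-3   # gas constant, kJ/(mol*K)

def fraction_unfolded(urea_molar, m_value, cm, temp_k=298.15):
    """f_U = 1 / (1 + exp(dG/RT)) with dG = m*(Cm - [urea]),
    m in kJ/(mol*M) and Cm the transition midpoint in molar units."""
    dg = m_value * (cm - urea_molar)
    return 1.0 / (1.0 + math.exp(dg / (R_KJ * temp_k)))
```

For a true two-state equilibrium, this single f_U governs every monitored property, which is why coincidence of normalized curves is the usual two-state criterion and why the non-coincidence reported here is notable.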

  10. Hemoglobin curve in a group of normal pregnant women

    Directory of Open Access Journals (Sweden)

    Pedro Augusto Marcondes de Almeida

    1973-09-01

    Full Text Available Through hemoglobin determinations made at various stages of pregnancy in 701 pregnant women with no iron supplementation, chosen by simple random sampling from a universe of 7050 in the period from 1947 to 1969, a curve of mean hemoglobin levels was constructed. It showed a drop that reaches its maximum around the 7th month of pregnancy, rising from that point on. From this curve a working graph was built, and its importance in the diagnosis and management of anemia in pregnancy is discussed.

  11. Validity of the normal fetal weight curve estimated by ultrasound for the diagnosis of neonatal weight

    Directory of Open Access Journals (Sweden)

    José Guilherme Cecatti

    2003-02-01

    Full Text Available OBJECTIVE: to evaluate the agreement between ultrasound-estimated fetal weight (EFW) and neonatal weight, the performance of the normal EFW-for-gestational-age curve in the diagnosis of fetal/neonatal weight deviations, and associated factors. METHODS: 186 pregnant women seen from November 1998 to January 2000 participated in the study, with ultrasound evaluation up to 3 days before delivery, determination of the EFW and of the amniotic fluid index, and delivery at the institution. The EFW was calculated and classified according to the curve of normal EFW values as: small for gestational age (SGA), adequate for gestational age (AGA) and large for gestational age (LGA). The same classification was applied to neonatal weight. The variability of the measurements and the degree of linear correlation between EFW and neonatal weight were calculated, as well as the sensitivity, specificity and predictive values of the normal EFW curve for the diagnosis of neonatal weight deviations. RESULTS: the difference between EFW and neonatal weight ranged from -540 to +594 g, with a mean of +47.1 g, and the two measurements showed a linear correlation coefficient of 0.94. The normal EFW curve had a sensitivity of 100% and a specificity of 90.5% in detecting SGA at birth, and of 94.4% and 92.8%, respectively, in detecting LGA, although the positive predictive values were low for both. CONCLUSIONS: the ultrasound estimate of fetal weight agreed with neonatal weight, overestimating it by only about 47 g, and the EFW curve performed well in the diagnostic screening of SGA and LGA newborns.

  12. Comparable attenuation of sympathetic nervous system activity in obese subjects with normal glucose tolerance, impaired glucose tolerance and treatment naïve type 2 diabetes following equivalent weight loss

    Directory of Open Access Journals (Sweden)

    Nora E. Straznicky

    2016-11-01

    Full Text Available Background and Purpose: Elevated sympathetic nervous system (SNS) activity is a characteristic of obesity and type 2 diabetes (T2D) that contributes to target organ damage and cardiovascular risk. In this study we examined whether baseline metabolic status influences the degree of sympathoinhibition attained following equivalent dietary weight loss. Methods: Unmedicated obese individuals categorized as normal glucose tolerant (NGT, n=15), impaired glucose tolerant (IGT, n=24) and newly diagnosed T2D (n=15) consumed a hypocaloric diet (29% fat, 23% protein, 45% carbohydrate) for 4 months. The three groups were matched for baseline age (56 ± 1 years), body mass index (BMI, 32.9 ± 0.7 kg/m2) and gender. Clinical measurements included whole-body norepinephrine kinetics, muscle sympathetic nerve activity (MSNA, by microneurography), spontaneous cardiac baroreflex sensitivity (BRS) and oral glucose tolerance test. Results: Weight loss averaged -7.5 ± 0.8, -8.1 ± 0.5 and -8.0 ± 0.9% of body weight in the NGT, IGT and T2D groups, respectively. T2D subjects had significantly greater reductions in fasting glucose, 2-h glucose and glucose area under the curve (AUC0-120) compared to NGT and IGT (group effect, P<0.001). Insulinogenic index decreased in the IGT and NGT groups and increased in T2D (group × time, P=0.04). The magnitude of reduction in MSNA (-7 ± 3, -8 ± 4, -15 ± 4 bursts/100 heartbeats, respectively) and whole-body norepinephrine spillover rate (-28 ± 8, -18 ± 6 and -25 ± 7%, respectively; time effect both P<0.001) did not differ between groups. After adjustment for age and change in body weight, Δ insulin AUC0-120 was independently associated with reduction in arterial norepinephrine concentration, whilst Δ LDL-cholesterol and improvement in BRS were independently associated with decrease in MSNA. Conclusions: Equivalent weight loss through hypocaloric diet is accompanied by similar sympathoinhibition in matched obese subjects with different baseline glucose tolerance.

  13. Fasting plasma glucose and serum uric acid levels in a general Chinese population with normal glucose tolerance: A U-shaped curve.

    Directory of Open Access Journals (Sweden)

    Yunyang Wang

    Full Text Available Although several epidemiological studies have assessed the relationship between fasting plasma glucose (FPG) and serum uric acid (SUA) levels, the results were inconsistent. A cross-sectional study was conducted to investigate this relationship in Chinese individuals with normal glucose tolerance. A total of 5,726 women and 5,457 men with normal glucose tolerance were enrolled in the study. All subjects underwent a 75-g oral glucose tolerance test. Generalized additive models and two-piecewise linear regression models were applied to assess the relationship. A U-shaped relationship between FPG and SUA was observed. After adjusting for potential confounders, the inflection points of FPG levels in the curves were 4.6 mmol/L in women and 4.7 mmol/L in men, respectively. SUA levels decreased with increasing fasting plasma glucose concentrations before the inflection points (regression coefficient [β] = -36.4, P < 0.001 for women; β = -33.5, P < 0.001 for men), then SUA levels increased (β = 17.8, P < 0.001 for women; β = 13.9, P < 0.001 for men). Additionally, serum insulin levels were positively associated with FPG and SUA (P < 0.05). In conclusion, a U-shaped relationship between FPG and SUA levels existed in Chinese individuals with normal glucose tolerance. The association is partly mediated through serum insulin levels.
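The two-piecewise linear regression used in studies like this one to locate an inflection point can be sketched with a plain grid search over candidate breakpoints, fitting an ordinary least-squares line on each side. The data and candidate values below are synthetic, not the study's.

```python
def ols(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def fit_two_piecewise(xs, ys, candidates):
    """Grid-search the breakpoint minimizing the summed SSE of two OLS fits.
    Returns (breakpoint, left_slope, right_slope)."""
    best = None
    for k in candidates:
        left = [(x, y) for x, y in zip(xs, ys) if x <= k]
        right = [(x, y) for x, y in zip(xs, ys) if x > k]
        if len(left) < 3 or len(right) < 3:
            continue  # need enough points on both sides for a stable fit
        _, b_left, sse_l = ols([p[0] for p in left], [p[1] for p in left])
        _, b_right, sse_r = ols([p[0] for p in right], [p[1] for p in right])
        if best is None or sse_l + sse_r < best[0]:
            best = (sse_l + sse_r, k, b_left, b_right)
    return best[1], best[2], best[3]
```

A U-shaped relationship shows up as a negative left slope and a positive right slope at the best breakpoint.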

  14. Extracting the normal lung dose–response curve from clinical DVH data: a possible role for low dose hyper-radiosensitivity, increased radioresistance

    International Nuclear Information System (INIS)

    Gordon, J J; Snyder, K; Zhong, H; Barton, K; Sun, Z; Chetty, I J; Matuszak, M; Ten Haken, R K

    2015-01-01

    In conventionally fractionated radiation therapy for lung cancer, the dependence of radiation pneumonitis (RP) on the normal lung dose-volume histogram (DVH) is not well understood. Complication models instead make RP a function of a summary statistic, such as mean lung dose (MLD). This work searches over damage profiles, which quantify sub-volume damage as a function of dose. Profiles that achieve the best RP predictive accuracy on a clinical dataset are hypothesized to approximate the DVH dependence. Step-function damage rate profiles R(D) are generated, having discrete steps at several dose points. A range of profiles is sampled by varying the step heights and dose point locations. Normal lung damage is the integral of R(D) with the cumulative DVH. Each profile is used in conjunction with a damage cutoff to predict grade 2 plus (G2+) RP for DVHs from a University of Michigan clinical trial dataset consisting of 89 CFRT patients, of which 17 were diagnosed with G2+ RP. Optimal profiles achieve a modest increase in predictive accuracy: erroneous RP predictions are reduced from 11 (using MLD) to 8. A novel result is that optimal profiles have a similar distinctive shape: an enhanced damage contribution from low doses (<20 Gy), a flat contribution from doses in the range of ∼20-40 Gy, then a further enhanced contribution from doses above 40 Gy. These features resemble the hyper-radiosensitivity / increased radioresistance (HRS/IRR) observed in some cell survival curves, which can be modeled using Joiner's induced repair model. A novel search strategy is employed, which has the potential to estimate RP dependence on the normal lung DVH. When applied to a clinical dataset, identified profiles share a characteristic shape, which resembles HRS/IRR. This suggests that normal lung may have enhanced sensitivity to low doses, and that this sensitivity can affect RP risk. (paper)
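The profile-based damage score this record describes (a step-function damage rate R(D) integrated against the lung DVH and thresholded by a cutoff) can be sketched as below. The step edges, heights, and cutoff are illustrative placeholders, not the fitted values from the paper.

```python
def damage_rate(dose, steps):
    """steps: list of (dose_threshold_Gy, rate), sorted by ascending threshold.
    Returns the rate attached to the highest threshold not exceeding the dose."""
    rate = 0.0
    for d0, r in steps:
        if dose >= d0:
            rate = r
    return rate

def lung_damage(diff_dvh, steps):
    """diff_dvh: list of (bin_dose_Gy, fractional_volume) pairs (differential DVH).
    Total damage is the volume-weighted sum of the damage rate over all bins."""
    return sum(v * damage_rate(d, steps) for d, v in diff_dvh)

def predicts_g2_rp(diff_dvh, steps, cutoff):
    """Predict grade 2+ radiation pneumonitis when damage reaches the cutoff."""
    return lung_damage(diff_dvh, steps) >= cutoff
```

An HRS/IRR-like profile in this scheme would use a raised rate below roughly 20 Gy, a flat plateau up to roughly 40 Gy, and a higher rate above 40 Gy.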

  15. Equivalent Lagrangians

    International Nuclear Information System (INIS)

    Hojman, S.

    1982-01-01

    We present a review of the inverse problem of the Calculus of Variations, emphasizing the ambiguities which appear due to the existence of equivalent Lagrangians for a given classical system. In particular, we analyze the properties of equivalent Lagrangians in the multidimensional case, we study the conditions for the existence of a variational principle for (second as well as first order) equations of motion and their solutions, we consider the inverse problem of the Calculus of Variations for singular systems, we state the ambiguities which emerge in the relationship between symmetries and conserved quantities in the case of equivalent Lagrangians, we discuss the problems which appear in trying to quantize classical systems which have different equivalent Lagrangians, we describe the situation which arises in the study of equivalent Lagrangians in field theory and finally, we present some unsolved problems and discussion topics related to the content of this article. (author)
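A textbook illustration of the ambiguity reviewed above (a standard example, not drawn from the article itself): for the two-dimensional isotropic harmonic oscillator, the Lagrangians

```latex
L_1 = \tfrac{1}{2}\left(\dot x^2 + \dot y^2\right) - \tfrac{\omega^2}{2}\left(x^2 + y^2\right),
\qquad
L_2 = \dot x\,\dot y - \omega^2 x\,y
```

both yield the same equations of motion, $\ddot x = -\omega^2 x$ and $\ddot y = -\omega^2 y$, yet they are not related by a total time derivative, and the conserved quantities built from them (e.g. the two "energies" $E_1 = \tfrac{1}{2}(\dot x^2 + \dot y^2) + \tfrac{\omega^2}{2}(x^2 + y^2)$ and $E_2 = \dot x \dot y + \omega^2 x y$) differ.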

  16. SU-F-T-02: Estimation of Radiobiological Doses (BED and EQD2) of Single Fraction Electronic Brachytherapy That Equivalent to I-125 Eye Plaque: By Using Linear-Quadratic and Universal Survival Curve Models

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y; Waldron, T; Pennington, E [University Of Iowa, College of Medicine, Iowa City, IA (United States)

    2016-06-15

    Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single fraction equivalent doses (SFED) of the tumor equivalent to 85 Gy of I125-BT for 20 patients. Corresponding organ-at-risk (OAR) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The % doses of OARs over tumor doses were assumed to be the same as for a single fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) by using both the L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and lens were on average 118 ± 46% (p < 0.0527), 126 ± 43% (p < 0.0354), and 112 ± 32% (p < 0.0265) higher than those of I125-BT, respectively. The BED and EQD2 doses of the opposite retina were 52 ± 9% lower than I125-BT. The tumor SFED values were 25.2 ± 3.3 Gy and 29.1 ± 2.5 Gy when using the USC and L-Q models, which can be delivered within 1 hour. All BED and EQD2 values using the L-Q model were significantly larger when compared to the USC model (p < 0.0274) due to its large single fraction size (> 14 Gy). Conclusion: The estimated single fraction doses were feasible to be delivered within 1 hour using a high dose rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112-118% higher than when using the I125-BT technique. Continued exploration of alternative dose rates or fractionation schedules should follow.
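The L-Q conversions named in this record follow the standard textbook formulas BED = n·d·(1 + d/(α/β)) and EQD2 = BED/(1 + 2/(α/β)). A minimal sketch (α/β values and fraction sizes here are generic inputs; the USC modification, which replaces the L-Q curve beyond a transition dose for large fractions, is not shown):

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose for n fractions of d Gy (linear-quadratic model)."""
    return n * d * (1.0 + d / alpha_beta)

def eqd2(n, d, alpha_beta):
    """Equivalent total dose in 2-Gy fractions: EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(n, d, alpha_beta) / (1.0 + 2.0 / alpha_beta)
```

By construction, a schedule of 2-Gy fractions is its own EQD2, e.g. eqd2(35, 2.0, 3.0) gives 70 Gy.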

  17. Normal-Mode Analysis of Circular DNA at the Base-Pair Level. 2. Large-Scale Configurational Transformation of a Naturally Curved Molecule.

    Science.gov (United States)

    Matsumoto, Atsushi; Tobias, Irwin; Olson, Wilma K

    2005-01-01

    Fine structural and energetic details embedded in the DNA base sequence, such as intrinsic curvature, are important to the packaging and processing of the genetic material. Here we investigate the internal dynamics of a 200 bp closed circular molecule with natural curvature using a newly developed normal-mode treatment of DNA in terms of neighboring base-pair "step" parameters. The intrinsic curvature of the DNA is described by a 10 bp repeating pattern of bending distortions at successive base-pair steps. We vary the degree of intrinsic curvature and the superhelical stress on the molecule and consider the normal-mode fluctuations of both the circle and the stable figure-8 configuration under conditions where the energies of the two states are similar. To extract the properties due solely to curvature, we ignore other important features of the double helix, such as the extensibility of the chain, the anisotropy of local bending, and the coupling of step parameters. We compare the computed normal modes of the curved DNA model with the corresponding dynamical features of a covalently closed duplex of the same chain length constructed from naturally straight DNA and with the theoretically predicted dynamical properties of a naturally circular, inextensible elastic rod, i.e., an O-ring. The cyclic molecules with intrinsic curvature are found to be more deformable under superhelical stress than rings formed from naturally straight DNA. As superhelical stress is accumulated in the DNA, the frequency, i.e., energy, of the dominant bending mode decreases in value, and if the imposed stress is sufficiently large, a global configurational rearrangement of the circle to the figure-8 form takes place. We combine energy minimization with normal-mode calculations of the two states to decipher the configurational pathway between the two states. We also describe and make use of a general analytical treatment of the thermal fluctuations of an elastic rod to characterize the

  18. Mapping of isoexposure curves for evaluation of equivalent environmental doses for radiodiagnostic mobile equipment; Mapeamento de curvas de isoexposicao para avaliacao de equivalente de dose ambiente para equipamentos moveis de radiodiagnostico

    Energy Technology Data Exchange (ETDEWEB)

    Bacelar, Alexandre, E-mail: abacelar@hcpa.ufrgs.b [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Hospital de Clinicas. Setor de Fisica Medica e Radioprotecao; Andrade, Jose Rodrigo Mendes, E-mail: jose.andrade@santacasa.tche.b [Irmandade da Santa Casa de Misericordia de Porto Alegre, RS (Brazil). Servico de Atencao a Saude e Qualidade de Vida; Fischer, Andreia Caroline Fischer da Silveira; Accurso, Andre; Hoff, Gabriela, E-mail: andreia.silveira.001@acad.pucrs.b, E-mail: andre.accurso@acad.pucrs.b [Pontificia Univ. Catolica do Rio Grande do Sul (PUC/RS), Porto Alegre, RS (Brazil). Grupo de Experimentacao e Simulacao Computacional em Fisica Medica

    2011-10-26

    This paper presents isoexposure curves generated in areas where mobile radiodiagnostic equipment is used, for evaluation of isokerma maps and the ambient dose equivalent (H*(d)). A Shimadzu mobile unit and two Siemens units were used, with a non-anthropomorphic scatterer. The exposure was measured over a 4.20 × 4.20 m mesh in steps of 30 cm, at half height from the scatterer. The H*(d) values were estimated for a worker present in all procedures over a period of 11 months, considering 3.55 mAs/examination and 44.5 procedures/month (adult ICU), and 3.16 mAs/examination and 20.1 procedures/month (pediatric ICU). It was observed that there exist points where H*(d) exceeded the limit established for free areas within a radius of 30 cm from the central beam of radiation in the case of the pediatric ICU, and 60 cm for the adult ICU. Points located 2.1 m from the center presented values lower than 25% of those limits.

  19. In silico sampling reveals the effect of clustering and shows that the log-normal rank abundance curve is an artefact

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    The impact of clustering on rank abundance, species-individual (S-N) and species-area curves was investigated using a computer programme for in silico sampling. In a rank abundance curve the abundances of species are plotted on a log scale against species sequence. In an S-N curve the number of species

  20. Study of the variation of the E-I curves in the superconducting to normal transition of Bi-2212 textured ceramics by Pb addition

    Directory of Open Access Journals (Sweden)

    Sotelo, A.

    2006-06-01

    Full Text Available Vitreous cylinders with compositions Bi2-xPbxSr2CaCu2Oy (x = 0, 0.2, 0.4 and 0.6) were prepared and used as precursors to fabricate textured bars through a laser floating zone melting method (LFZ). The resulting textured cylindrical bars were annealed at different temperatures and then characterized electrically. The microstructure was determined and correlated with the measured electrical properties. The influence of Pb doping on the sharpness of the superconducting-to-normal transition in the E-I curves has been determined. The sharpest transitions were obtained for samples doped with 0.4 Pb.

  1. Considerations for potency equivalent calculations in the Ah receptor-based CALUX bioassay: normalization of superinduction results for improved sample potency estimation.

    Science.gov (United States)

    Baston, David S; Denison, Michael S

    2011-02-15

    The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e. the Ah receptor (AhR)), allowing normalization of results and sample potency determination. Here we describe the diversity in CALUX response to PCDD/Fs from sediment and soil extracts and not only report the occurrence of superinduction of the CALUX bioassay, but also describe a mechanistically based approach for normalization of superinduction data that results in a more accurate estimation of the relative potency of such sample extracts.
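Relative potency in AhR-based bioassays is commonly expressed by comparing effective concentrations of a sample extract against the TCDD standard at matched effect levels. The sketch below uses a zero-baseline Hill curve and is only a schematic of that idea; the record's actual normalization procedure for superinduced responses is more involved, and all names and numbers here are illustrative.

```python
def hill(c, top, ec50, n):
    """Zero-baseline Hill concentration-response: top * c^n / (ec50^n + c^n)."""
    return top * c ** n / (ec50 ** n + c ** n)

def ec_frac(frac, ec50, n):
    """Concentration producing a given fraction of the curve's own maximum
    (inverse Hill); frac=0.5 recovers the EC50."""
    return ec50 * (frac / (1.0 - frac)) ** (1.0 / n)

def relative_potency(ec_standard, ec_sample):
    """REP of a sample extract relative to the TCDD standard at a matched
    effect level: more potent samples need lower concentrations."""
    return ec_standard / ec_sample
```

Normalizing each curve to its own fitted maximum (rather than to the standard's maximum) is one way to keep EC comparisons meaningful when a sample superinduces.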

  2. Gyrokinetic equivalence

    International Nuclear Information System (INIS)

    Parra, Felix I; Catto, Peter J

    2009-01-01

    We compare two different derivations of the gyrokinetic equation: the Hamiltonian approach in Dubin D H E et al (1983 Phys. Fluids 26 3524) and the recursive methodology in Parra F I and Catto P J (2008 Plasma Phys. Control. Fusion 50 065014). We prove that both approaches yield the same result at least to second order in a Larmor radius over macroscopic length expansion. There are subtle differences in the definitions of some of the functions that need to be taken into account to prove the equivalence.

  3. Identification of patients with persistent trophoblastic disease after complete hydatidiform mole by using a normal 24-hour urine hCG regression curve

    NARCIS (Netherlands)

    Cromvoirt, S.M. van; Thomas, C.M.G.; Quinn, M.A.; McNally, O.M.; Bekkers, R.L.M.

    2014-01-01

    OBJECTIVE: The aim of this study was to establish a reference 24-hour urine human chorionic gonadotropin (hCG) regression curve in patients with complete hydatidiform mole (CHM) as diagnostic tool in the prediction of persistent trophoblastic disease (PTD). METHODS: From 2004 to 2011, 312 cases

  4. Direct Extraction of InP/GaAsSb/InP DHBT Equivalent-Circuit Elements From S-Parameters Measured at Cut-Off and Normal Bias Conditions

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Leblanc, Rémy; Poulain, Julien

    2016-01-01

    A unique direct parameter extraction method for the small-signal equivalent-circuit model of InP/GaAsSb/InP double heterojunction bipolar transistors (DHBTs) is presented. $S$-parameters measured at cut-off bias are used, at first, to extract the distribution factor $X_{0}$ for the base-collector capacitance at zero collector current and the collector-to-emitter overlap capacitance $C_{ceo}$ present in InP DHBT devices. Low-frequency $S$-parameters measured at normal bias conditions then allow the extraction of the external access resistances $R_{bx}$, $R_{e}$, and $R_{cx}$ as well as the intrinsic...

  5. Idiopathic and normal lateral lumbar curves: muscle effects interpreted by 12th rib length asymmetry with pathomechanic implications for lumbar idiopathic scoliosis

    Directory of Open Access Journals (Sweden)

    Theodoros B. Grivas

    2016-10-01

    Full Text Available Abstract Background The historical view of scoliosis as a primary rotation deformity led to debate about the pathomechanic role of the paravertebral muscles, particularly multifidus, thought by some to be scoliogenic, counteracting, uncertain, or unimportant. Here, we address lateral lumbar curves (LLC) and suggest a pathomechanic role for quadratus lumborum (QL), in the light of a new finding, namely of 12th rib bilateral length asymmetry associated with idiopathic and small non-scoliosis LLC. Methods Group 1: The postero-anterior spinal radiographs of 14 children (girls 9, boys 5) aged 9-18, median age 13 years, with right lumbar idiopathic scoliosis (IS) and right LLC less than 10°, were studied. The mean Cobb angle was 12° (range 5-22°). Group 2: In 28 children (girls 17, boys 11) with straight spines, aged 8-17, median age 13 years, postero-anterior spinal radiographs were evaluated similarly to the children with LLC. The ratio of the right/left 12th rib lengths and its reliability were calculated. The difference of the ratio between the two groups was tested, and the correlation between the ratio and the Cobb angle was estimated. Statistical analysis was done using the SPSS package. Results The reliability study of the ratio showed an intra-observer error of ±0.036 and an inter-observer error of ±0.042 in terms of the 95% confidence limit of the error of measurements. The 12th rib was longer on the side of the curve convexity in 12 children with LLC and equal in two patients with lumbar scoliosis. The 12th rib ratios of the children with lumbar curves were statistically significantly greater than in those with straight spines. The correlation of the 12th rib ratio with Cobb angle was statistically significant. The 12th thoracic vertebrae show no (or minimal) axial rotation in the LLC and no rotation in the straight spine group. Conclusions It is not possible, at present, to determine whether the 12th convex rib lengthening is

  6. Improvement of High-throughput Genotype Analysis After Implementation of a Dual-curve Sybr Green I-based Quantification and Normalization Procedure

    Science.gov (United States)

    The ability to rapidly screen a large number of individuals is the key to any successful plant breeding program. One of the primary bottlenecks in high throughput screening is the preparation of DNA samples, particularly the quantification and normalization of samples for downstream processing. A ...

  7. A mathematical model resolving normal human blood lymphocyte population X-ray survival curves into six components: radiosensitivity, death rate and size of two responding sub-populations

    International Nuclear Information System (INIS)

    Thomson, A.E.R.; Vaughan-Smith, S.; Peel, W.E.

    1982-01-01

    The analysis was based on observations of survival decrease as a function of dose (range 0-5 Gy (= 500 rad)) and time after irradiation in vitro. Since lymphocyte survival is also sensitive to culture conditions, the effects of radiation were examined daily up to 3 days only, while survival of control cells remained ca. 90 per cent. The time-dependent changes were resolved as the death rates (first-order governed) of lethally-hit cells (apparent survivors), so rendering these distinguishable from the morphologically identical, true (ultimate) survivors. For 12 blood donors the estimated dose permitting 37 per cent ultimate survival (D37 value) averaged 0.72 ± 0.18 (SD) Gy for the more radiosensitive lymphocyte fraction and 2.50 ± 0.67 Gy for the less radiosensitive, each fraction proving homogeneously radiosensitive and the latter identifying substantially in kind with T-type (E-rosetting) lymphocytes. The half-life of lethally-hit members of either fraction varied widely among the donors (ranges, 25-104 hours and 11-40 hours, respectively). Survival curves reconstructed by summating the numerical estimates of the six parameters according to the theoretical model closely matched those observed experimentally (range in multiple correlation coefficient, 0.9709-0.9994, for all donors). This signified the absence of any additional, totally radioresistant cell fraction. (author)
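The six-parameter resolution described in this record (two subpopulations, each with a radiosensitivity D37, a size, and a first-order death rate for lethally-hit cells) suggests a survival model of the following schematic form. This is a reconstruction from the abstract with illustrative parameter values, not the authors' published equations.

```python
import math

def apparent_survival(dose_gy, t_hours, subpops):
    """subpops: list of (size_fraction, d37_gy, half_life_hours).
    Apparent survival at time t = true (ultimate) survivors plus lethally-hit
    cells that have not yet died (first-order decay with the given half-life)."""
    total = 0.0
    for f, d37, t_half in subpops:
        true_survivors = math.exp(-dose_gy / d37)  # single-hit exponential
        hit_still_alive = (1.0 - true_survivors) * math.exp(
            -math.log(2.0) * t_hours / t_half)
        total += f * (true_survivors + hit_still_alive)
    return total
```

At t = 0 every cell still appears viable, and at long times the curve settles onto the true survival mixture of the two fractions.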

  8. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    Science.gov (United States)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

    Small strain triaxial test measurement is considered to be significantly more accurate than external strain measurement using the conventional method, due to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, equally spaced at 120° from each other. The setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of experimental laboratory testing.

  9. Acoustic plane waves normally incident on a clamped panel in a rectangular duct. [to explain noise reduction curves for reducing interior noise in aircraft

    Science.gov (United States)

    Unz, H.; Roskam, J.

    1979-01-01

    The theory of an acoustic plane wave normally incident on a clamped panel in a rectangular duct is developed. The coupling between the elastic vibrations of the panel (plate) and the acoustic wave propagation in infinite space and in the rectangular duct is considered. The partial differential equation which governs the vibration of the panel (plate) is modified by adding stiffness (spring) forces and damping forces, and the fundamental resonance frequency and the attenuation factor are discussed. The noise reduction expression based on the theory is found to agree well with the corresponding experimental data for a sample aluminum panel in the mass-controlled region, the damping-controlled region, and the stiffness-controlled region. All the frequency positions of the upward and downward resonance spikes in the sample experimental data are identified theoretically as resulting from four cross-interacting major resonance phenomena: the cavity resonance, the acoustic resonance, the plate resonance, and the wooden back panel resonance.

  10. Power Curve Measurements REWS

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    This report describes the power curve measurements carried out on a given wind turbine in a chosen period. The measurements were carried out following the measurement procedure in the draft of IEC 61400-12-1 Ed.2 [1], with some deviations mostly regarding uncertainty calculation. Here, the reference wind speed used in the power curve is the equivalent wind speed obtained from lidar measurements at several heights between lower and upper blade tip, in combination with a hub height meteorological mast. The measurements have been performed using DTU's measurement equipment, the analysis...
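The rotor equivalent wind speed (REWS) that combines lidar measurements at several heights is, in its basic no-veer form, a cube-weighted average over horizontal rotor segments. A minimal sketch, with segment speeds and areas as generic inputs:

```python
def rews(segments):
    """Rotor equivalent wind speed from (wind_speed_m_s, segment_area_m2) pairs:
    REWS = (sum_i u_i^3 * A_i / A)^(1/3), with A the total area covered.
    Cube-weighting reflects that available power scales with u^3."""
    total_area = sum(area for _, area in segments)
    weighted = sum(u ** 3 * area for u, area in segments)
    return (weighted / total_area) ** (1.0 / 3.0)
```

With uniform wind over the rotor, REWS reduces to that wind speed; with shear, it exceeds the plain area-weighted mean because of the cube weighting.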

  11. Incidence of late rectal bleeding in high-dose conformal radiotherapy of prostate cancer using equivalent uniform dose-based and dose-volume-based normal tissue complication probability models

    International Nuclear Information System (INIS)

    Soehn, Matthias; Yan Di; Liang Jian; Meldolesi, Elisa; Vargas, Carlos; Alber, Markus

    2007-01-01

    Purpose: Accurate modeling of rectal complications based on dose-volume histogram (DVH) data is necessary to allow safe dose escalation in radiotherapy of prostate cancer. We applied different equivalent uniform dose (EUD)-based and dose-volume-based normal tissue complication probability (NTCP) models to rectal wall DVHs and follow-up data for 319 prostate cancer patients to identify the dosimetric factors most predictive of Grade ≥ 2 rectal bleeding. Methods and Materials: Data for 319 patients treated at the William Beaumont Hospital with three-dimensional conformal radiotherapy (3D-CRT) under an adaptive radiotherapy protocol were used for this study. The following models were considered: (1) Lyman model and (2) logit formula with the DVH reduced to a generalized EUD, (3) serial reconstruction unit (RU) model, (4) Poisson-EUD model, and (5) mean dose- and (6) cutoff dose-logistic regression models. The parameters and their confidence intervals were determined using maximum likelihood estimation. Results: Of the patients, 51 (16.0%) showed Grade 2 or higher bleeding. As assessed qualitatively and quantitatively, the Lyman-EUD, logit-EUD, serial RU, and Poisson-EUD models fitted the data very well. Rectal wall mean dose did not correlate with Grade 2 or higher bleeding. For the cutoff dose model, the volume receiving > 73.7 Gy showed the most significant correlation with bleeding. However, this model fitted the data more poorly than the EUD-based models. Conclusions: Our study clearly confirms a volume effect for late rectal bleeding. This can be described very well by the EUD-like models, of which the serial RU and Poisson-EUD models can describe the data with only two parameters. Dose-volume-based cutoff-dose models performed worse.
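Two of the dosimetric reductions named in this record are compact enough to sketch: the generalized EUD, and the Lyman NTCP written with the standard normal CDF via erf. The parameter values in the usage below are illustrative, not the fitted rectal-wall values from the study.

```python
import math

def geud(diff_dvh, a):
    """Generalized EUD from a differential DVH [(dose_Gy, fractional_volume)]:
    gEUD = (sum_i v_i * D_i^a)^(1/a). a=1 gives the mean dose; large a
    weights the DVH toward the maximum dose (serial-organ behavior)."""
    return sum(v * d ** a for d, v in diff_dvh) ** (1.0 / a)

def lyman_ntcp(eud, td50, m):
    """Lyman NTCP = Phi((EUD - TD50) / (m * TD50)), Phi the standard normal CDF."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

By construction, lyman_ntcp returns 0.5 when the EUD equals TD50; the slope parameter m controls how steeply the complication probability rises around that point.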

  12. Politico-economic equivalence

    DEFF Research Database (Denmark)

    Gonzalez Eiras, Martin; Niepelt, Dirk

    2015-01-01

Traditional "economic equivalence" results, like the Ricardian equivalence proposition, define equivalence classes over exogenous policies. We derive "politico-economic equivalence" conditions that apply in environments where policy is endogenous and chosen sequentially. A policy regime ... their use in the context of several applications, relating to social security reform, tax-smoothing policies and measures to correct externalities.

  13. Matching of equivalent field regions

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen; Rengarajan, S.B.

    2005-01-01

In aperture problems, integral equations for equivalent currents are often found by enforcing matching of equivalent fields. The enforcement is made in the aperture surface region adjoining the two volumes on each side of the aperture. In the case of an aperture in a planar perfectly conducting screen, having the same homogeneous medium on both sides and an impressed current on one side, an alternative procedure is relevant. We make use of the fact that in the aperture the tangential component of the magnetic field due to the induced currents in the screen is zero. The use of such a procedure shows that equivalent currents can be found by a consideration of only one of the two volumes into which the aperture plane divides the space. Furthermore, from a consideration of an automatic matching at the aperture, additional information about tangential as well as normal field components...

  14. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  15. Mannheim Partner D-Curves in the Euclidean 3-space

    Directory of Open Access Journals (Sweden)

    Mustafa Kazaz

    2015-02-01

In this paper, we consider the idea of Mannheim partner curves for curves lying on surfaces. By considering the Darboux frames of surface curves, we define Mannheim partner D-curves and give the characterizations for these curves. We also find the relations between geodesic curvatures, normal curvatures and geodesic torsions of these associated curves. Furthermore, we show that the definition and characterizations of Mannheim partner D-curves include those of Mannheim partner curves in some special cases.

  16. Detecting overpressure using the Eaton and Equivalent Depth methods in Offshore Nova Scotia, Canada

    Science.gov (United States)

    Ernanda; Primasty, A. Q. T.; Akbar, K. A.

    2018-03-01

Overpressure is an abnormally high subsurface pressure of any fluid which exceeds the hydrostatic pressure of a column of water or formation brine. In Offshore Nova Scotia, Canada, the values and depth of the overpressure zone are determined using the Eaton and Equivalent Depth methods, based on well data and normal compaction trend analysis. The Equivalent Depth method applies the effective vertical stress principle, while the Eaton method considers a physical property ratio (velocity). In this research, pressure evaluation was only applicable to the Penobscot L-30 well. An abnormal pressure is detected at a depth of 11804 feet as a possible overpressure zone, based on the pressure gradient curve and calculations with the Eaton method (7241.3 psi) and the Equivalent Depth method (6619.4 psi). Shales within the Abenaki Formation, especially the Baccaro Member, are estimated to be a possible overpressure zone due to a hydrocarbon generation mechanism.
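Both detection methods reduce to one-line formulas. A sketch under common textbook forms (the Eaton velocity exponent n = 3 and the function names are assumptions, not taken from this paper; both methods require a locally calibrated normal compaction trend):

```python
def eaton_pore_pressure(s_v, p_hydro, v_obs, v_normal, n=3.0):
    """Eaton's method (velocity form): pore pressure from the ratio of
    observed sonic velocity to the normal-compaction-trend velocity."""
    return s_v - (s_v - p_hydro) * (v_obs / v_normal) ** n

def equivalent_depth_pore_pressure(s_v_z, s_v_zeq, p_hydro_zeq):
    """Equivalent Depth method: the effective vertical stress at depth z
    equals that at the shallower equivalent depth z_eq on the normal
    compaction trend, so P(z) = S(z) - [S(z_eq) - P_hydro(z_eq)]."""
    return s_v_z - (s_v_zeq - p_hydro_zeq)

# On the normal trend the Eaton estimate is just hydrostatic pressure:
print(eaton_pore_pressure(10000.0, 5000.0, 3000.0, 3000.0))  # -> 5000.0
# Velocities slower than the trend flag overpressure:
print(eaton_pore_pressure(10000.0, 5000.0, 2500.0, 3000.0) > 5000.0)  # -> True
```

All pressures and stresses are in consistent units (psi here); velocities only enter as a ratio.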

  17. Effective dose equivalent

    International Nuclear Information System (INIS)

    Huyskens, C.J.; Passchier, W.F.

    1988-01-01

The effective dose equivalent is a quantity used in the daily practice of radiation protection, as well as in radiation hygiene regulations, as a measure of health risk. In this contribution, the assumptions on which this quantity is based are worked out, together with the cases in which the effective dose equivalent can be used more or less well. (H.W.)

  18. Characterization of revenue equivalence

    NARCIS (Netherlands)

    Heydenreich, B.; Müller, R.; Uetz, Marc Jochen; Vohra, R.

    2009-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. We give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The characterization holds

  19. Characterization of Revenue Equivalence

    NARCIS (Netherlands)

    Heydenreich, Birgit; Müller, Rudolf; Uetz, Marc Jochen; Vohra, Rakesh

    2008-01-01

The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. In this paper we give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The

  20. On the operator equivalents

    International Nuclear Information System (INIS)

    Grenet, G.; Kibler, M.

    1978-06-01

    A closed polynomial formula for the qth component of the diagonal operator equivalent of order k is derived in terms of angular momentum operators. The interest in various fields of molecular and solid state physics of using such a formula in connection with symmetry adapted operator equivalents is outlined

  1. Friction characteristics of the curved sidewall surfaces of a rotary MEMS device in oscillating motion

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Shao; Miao, Jianmin

    2009-01-01

    A MEMS device with a configuration similar to that of a micro-bearing was developed to study the friction behavior of the curved sidewall surfaces. This friction-testing device consists of two sets of actuators for normal motion and rotation, respectively. Friction measurements were performed at the curved sidewall surfaces of single-crystal silicon. Two general models were developed to determine the equivalent tangential stiffness of the bush-flexure assembly at the contact point by reducing a matrix equation to a one-dimensional formulation. With this simplification, the motions of the contacting surfaces were analyzed by using a recently developed quasi-static stick-slip model. The measurement results show that the coefficient of static friction exhibits a nonlinear dependence on the normal load. The true coefficient of static friction was determined by fitting the experimental friction curve

  2. Adaptive robust polynomial regression for power curve modeling with application to wind power forecasting

    DEFF Research Database (Denmark)

    Xu, Man; Pinson, Pierre; Lu, Zongxiang

    2016-01-01

Wind farm power curve modeling, which characterizes the relationship between meteorological variables and power production, is a crucial procedure for wind power forecasting. In many cases, power curve modeling is more impacted by the limited quality of input data than by the stochastic nature of the energy conversion process. Such nature may be due to the varying wind conditions, aging and state of the turbines, etc. An equivalent steady-state power curve, estimated under normal operating conditions with the intention to filter abnormal data, is not sufficient to solve the problem because of the lack of time adaptivity. In this paper, a refined local polynomial regression algorithm is proposed to yield an adaptive robust model of the time-varying scattered power curve for forecasting applications. The time adaptivity of the algorithm is considered with a new data-driven bandwidth selection...

  3. Growth curves for Laron syndrome.

    OpenAIRE

    Laron, Z; Lilos, P; Klinger, B

    1993-01-01

    Growth curves for children with Laron syndrome were constructed on the basis of repeated measurements made throughout infancy, childhood, and puberty in 24 (10 boys, 14 girls) of the 41 patients with this syndrome investigated in our clinic. Growth retardation was already noted at birth, the birth length ranging from 42 to 46 cm in the 12/20 available measurements. The postnatal growth curves deviated sharply from the normal from infancy on. Both sexes showed no clear pubertal spurt. Girls co...

  4. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  5. Remote sensing used for power curves

    International Nuclear Information System (INIS)

    Wagner, R; Joergensen, H E; Paulsen, U S; Larsen, T J; Antoniou, I; Thesbjerg, L

    2008-01-01

Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the power standard deviation in the power curve significantly. Two LiDARs and a SoDAR are used to measure the wind profile in front of a wind turbine. These profiles are used to calculate the equivalent wind speed. The comparison of the power curves obtained with the three instruments to the traditional power curve, obtained using a cup anemometer measurement, confirms the results obtained from the simulations. Using LiDAR profiles reduces the error in power curve measurement when the LiDARs are used as a relative instrument together with a cup anemometer. Results from the SoDAR are less promising, probably because of noisy measurements resulting in distorted profiles
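One common way to turn a measured profile into a single shear-aware wind speed is to cube-average the speeds over the rotor, weighting each measurement height by the rotor-area segment it represents (the rotor-equivalent definition later adopted in IEC 61400-12-1; the exact weighting used in this study is not reproduced here):

```python
import numpy as np

def equivalent_wind_speed(speeds, segment_areas):
    """Rotor-equivalent wind speed: kinetic-energy-flux-weighted
    average of the profile, v_eq = (sum_i A_i * v_i**3 / A)**(1/3)."""
    v = np.asarray(speeds, dtype=float)
    a = np.asarray(segment_areas, dtype=float)
    return (np.sum(a * v ** 3) / a.sum()) ** (1.0 / 3.0)

# A uniform profile gives back the hub-height speed:
print(equivalent_wind_speed([8.0, 8.0, 8.0], [10.0, 20.0, 10.0]))  # -> 8.0
```

Because the cube average exceeds the plain average whenever the profile is sheared, binning power against this quantity rather than hub-height speed removes part of the scatter the abstract describes.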

  6. Translation of the Children Helping Out--Responsibilities, Expectations and Supports (CHORES) questionnaire into Brazilian-Portuguese: semantic, idiomatic, conceptual and experiential equivalences and application in normal children and adolescents and in children with cerebral palsy.

    Science.gov (United States)

    Amaral, Maíra; Paula, Rebeca L; Drummond, Adriana; Dunn, Louise; Mancini, Marisa C

    2012-01-01

The participation of children with disabilities in daily chores in different environments has been a therapeutic goal shared by both parents and rehabilitation professionals, leading to increased demand for instrument development. The Children Helping Out: Responsibilities, Expectations and Supports (CHORES) questionnaire was created with the objective of measuring child and teenager participation in daily household tasks. To translate the CHORES questionnaire into Brazilian Portuguese, evaluate semantic, idiomatic, experiential, and conceptual equivalences, apply the questionnaire to children and teenagers with and without disabilities, and test its test-retest reliability. Methodological study developed through the following stages: (1) translation of the questionnaire by two different translators; (2) synthesis of translations; (3) back-translation into English; (4) analysis by an expert committee to develop the pre-final version; (5) test-retest reliability; (6) administration to a sample of 50 parents of children with and without disabilities. The CHORES translation was validated in all stages. The implemented adaptations aimed to improve the understanding of the instrument's content by families of different socioeconomic and educational levels. The questionnaire showed strong consistency within a 7- to 14-day interval (ICCs=0.93 to 0.97; p=0.0001). After application, there was no need to change any items in the questionnaire. The translation of the CHORES questionnaire into Brazilian Portuguese offers a unique instrument for health professionals in Brazil, enabling the documentation of child and teenager participation in daily household tasks and making it possible to develop scientific investigation on the topic.

  7. A neutron dose equivalent meter at CAEP

    International Nuclear Information System (INIS)

    Tian Shihai; Lu Yan; Wang Heyi; Yuan Yonggang; Chen Xu

    2012-01-01

The measurement of neutron dose equivalent has been a widespread need in industry and research. In this paper, aimed at improving the accuracy of a neutron dose equivalent meter, a neutron dose counter is simulated with MCNP5 and its energy response curve is optimized. The results show that the energy response factor ranges from 0.2 to 1.8 for neutrons in the energy range of 2.53×10^-8 MeV to 10 MeV. Compared with other related meters, it turns out that the design of this meter is sound. (authors)

  8. Gravitational leptogenesis, C, CP and strong equivalence

    International Nuclear Information System (INIS)

    McDonald, Jamie I.; Shore, Graham M.

    2015-01-01

    The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.

  9. Gravitational leptogenesis, C, CP and strong equivalence

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, Jamie I.; Shore, Graham M. [Department of Physics, Swansea University,Swansea, SA2 8PP (United Kingdom)

    2015-02-12

    The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.

  10. The Source Equivalence Acceleration Method

    International Nuclear Information System (INIS)

    Everson, Matthew S.; Forget, Benoit

    2015-01-01

    Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse group and fine group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361 group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power

  11. Establishing Substantial Equivalence: Transcriptomics

    Science.gov (United States)

    Baudo, María Marcela; Powers, Stephen J.; Mitchell, Rowan A. C.; Shewry, Peter R.

    Regulatory authorities in Western Europe require transgenic crops to be substantially equivalent to conventionally bred forms if they are to be approved for commercial production. One way to establish substantial equivalence is to compare the transcript profiles of developing grain and other tissues of transgenic and conventionally bred lines, in order to identify any unintended effects of the transformation process. We present detailed protocols for transcriptomic comparisons of developing wheat grain and leaf material, and illustrate their use by reference to our own studies of lines transformed to express additional gluten protein genes controlled by their own endosperm-specific promoters. The results show that the transgenes present in these lines (which included those encoding marker genes) did not have any significant unpredicted effects on the expression of endogenous genes and that the transgenic plants were therefore substantially equivalent to the corresponding parental lines.

  12. Lagrangian Curves on Spectral Curves of Monopoles

    International Nuclear Information System (INIS)

    Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.

    2010-01-01

We study Lagrangian points on smooth holomorphic curves in TP^1 equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP^1 with the space LE^3 of oriented affine lines in Euclidean 3-space, these Lagrangian curves give rise to ruled surfaces in E^3, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E^3, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E^3 where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.

  13. The principle of equivalence

    International Nuclear Information System (INIS)

    Unnikrishnan, C.S.

    1994-01-01

    Principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs

  14. Radioactive waste equivalence

    International Nuclear Information System (INIS)

    Orlowski, S.; Schaller, K.H.

    1990-01-01

    The report reviews, for the Member States of the European Community, possible situations in which an equivalence concept for radioactive waste may be used, analyses the various factors involved, and suggests guidelines for the implementation of such a concept. Only safety and technical aspects are covered. Other aspects such as commercial ones are excluded. Situations where the need for an equivalence concept has been identified are processes where impurities are added as a consequence of the treatment and conditioning process, the substitution of wastes from similar waste streams due to the treatment process, and exchange of waste belonging to different waste categories. The analysis of factors involved and possible ways for equivalence evaluation, taking into account in particular the chemical, physical and radiological characteristics of the waste package, and the potential risks of the waste form, shows that no simple all-encompassing equivalence formula may be derived. Consequently, a step-by-step approach is suggested, which avoids complex evaluations in the case of simple exchanges

  15. Equivalent Colorings with "Maple"

    Science.gov (United States)

    Cecil, David R.; Wang, Rongdong

    2005-01-01

    Many counting problems can be modeled as "colorings" and solved by considering symmetries and Polya's cycle index polynomial. This paper presents a "Maple 7" program link http://users.tamuk.edu/kfdrc00/ that, given Polya's cycle index polynomial, determines all possible associated colorings and their partitioning into equivalence classes. These…

  16. Correspondences. Equivalence relations

    International Nuclear Information System (INIS)

    Bouligand, G.M.

    1978-03-01

    We comment on sections paragraph 3 'Correspondences' and paragraph 6 'Equivalence Relations' in chapter II of 'Elements de mathematique' by N. Bourbaki in order to simplify their comprehension. Paragraph 3 exposes the ideas of a graph, correspondence and map or of function, and their composition laws. We draw attention to the following points: 1) Adopting the convention of writting from left to right, the composition law for two correspondences (A,F,B), (U,G,V) of graphs F, G is written in full generality (A,F,B)o(U,G,V) = (A,FoG,V). It is not therefore assumed that the co-domain B of the first correspondence is identical to the domain U of the second (EII.13 D.7), (1970). 2) The axiom of choice consists of creating the Hilbert terms from the only relations admitting a graph. 3) The statement of the existence theorem of a function h such that f = goh, where f and g are two given maps having the same domain (of definition), is completed if h is more precisely an injection. Paragraph 6 considers the generalisation of equality: First, by 'the equivalence relation associated with a map f of a set E identical to (x is a member of the set E and y is a member of the set E and x:f = y:f). Consequently, every relation R(x,y) which is equivalent to this is an equivalence relation in E (symmetrical, transitive, reflexive); then R admits a graph included in E x E, etc. Secondly, by means of the Hilbert term of a relation R submitted to the equivalence. In this last case, if R(x,y) is separately collectivizing in x and y, theta(x) is not the class of objects equivalent to x for R (EII.47.9), (1970). The interest of bringing together these two subjects, apart from this logical order, resides also in the fact that the theorem mentioned in 3) can be expressed by means of the equivalence relations associated with the functions f and g. The solutions of the examples proposed reveal their simplicity [fr

  17. Equivalent Josephson junctions

    International Nuclear Information System (INIS)

Boyadzhiev, T.L.; Semerdzhieva, E.G.; Shukrinov, Yu.M.; Fiziko-Tekhnicheskij Inst., Dushanbe

    2008-01-01

The magnetic field dependences of the critical current are numerically constructed for a long Josephson junction with shunt- or resistor-type microscopic inhomogeneities and compared to the critical curve of a junction with exponentially varying width. The numerical results show that it is possible to replace the distributed inhomogeneity of a long Josephson junction by an inhomogeneity localized at one of its ends, which has certain technological advantages. It is also shown that the critical curves of junctions with exponentially varying width and inhomogeneities localized at the ends are unaffected by the mixed fluxon-antifluxon distributions of the magnetic flux [ru

  18. 51Cr - erythrocyte survival curves

    International Nuclear Information System (INIS)

    Paiva Costa, J. de.

    1982-07-01

Sixteen subjects were studied: fifteen patients in a hemolytic state and one normal individual as a control. The aim was to obtain better techniques for the analysis of erythrocyte survival curves, according to the recommendations of the International Committee of Hematology. Radiochromium (51Cr) was used as the tracer. A review of the international literature was first carried out on the aspects inherent to the work in execution, making it possible to establish comparisons and clarify phenomena observed in our investigation. Several parameters were considered in this study, covering both the exponential and the linear curves. The analysis of the survival curves of the erythrocytes in the studied group revealed that the elution factor did not present a quantitatively homogeneous answer for all, although the results of the analysis of these curves were established through programs listed in the electronic calculator. (Author) [pt
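The exponential survival model referred to above is usually fitted on the log-linear form of the activity curve; a sketch with synthetic data (not the patients' measurements; the function name is illustrative):

```python
import numpy as np

def apparent_half_life(days, activity):
    """Least-squares fit of ln(activity) against time; returns the
    apparent 51Cr red-cell half-life in days."""
    slope, _ = np.polyfit(days, np.log(activity), 1)
    return np.log(2.0) / -slope

# Synthetic survival data with a 25-day apparent half-life:
t = np.arange(0.0, 30.0, 3.0)
counts = 100.0 * 0.5 ** (t / 25.0)
print(round(apparent_half_life(t, counts), 1))  # -> 25.0
```

The linear model mentioned in the abstract would instead fit activity itself against time and report the intercept of the regression line with the time axis; correcting the slope for 51Cr elution is what the elution factor discussed above is for.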

  19. Atlas of stress-strain curves

    CERN Document Server

    2002-01-01

    The Atlas of Stress-Strain Curves, Second Edition is substantially bigger in page dimensions, number of pages, and total number of curves than the previous edition. It contains over 1,400 curves, almost three times as many as in the 1987 edition. The curves are normalized in appearance to aid making comparisons among materials. All diagrams include metric (SI) units, and many also include U.S. customary units. All curves are captioned in a consistent format with valuable information including (as available) standard designation, the primary source of the curve, mechanical properties (including hardening exponent and strength coefficient), condition of sample, strain rate, test temperature, and alloy composition. Curve types include monotonic and cyclic stress-strain, isochronous stress-strain, and tangent modulus. Curves are logically arranged and indexed for fast retrieval of information. The book also includes an introduction that provides background information on methods of stress-strain determination, on...
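The hardening exponent and strength coefficient listed with each curve come from the Hollomon relation σ = K·ε^n; a sketch of how they can be extracted from a digitized curve by log-log regression (synthetic data, not from the Atlas):

```python
import numpy as np

def hollomon_fit(true_strain, true_stress):
    """Fit sigma = K * eps**n by linear regression in log-log space;
    returns (K, n): strength coefficient and hardening exponent."""
    n, log_k = np.polyfit(np.log(true_strain), np.log(true_stress), 1)
    return float(np.exp(log_k)), float(n)

# Synthetic plastic-region data generated with K = 500 MPa, n = 0.2:
eps = np.array([0.01, 0.02, 0.05, 0.10])
K, n = hollomon_fit(eps, 500.0 * eps ** 0.2)
print(round(K), round(n, 2))  # -> 500 0.2
```

Only the uniform plastic portion of the true stress-true strain curve should be fed to such a fit; the elastic region and post-necking data do not follow the power law.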

  20. The equivalence theorem

    International Nuclear Information System (INIS)

    Veltman, H.

    1990-01-01

The equivalence theorem states that, at an energy E much larger than the vector-boson mass M, the leading order of the amplitude with longitudinally polarized vector bosons on mass shell is given by the amplitude in which these vector bosons are replaced by the corresponding Higgs ghosts. We prove the equivalence theorem and show its validity in every order in perturbation theory. We first derive the renormalized Ward identities by using the diagrammatic method. Only the Feynman-'t Hooft gauge is discussed. The last step of the proof includes the power-counting method evaluated in the large-Higgs-boson-mass limit, needed to estimate the leading energy behavior of the amplitudes involved. We derive expressions for the amplitudes involving longitudinally polarized vector bosons for all orders in perturbation theory. The fermion mass has not been neglected and everything is evaluated in the region m_f ∼ M ≪ E ≪ m_Higgs

  1. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    International Nuclear Information System (INIS)

    Yao Dezhong; He Bin

    2003-01-01

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping

  2. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yao Dezhong [School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu City, 610054, Sichuan Province (China); He Bin [The University of Illinois at Chicago, IL (United States)

    2003-11-07

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping.

  3. Evaluation of the directional dose equivalent H'(0.07) for ring dosemeters

    International Nuclear Information System (INIS)

    Alvarez R, J.T.; Tovar M, V.M.

    2006-01-01

The personnel dosimetry laboratory (LDP) of the Metrology Department received a request from a beta-radiation user who had accidentally irradiated 14 pairs of TLD-100 ring dosemeters for extremities supplied by the LDP. This sample of 14 pairs of rings had tentatively been irradiated in July-August 2004, and an expedited evaluation of the received dose equivalent was requested. The LSCD built two calibration curves in terms of the directional dose equivalent H'(0.07) using two standard 90 Sr- 90 Y beta-radiation sources, one of 74 MBq and another of 1850 MBq, with traceability to the PTB. The first curve covers the interval from 0 to 5 mSv and the second the range from 5 to 50 mSv, taking into account effects of the positioning of the rings on the phantom. Both calibration curves were validated by lack-of-fit, symmetry-of-residuals and normality tests. H'(0.07) was evaluated and analyzed for these 14 pairs of rings using the one-way Tukey test of means. It was found that the H'(0.07) values could be classified into 4 groups, and that the probability that the rings had been irradiated at random was smaller than the level α = 0.05. (Author)

  4. ECM using Edwards curves

    DEFF Research Database (Denmark)

    Bernstein, Daniel J.; Birkner, Peter; Lange, Tanja

    2013-01-01

    -arithmetic level are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use extended Edwards coordinates; (3) use signed-sliding-window addition-subtraction chains; (4) batch primes to increase the window size; (5) choose curves with small parameters and base points; (6) choose curves with large...

  5. Wind Turbine Power Curves Incorporating Turbulence Intensity

    DEFF Research Database (Denmark)

    Sørensen, Emil Hedevang Lohse

    2014-01-01

    The performance of a wind turbine in terms of power production (the power curve) is important to the wind energy industry. The current IEC-61400-12-1 standard for power curve evaluation recognizes only the mean wind speed at hub height and the air density as relevant to the power production... The model and method are parsimonious in the sense that only a single function (the zero-turbulence power curve) and a single auxiliary parameter (the equivalent turbulence factor) are needed to predict the mean power at any desired turbulence intensity. The method requires only ten minute statistics...
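
One common way to realize such a turbulence normalization (a sketch of the general idea, not the author's exact method) is to average a zero-turbulence power curve over a Gaussian wind-speed distribution whose standard deviation is the turbulence intensity times the mean wind. The toy power curve `p0` below is an assumption.

```python
import numpy as np

def mean_power(v_mean, ti, zero_ti_curve):
    """Mean power at ten-minute mean wind v_mean and turbulence intensity ti,
    obtained by averaging a zero-turbulence power curve over a Gaussian
    wind-speed distribution with standard deviation ti * v_mean."""
    sigma = ti * v_mean
    v = np.linspace(max(v_mean - 4 * sigma, 0.0), v_mean + 4 * sigma, 401)
    w = np.exp(-0.5 * ((v - v_mean) / sigma) ** 2)
    w /= w.sum()
    return float(np.sum(w * zero_ti_curve(v)))

def p0(v):
    """Toy zero-turbulence curve: cubic below rated power, flat above (kW)."""
    return np.minimum(0.5 * v ** 3, 2000.0)

# Near rated speed the curve is concave, so turbulence lowers the mean power;
# in the convex cubic region it raises it (Jensen's inequality).
p_near_rated = mean_power(16.0, 0.10, p0)
```
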

  6. Equivalence, commensurability, value

    DEFF Research Database (Denmark)

    Albertsen, Niels

    2017-01-01

    Deriving value in Capital Marx uses three commensurability arguments (CA1-3). CA1 establishes equivalence in exchange as exchangeability with the same third commodity. CA2 establishes value as a common denominator in commodities: embodied abstract labour. CA3 establishes value substance as commonality of labour: physiological labour. Tensions between these logics have permeated Marxist interpretations of value. Some have supported value as embodied labour (CA2, 3), others a monetary theory of value and value as ‘pure’ societal abstraction (ultimately CA1). They all are grounded in Marx...

  7. Remote sensing used for power curves

    DEFF Research Database (Denmark)

    Wagner, Rozenn; Ejsing Jørgensen, Hans; Schmidt Paulsen, Uwe

    2008-01-01

    Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the power standard deviat...

  8. Waste Determination Equivalency - 12172

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Rebecca D. [Savannah River Remediation (United States)

    2012-07-01

    by the Secretary of Energy in January of 2006 based on proposed processing techniques with the expectation that it could be revised as new processing capabilities became viable. Once signed, however, it became evident that any changes would require lengthy review and another determination signed by the Secretary of Energy. With the maturation of additional salt removal technologies and the extension of the SWPF start-up date, it becomes necessary to define 'equivalency' to the processes laid out in the original determination. For the purposes of SRS, any waste not processed through Interim Salt Processing must be processed through SWPF or an equivalent process, and therefore a clear statement of the requirements for a process to be equivalent to SWPF becomes necessary. (authors)

  9. Establishing Substantial Equivalence: Proteomics

    Science.gov (United States)

    Lovegrove, Alison; Salt, Louise; Shewry, Peter R.

    Wheat is a major crop in world agriculture and is consumed after processing into a range of food products. It is therefore of great importance to determine the consequences (intended and unintended) of transgenesis in wheat and whether genetically modified lines are substantially equivalent to those produced by conventional plant breeding. Proteomic analysis is one of several approaches which can be used to address these questions. Two-dimensional PAGE (2D PAGE) remains the most widely available method for proteomic analysis, but is notoriously difficult to reproduce between laboratories. We therefore describe methods which have been developed as standard operating procedures in our laboratory to ensure the reproducibility of proteomic analyses of wheat using 2D PAGE analysis of grain proteins.

  10. Using frequency equivalency in stability calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gruzdev, I.A.; Temirbulatov, R.A.; Tereshko, L.A.

    1981-01-01

    A methodology for calculating oscillatory instability that involves using frequency equivalency is employed in carrying out the following procedures: dividing an electric power system into subgroups; determining the adjustments to the automatic excitation control in each subsystem; simplifying the mathematical definition of the separate subsystems by using frequency equivalency; gradually re-tuning the automatic excitation control in the separate subsystems to account for neighboring subsystems by using their equivalent frequency characteristics. The methodology is to be used with a computer program to determine the gains in the stabilization channels of the automatic excitation control unit for which static stability of the entire set of normal and post-breakdown conditions and acceptable damping of transient processes are provided. The possibility of reducing the equation series to apply to chosen regions of the existing frequency range is demonstrated. The use of the methodology is illustrated in a sample study on stability in a Siberian unified power system.

  11. Exact equivalent straight waveguide model for bent and twisted waveguides

    DEFF Research Database (Denmark)

    Shyroki, Dzmitry

    2008-01-01

    An exact equivalent straight waveguide representation is given for a waveguide of arbitrary curvature and torsion. No assumptions regarding refractive index contrast, isotropy of materials, or particular morphology in the waveguide cross section are made. This enables rigorous full-vector modeling of in-plane curved or helically wound waveguides with use of available simulators for straight waveguides, without the restrictions of the known approximate equivalent-index formulas.

  12. Alexander-equivalent Zariski pairs of irreducible sextics

    DEFF Research Database (Denmark)

    Eyral, Christophe; Oka, Mutsuo

    2009-01-01

    The existence of Alexander-equivalent Zariski pairs dealing with irreducible curves of degree 6 was proved by Degtyarev. However, no explicit example of such a pair is available (only the existence is known) in the literature. In this paper, we construct the first concrete example.

  13. Quantification of the equivalence principle

    International Nuclear Information System (INIS)

    Epstein, K.J.

    1978-01-01

    Quantitative relationships illustrate Einstein's equivalence principle, relating it to Newton's ''fictitious'' forces arising from the use of noninertial frames, and to the form of the relativistic time dilatation in local Lorentz frames. The equivalence principle can be interpreted as the equivalence of general covariance to local Lorentz covariance, in a manner which is characteristic of Riemannian and pseudo-Riemannian geometries

  14. Equivalent models of wind farms by using aggregated wind turbines and equivalent winds

    International Nuclear Information System (INIS)

    Fernandez, L.M.; Garcia, C.A.; Saenz, J.R.; Jurado, F.

    2009-01-01

    As a result of the increasing wind farm penetration on power systems, wind farms have begun to influence the power system, and therefore the modeling of wind farms has become an interesting research topic. In this paper, new equivalent models of wind farms equipped with wind turbines based on squirrel-cage induction generators and doubly-fed induction generators are proposed to represent the collective behavior in large power system simulations, instead of using a complete model of the wind farm where all the wind turbines are modeled. The models proposed here are based on aggregating wind turbines into an equivalent wind turbine which receives an equivalent wind derived from the winds incident on the aggregated wind turbines. The equivalent wind turbine presents re-scaled power capacity and the same complete model as the individual wind turbines, which is the main feature of the present equivalent models. Two equivalent winds are evaluated in this work: (1) the average of the winds incident on aggregated wind turbines with similar winds, and (2) an equivalent incoming wind derived from the power curve and the wind incident on each wind turbine. The effectiveness of the equivalent models in representing the collective response of the wind farm at the point of common coupling to the grid is demonstrated by comparison with the wind farm response obtained from the detailed model during power system dynamic simulations, such as wind fluctuations and a grid disturbance. The present models can be used for grid integration studies of large power systems with an important reduction of the model order and the computation time.
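
The second equivalent wind, derived from the power curve and the winds incident on each turbine, can be sketched as follows: compute the mean power of the individual winds and invert the power curve at that value. The interpolated power curve and the wind samples are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical single-turbine power curve: wind speed (m/s) -> power (kW).
wind_grid = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0, 25.0])
power_grid = np.array([0.0, 150.0, 500.0, 1050.0, 1700.0, 2000.0, 2000.0, 2000.0])

def power(v):
    """Piecewise-linear power curve of one turbine."""
    return np.interp(v, wind_grid, power_grid)

def equivalent_wind(winds):
    """Equivalent incoming wind for the aggregated turbine: the speed whose
    power equals the mean power of the individual incident winds, found by
    inverting the strictly increasing (below-rated) part of the curve."""
    p_mean = power(np.asarray(winds)).mean()
    return float(np.interp(p_mean, power_grid[:6], wind_grid[:6]))

winds = [6.0, 8.0, 10.0]
v_eq = equivalent_wind(winds)
# The aggregated turbine would then use v_eq with a re-scaled power capacity.
```

Because the power curve is nonlinear, `v_eq` differs from the plain average wind speed, which is exactly why this equivalent wind preserves the mean power of the group.
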

  15. New recommendations for dose equivalent

    International Nuclear Information System (INIS)

    Bengtsson, G.

    1985-01-01

    In its report 39, the International Commission on Radiation Units and Measurements (ICRU), has defined four new quantities for the determination of dose equivalents from external sources: the ambient dose equivalent, the directional dose equivalent, the individual dose equivalent, penetrating and the individual dose equivalent, superficial. The rationale behind these concepts and their practical application are discussed. Reference is made to numerical values of these quantities which will be the subject of a coming publication from the International Commission on Radiological Protection, ICRP. (Author)

  16. System equivalent model mixing

    Science.gov (United States)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  17. The equivalence principle

    International Nuclear Information System (INIS)

    Smorodinskij, Ya.A.

    1980-01-01

    The prerelativistic history of the equivalence principle (EP) is presented briefly, and its role in the discovery of the general relativity theory (G.R.T.) is elucidated. Modern measurements show that the ratio of inertial and gravitational masses does not differ from unity to at least 12 decimal places. Attention is paid to the difference between the gravitational field and the electromagnetic one. The difference is as follows: the energy of the gravitational field distributed in space is itself a source of the field, so such fields always interact at superposition, whereas electromagnetic fields from different sources simply add together. On the basis of the EP it is established that the Sun's field interacts with the Earth's gravitational energy in the same way as with any other energy; the latter proves that the gravitational field itself gravitates toward a heavy body. The problem of gyroscope motion in the Earth's gravitational field is presented as a paradox. The calculation shows that a gyroscope on a satellite makes a positive precession: its axis would turn through an angle α during one revolution of the satellite around the Earth, but because of the space curvature it turns through an angle twice as large, so the resulting turn equals 3α. It is shown on the basis of the EP that the polarization plane of a ray of light does not turn, in any coordinate system, as the ray passes through the gravitational field. Along with the historical value of the EP, the necessity of taking its requirements into account in describing the physical world is noted

  18. Growth curves for Laron syndrome.

    Science.gov (United States)

    Laron, Z; Lilos, P; Klinger, B

    1993-01-01

    Growth curves for children with Laron syndrome were constructed on the basis of repeated measurements made throughout infancy, childhood, and puberty in 24 (10 boys, 14 girls) of the 41 patients with this syndrome investigated in our clinic. Growth retardation was already noted at birth, the birth length ranging from 42 to 46 cm in the 12/20 available measurements. The postnatal growth curves deviated sharply from the normal from infancy on. Both sexes showed no clear pubertal spurt. Girls completed their growth between the age of 16-19 years to a final mean (SD) height of 119 (8.5) cm whereas the boys continued growing beyond the age of 20 years, achieving a final height of 124 (8.5) cm. At all ages the upper to lower body segment ratio was more than 2 SD above the normal mean. These growth curves constitute a model not only for primary, hereditary insulin-like growth factor-I (IGF-I) deficiency (Laron syndrome) but also for untreated secondary IGF-I deficiencies such as growth hormone gene deletion and idiopathic congenital isolated growth hormone deficiency. They should also be useful in the follow up of children with Laron syndrome treated with biosynthetic recombinant IGF-I. PMID:8333769

  19. Logically automorphically equivalent knowledge bases

    OpenAIRE

    Aladova, Elena; Plotkin, Tatjana

    2017-01-01

    Knowledge base theory provides an important example of a field where applications of universal algebra and algebraic logic look very natural, and their interaction with practical problems arising in computer science may be very productive. In this paper we study the equivalence problem for knowledge bases. Our interest is to find out how informational equivalence is related to the logical description of knowledge. Studying various equivalences of knowledge bases allows us to compare d...

  20. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment.With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the
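
A minimal sketch of the two one-sided tests (TOST) procedure at the heart of equivalence testing, using a normal approximation for brevity (the book treats exact and far more general settings). The sample differences and margin `delta` are invented for illustration.

```python
import math
from statistics import mean, stdev

def tost_paired(diffs, delta, z=1.645):
    """Two one-sided tests (TOST) for equivalence of paired measurements:
    declare equivalence at level alpha if the (1 - 2*alpha) confidence
    interval for the mean difference lies inside (-delta, +delta).
    z = 1.645 is the approximate upper 5% point of the standard normal."""
    n = len(diffs)
    m = mean(diffs)
    se = stdev(diffs) / math.sqrt(n)
    lo, hi = m - z * se, m + z * se
    return -delta < lo and hi < delta

# Small illustrative sample of paired differences between two methods.
diffs = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, -0.05, 0.1]
equivalent = tost_paired(diffs, delta=0.5)
```

Note that shrinking the equivalence margin `delta` makes the claim harder to support, which is the intended asymmetry relative to an ordinary difference test.
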

  1. JUMPING THE CURVE

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-01-01

    Full Text Available This paper explores the notion of jumping the curve, following from Handy's S-curve onto a new curve with new rules, policies and procedures. It claims that the curve does not generally lie in wait but has to be invented by leadership. The focus of this paper is the identification (mathematically and inferentially) of that point in time, known as the cusp in catastrophe theory, when it is time to change - pro-actively, pre-actively or reactively. These three scenarios are addressed separately and discussed in terms of the relevance of each.

  2. Rational Multi-curve Models with Counterparty-risk Valuation Adjustments

    DEFF Research Database (Denmark)

    Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai

    2016-01-01

    We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offer Rate swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data with accuracy. We elucidate the relationship between the models developed and calibrated under a risk-neutral measure Q and their consistent equivalence class under the real-world probability measure P. The consistent P-pricing models are applied to compute the risk exposures which may be required to comply with regulatory obligations. In order to compute counterparty-risk valuation adjustments, such as credit valuation adjustment, we show how default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract.

  3. SAPONIFICATION EQUIVALENT OF DASAMULA TAILA

    OpenAIRE

    Saxena, R. B.

    1994-01-01

    Saponification equivalent values of Dasamula taila are very useful for technical and analytical work. They give the mean molecular weight of the glycerides and acids present in Dasamula taila. Saponification equivalent values of Dasamula taila in different packings are reported.

  4. Saponification equivalent of dasamula taila.

    Science.gov (United States)

    Saxena, R B

    1994-07-01

    Saponification equivalent values of Dasamula taila are very useful for technical and analytical work. They give the mean molecular weight of the glycerides and acids present in Dasamula taila. Saponification equivalent values of Dasamula taila in different packings are reported.

  5. A study on lead equivalent

    International Nuclear Information System (INIS)

    Lin Guanxin

    1991-01-01

    A study of how the lead equivalent of lead glass changes with the energy of X rays or γ rays is described. The reason for this change is discussed and a new testing method for lead equivalent is suggested

  6. Feynman propagator in curved space-time

    International Nuclear Information System (INIS)

    Candelas, P.; Raine, D.J.

    1977-01-01

    The Wick rotation is generalized in a covariant manner so as to apply to curved manifolds in a way that is independent of the analytic properties of the manifold. This enables us to show that various methods for defining a Feynman propagator to be found in the literature are equivalent where they are applicable. We are also able to discuss the relation between certain regularization methods that have been employed

  7. Tornado-Shaped Curves

    Science.gov (United States)

    Martínez, Sol Sáez; de la Rosa, Félix Martínez; Rojas, Sergio

    2017-01-01

    In Advanced Calculus, our students wonder if it is possible to graphically represent a tornado by means of a three-dimensional curve. In this paper, we show it is possible by providing the parametric equations of such tornado-shaped curves.

  8. Simulating Supernova Light Curves

    International Nuclear Information System (INIS)

    Even, Wesley Paul; Dolence, Joshua C.

    2016-01-01

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth's atmosphere.

  9. Simulating Supernova Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Even, Wesley Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dolence, Joshua C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-05

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.

  10. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  11. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  12. Tempo curves considered harmful

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1993-01-01

    In the literature of musicology, computer music research and the psychology of music, timing or tempo measurements are mostly presented in the form of continuous curves. The notion of these tempo curves is dangerous, despite its widespread use, because it lulls its users into the false impression

  13. The curve shortening problem

    CERN Document Server

    Chou, Kai-Seng

    2001-01-01

    Although research in curve shortening flow has been very active for nearly 20 years, the results of those efforts have remained scattered throughout the literature. For the first time, The Curve Shortening Problem collects and illuminates those results in a comprehensive, rigorous, and self-contained account of the fundamental results. The authors present a complete treatment of the Gage-Hamilton theorem, a clear, detailed exposition of Grayson's convexity theorem, a systematic discussion of invariant solutions, applications to the existence of simple closed geodesics on a surface, and a new, almost convexity theorem for the generalized curve shortening problem. Many questions regarding curve shortening remain outstanding. With its careful exposition and complete guide to the literature, The Curve Shortening Problem provides not only an outstanding starting point for graduate students and new investigations, but a superb reference that presents intriguing new results for those already active in the field.

  14. Measurement of extrapolation curves for the beta radiation secondary standard Nr. 86 calibrated in absorbed dose rate to tissue equivalent by the Physikalisch-Technische Bundesanstalt; Medicion de curvas de extrapolacion para el patron secundario de radiacion beta Nr. 86 calibrado en rapidez de dosis absorbida para tejido equivalente por el Physikalisch Technische Bundesanstalt

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez R, J.T

    1988-10-15

    The objective of this report is to present the results of measuring - with a variable-electrode extrapolation chamber (CE) - the absorbed dose rate to tissue equivalent delivered by the source set of the beta radiation secondary standard Nr. 86 (PSB), and to compare these results, together with the uncertainties associated with the measurement process, with those presented in the calibration certificates that accompany the PSB, issued by the primary laboratory Physikalisch-Technische Bundesanstalt (PTB) of the F.R.G. (Author)

  15. Non normal and non quadratic anisotropic plasticity coupled with ductile damage in sheet metal forming: Application to the hydro bulging test

    International Nuclear Information System (INIS)

    Badreddine, Houssem; Saanouni, Khemaies; Dogui, Abdelwaheb

    2007-01-01

    In this work an improved material model is proposed that shows good agreement with experimental data for both hardening curves and plastic strain ratios in uniaxial and equibiaxial proportional loading paths for sheet steel up to final fracture. This model is based on a non-associative and non-normal flow rule using two different orthotropic equivalent stresses in the yield criterion and plastic potential functions. For the plastic potential the classical Hill 1948 quadratic equivalent stress is considered, while for the yield criterion the Karafillis and Boyce 1993 non-quadratic equivalent stress is used, taking into account non-linear mixed (kinematic and isotropic) hardening. Applications are made to hydro bulging tests using both circular and elliptical dies. The results obtained with different particular cases of the model, such as the normal quadratic and the non-normal non-quadratic cases, are compared and discussed with respect to the experimental results

  16. Environmental bias and elastic curves on surfaces

    International Nuclear Information System (INIS)

    Guven, Jemal; María Valencia, Dulce; Vázquez-Montejo, Pablo

    2014-01-01

    The behavior of an elastic curve bound to a surface will reflect the geometry of its environment. This may occur in an obvious way: the curve may deform freely along directions tangent to the surface, but not along the surface normal. However, even if the energy itself is symmetric in the curve's geodesic and normal curvatures, which control these modes, very distinct roles are played by the two. If the elastic curve binds preferentially on one side, or is itself assembled on the surface, not only would one expect the bending moduli associated with the two modes to differ, binding along specific directions, reflected in spontaneous values of these curvatures, may be favored. The shape equations describing the equilibrium states of a surface curve described by an elastic energy accommodating environmental factors will be identified by adapting the method of Lagrange multipliers to the Darboux frame associated with the curve. The forces transmitted to the surface along the surface normal will be determined. Features associated with a number of different energies, both of physical relevance and of mathematical interest, are described. The conservation laws associated with trajectories on surface geometries exhibiting continuous symmetries are also examined. (paper)

  17. Determination of dose equivalent with tissue-equivalent proportional counters

    International Nuclear Information System (INIS)

    Dietze, G.; Schuhmacher, H.; Menzel, H.G.

    1989-01-01

    Low pressure tissue-equivalent proportional counters (TEPC) are instruments based on the cavity chamber principle and provide spectral information on the energy loss of single charged particles crossing the cavity. Hence such detectors measure absorbed dose or kerma and are able to provide estimates on radiation quality. During recent years TEPC based instruments have been developed for radiation protection applications in photon and neutron fields. This was mainly based on the expectation that the energy dependence of their dose equivalent response is smaller than that of other instruments in use. Recently, such instruments have been investigated by intercomparison measurements in various neutron and photon fields. Although their principles of measurements are more closely related to the definition of dose equivalent quantities than those of other existing dosemeters, there are distinct differences and limitations with respect to the irradiation geometry and the determination of the quality factor. The application of such instruments for measuring ambient dose equivalent is discussed. (author)

  18. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  19. What is correct: equivalent dose or dose equivalent

    International Nuclear Information System (INIS)

    Franic, Z.

    1994-01-01

    In the Croatian language some physical quantities in radiation protection dosimetry do not have precise names; consequently, in practice either English terms or mathematical formulas are used. The situation is made worse by the fact that only a limited number of textbooks, reference books and other papers are available in Croatian. This paper compares the concept of ''dose equivalent'' as outlined in International Commission on Radiological Protection (ICRP) recommendations No. 26 with the newer, conceptually different concept of ''equivalent dose'' introduced in ICRP 60. It was found that Croatian terminology is neither uniform nor precise. Under the influence of the Russian and Serbian languages, the term ''equivalent dose'' was often used for ''dose equivalent'', which was not justified even from the point of view of ICRP 26 recommendations. Unfortunately, even now the legal unit in Croatia is still the ''dose equivalent'' defined as in ICRP 26, but the term used for it is ''equivalent dose''. Therefore, a modified set of quantities introduced in ICRP 60 should be incorporated into Croatian legislation as soon as possible

  20. Learning Curve? Which One?

    Directory of Open Access Journals (Sweden)

    Paulo Prochno

    2004-07-01

    Full Text Available Learning curves have been studied for a long time. These studies provided strong support to the hypothesis that, as organizations produce more of a product, unit costs of production decrease at a decreasing rate (see Argote, 1999, for a comprehensive review of learning curve studies). But the organizational mechanisms that lead to these results are still underexplored. We know some drivers of learning curves (ADLER; CLARK, 1991; LAPRE et al., 2000), but we still lack a more detailed view of the organizational processes behind those curves. Through an ethnographic study, I bring a comprehensive account of the first year of operations of a new automotive plant, describing what was taking place in the assembly area during the most relevant shifts of the learning curve. The emphasis is then on how learning occurs in that setting. My analysis suggests that the overall learning curve is in fact the result of an integration process that puts together several individual ongoing learning curves in different areas throughout the organization. In the end, I propose a model to understand the evolution of these learning processes and their supporting organizational mechanisms.
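
The classic log-linear relationship behind such learning curves - unit cost falling by a fixed fraction with each doubling of cumulative output - can be sketched as follows (a textbook model, not the paper's ethnographic analysis):

```python
import math

def exponent_from_rate(rate):
    """Convert a progress rate (e.g. 0.8 for an '80% curve') to the
    learning exponent b in the log-linear model c_n = c_1 * n**(-b)."""
    return -math.log(rate, 2)

def unit_cost(c1, b, n):
    """Unit cost of the n-th unit under the log-linear learning curve."""
    return c1 * n ** (-b)

# An 80% curve: each doubling of cumulative output cuts unit cost to 80%.
b = exponent_from_rate(0.8)
cost_of_second_unit = unit_cost(100.0, b, 2)   # ~80% of the first unit's cost
cost_of_fourth_unit = unit_cost(100.0, b, 4)   # ~64% after two doublings
```
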

  1. Compact Hilbert Curve Index Algorithm Based on Gray Code

    Directory of Open Access Journals (Sweden)

    CAO Xuefeng

    2016-12-01

    Full Text Available The Hilbert curve has the best clustering among the various kinds of space-filling curves, and has been used as an important tool in discrete global grid spatial index design. But there is much redundancy in the standard Hilbert curve index when the data set has large differences between dimensions. In this paper, the construction features of the Hilbert curve are analyzed based on Gray code, and a compact Hilbert curve index algorithm is put forward, in which the redundancy problem is avoided while the Hilbert curve clustering is preserved. Finally, experimental results show that the compact Hilbert curve index outperforms the standard Hilbert index: their computational complexity is nearly equivalent, but tests on a real data set show that the coding time and storage space decrease by 40%, and the sorting speed achieves a speedup ratio of nearly 4.3.
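
The binary-reflected Gray code on which such Hilbert-curve constructions rest can be sketched as follows (a generic encode/decode pair given for illustration; the paper's compact Hilbert index itself is more involved):

```python
def gray_encode(i):
    """Binary-reflected Gray code of a non-negative integer:
    consecutive codes differ in exactly one bit."""
    return i ^ (i >> 1)

def gray_decode(g):
    """Inverse of gray_encode: XOR together all right shifts of g."""
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

# Successive Gray codes differ by a single bit flip, which is the property
# Hilbert curve index constructions exploit at each refinement level.
codes = [gray_encode(n) for n in range(8)]
```
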

  2. The crime Kuznets curve

    OpenAIRE

    Buonanno, Paolo; Fergusson, Leopoldo; Vargas, Juan Fernando

    2014-01-01

    We document the existence of a Crime Kuznets Curve in US states since the 1970s. As income levels have risen, crime has followed an inverted U-shaped pattern, first increasing and then dropping. The Crime Kuznets Curve is not explained by income inequality. In fact, we show that during the sample period inequality has risen monotonically with income, ruling out the traditional Kuznets Curve. Our finding is robust to adding a large set of controls that are used in the literature to explain the...

  3. Experimental simulation of closed timelike curves.

    Science.gov (United States)

    Ringbauer, Martin; Broome, Matthew A; Myers, Casey R; White, Andrew G; Ralph, Timothy C

    2014-06-19

    Closed timelike curves are among the most controversial features of modern physics. As legitimate solutions to Einstein's field equations, they allow for time travel, which instinctively seems paradoxical. However, in the quantum regime these paradoxes can be resolved, leaving closed timelike curves consistent with relativity. The study of these systems therefore provides valuable insight into nonlinearities and the emergence of causal structures in quantum mechanics--essential for any formulation of a quantum theory of gravity. Here we experimentally simulate the nonlinear behaviour of a qubit interacting unitarily with an older version of itself, addressing some of the fascinating effects that arise in systems traversing a closed timelike curve. These include perfect discrimination of non-orthogonal states and, most intriguingly, the ability to distinguish nominally equivalent ways of preparing pure quantum states. Finally, we examine the dependence of these effects on the initial qubit state, the form of the unitary interaction and the influence of decoherence.

  4. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  5. Bond yield curve construction

    Directory of Open Access Journals (Sweden)

    Kožul Nataša

    2014-01-01

    Full Text Available In the broadest sense, the yield curve indicates the market's view of the evolution of interest rates over time. However, given that the cost of borrowing is closely linked to creditworthiness (ability to repay), different yield curves will apply to different currencies, market sectors, or even individual issuers. As government borrowing is indicative of interest rate levels available to other market players in a particular country, and considering that bond issuance still remains the dominant form of sovereign debt, this paper describes yield curve construction using bonds. The relationship between zero-coupon yield, par yield and yield to maturity is given and their usage in determining curve discount factors is described. Their usage in deriving forward rates and pricing related derivative instruments is also discussed.
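The par-yield-to-discount-factor relationship described above can be sketched as a simple bootstrap. This is an illustrative reconstruction assuming annual coupons and a unit notional, not code from the article:

```python
def bootstrap_discount_factors(par_yields):
    """Bootstrap discount factors from annual par yields.
    A par bond prices at 1: 1 = c * sum(df_1..df_n) + df_n,
    solved forward one maturity at a time."""
    dfs = []
    for c in par_yields:
        dfs.append((1.0 - c * sum(dfs)) / (1.0 + c))
    return dfs

def forward_rates(dfs):
    """One-period forward rates implied by the discount factors."""
    out = [1.0 / dfs[0] - 1.0]
    out += [dfs[i - 1] / dfs[i] - 1.0 for i in range(1, len(dfs))]
    return out
```

With a flat par curve at 5%, the bootstrap reproduces df_n = 1.05^-n and every forward rate equals 5%, a useful sanity check on the implementation.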

  6. SRHA calibration curve

    Data.gov (United States)

    U.S. Environmental Protection Agency — a UV calibration curve for SRHA quantitation. This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...

  7. Bragg Curve Spectroscopy

    International Nuclear Information System (INIS)

    Gruhn, C.R.

    1981-05-01

    An alternative utilization is presented for the gaseous ionization chamber in the detection of energetic heavy ions, which is called Bragg Curve Spectroscopy (BCS). Conceptually, BCS involves using the maximum data available from the Bragg curve of the stopping heavy ion (HI) for purposes of identifying the particle and measuring its energy. A detector has been designed that measures the Bragg curve with high precision. From the Bragg curve, the range is determined from the length of the track, the total energy from the integral of the specific ionization over the track, the dE/dx from the specific ionization at the beginning of the track, and the Bragg peak from the maximum of the specific ionization of the HI. This last signal measures the atomic number, Z, of the HI unambiguously
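The quantities listed above are all simple functionals of the sampled Bragg curve. A schematic extraction could look like the following, where the profile and its numbers are entirely synthetic stand-ins, not the detector's actual signal processing:

```python
import numpy as np

# Synthetic specific-ionization profile dE/dx(x): rises along the track
# and is largest at the end of range (a crude stand-in for a Bragg curve).
x = np.linspace(0.0, 10.0, 501)      # depth along the track (cm)
dedx = 1.0 + 0.3 * x                 # MeV/cm, made-up linear profile

track_range = x[-1]                  # range, from the track length
total_energy = np.trapz(dedx, x)     # total energy = integral of dE/dx
entrance_dedx = dedx[0]              # dE/dx at the start of the track
bragg_peak = dedx.max()              # peak height, used to identify Z
```

For this linear stand-in the trapezoidal integral is exact: 1.0·10 + 0.3·10²/2 = 25 MeV.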

  8. ROBUST DECLINE CURVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Sutawanir Darwis

    2012-05-01

    Full Text Available Empirical decline curve analysis of oil production data gives reasonable answers in hyperbolic type curve situations; however, the methodology has limitations in fitting real historical production data in the presence of unusual observations due to the effect of treatments applied to the well in order to increase production capacity. The development of robust least squares offers new possibilities for better fitting production data using decline curve analysis by down-weighting the unusual observations. This paper proposes a robust least squares fitting (lmRobMM) approach to estimate the decline rate of daily production data and compares the results with reservoir simulation results. As a case study, we use the oil production data at TBA Field, West Java. The results demonstrate that the approach is suitable for decline curve fitting and offers new insight into decline curve analysis in the presence of unusual observations.
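The paper's lmRobMM is an R routine for MM-type robust regression; a loosely analogous sketch in Python fits an Arps hyperbolic decline with a robust soft-L1 loss to down-weight treatment spikes. All parameter values below are invented for illustration and do not come from the TBA Field data:

```python
import numpy as np
from scipy.optimize import least_squares

def hyperbolic_decline(t, qi, di, b):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)**(1/b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1000.0, 200)                   # days
q_true = hyperbolic_decline(t, 1000.0, 0.004, 0.8)  # bbl/day, made-up well
q_obs = q_true * (1.0 + 0.02 * rng.standard_normal(t.size))
q_obs[::25] *= 1.5                                  # spikes after well treatments

fit = least_squares(
    lambda p: hyperbolic_decline(t, *p) - q_obs,
    x0=(800.0, 0.01, 0.5),
    bounds=([1.0, 1e-5, 0.01], [1e5, 1.0, 2.0]),
    loss="soft_l1", f_scale=20.0,                   # robust loss down-weights spikes
)
qi_hat, di_hat, b_hat = fit.x
```

With a plain squared loss the treatment spikes drag the fitted initial rate upward; the soft-L1 loss keeps it near the true 1000 bbl/day, which is the behaviour the abstract attributes to robust fitting.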

  9. Power Curve Measurements FGW

    DEFF Research Database (Denmark)

    Georgieva Yankova, Ginka; Federici, Paolo

    This report describes power curve measurements carried out on a given turbine in a chosen period. The measurements are carried out in accordance to IEC 61400-12-1 Ed. 1 and FGW Teil 2.

  10. Curves and Abelian varieties

    CERN Document Server

    Alexeev, Valery; Clemens, C Herbert; Beauville, Arnaud

    2008-01-01

    This book is devoted to recent progress in the study of curves and abelian varieties. It discusses both classical aspects of this deep and beautiful subject as well as two important new developments, tropical geometry and the theory of log schemes. In addition to original research articles, this book contains three surveys devoted to singularities of theta divisors, of compactified Jacobians of singular curves, and of "strange duality" among moduli spaces of vector bundles on algebraic varieties.

  11. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
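The efficiency point can be illustrated outside SAS with a small Monte Carlo: when growth-model errors are heavy-tailed, a likelihood that matches the true error distribution estimates the slope with visibly less variance than ordinary least squares. This is an illustrative sketch with a frequentist t-likelihood, not the paper's Bayesian MCMC setup, and all numbers are synthetic:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 80)            # "time" covariate of a linear growth model
X = np.column_stack([np.ones_like(t), t])

def t_mle_slope(y, x0):
    """Slope estimated under a Student-t(3) likelihood (the true error law)."""
    def nll(p):
        a, b, log_s = p
        return -stats.t.logpdf(y - a - b * t, df=3, scale=np.exp(log_s)).sum()
    return optimize.minimize(nll, x0=x0, method="Nelder-Mead").x[1]

ols_slopes, t_slopes = [], []
for _ in range(300):
    y = 1.0 + 2.0 * t + stats.t.rvs(3, size=t.size, random_state=rng)
    a0, b0 = np.linalg.lstsq(X, y, rcond=None)[0]   # normal-likelihood (OLS) fit
    ols_slopes.append(b0)
    t_slopes.append(t_mle_slope(y, x0=(a0, b0, 0.0)))

var_ols, var_t = np.var(ols_slopes), np.var(t_slopes)
```

Across replications the t-likelihood slope has markedly smaller variance: a misspecified normal likelihood pays an efficiency price, which is the loss the abstract says correct specification avoids.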

  12. Global equivalent magnetization of the oceanic lithosphere

    Science.gov (United States)

    Dyment, J.; Choi, Y.; Hamoudi, M.; Lesur, V.; Thebault, E.

    2015-11-01

    As a by-product of the construction of a new World Digital Magnetic Anomaly Map over oceanic areas, we use an original approach based on the global forward modeling of seafloor spreading magnetic anomalies and their comparison to the available marine magnetic data to derive the first map of the equivalent magnetization over the World's ocean. This map reveals consistent patterns related to the age of the oceanic lithosphere, the spreading rate at which it was formed, and the presence of mantle thermal anomalies which affect seafloor spreading and the resulting lithosphere. As for the age, the equivalent magnetization decreases significantly during the first 10-15 Myr after its formation, probably due to the alteration of crustal magnetic minerals under pervasive hydrothermal alteration, then increases regularly between 20 and 70 Ma, reflecting variations in the field strength or source effects such as the acquisition of a secondary magnetization. As for the spreading rate, the equivalent magnetization is twice as strong in areas formed at a fast rate as in those formed at a slow rate, with a threshold at ∼40 km/Myr, in agreement with an independent global analysis of the amplitude of Anomaly 25. This result, combined with those from the study of the anomalous skewness of marine magnetic anomalies, allows building a unified model for the magnetic structure of normal oceanic lithosphere as a function of spreading rate. Finally, specific areas affected by thermal mantle anomalies at the time of their formation exhibit peculiar equivalent magnetization signatures, such as the cold Australian-Antarctic Discordance, marked by a lower magnetization, and several hotspots, marked by a high magnetization.

  13. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian system by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for

  14. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  15. Symmetries of dynamically equivalent theories

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D.M.; Tyutin, I.V. [Sao Paulo Univ., SP (Brazil). Inst. de Fisica; Lebedev Physics Institute, Moscow (Russian Federation)

    2006-03-15

    A natural and very important development of constrained system theory is a detailed study of the relation between the constraint structure in the Hamiltonian formulation and specific features of the theory in the Lagrangian formulation, especially the relation between the constraint structure and the symmetries of the Lagrangian action. An important preliminary step in this direction is a strict demonstration, and this is the aim of the present article, that the symmetry structures of the Hamiltonian action and of the Lagrangian action are the same. Once this is proved, it is sufficient to consider the symmetry structure of the Hamiltonian action. The latter problem is, in some sense, simpler because the Hamiltonian action is a first-order action. At the same time, the study of the symmetry of the Hamiltonian action naturally involves Hamiltonian constraints as basic objects. One can see that the Lagrangian and Hamiltonian actions are dynamically equivalent. This is why, in the present article, we consider from the very beginning a more general problem: how the symmetry structures of dynamically equivalent actions are related. First, we present some necessary notions and relations concerning infinitesimal symmetries in general, as well as a strict definition of dynamically equivalent actions. Finally, we demonstrate that there exists an isomorphism between classes of equivalent symmetries of dynamically equivalent actions. (author)

  16. Development of a statistically-based lower bound fracture toughness curve (Ksub(IR) curve)

    International Nuclear Information System (INIS)

    Wullaert, R.A.; Server, W.L.; Oldfield, W.; Stahlkopf, K.E.

    1977-01-01

    A program of initiation fracture toughness measurements on fifty heats of nuclear pressure vessel production materials (including weldments) was used to develop a methodology for establishing a revised reference toughness curve. The new methodology was statistically developed and provides a predefined confidence limit (or tolerance limit) for fracture toughness based upon many heats of a particular type of material. Overall reference curves were developed for seven specific materials using large specimen static and dynamic fracture toughness results. The heat-to-heat variation was removed by normalizing both the fracture toughness and temperature data with the precracked Charpy tanh curve coefficients for each particular heat. The variance and distribution about the curve were determined, and lower bounds of predetermined statistical significance were drawn based upon a Pearson distribution in the lower shelf region (since the data were skewed to high values) and a t-distribution in the transition temperature region (since the data were normally distributed)

  17. Uniformly accelerating charged particles. A threat to the equivalence principle

    International Nuclear Information System (INIS)

    Lyle, Stephen N.

    2008-01-01

    There has been a long debate about whether uniformly accelerated charges should radiate electromagnetic energy and how one should describe their worldline through a flat spacetime, i.e., whether the Lorentz-Dirac equation is right. There are related questions in curved spacetimes, e.g., do different varieties of equivalence principle apply to charged particles, and can a static charge in a static spacetime radiate electromagnetic energy? The problems with the LD equation in flat spacetime are spelt out in some detail here, and its extension to curved spacetime is discussed. Different equivalence principles are compared and some vindicated. The key papers are discussed in detail and many of their conclusions are significantly revised by the present solution. (orig.)

  18. Curved electromagnetic missiles

    International Nuclear Information System (INIS)

    Myers, J.M.; Shen, H.M.; Wu, T.T.

    1989-01-01

    Transient electromagnetic fields can exhibit interesting behavior in the limit of great distances from their sources. In situations of finite total radiated energy, the energy reaching a distant receiver can decrease with distance much more slowly than the usual r⁻². Cases of such slow decrease have been referred to as electromagnetic missiles. All of the wide variety of known missiles propagate in essentially straight lines. A sketch is presented here of a missile that can follow a path that is strongly curved. An example of a curved electromagnetic missile is explicitly constructed and some of its properties are discussed. References to details available elsewhere are given

  19. Algebraic curves and cryptography

    CERN Document Server

    Murty, V Kumar

    2010-01-01

    It is by now a well-known paradigm that public-key cryptosystems can be built using finite Abelian groups and that algebraic geometry provides a supply of such groups through Abelian varieties over finite fields. Of special interest are the Abelian varieties that are Jacobians of algebraic curves. All of the articles in this volume are centered on the theme of point counting and explicit arithmetic on the Jacobians of curves over finite fields. The topics covered include Schoof's ℓ-adic point counting algorithm, the p-adic algorithms of Kedlaya and Denef-Vercauteren, explicit arithmetic on

  20. IGMtransmission: Transmission curve computation

    Science.gov (United States)

    Harrison, Christopher M.; Meiksin, Avery; Stock, David

    2015-04-01

    IGMtransmission is a Java graphical user interface that implements Monte Carlo simulations to compute the corrections to colors of high-redshift galaxies due to intergalactic attenuation based on current models of the Intergalactic Medium. The effects of absorption due to neutral hydrogen are considered, with particular attention to the stochastic effects of Lyman Limit Systems. Attenuation curves are produced, as well as colors for a wide range of filter responses and model galaxy spectra. Photometric filters are included for the Hubble Space Telescope, the Keck telescope, the Mt. Palomar 200-inch, the SUBARU telescope and UKIRT; alternative filter response curves and spectra may be readily uploaded.

  1. Teleparallel equivalent of Lovelock gravity

    Science.gov (United States)

    González, P. A.; Vásquez, Yerko

    2015-12-01

    There is a growing interest in modified gravity theories based on torsion, as these theories exhibit interesting cosmological implications. In this work inspired by the teleparallel formulation of general relativity, we present its extension to Lovelock gravity known as the most natural extension of general relativity in higher-dimensional space-times. First, we review the teleparallel equivalent of general relativity and Gauss-Bonnet gravity, and then we construct the teleparallel equivalent of Lovelock gravity. In order to achieve this goal, we use the vielbein and the connection without imposing the Weitzenböck connection. Then, we extract the teleparallel formulation of the theory by setting the curvature to null.

  2. Attainment of radiation equivalency principle

    International Nuclear Information System (INIS)

    Shmelev, A.N.; Apseh, V.A.

    2004-01-01

    Problems connected with the prospects for long-term development of nuclear energetics are discussed. Basic principles of future large-scale nuclear energetics are listed; primary attention is paid to the safety of radioactive waste management. The radiation equivalence principle implies closure of the fuel cycle and management of nuclear material transportation with low losses in spent fuel and waste processing. Two aspects are considered: radiation equivalence in the global and in the local sense. The necessity of looking for other strategies of fuel cycle and radioactive waste management in full-scale nuclear energy is supported [ru

  3. Experimental and statistical requirements for developing a well-defined K_IR curve. Final report

    International Nuclear Information System (INIS)

    Server, W.L.; Oldfield, W.; Wullaert, R.A.

    1977-05-01

    Further development of a statistically well-defined reference fracture toughness curve to verify and complement the K_IR curve presently specified in Appendix G, Section III of the ASME Code was accomplished by performing critical experiments in small specimen fracture mechanics and improving techniques for statistical analysis of the data. Except for cleavage-initiated fracture, crack initiation was observed to occur prior to maximum load for all of the materials investigated. Initiation fracture toughness values (K_Jc) based on R-curve heat-tinting studies were up to 50 percent less than the previously reported equivalent energy values (K*_d). At upper shelf temperatures, the initiation fracture toughness (K_Jc) generally increased with stress intensification rate. Both K_Jc versus Charpy V-notch and K_Ic versus specimen strength ratio correlations are promising methods for predicting thick-section behavior from small specimens. The previously developed tanh curve fitting procedure was improved to permit estimates of the variances and covariances of the regression coefficients to be computed. The distribution of the fracture toughness data was determined as a function of temperature. Instrumented precracked Charpy results were used to normalize the larger specimen fracture toughness data. The transformed large specimen fracture toughness data are used to generate statistically based lower-bound fracture toughness curves for either static or dynamic test results. A comparison of these lower bound curves with the K_IR curve shows that the K_IR curve is more conservative over most of its range. 143 figures, 26 tables
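The tanh fitting step mentioned above, including the variances and covariances of the regression coefficients, can be sketched with a standard nonlinear least-squares routine. The data and coefficients below are synthetic; the paper's actual regression procedure is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_toughness(T, A, B, T0, C):
    """tanh model for fracture toughness vs temperature:
    K(T) = A + B * tanh((T - T0) / C)."""
    return A + B * np.tanh((T - T0) / C)

rng = np.random.default_rng(1)
T = np.linspace(-150.0, 150.0, 60)                 # test temperatures (deg C)
K_true = tanh_toughness(T, 100.0, 80.0, -20.0, 50.0)
K_obs = K_true + 5.0 * rng.standard_normal(T.size) # noisy toughness data

popt, pcov = curve_fit(tanh_toughness, T, K_obs, p0=(90.0, 70.0, 0.0, 40.0))
stderr = np.sqrt(np.diag(pcov))   # per-coefficient standard errors
```

Here pcov is the estimated covariance matrix of the regression coefficients, the kind of object needed to propagate fit uncertainty into statistically based lower-bound curves.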

  4. Learning from uncertain curves

    DEFF Research Database (Denmark)

    Mallasto, Anton; Feragen, Aasa

    2017-01-01

    We introduce a novel framework for statistical analysis of populations of nondegenerate Gaussian processes (GPs), which are natural representations of uncertain curves. This allows inherent variation or uncertainty in function-valued data to be properly incorporated in the population analysis. Us...

  5. Power Curve Measurements

    DEFF Research Database (Denmark)

    Federici, Paolo; Kock, Carsten Weber

    This report describes the power curve measurements performed with a nacelle LIDAR on a given wind turbine in a wind farm and during a chosen measurement period. The measurements and analysis are carried out in accordance to the guidelines in the procedure “DTU Wind Energy-E-0019” [1]. The reporting...

  6. Power Curve Measurements, FGW

    DEFF Research Database (Denmark)

    Vesth, Allan; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of power performance of the turbine.

  7. Power Curve Measurements

    DEFF Research Database (Denmark)

    Federici, Paolo; Vesth, Allan

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of power performance of the turbine.

  8. Power Curve Measurements

    DEFF Research Database (Denmark)

    Villanueva, Héctor; Gómez Arranz, Paula

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of power performance of the turbine.

  9. Carbon Lorenz Curves

    NARCIS (Netherlands)

    Groot, L.F.M.|info:eu-repo/dai/nl/073642398

    2008-01-01

    The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across countries.

  10. The Axial Curve Rotator.

    Science.gov (United States)

    Hunter, Walter M.

    This document contains detailed directions for constructing a device that mechanically produces the three-dimensional shape resulting from the rotation of any algebraic line or curve around either axis on the coordinate plane. The device was developed in response to student difficulty in visualizing, and thus grasping the mathematical principles…

  11. Nacelle lidar power curve

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Wagner, Rozenn

    This report describes the power curve measurements performed with a nacelle LIDAR on a given wind turbine in a wind farm and during a chosen measurement period. The measurements and analysis are carried out in accordance to the guidelines in the procedure “DTU Wind Energy-E-0019” [1]. The reporting...

  12. Power curve report

    DEFF Research Database (Denmark)

    Vesth, Allan; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of power performance of the turbine.

  13. Textbook Factor Demand Curves.

    Science.gov (United States)

    Davis, Joe C.

    1994-01-01

    Maintains that teachers and textbook graphics follow the same basic pattern in illustrating changes in demand curves when product prices increase. Asserts that the use of computer graphics will enable teachers to be more precise in their graphic presentation of price elasticity. (CFR)

  14. ECM using Edwards curves

    NARCIS (Netherlands)

    Bernstein, D.J.; Birkner, P.; Lange, T.; Peters, C.P.

    2013-01-01

    This paper introduces EECM-MPFQ, a fast implementation of the elliptic-curve method of factoring integers. EECM-MPFQ uses fewer modular multiplications than the well-known GMP-ECM software, takes less time than GMP-ECM, and finds more primes than GMP-ECM. The main improvements above the
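Part of the speed of Edwards-curve ECM comes from the curves' single complete addition law on x² + y² = 1 + d·x²y². A toy sketch over a small prime field illustrates the formulas (p = 13 and d = 2, a non-square, are chosen only for illustration; real ECM uses cryptographic-scale moduli and optimized coordinates):

```python
P, D = 13, 2   # tiny prime field and a non-square d -> complete Edwards curve

def edwards_add(p1, p2):
    """Complete Edwards addition on x^2 + y^2 = 1 + D*x^2*y^2 over F_P:
    x3 = (x1*y2 + y1*x2) / (1 + D*x1*x2*y1*y2)
    y3 = (y1*y2 - x1*x2) / (1 - D*x1*x2*y1*y2)
    Denominators never vanish for points on a complete curve."""
    x1, y1 = p1
    x2, y2 = p2
    t = D * x1 * x2 * y1 * y2
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, P) % P
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, -1, P) % P
    return x3, y3

def scalar_mul(k, pt):
    """k * pt by repeated addition, starting from the neutral element (0, 1)."""
    acc = (0, 1)
    for _ in range(k):
        acc = edwards_add(acc, pt)
    return acc
```

The point (1, 0) lies on this curve and has order 4: doubling it gives (0, 12) = (0, -1), and doubling again returns the neutral element (0, 1).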

  15. Power Curve Measurements FGW

    DEFF Research Database (Denmark)

    Federici, Paolo; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present...... analyze of power performance of the turbine...

  16. Comments on field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1987-01-01

    It is pointed out that often-used arguments based on a short-circuit concept in presentations of field equivalence principles are not correct. An alternative presentation based on the uniqueness theorem is given. It does not contradict the results obtained by using the short-circuit concept...

  17. EQUIVALENCE VERSUS NON-EQUIVALENCE IN ECONOMIC TRANSLATION

    Directory of Open Access Journals (Sweden)

    Cristina, Chifane

    2012-01-01

    Full Text Available This paper aims at highlighting the fact that “equivalence” represents a concept worth revisiting and detailing upon when tackling the translation process of economic texts both from English into Romanian and from Romanian into English. Far from being exhaustive, our analysis will focus upon the problems arising from the lack of equivalence at the word level. Consequently, relevant examples from the economic field will be provided to account for the following types of non-equivalence at word level: culture-specific concepts; the source language concept is not lexicalised in the target language; the source language word is semantically complex; differences in physical and interpersonal perspective; differences in expressive meaning; differences in form; differences in frequency and purpose of using specific forms and the use of loan words in the source text. Likewise, we shall illustrate a number of translation strategies necessary to deal with the afore-mentioned cases of non-equivalence: translation by a more general word (superordinate; translation by a more neutral/less expressive word; translation by cultural substitution; translation using a loan word or loan word plus explanation; translation by paraphrase using a related word; translation by paraphrase using unrelated words; translation by omission and translation by illustration.

  18. Determination of electron depth-dose curves for water, ICRU tissue, and PMMA and their application to radiation protection dosimetry

    International Nuclear Information System (INIS)

    Grosswendt, B.

    1994-01-01

    For monoenergetic electrons in the energy range between 60 keV and 10 MeV, normally incident on water, 4-element ICRU tissue and PMMA phantoms, depth-dose curves have been calculated using the Monte Carlo method. The phantoms' shape was that of a rectangular solid with a square front face of 30 cm x 30 cm and a thickness of 15 cm; it corresponds to that recommended by the ICRU for use in the procedure of calibrating radiation protection dosemeters. The depth-dose curves have been used to determine practical ranges, half-value depths, electron fluence to maximum absorbed dose conversion factors, and conversion factors between electron fluence and absorbed dose at depths d corresponding to 0.007 g·cm⁻², 0.3 g·cm⁻², and 1.0 g·cm⁻². The latter data can be used as fluence to dose equivalent conversion factors for extended parallel electron beams. (Author)

  19. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  20. Carbon Lorenz Curves

    Energy Technology Data Exchange (ETDEWEB)

    Groot, L. [Utrecht University, Utrecht School of Economics, Janskerkhof 12, 3512 BL Utrecht (Netherlands)

    2008-11-15

    The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across countries. These tools allow policy-makers and the general public to grasp at a single glance the impact of conventional distribution rules such as equal caps or grandfathering, or more sophisticated ones, on the distribution of greenhouse gas emissions. Second, using the Samuelson rule for the optimal provision of a public good, the Pareto-optimal distribution of carbon emissions is compared with the distribution that follows if countries follow Nash-Cournot abatement strategies. It is shown that the Pareto-optimal distribution under the Samuelson rule can be approximated by the equal cap division, represented by the diagonal in the Lorenz curve diagram.
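Lorenz curves and Gini indices of the kind the paper applies to emissions are straightforward to compute. A minimal sketch, using hypothetical per-country emissions rather than the paper's data:

```python
import numpy as np

def lorenz_gini(values):
    """Lorenz curve ordinates and Gini index of a nonnegative vector
    (here: per-country carbon emissions)."""
    v = np.sort(np.asarray(values, dtype=float))
    lorenz = np.insert(np.cumsum(v) / v.sum(), 0, 0.0)  # L(0)=0 ... L(1)=1
    area = np.trapz(lorenz, dx=1.0 / v.size)            # area under the curve
    return lorenz, 1.0 - 2.0 * area                     # Gini = 1 - 2*area

emissions = [0.3, 0.5, 1.2, 2.0, 9.8]   # made-up national totals (GtCO2)
curve, gini = lorenz_gini(emissions)
```

Equal caps correspond to the diagonal (Gini 0); the further the curve sags below the diagonal, the more unequal the distribution of emissions across countries.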

  1. Dynamics of curved fronts

    CERN Document Server

    Pelce, Pierre

    1989-01-01

    In recent years, much progress has been made in the understanding of interface dynamics of various systems: hydrodynamics, crystal growth, chemical reactions, and combustion. Dynamics of Curved Fronts is an important contribution to this field and will be an indispensable reference work for researchers and graduate students in physics, applied mathematics, and chemical engineering. The book consists of a 100 page introduction by the editor and 33 seminal articles from various disciplines.

  2. International Wage Curves

    OpenAIRE

    David G. Blanchflower; Andrew J. Oswald

    1992-01-01

The paper provides evidence for the existence of a negatively sloped locus linking the level of pay to the rate of regional (or industry) unemployment. This "wage curve" is estimated using microeconomic data for Britain, the US, Canada, Korea, Austria, Italy, Holland, Switzerland, Norway, and Germany. The average unemployment elasticity of pay is approximately -0.1. The paper sets out a multi-region efficiency wage model and argues that its predictions are consistent with the data.

  3. Anatomical curve identification

    Science.gov (United States)

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943

  4. Estimating Corporate Yield Curves

    OpenAIRE

    Antionio Diaz; Frank Skinner

    2001-01-01

This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black, Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...

  5. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatic tuning of method parameters (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can connect natively to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  6. Denotational Aspects of Untyped Normalization by Evaluation

    DEFF Research Database (Denmark)

    Filinski, Andrzej; Rohde, Henning Korsholm

    2005-01-01

of soundness (the output term, if any, is in normal form and ß-equivalent to the input term); identification (ß-equivalent terms are mapped to the same result); and completeness (the function is defined for all terms that do have normal forms). We also show how the semantic construction enables a simple yet formal correctness proof for the normalization algorithm, expressed as a functional program in an ML-like, call-by-value language. Finally, we generalize the construction to produce an infinitary variant of normal forms, namely Böhm trees. We show that the three-part characterization of correctness...
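The paper expresses its normalization algorithm as an ML-like functional program, which is not reproduced in the record. Purely as an illustration of the general normalization-by-evaluation idea (not the authors' code), here is a minimal sketch for the untyped lambda calculus in Python: terms are evaluated to semantic values (Python closures, plus neutral terms for free variables), then reified back to ß-normal syntax.

```python
# Hedged sketch of untyped normalization by evaluation (NbE), not the
# paper's ML program. Terms are tuples:
#   ('var', x) | ('lam', x, body) | ('app', f, a)
import itertools

_fresh = itertools.count()           # supply of fresh variable names for reification

def evaluate(term, env):
    """Map a term to a semantic value: ('fun', f) or a neutral ('ne', term)."""
    tag = term[0]
    if tag == 'var':
        return env.get(term[1], ('ne', ('var', term[1])))   # free vars stay neutral
    if tag == 'lam':
        _, x, body = term
        return ('fun', lambda v: evaluate(body, {**env, x: v}))
    _, f, a = term                   # application
    return apply_value(evaluate(f, env), evaluate(a, env))

def apply_value(f, a):
    if f[0] == 'fun':
        return f[1](a)               # beta-reduction happens in the metalanguage
    return ('ne', ('app', f[1], reify(a)))

def reify(value):
    """Read a semantic value back into a beta-normal term."""
    if value[0] == 'ne':
        return value[1]
    x = 'v%d' % next(_fresh)         # fresh name for the bound variable
    return ('lam', x, reify(value[1](('ne', ('var', x)))))

def normalize(term):
    return reify(evaluate(term, {}))

# (\x.\y.x) a b  normalizes to the free variable a
k = ('lam', 'x', ('lam', 'y', ('var', 'x')))
print(normalize(('app', ('app', k, ('var', 'a')), ('var', 'b'))))   # ('var', 'a')
```

The sketch exhibits the soundness property informally: the output is in normal form and ß-equivalent to the input, and terms without normal forms simply make the Python evaluator diverge.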

  7. Equivalent statistics and data interpretation.

    Science.gov (United States)

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
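The mathematical equivalence described above can be illustrated for one pair of summaries: for an independent two-sample t test with known group sizes, the t statistic and Cohen's d are one-to-one transformations of each other, so each carries the same information. A sketch (not the paper's code):

```python
# Sketch of the one-to-one relationship the paper exploits: with known group
# sizes n1 and n2, the two-sample t statistic and Cohen's d interconvert via
# d = t * sqrt(1/n1 + 1/n2), so neither summary adds information.
import math

def cohens_d_from_t(t, n1, n2):
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def t_from_cohens_d(d, n1, n2):
    return d / math.sqrt(1.0 / n1 + 1.0 / n2)

t, n1, n2 = 2.5, 20, 20            # illustrative values
d = cohens_d_from_t(t, n1, n2)
print(round(d, 4), round(t_from_cohens_d(d, n1, n2), 4))   # d ~ 0.79; round-trips to t = 2.5
```

The same style of deterministic transformation links t (with the sample sizes) to its p value and to a JZS Bayes factor, which is the sense in which the statistics are "equivalent" while their interpretations differ.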

  8. Equivalent nozzle in thermomechanical problems

    International Nuclear Information System (INIS)

    Cesari, F.

    1977-01-01

When analyzing nuclear vessels, it is most important to study the behavior of the nozzle cylinder-cylinder intersection. In the elastic field, this analysis in three dimensions is quite easy using the method of finite elements. The same analysis in the non-linear field becomes difficult for 3-D designs. It is therefore necessary to find a two-dimensional nozzle model equivalent to the 3-D nozzle. The purpose of the present work is to find an equivalent nozzle under both a mechanical and a thermal load. This has been achieved by the three-dimensional analysis of a nozzle and of a nozzle cylinder-sphere intersection of a different radius. The equivalent nozzle will be a nozzle with a sphere radius in a given ratio to the radius of a cylinder; thus, the maximum equivalent stress is the same in both 2-D and 3-D. The nozzle examined derives from the intersection of a cylindrical vessel of radius R=191.4 mm and thickness T=6.7 mm with a cylindrical nozzle of radius r=24.675 mm and thickness t=1.350 mm, for which the experimental results for an internal pressure load are known. The structure was subdivided into 96 finite, three-dimensional, isoparametric elements with 60 degrees of freedom and 661 total nodes. Both the mechanical-load and the thermal-load analyses were carried out on this structure with the Bersafe system. The thermal load consisted of a transient typical of an accident in a sodium-cooled fast reactor, with a peak sodium temperature of 540 °C inside the vessel and an insulating argon temperature held constant at 525 °C. The maximum value of the equivalent stress was found in the internal area at the junction towards the vessel side. The analysis of the nozzle in 2-D consists in schematizing the structure as a cylinder-sphere intersection, where the sphere has a given relation to the

  9. 21 CFR 26.9 - Equivalence determination.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Equivalence determination. 26.9 Section 26.9 Food... Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.9 Equivalence determination... document insufficient evidence of equivalence, lack of opportunity to assess equivalence or a determination...

  10. Information Leakage from Logically Equivalent Frames

    Science.gov (United States)

    Sher, Shlomi; McKenzie, Craig R. M.

    2006-01-01

    Framing effects are said to occur when equivalent frames lead to different choices. However, the equivalence in question has been incompletely conceptualized. In a new normative analysis of framing effects, we complete the conceptualization by introducing the notion of information equivalence. Information equivalence obtains when no…

  11. Wijsman Orlicz Asymptotically Ideal -Statistical Equivalent Sequences

    Directory of Open Access Journals (Sweden)

    Bipan Hazarika

    2013-01-01

    in Wijsman sense and present some definitions which are the natural combination of the definition of asymptotic equivalence, statistical equivalent, -statistical equivalent sequences in Wijsman sense. Finally, we introduce the notion of Cesaro Orlicz asymptotically -equivalent sequences in Wijsman sense and establish their relationship with other classes.

  12. Equivalence relations of AF-algebra extensions

    Indian Academy of Sciences (India)

In this paper, we consider equivalence relations of C*-algebra extensions and describe the relationship between the isomorphism equivalence and the unitary equivalence. We also show that a certain group homomorphism is the obstruction for these equivalence relations to be the same.

  13. Mixed field dose equivalent measuring instruments

    International Nuclear Information System (INIS)

    Brackenbush, L.W.; McDonald, J.C.; Endres, G.W.R.; Quam, W.

    1985-01-01

In the past, separate instruments have been used to monitor dose equivalent from neutrons and gamma rays. It has been demonstrated that it is now possible to simultaneously measure neutron and gamma dose with a single instrument, the tissue equivalent proportional counter (TEPC). With appropriate algorithms, dose equivalent can also be determined from the TEPC. A simple ''pocket rem meter'' for measuring neutron dose equivalent has already been developed. Improved algorithms for determining dose equivalent for mixed fields are presented. (author)
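The record does not give the algorithms themselves. As an illustration of the general idea of weighting absorbed dose by a quality factor, here is a sketch using the (later) ICRP 60 Q(L) relationship and an invented lineal-energy spectrum; it is not the paper's TEPC algorithm.

```python
# Sketch: dose equivalent from a binned LET spectrum, H = sum(Q(L) * D),
# using the ICRP 60 quality factor Q(L). The spectrum below is illustrative
# only; a real TEPC algorithm works from the measured microdosimetric data.
import math

def quality_factor(L):
    """ICRP 60 quality factor as a function of unrestricted LET (keV/um)."""
    if L < 10:
        return 1.0
    if L <= 100:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

def dose_equivalent(spectrum):
    """spectrum: list of (absorbed_dose_Gy, LET_keV_per_um) bins -> H in Sv."""
    return sum(d * quality_factor(L) for d, L in spectrum)

# Illustrative mixed-field spectrum: a gamma-like bin and two neutron-like bins
spectrum = [(1e-3, 0.3), (5e-4, 25.0), (2e-4, 150.0)]
print(dose_equivalent(spectrum))
```

The gamma-like component (low LET) contributes with Q = 1, while the neutron-like bins are weighted up, which is exactly why a single mixed-field instrument must resolve the LET distribution.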

  14. Malware Normalization

    OpenAIRE

    Christodorescu, Mihai; Kinder, Johannes; Jha, Somesh; Katzenbeisser, Stefan; Veith, Helmut

    2005-01-01

    Malware is code designed for a malicious purpose, such as obtaining root privilege on a host. A malware detector identifies malware and thus prevents it from adversely affecting a host. In order to evade detection by malware detectors, malware writers use various obfuscation techniques to transform their malware. There is strong evidence that commercial malware detectors are susceptible to these evasion tactics. In this paper, we describe the design and implementation of a malware normalizer ...

  15. Uniformization of elliptic curves

    OpenAIRE

Ülkem, Özge

    2015-01-01

Every elliptic curve E defined over C is analytically isomorphic to C*/qZ for some q ∊ C*. Similarly, Tate has shown that if E is defined over a p-adic field K, then E is analytically isomorphic to K*/qZ for some q ∊ K*. Further, the isomorphism E(K̄) ≅ K̄*/qZ respects the action of the Galois group Gal(K̄/K), where K̄ is the algebraic closure of K. I will explain the construction of this isomorphism.

  16. The radiobiology of boron neutron capture therapy: Are ''photon-equivalent'' doses really photon-equivalent?

    International Nuclear Information System (INIS)

    Coderre, J.A.; Diaz, A.Z.; Ma, R.

    2001-01-01

Boron neutron capture therapy (BNCT) produces a mixture of radiation dose components. The high-linear energy transfer (LET) particles are more damaging in tissue than equal doses of low-LET radiation. Each of the high-LET components can be multiplied by an experimentally determined factor to adjust for the increased biological effectiveness, and the resulting sum expressed in photon-equivalent units (Gy-Eq). BNCT doses in photon-equivalent units are based on a number of assumptions. It may be possible to test the validity of these assumptions and the accuracy of the calculated BNCT doses by 1) comparing the effects of BNCT in other animal or biological models where the effects of photon radiation are known, or 2) relating endpoints reached in the BNCT dose escalation clinical trials to the known response to photons of the tissue in question. The calculated Gy-Eq BNCT doses delivered to dogs and to humans with BPA and the epithermal neutron beam of the Brookhaven Medical Research Reactor were compared to expected responses to photon irradiation. The data indicate that Gy-Eq doses in brain may be underestimated. Doses to skin are consistent with the expected response to photons. Gy-Eq doses to tumor are significantly overestimated. A model system of cells in culture irradiated at various depths in a lucite phantom using the epithermal beam is under development. Preliminary data indicate that this approach can be used to detect differences in the relative biological effectiveness of the beam. The rat 9L gliosarcoma cell survival data were converted to photon-equivalent doses using the same factors assumed in the clinical studies. The results, superimposed on the survival curve derived from irradiation with Cs-137 photons, indicate the potential utility of this model system. (author)

  17. Roc curves for continuous data

    CERN Document Server

    Krzanowski, Wojtek J

    2009-01-01

Since ROC curves have become ubiquitous in many application areas, the various advances have been scattered across disparate articles and texts. ROC Curves for Continuous Data is the first book solely devoted to the subject, bringing together all the relevant material to provide a clear understanding of how to analyze ROC curves. The book first discusses the fundamental theory of ROC curves and the relationship between the ROC curve and numerous performance measures, and then extends the theory into practice by describing how ROC curves are estimated. Further building on the theory, the authors prese...
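The empirical ROC curve and the area under it can be computed in a few lines. A stdlib-Python sketch with invented scores (ties between scores are not grouped here, which production implementations handle):

```python
# Sketch: empirical ROC curve and AUC for continuous scores (stdlib only).
# Tied scores are not grouped, which real implementations take care of.

def roc_points(scores, labels):
    """(FPR, TPR) points obtained by sweeping the threshold over the scores."""
    pairs = sorted(zip(scores, labels), reverse=True)   # most confident first
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))
    return pts

def auc(pts):
    """Area under the curve by the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]   # classifier scores (illustrative)
labels = [1, 1, 0, 1, 0, 0]                # 1 = case, 0 = control
print(round(auc(roc_points(scores, labels)), 3))   # 0.889
```

The resulting AUC equals the Mann-Whitney probability that a randomly chosen case scores higher than a randomly chosen control, one of the performance-measure relationships the book develops.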

  18. Derived equivalences for group rings

    CERN Document Server

    König, Steffen

    1998-01-01

    A self-contained introduction is given to J. Rickard's Morita theory for derived module categories and its recent applications in representation theory of finite groups. In particular, Broué's conjecture is discussed, giving a structural explanation for relations between the p-modular character table of a finite group and that of its "p-local structure". The book is addressed to researchers or graduate students and can serve as material for a seminar. It surveys the current state of the field, and it also provides a "user's guide" to derived equivalences and tilting complexes. Results and proofs are presented in the generality needed for group theoretic applications.

  19. Normal accidents

    International Nuclear Information System (INIS)

    Perrow, C.

    1989-01-01

    The author has chosen numerous concrete examples to illustrate the hazardousness inherent in high-risk technologies. Starting with the TMI reactor accident in 1979, he shows that it is not only the nuclear energy sector that bears the risk of 'normal accidents', but also quite a number of other technologies and industrial sectors, or research fields. The author refers to the petrochemical industry, shipping, air traffic, large dams, mining activities, and genetic engineering, showing that due to the complexity of the systems and their manifold, rapidly interacting processes, accidents happen that cannot be thoroughly calculated, and hence are unavoidable. (orig./HP) [de

  20. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1987-11-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or ''cap'' values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab
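The ICRP-26 effective dose equivalent referred to above is a weighted sum of organ dose equivalents, H_E = Σ_T w_T·H_T. A sketch with the ICRP 26 tissue weighting factors and illustrative organ doses (the paper's phantom calculations are not reproduced):

```python
# Sketch of the ICRP 26 effective dose equivalent: a weighted sum of organ
# dose equivalents, H_E = sum_T w_T * H_T. Organ doses are illustrative.
ICRP26_WEIGHTS = {
    'gonads': 0.25, 'breast': 0.15, 'red_bone_marrow': 0.12,
    'lung': 0.12, 'thyroid': 0.03, 'bone_surfaces': 0.03, 'remainder': 0.30,
}

def effective_dose_equivalent(organ_doses_sv):
    """organ_doses_sv: {tissue: dose equivalent in Sv} -> H_E in Sv."""
    return sum(ICRP26_WEIGHTS[t] * h for t, h in organ_doses_sv.items())

uniform = {t: 1.0 for t in ICRP26_WEIGHTS}          # 1 Sv to every organ
print(round(effective_dose_equivalent(uniform), 6))  # 1.0: the weights sum to one
```

For a uniform whole-body field the weighted sum reproduces the uniform dose, whereas a non-uniform field (as delivered by an external source to an anthropomorphic phantom) gives an H_E below the maximum organ dose, consistent with the paper's finding that "cap"-value conversions overestimate it.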

  1. Object-Image Correspondence for Algebraic Curves under Projections

    Directory of Open Access Journals (Sweden)

    Joseph M. Burdis

    2013-03-01

We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of a number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  2. Editorial: New operational dose equivalent quantities

    International Nuclear Information System (INIS)

    Harvey, J.R.

    1985-01-01

    The ICRU Report 39 entitled ''Determination of Dose Equivalents Resulting from External Radiation Sources'' is briefly discussed. Four new operational dose equivalent quantities have been recommended in ICRU 39. The 'ambient dose equivalent' and the 'directional dose equivalent' are applicable to environmental monitoring and the 'individual dose equivalent, penetrating' and the 'individual dose equivalent, superficial' are applicable to individual monitoring. The quantities should meet the needs of day-to-day operational practice, while being acceptable to those concerned with metrological precision, and at the same time be used to give effective control consistent with current perceptions of the risks associated with exposure to ionizing radiations. (U.K.)

  3. Curved Josephson junction

    International Nuclear Information System (INIS)

    Dobrowolski, Tomasz

    2012-01-01

A constant-curvature, one- and quasi-one-dimensional Josephson junction is considered. On the basis of the Maxwell equations, a sine–Gordon equation describing the influence of curvature on kink motion is obtained. It is shown that the method of geometrical reduction of the sine–Gordon model from a three- to a lower-dimensional manifold leads to an identical form of the sine–Gordon equation. - Highlights: ► The research on dynamics of the phase in a curved Josephson junction is performed. ► The geometrical reduction is applied to the sine–Gordon model. ► The results of geometrical reduction and the fundamental research are compared.

  4. Curved-Duct

    Directory of Open Access Journals (Sweden)

    Je Hyun Baekt

    2000-01-01

A numerical study is conducted on the fully-developed laminar flow of an incompressible viscous fluid in a square duct rotating about an axis perpendicular to the axial direction of the duct. In the straight duct, the rotation produces vortices due to the Coriolis force. Generally two vortex cells are formed and the axial velocity distribution is distorted by the effect of this Coriolis force. When the convective force is weak, two counter-rotating vortices appear with a quasi-parabolic axial velocity profile for weak rotation rates. As the rotation rate increases, the axial velocity on the vertical centreline of the duct begins to flatten and the vorticity centre moves toward the wall under the effect of the Coriolis force. When the convective inertia force is strong, a double-vortex secondary flow appears in the transverse planes of the duct for weak rotation rates, but as the speed of rotation increases the secondary flow splits into an asymmetric configuration of four counter-rotating vortices. If the rotation rate is increased further, the secondary flow restabilizes to a slightly asymmetric double-vortex configuration. A numerical study is also conducted on the laminar flow of an incompressible viscous fluid in a 90°-bend square duct that rotates about an axis parallel to the axial direction of the inlet. In the 90°-bend square duct, the flow is shaped by both the centrifugal and Coriolis forces: the secondary flow is driven by the centrifugal force in the curved region and by the Coriolis force in the downstream region, where these forces respectively dominate.

  5. Elliptic curves for applications (Tutorial)

    NARCIS (Netherlands)

    Lange, T.; Bernstein, D.J.; Chatterjee, S.

    2011-01-01

    More than 25 years ago, elliptic curves over finite fields were suggested as a group in which the Discrete Logarithm Problem (DLP) can be hard. Since then many researchers have scrutinized the security of the DLP on elliptic curves with the result that for suitably chosen curves only exponential
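For concreteness, the group in question is given by the chord-and-tangent law on curve points. A toy sketch over a tiny prime field follows; curves used in practice have primes of roughly 256 bits, and the hardness of the DLP is precisely the difficulty of inverting the scalar multiplication shown at the end.

```python
# Toy sketch of the elliptic-curve group law over F_p (short Weierstrass
# form y^2 = x^3 + ax + b). Parameters are illustrative toys only.
# Requires Python 3.8+ for the modular inverse pow(x, -1, p).

def ec_add(P, Q, a, p):
    """Chord-and-tangent addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                       # P + (-P) = O
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add scalar multiplication; the DLP asks to invert this."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

p, a = 17, 0                      # toy curve y^2 = x^3 + 7 over F_17 (b = 7)
P = (1, 5)                        # on the curve: 5^2 = 8 = 1^3 + 7 (mod 17)
print(ec_mul(2, P, a, p))         # (2, 10)
```

Note that the group law itself never uses b; it only matters for checking that points lie on the curve. "Suitably chosen curves" in the abstract refers to avoiding special families (e.g. curves where the DLP transfers to an easier group).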

  6. Titration Curves: Fact and Fiction.

    Science.gov (United States)

    Chamberlain, John

    1997-01-01

    Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…
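The spreadsheet prediction mentioned above reduces, for a strong acid titrated with a strong base, to a mole balance evaluated at each added volume. A sketch of that calculation (an assumption-laden simplification: water autoionization is ignored, so values extremely close to the equivalence point are rough):

```python
# Sketch: predicted pH during titration of a strong acid with a strong base,
# the calculation a spreadsheet column would carry out row by row. Water
# autoionization is ignored, so points very near equivalence are approximate.
import math

def ph_strong_acid_base(ca, va_ml, cb, vb_ml):
    """pH after adding vb_ml of base (molarity cb) to va_ml of acid (ca)."""
    va, vb = va_ml / 1000.0, vb_ml / 1000.0
    diff = (ca * va - cb * vb) / (va + vb)   # excess acid (+) or base (-), mol/L
    if diff > 0:
        return -math.log10(diff)             # excess H+
    if diff < 0:
        return 14.0 + math.log10(-diff)      # excess OH-, via pH = 14 - pOH at 25 degC
    return 7.0                               # equivalence point (25 degC)

# 25 mL of 0.1 M HCl titrated with 0.1 M NaOH: note the steep jump near 25 mL
for vb in (0.0, 12.5, 24.9, 25.0, 25.1, 40.0):
    print(vb, round(ph_strong_acid_base(0.1, 25.0, 0.1, vb), 2))
```

The sharp pH jump around the equivalence volume is the "fact" of the article's title; comparing such predicted curves with datalogged measurements exposes the sources of error it discusses.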

  7. Reconstructing Normality

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Fristed, Peter Billeskov

    2012-01-01

Forensic psychiatry is an area of priority for the Danish Government. As the field expands, this calls for increased knowledge about mental health nursing practice, as this is part of the forensic psychiatry treatment offered. However, only sparse research exists in this area. The aim of this study was to investigate the characteristics of forensic mental health nursing staff interaction with forensic mental health inpatients and to explore how staff give meaning to these interactions. The project included 32 forensic mental health staff members, with over 307 hours of participant observations, 48 informal... The intention is to establish a trusting relationship to form behaviour and perceptual-corrective care, which is characterized by staff's endeavours to change, halt, or support the patient's behaviour or perception in relation to staff's perception of normality. The intention is to support and teach the patient...

  8. Pursuing Normality

    DEFF Research Database (Denmark)

    Madsen, Louise Sofia; Handberg, Charlotte

    2018-01-01

BACKGROUND: The present study explored the reflections on cancer survivorship care of lymphoma survivors in active treatment. Lymphoma survivors have survivorship care needs, yet their participation in cancer survivorship care programs is still reported as low. OBJECTIVE: The aim of this study was to understand the reflections on cancer survivorship care of lymphoma survivors to aid the future planning of cancer survivorship care and overcome barriers to participation. METHODS: Data were generated in a hematological ward during 4 months of ethnographic fieldwork, including participant observation and 46... ...implying an influence on whether to participate in cancer survivorship care programs. Because of "pursuing normality," 8 of 9 participants opted out of cancer survivorship care programming due to prospects of "being cured" and perceptions of cancer survivorship care as "a continuation of the disease...

  9. Foreword: Biomonitoring Equivalents special issue.

    Science.gov (United States)

    Meek, M E; Sonawane, B; Becker, R A

    2008-08-01

    The challenge of interpreting results of biomonitoring for environmental chemicals in humans is highlighted in this Foreword to the Biomonitoring Equivalents (BEs) special issue of Regulatory Toxicology and Pharmacology. There is a pressing need to develop risk-based tools in order to empower scientists and health professionals to interpret and communicate the significance of human biomonitoring data. The BE approach, which integrates dosimetry and risk assessment methods, represents an important advancement on the path toward achieving this objective. The articles in this issue, developed as a result of an expert panel meeting, present guidelines for derivation of BEs, guidelines for communication using BEs and several case studies illustrating application of the BE approach for specific substances.

  10. Radiological equivalent of chemical pollutants

    International Nuclear Information System (INIS)

    Medina, V.O.

    1982-01-01

The development of the peaceful uses of nuclear energy has prompted continued effort toward public safety through radiation health protection measures and nuclear management practices. However, comparable attention has not been focused on the chemical and petrochemical industries, or on other industrial processes brought about by technological advancement. This article presents a comparison of the risks of radiation and chemicals. The methods used for comparing the risks of late effects of radiation and chemicals are considered at three levels: (a) as a frame of reference to give an impression of the resolving power of biological tests; (b) as methods to quantify risks; (c) as instruments for an epidemiological survey of human populations. There are marked dissimilarities between chemicals and radiation, and efforts to interpret chemical activity may not succeed. Applicability of the concept of rad equivalence has many restrictions and, as pointed out, this approach is not an established one. (RTD)

  11. Tissue equivalence in neutron dosimetry

    International Nuclear Information System (INIS)

    Nutton, D.H.; Harris, S.J.

    1980-01-01

A brief review is presented of the essential features of neutron tissue equivalence for radiotherapy, together with the results of a computation of relative absorbed dose for 14 MeV neutrons using various tissue models. It is concluded that, for the Bragg-Gray equation in ionometric dosimetry, it is not sufficient to define the value of W to high accuracy; for dosimetric measurements to be applicable to real body tissue to an accuracy of better than several per cent, a correction to the total absorbed dose must be made according to the test and tissue atomic composition, although variations in patient anatomy and other radiotherapy parameters will often limit the benefits of such detailed dosimetry. (U.K.)

  12. Rotor equivalent wind speed for power curve measurement – comparative exercise for IEA Wind Annex 32

    DEFF Research Database (Denmark)

    Wagner, Rozenn; Cañadillas, B.; Clifton, A.

    2014-01-01

    was the definition of the segment area used as weighting for the wind speeds measured at the various heights in the calculation of the REWS. This comparative exercise showed that the REWS method results in a significant difference compared to the standard method using the wind speed at hub height in conditions...

  13. Expanding the Interaction Equivalency Theorem

    Directory of Open Access Journals (Sweden)

    Brenda Cecilia Padilla Rodriguez

    2015-06-01

Although interaction is recognised as a key element for learning, its incorporation in online courses can be challenging. The interaction equivalency theorem provides guidelines: Meaningful learning can be supported as long as one of three types of interactions (learner-content, learner-teacher and learner-learner) is present at a high level. This study sought to apply this theorem to the corporate sector, and to expand it to include other indicators of course effectiveness: satisfaction, knowledge transfer, business results and return on expectations. A large Mexican organisation participated in this research, with 146 learners, 30 teachers and 3 academic assistants. Three versions of an online course were designed, each emphasising a different type of interaction. Data were collected through surveys, exams, observations, activity logs, think aloud protocols and sales records. All course versions yielded high levels of effectiveness, in terms of satisfaction, learning and return on expectations. Yet, course design did not dictate the types of interactions in which students engaged within the courses. Findings suggest that the interaction equivalency theorem can be reformulated as follows: In corporate settings, an online course can be effective in terms of satisfaction, learning, knowledge transfer, business results and return on expectations, as long as (a) at least one of three types of interaction (learner-content, learner-teacher or learner-learner) features prominently in the design of the course, and (b) course delivery is consistent with the chosen type of interaction. Focusing on only one type of interaction carries a high risk of confusion, disengagement or missed learning opportunities, which can be managed by incorporating other forms of interactions.

  14. A Journey Between Two Curves

    Directory of Open Access Journals (Sweden)

    Sergey A. Cherkis

    2007-03-01

    Full Text Available A typical solution of an integrable system is described in terms of a holomorphic curve and a line bundle over it. The curve provides the action variables while the time evolution is a linear flow on the curve's Jacobian. Even though the system of Nahm equations is closely related to the Hitchin system, the curves appearing in these two cases have very different nature. The former can be described in terms of some classical scattering problem while the latter provides a solution to some Seiberg-Witten gauge theory. This note identifies the setup in which one can formulate the question of relating the two curves.

  15. Equivalent damage of loads on pavements

    CSIR Research Space (South Africa)

    Prozzi, JA

    2009-05-26

    Full Text Available This report describes a new methodology for the determination of Equivalent Damage Factors (EDFs) of vehicles with multiple axle and wheel configurations on pavements. The basic premise of this new procedure is that "equivalent pavement response...

  16. Investigation of Equivalent Circuit for PEMFC Assessment

    International Nuclear Information System (INIS)

    Myong, Kwang Jae

    2011-01-01

    Chemical reactions occurring in a PEMFC are dominated by the physical conditions and interface properties, and the reactions are expressed in terms of impedance. The performance of a PEMFC can be simply diagnosed by examining the impedance because impedance characteristics can be expressed by an equivalent electrical circuit. In this study, the characteristics of a PEMFC are assessed using the AC impedance and various equivalent circuits such as a simple equivalent circuit, equivalent circuit with a CPE, equivalent circuit with two RCs, and equivalent circuit with two CPEs. It was found in this study that the characteristics of a PEMFC could be assessed using impedance and an equivalent circuit, and the accuracy was highest for an equivalent circuit with two CPEs

  17. 46 CFR 175.540 - Equivalents.

    Science.gov (United States)

    2010-10-01

    ... Safety Management (ISM) Code (IMO Resolution A.741(18)) for the purpose of determining that an equivalent... Organization (IMO) “Code of Safety for High Speed Craft” as an equivalent to compliance with applicable...

  18. Equivalence principle and quantum mechanics: quantum simulation with entangled photons.

    Science.gov (United States)

    Longhi, S

    2018-01-15

    Einstein's equivalence principle (EP) states the complete physical equivalence of a gravitational field and corresponding inertial field in an accelerated reference frame. However, to what extent the EP remains valid in non-relativistic quantum mechanics is a controversial issue. To avoid violation of the EP, Bargmann's superselection rule forbids a coherent superposition of states with different masses. Here we suggest a quantum simulation of non-relativistic Schrödinger particle dynamics in non-inertial reference frames, which is based on the propagation of polarization-entangled photon pairs in curved and birefringent optical waveguides and Hong-Ou-Mandel quantum interference measurement. The photonic simulator can emulate superposition of mass states, which would lead to violation of the EP.

  19. Development of the curve of Spee.

    Science.gov (United States)

    Marshall, Steven D; Caspersen, Matthew; Hardinger, Rachel R; Franciscus, Robert G; Aquilino, Steven A; Southard, Thomas E

    2008-09-01

    Ferdinand Graf von Spee is credited with characterizing human occlusal curvature viewed in the sagittal plane. This naturally occurring phenomenon has clinical importance in orthodontics and restorative dentistry, yet we have little understanding of when, how, or why it develops. The purpose of this study was to expand our understanding by examining the development of the curve of Spee longitudinally in a sample of untreated subjects with normal occlusion from the deciduous dentition to adulthood. Records of 16 male and 17 female subjects from the Iowa Facial Growth Study were selected and examined. The depth of the curve of Spee was measured on their study models at 7 time points from ages 4 (deciduous dentition) to 26 (adult dentition) years. The Wilcoxon signed rank test was used to compare changes in the curve of Spee depth between time points. For each subject, the relative eruption of the mandibular teeth was measured from corresponding cephalometric radiographs, and its contribution to the developing curve of Spee was ascertained. In the deciduous dentition, the curve of Spee is minimal. At mean ages of 4.05 and 5.27 years, the average curve of Spee depths are 0.24 and 0.25 mm, respectively. With change to the transitional dentition, corresponding to the eruption of the mandibular permanent first molars and central incisors (mean age, 6.91 years), the curve of Spee depth increases significantly (P < 0.0001) to a mean maximum depth of 1.32 mm. The curve of Spee then remains essentially unchanged until eruption of the second molars (mean age, 12.38 years), when the depth increases (P < 0.0001) to a mean maximum depth of 2.17 mm. In the adolescent dentition (mean age, 16.21 years), the depth decreases slightly (P = 0.0009) to a mean maximum depth of 1.98 mm, and, in the adult dentition (mean age 26.98 years), the curve remains unchanged (P = 0.66), with a mean maximum depth of 2.02 mm. 
No significant differences in curve of Spee development were found between

  20. Soil Water Retention Curve

    Science.gov (United States)

    Johnson, L. E.; Kim, J.; Cifelli, R.; Chandra, C. V.

    2016-12-01

Potential water retention, S, is one of the parameters commonly used in hydrologic modeling for soil moisture accounting. Physically, S indicates the total amount of water which can be stored in the soil and is expressed in units of depth. S can be represented as a change of soil moisture content and in this context is commonly used to estimate direct runoff, especially in the Soil Conservation Service (SCS) curve number (CN) method. Generally, both lumped and distributed hydrologic models can easily use the SCS-CN method to estimate direct runoff. Changes in potential water retention have been used in previous SCS-CN studies; however, these studies have focused on long-term hydrologic simulations where S is allowed to vary at the daily time scale. While useful for hydrologic events that span multiple days, the resolution is too coarse for short-term applications such as flash flood events where S may not recover its full potential. In this study, a new method for estimating a time-variable potential water retention at hourly time-scales is presented. The methodology is applied to the Napa River basin, California. The streamflow gage at St Helena, located in the upper reaches of the basin, is used as the control gage site to evaluate the model performance, as it has minimal influence from reservoirs and diversions. Rainfall events from 2011 to 2012 are used for estimating the event-based SCS CN to transfer to S. As a result, we have derived the potential water retention curve, and it is classified into three sections depending on the relative change in S. The first is a negative slope section arising from the difference in the rate of water moving through the soil column, the second is a zero change section representing the initial recovery of the potential water retention, and the third is a positive change section representing the full recovery of the potential water retention. Also, we found that soil water movement experiences a traffic jam within 24 hours after finished first
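
The standard SCS-CN relationships underlying the abstract above can be sketched in a few lines. This is a generic illustration of the textbook curve-number equations (S = 25400/CN − 254 in mm; Q = (P − Ia)²/(P − Ia + S) once rainfall exceeds the initial abstraction), not the hourly time-variable retention method the study itself develops:

```python
def potential_retention_mm(cn):
    """Potential maximum retention S (mm) from a curve number CN (metric form)."""
    return 25400.0 / cn - 254.0

def direct_runoff_mm(p_mm, cn, ia_ratio=0.2):
    """SCS-CN direct runoff (mm) for a rainfall depth p_mm.

    ia_ratio is the initial-abstraction fraction of S; 0.2 is the
    conventional default.
    """
    s = potential_retention_mm(cn)
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0          # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 50 mm of rain on a CN = 80 watershed
s = potential_retention_mm(80)   # 63.5 mm
q = direct_runoff_mm(50.0, 80)   # ~13.8 mm of direct runoff
```

The study's contribution is to let S recover on an hourly time scale between events; a fixed-CN computation like the one above is the starting point such a scheme would update.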

  1. Some spectral equivalences between Schroedinger operators

    International Nuclear Information System (INIS)

    Dunning, C; Hibberd, K E; Links, J

    2008-01-01

    Spectral equivalences of the quasi-exactly solvable sectors of two classes of Schroedinger operators are established, using Gaudin-type Bethe ansatz equations. In some instances the results can be extended leading to full isospectrality. In this manner we obtain equivalences between PT-symmetric problems and Hermitian problems. We also find equivalences between some classes of Hermitian operators

  2. The definition of the individual dose equivalent

    International Nuclear Information System (INIS)

    Ehrlich, Margarete

    1986-01-01

    A brief note examines the choice of the present definition of the individual dose equivalent, the new operational dosimetry quantity for external exposure. The consequences of the use of the individual dose equivalent and the danger facing the individual dose equivalent, as currently defined, are briefly discussed. (UK)

  3. A NURBS approximation of experimental stress-strain curves

    International Nuclear Information System (INIS)

    Fedorov, Timofey V.; Morrev, Pavel G.

    2016-01-01

    A compact universal representation of monotonic experimental stress-strain curves of metals and alloys is proposed. It is based on the nonuniform rational Bezier splines (NURBS) of second order and may be used in a computer library of materials. Only six parameters per curve are needed; this is equivalent to a specification of only three points in a stress-strain plane. NURBS-functions of higher order prove to be surplus. Explicit expressions for both yield stress and hardening modulus are given. Two types of curves are considered: at a finite interval of strain and at infinite one. A broad class of metals and alloys of various chemical compositions subjected to various types of preliminary thermo-mechanical working is selected from a comprehensive data base in order to test the methodology proposed. The results demonstrate excellent correspondence to the experimental data. Keywords: work hardening, stress-strain curve, spline approximation, nonuniform rational B-spline, NURBS.

  4. Strength Estimation of Die Cast Beams Considering Equivalent Porous Defects

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon Shik [Hannam Univ., Daejeon (Korea, Republic of)

    2017-05-15

    As a shop practice, a strength estimation method for die cast parts is suggested, in which various defects such as pores can be allowed. The equivalent porosity is evaluated by combining the stiffness data from a simple elastic test at the part level during the shop practice and the theoretical stiffness data, which are defect free. A porosity equation is derived from Eshelby's inclusion theory. Then, using the Mori-Tanaka method, the porosity value is used to draw a stress-strain curve for the porous material. In this paper, the Hollomon equation is used to capture the strain hardening effect. This stress-strain curve can be used to estimate the strength of a die cast part with porous defects. An elastoplastic theoretical solution is derived for the three-point bending of a die cast beam by using the plastic hinge method as a reference solution for a part with porous defects.

  5. Fermions in curved spacetimes

    Energy Technology Data Exchange (ETDEWEB)

    Lippoldt, Stefan

    2016-01-21

    In this thesis we study a formulation of Dirac fermions in curved spacetime that respects general coordinate invariance as well as invariance under local spin base transformations. We emphasize the advantages of the spin base invariant formalism both from a conceptual as well as from a practical viewpoint. This suggests that local spin base invariance should be added to the list of (effective) properties of (quantum) gravity theories. We find support for this viewpoint by the explicit construction of a global realization of the Clifford algebra on a 2-sphere which is impossible in the spin-base non-invariant vielbein formalism. The natural variables for this formulation are spacetime-dependent Dirac matrices subject to the Clifford-algebra constraint. In particular, a coframe, i.e. vielbein field is not required. We disclose the hidden spin base invariance of the vielbein formalism. Explicit formulas for the spin connection as a function of the Dirac matrices are found. This connection consists of a canonical part that is completely fixed in terms of the Dirac matrices and a free part that can be interpreted as spin torsion. The common Lorentz symmetric gauge for the vielbein is constructed for the Dirac matrices, even for metrics which are not linearly connected. Under certain criteria, it constitutes the simplest possible gauge, demonstrating why this gauge is so useful. Using the spin base formulation for building a field theory of quantized gravity and matter fields, we show that it suffices to quantize the metric and the matter fields. This observation is of particular relevance for field theory approaches to quantum gravity, as it can serve for a purely metric-based quantization scheme for gravity even in the presence of fermions. Hence, in the second part of this thesis we critically examine the gauge, and the field-parametrization dependence of renormalization group flows in the vicinity of non-Gaussian fixed points in quantum gravity. While physical

  6. Electron fluence to dose equivalent conversion factors calculated with EGS3 for electrons and positrons with energies from 100 keV to 20 GeV

    International Nuclear Information System (INIS)

    Rogers, D.W.O.

    1983-01-01

At NRC the general purpose Monte-Carlo electron-photon transport code EGS3 is being applied to a variety of radiation dosimetry problems. To test its accuracy at low energies a detailed set of depth-dose curves for electrons and photons has been generated and compared to previous calculations. It was found that by changing the default step-size algorithm in EGS3, significant changes were obtained for incident electron beam cases. It was found that restricting the step-size to a 4% energy loss was appropriate below incident electron beam energies of 10 MeV. With this change, the calculated depth-dose curves were found to be in reasonable agreement with other calculations right down to incident electron energies of 100 keV, although small (less than or equal to 10%) but persistent discrepancies with the NBS code ETRAN were obtained. EGS3 predicts higher initial dose and shorter range than ETRAN. These discrepancies are typical of a wide range of energies, as is the better agreement with the results of Nahum. Data are presented for the electron fluence to maximal dose equivalent in a 30 cm thick slab of ICRU 4-element tissue irradiated by broad parallel beams of electrons incident normal to the surface. On their own, these values only give an indication of the dose equivalent expected from a spectrum of electrons, since one needs to fold the spectrum with the maximal dose equivalent values. Calculations have also been done for incident positron beams. Despite the large statistical uncertainties, the positron results are similar to the electron maximal dose equivalent, although their values are 5 to 10% lower in a band around 10 MeV

  7. Contribution to the boiling curve of sodium

    International Nuclear Information System (INIS)

    Schins, H.E.J.

    1975-01-01

Sodium in a pool was preheated to saturation temperatures at system pressures of 200, 350 and 500 torr. A test section of normal stainless steel was then extra heated by means of the conical fitting condenser zone of a heat pipe. Measurements were made of heat transfer fluxes, q in W/cm², as a function of wall excess temperature above saturation, Θ = T_w − T_s in °C, both in natural convection and in boiling regimes. These measurements make it possible to select the Subbotin natural convection and nucleate boiling curves among other variants proposed in the literature. Further, it is empirically demonstrated on water that the minimum film boiling point corresponds to the homogeneous nucleation temperature calculated by the Doering formula. Assuming that the minimum film boiling point of sodium can be obtained in the same manner, it is then possible to give an approximate boiling curve of sodium for use in thermal interaction studies. At 1 atm the heat transfer fluxes q versus wall temperatures Θ are: for a point on the natural convection curve, 0.3 W/cm² and 2 °C; for start of boiling, 1.6 W/cm² and 6 °C; for peak heat flux, 360 W/cm² and 37 °C; for minimum film boiling, 30 W/cm² and 905 °C; and for a point on the film boiling curve, 160 W/cm² and 2,000 °C. (orig.)

  8. Equivalent Sensor Radiance Generation and Remote Sensing from Model Parameters. Part 1; Equivalent Sensor Radiance Formulation

    Science.gov (United States)

    Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.

    2013-01-01

In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.

  9. Models of genus one curves

    OpenAIRE

    Sadek, Mohammad

    2010-01-01

    In this thesis we give insight into the minimisation problem of genus one curves defined by equations other than Weierstrass equations. We are interested in genus one curves given as double covers of P1, plane cubics, or complete intersections of two quadrics in P3. By minimising such a curve we mean making the invariants associated to its defining equations as small as possible using a suitable change of coordinates. We study the non-uniqueness of minimisations of the genus one curves des...

  10. An investigation on vulnerability assessment of steel structures with thin steel shear wall through development of fragility curves

    OpenAIRE

    Mohsen Gerami; Saeed Ghaffari; Amir Mahdi Heidari Tafreshi

    2017-01-01

Fragility curves play an important role in the damage assessment of buildings. The probability of damage to a structure under seismic events can be investigated once the aforementioned curves have been generated. In the current research, 360 time-history analyses have been carried out on structures of 3, 10 and 20 stories, and fragility curves have subsequently been developed. The curves are based on two indices: inter-story drifts and equivalent strip axial strains of the shear wall. T...

  11. Curved butterfly bileaflet prosthetic cardiac valve

    Science.gov (United States)

    McQueen, David M.; Peskin, Charles S.

    1991-06-25

An annular valve body having a central passageway for the flow of blood therethrough with two curved leaflets, each of which is pivotally supported on an eccentrically positioned axis in the central passageway for moving between a closed position and an open position. The leaflets are curved in a plane normal to the eccentric axis and positioned with the convex sides of the leaflets facing each other when the leaflets are in the open position. Various parameters such as the curvature of the leaflets, the location of the eccentric axis, and the maximum opening angle of the leaflets are optimized according to the following performance criteria: maximize the minimum peak velocity through the valve, maximize the net stroke volume, and minimize the mean forward pressure difference, thereby reducing thrombosis and improving the hemodynamic performance.

  12. Two R curves for partially stabilized zirconia

    International Nuclear Information System (INIS)

    Rose, L.R.F.; Swain, M.V.

    1986-01-01

    The enhanced fracture toughness due to stress-induced transformation can be explained from two view points: (1) the increase can be attributed to the need to supply a work of transformation, or (2) the transformation can be considered to result in internal stresses which oppose crack opening. Experimental results for magnesia-partially-stabilized zirconia are presented for the two experimental measures of toughness corresponding to these two viewpoints, namely (1) the specific work of fracture, R, and (2) the nominal stress intensity factor, K/sup R/. It is observed that these two measures are not equivalent during the initial stage of R-curve behavior, prior to reaching steady-state cracking. The theoretical reason for this difference is discussed. In particular, it is noted that the usual definition for the crack extension force does not correspond to the experimentally measured work of fracture in the presence of stress-induced (or pre-existing) sources of internal stress

  13. Nonequilibrium recombination after a curved shock wave

    Science.gov (United States)

    Wen, Chihyung; Hornung, Hans

    2010-02-01

    The effect of nonequilibrium recombination after a curved two-dimensional shock wave in a hypervelocity dissociating flow of an inviscid Lighthill-Freeman gas is considered. An analytical solution is obtained with the effective shock values derived by Hornung (1976) [5] and the assumption that the flow is ‘quasi-frozen’ after a thin dissociating layer near the shock. The solution gives the expression of dissociation fraction as a function of temperature on a streamline. A rule of thumb can then be provided to check the validity of binary scaling for experimental conditions and a tool to determine the limiting streamline that delineates the validity zone of binary scaling. The effects on the nonequilibrium chemical reaction of the large difference in free stream temperature between free-piston shock tunnel and equivalent flight conditions are discussed. Numerical examples are presented and the results are compared with solutions obtained with two-dimensional Euler equations using the code of Candler (1988) [10].

  14. An index formula for the self-linking number of a space curve

    DEFF Research Database (Denmark)

    Røgen, Peter

    2008-01-01

    Given an embedded closed space curve with non-vanishing curvature, its self-linking number is defined as the linking number between the original curve and a curve pushed slightly off in the direction of its principal normals. We present an index formula for the self-linking number in terms of the...

  15. Change of annual collective dose equivalent of radiation workers at KURRI

    International Nuclear Information System (INIS)

    Okamoto, Kenichi

    1994-01-01

The change of the exposure dose equivalent of radiation workers at KURRI (Kyoto University Research Reactor Institute) over the past 30 years is reported together with the operational accomplishments. The reactor achieved criticality on June 24, 1964 and reached the normal power of 1000 kW on August 17 of the same year; the normal power was raised to 5000 kW on July 16, 1968 and has remained at that level to this day. The changes of the annual effective dose equivalent, the collective dose equivalent, the average annual dose equivalent and the maximum dose equivalent are indicated in the table and the figure. A chronological table of the activities of the reactor is added. (T.H.)

  16. SOCIAL EQUIVALENT OF FREE ENERGY

    Directory of Open Access Journals (Sweden)

    Josip Stepanic

    2004-06-01

Full Text Available Characterisation of the unbounded resources of a social system within the sociological interpretation has resulted in a large number of different notions, which are relevant in different situations. From the viewpoint of statistical mechanics, these notions resemble free energy. In this paper the concept of social free energy is introduced and first steps toward its development are presented. The social free energy is a function equal to the physical free energy appropriately determined for the social system, with an intrinsically sociological interpretation as a measure of the social action obtainable in a given social system without changes in its structure. Its construction is a consequence of the response of a social system to recognised parts of environment dynamics. It is argued that the development of a social system response resembles exciting the normal modes of a general physical system.

  17. Trend analyses with river sediment rating curves

    Science.gov (United States)

    Warrick, Jonathan A.

    2015-01-01

Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â(Q/Q_GM)^b], where Q_GM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters, â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
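
The discharge-normalized fit can be sketched directly: fitting in log-log space with the log discharges centered on log Q_GM makes â the predicted concentration at the geometric-mean discharge. The data below are synthetic, generated exactly from a known power law so the fit can be checked:

```python
import math

def fit_rating_curve(q, c):
    """Least-squares fit of C = a_hat * (Q/Q_gm)^b in log-log space.

    Q_gm is the geometric mean of the sampled discharges, so a_hat is the
    predicted concentration at Q = Q_gm (the curve's vertical offset).
    """
    n = len(q)
    log_qgm = sum(math.log(x) for x in q) / n
    xs = [math.log(x) - log_qgm for x in q]   # centered log discharge
    ys = [math.log(x) for x in c]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a_hat = math.exp(ybar - b * xbar)
    return a_hat, b

# Synthetic samples drawn exactly from C = 5 * (Q/Q_gm)^1.5
q = [1.0, 2.0, 4.0, 8.0, 16.0]
qgm = math.exp(sum(math.log(x) for x in q) / len(q))   # geometric mean = 4.0
c = [5.0 * (x / qgm) ** 1.5 for x in q]
a_hat, b = fit_rating_curve(q, c)   # recovers a_hat ~ 5, b ~ 1.5
```

Because the log discharges are centered on their mean, the slope and offset estimates decouple, which is exactly why â is a more stable trend metric than the raw offset a.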

  18. Observer-dependent quantum vacua in curved space. II

    International Nuclear Information System (INIS)

    Castagnino, M.A.; Sztrajman, J.B.

    1989-01-01

An observer-dependent Hamiltonian is introduced in order to describe massless spin-1 particles in curved space-times. The vacuum state is defined by means of Hamiltonian diagonalization and minimization, which turn out to be equivalent criteria. This method works in an arbitrary geometry, although a condition on the fluid of observers is required. Computations give the vacua commonly accepted in the literature

  19. Quantum fields in curved space

    International Nuclear Information System (INIS)

    Birrell, N.D.; Davies, P.C.W.

    1982-01-01

    The book presents a comprehensive review of the subject of gravitational effects in quantum field theory. Quantum field theory in Minkowski space, quantum field theory in curved spacetime, flat spacetime examples, curved spacetime examples, stress-tensor renormalization, applications of renormalization techniques, quantum black holes and interacting fields are all discussed in detail. (U.K.)

  20. Technical note: Equivalent genomic models with a residual polygenic effect.

    Science.gov (United States)

    Liu, Z; Goddard, M E; Hayes, B J; Reinhardt, F; Reents, R

    2016-03-01

    Routine genomic evaluations in animal breeding are usually based on either a BLUP with genomic relationship matrix (GBLUP) or single nucleotide polymorphism (SNP) BLUP model. For a multi-step genomic evaluation, these 2 alternative genomic models were proven to give equivalent predictions for genomic reference animals. The model equivalence was verified also for young genotyped animals without phenotypes. Due to incomplete linkage disequilibrium of SNP markers to genes or causal mutations responsible for genetic inheritance of quantitative traits, SNP markers cannot explain all the genetic variance. A residual polygenic effect is normally fitted in the genomic model to account for the incomplete linkage disequilibrium. In this study, we start by showing the proof that the multi-step GBLUP and SNP BLUP models are equivalent for the reference animals, when they have a residual polygenic effect included. Second, the equivalence of both multi-step genomic models with a residual polygenic effect was also verified for young genotyped animals without phenotypes. Additionally, we derived formulas to convert genomic estimated breeding values of the GBLUP model to its components, direct genomic values and residual polygenic effect. Third, we made a proof that the equivalence of these 2 genomic models with a residual polygenic effect holds also for single-step genomic evaluation. Both the single-step GBLUP and SNP BLUP models lead to equal prediction for genotyped animals with phenotypes (e.g., reference animals), as well as for (young) genotyped animals without phenotypes. Finally, these 2 single-step genomic models with a residual polygenic effect were proven to be equivalent for estimation of SNP effects, too. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
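The basic GBLUP/SNP BLUP equivalence discussed above can be demonstrated numerically. The sketch below omits the residual polygenic effect and the usual allele-frequency centering and scaling; it only shows the core matrix identity M(MᵀM + λI)⁻¹Mᵀy = MMᵀ(MMᵀ + λI)⁻¹y, so the two models yield identical genomic breeding values for the same variance ratio. Dimensions and data are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 20                                        # animals, SNP markers
M = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 genotype codes
y = rng.normal(size=n)                              # phenotypes (illustrative)
lam = 2.0                                           # sigma_e^2 / sigma_u^2

# SNP BLUP: ridge solution for marker effects, then GEBV = M @ u_hat
u_hat = np.linalg.solve(M.T @ M + lam * np.eye(m), M.T @ y)
gebv_snp = M @ u_hat

# GBLUP with the (unscaled) genomic relationship matrix G = M M'
G = M @ M.T
gebv_g = G @ np.linalg.solve(G + lam * np.eye(n), y)

# The two parameterizations give identical genomic breeding values
assert np.allclose(gebv_snp, gebv_g)
```

The paper's contribution is showing that this equivalence survives the addition of a residual polygenic effect and carries over to single-step evaluations; the identity above is only the polygenic-free core of that argument.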

  1. Revealing the equivalence of two clonal survival models by principal component analysis

    International Nuclear Information System (INIS)

    Lachet, Bernard; Dufour, Jacques

    1976-01-01

The principal component analysis of 21 chlorella cell survival curves, adjusted by one-hit and two-hit target models, led to quite similar projections on the principal plane: the homologous parameters of these models are linearly correlated; the reason for the statistical equivalence of these two models, in the present state of experimental inaccuracy, is revealed.

  2. Minimally invasive estimation of ventricular dead space volume through use of Frank-Starling curves.

    Directory of Open Access Journals (Sweden)

    Shaun Davidson

    Full Text Available This paper develops a means of more easily and less invasively estimating ventricular dead space volume (Vd, an important, but difficult to measure physiological parameter. Vd represents a subject and condition dependent portion of measured ventricular volume that is not actively participating in ventricular function. It is employed in models based on the time varying elastance concept, which see widespread use in haemodynamic studies, and may have direct diagnostic use. The proposed method involves linear extrapolation of a Frank-Starling curve (stroke volume vs end-diastolic volume and its end-systolic equivalent (stroke volume vs end-systolic volume, developed across normal clinical procedures such as recruitment manoeuvres, to their point of intersection with the y-axis (where stroke volume is 0 to determine Vd. To demonstrate the broad applicability of the method, it was validated across a cohort of six sedated and anaesthetised male Pietrain pigs, encompassing a variety of cardiac states from healthy baseline behaviour to circulatory failure due to septic shock induced by endotoxin infusion. Linear extrapolation of the curves was supported by strong linear correlation coefficients of R = 0.78 and R = 0.80 average for pre- and post- endotoxin infusion respectively, as well as good agreement between the two linearly extrapolated y-intercepts (Vd for each subject (no more than 7.8% variation. Method validity was further supported by the physiologically reasonable Vd values produced, equivalent to 44.3-53.1% and 49.3-82.6% of baseline end-systolic volume before and after endotoxin infusion respectively. This method has the potential to allow Vd to be estimated without a particularly demanding, specialised protocol in an experimental environment. 
Further, due to the common use of both mechanical ventilation and recruitment manoeuvres in intensive care, this method, subject to the availability of multi-beat echocardiography, has the potential to
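    The extrapolation step described above is simple enough to sketch. A minimal illustration (synthetic data, not the porcine measurements from the study; the function name and all values are hypothetical): fit a line to each curve and solve for the volume at which stroke volume reaches zero.

```python
import numpy as np

def estimate_vd(edv, esv):
    """Estimate ventricular dead space volume (Vd) by linearly
    extrapolating the Frank-Starling curve (SV vs EDV) and its
    end-systolic equivalent (SV vs ESV) to the point where SV = 0."""
    edv = np.asarray(edv, dtype=float)
    esv = np.asarray(esv, dtype=float)
    sv = edv - esv                    # stroke volume per cardiac state

    a1, b1 = np.polyfit(edv, sv, 1)   # SV = a1*EDV + b1
    a2, b2 = np.polyfit(esv, sv, 1)   # SV = a2*ESV + b2
    return -b1 / a1, -b2 / a2         # volumes where SV = 0

# Synthetic example: if SV = 0.6*(EDV - 40), both intercepts recover Vd = 40.
edv = np.array([100.0, 110.0, 120.0, 130.0])
esv = edv - 0.6 * (edv - 40.0)
vd_edv, vd_esv = estimate_vd(edv, esv)
```

    Agreement between the two intercepts (exact here, by construction) mirrors the consistency check between the two curves reported in the abstract.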

  3. The Complexity of Identifying Large Equivalence Classes

    DEFF Research Database (Denmark)

    Skyum, Sven; Frandsen, Gudmund Skovbjerg; Miltersen, Peter Bro

    1999-01-01

    We prove that at least (3k−4)/(k(2k−3))·C(n,2) − O(k) equivalence tests and no more than (2/k)·C(n,2) + O(n) equivalence tests are needed in the worst case to identify the equivalence classes with at least k members in a set of n elements. The upper bound is an improvement by a factor of 2 compared to known res...

  4. Equivalent Simplification Method of Micro-Grid

    OpenAIRE

    Cai Changchun; Cao Xiangqin

    2013-01-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into a distribution network. The equivalent simplification method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation, and wind generation are equivalently simplified and connected in parallel at the point of common coupling. A micro-grid system is built, and three-phase and single-phase grounded faults are per...

  5. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed.

  6. Equivalences of real submanifolds in complex space.

    OpenAIRE

    ZAITSEV, DMITRI

    2001-01-01

    We show that for any real-analytic submanifold M in C^N there is a proper real-analytic subvariety V contained in M such that for any p ∈ M \ V, any real-analytic submanifold M′ in C^N, and any p′ ∈ M′, the germs of the submanifolds M and M′ at p and p′ respectively are formally equivalent if and only if they are biholomorphically equivalent. More general results for k-equivalences are also stated and proved.

  7. Relations of equivalence of conditioned radioactive waste

    International Nuclear Information System (INIS)

    Kumer, L.; Szeless, A.; Oszuszky, F.

    1982-01-01

    Compensation for the wastes remaining with the operator of a waste management center, to be paid by the agent having caused the waste, may be assured by establishing a financial valuation (equivalence) of the wastes. Technically and logically, this equivalence between wastes (or, specifically, between different waste categories) and their financial valuation has been established as reasonable. In this paper, the possibility of establishing such equivalences is developed, and their suitability for waste management concepts is quantitatively expressed.

  8. Behavioural equivalence for infinite systems - Partially decidable!

    DEFF Research Database (Denmark)

    Sunesen, Kim; Nielsen, Mogens

    1996-01-01

    languages with two generalizations based on traditional approaches capturing non-interleaving behaviour, pomsets representing global causal dependency, and locality representing spatial distribution of events. We first study equivalences on Basic Parallel Processes, BPP, a process calculus equivalent...... of processes between BPP and TCSP, not only are the two equivalences different, but one (locality) is decidable whereas the other (pomsets) is not. The decidability result for locality is proved by a reduction to the reachability problem for Petri nets....

  9. Equivalence in Bilingual Lexicography: Criticism and Suggestions*

    Directory of Open Access Journals (Sweden)

    Herbert Ernst Wiegand

    2011-10-01

    Full Text Available

    Abstract: A reminder of general problems in the formation of terminology, as illustrated by the German Äquivalenz (Eng. equivalence) and äquivalent (Eng. equivalent), is followed by a critical discussion of the concept of equivalence in contrastive lexicology. It is shown that especially the concept of partial equivalence is contradictory in its different manifestations. Consequently attempts are made to give a more precise indication of the concept of equivalence in metalexicography, with regard to the domain of the nominal lexicon. The problems of especially the metalexicographic concept of partial equivalence as well as that of divergence are fundamentally expounded. In conclusion the direction is indicated to find more appropriate metalexicographic versions of the concept of equivalence.

    Keywords: EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE,CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE, SYNTAGM-EQUIVALENCE,ZERO EQUIVALENCE, CORRESPONDENCE

    Abstract (German): Equivalence in Bilingual Lexicography: Criticism and Suggestions. After recalling general problems of concept formation using the example of German Äquivalenz and äquivalent, equivalence concepts in contrastive lexicology are first examined critically. It is shown that in particular the concept of partial equivalence is contradictory in its various manifestations. Refinements of the equivalence concepts in metalexicography, relating to the domain of the nominal lexicon, are then attempted. In particular, the metalexicographic concepts of partial equivalence and of divergence are fundamentally problematized. In conclusion, it is indicated in which direction one may go to find more appropriate metalexicographic versions of the concept of equivalence.

    Stichwörter: ÄQUIVALENZ, LEXIKOGRAPHISCHES ÄQUIVALENT, PARTIELLE ÄQUIVALENZ,KONGRUENZ, DIVERGENZ, KONVERGENZ, POLYDIVERGENZ

  10. Extended analysis of cooling curves

    International Nuclear Information System (INIS)

    Djurdjevic, M.B.; Kierkus, W.T.; Liliac, R.E.; Sokolowski, J.H.

    2002-01-01

    Thermal Analysis (TA) is the measurement of changes in a physical property of a material that is heated through a phase transformation temperature range. The temperature changes in the material are recorded as a function of the heating or cooling time in such a manner that allows for the detection of phase transformations. In order to increase accuracy, characteristic points on the cooling curve have been identified using the first derivative curve plotted versus time. In this paper, an alternative approach to the analysis of the cooling curve has been proposed. The first derivative curve has been plotted versus temperature and all characteristic points have been identified with the same accuracy achieved using the traditional method. The new cooling curve analysis also enables the Dendrite Coherency Point (DCP) to be detected using only one thermocouple. (author)
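    The first-derivative analysis described above can be sketched numerically. The cooling record below is synthetic (a linear cooling baseline plus a Gaussian thermal-arrest bump), not data from the paper, and the simple argmax detection rule is a stand-in for the authors' procedure.

```python
import numpy as np

# Synthetic cooling record: linear cooling plus a latent-heat arrest.
t = np.linspace(0.0, 300.0, 3001)                                 # time, s
T = 700.0 - 0.8 * t + 25.0 * np.exp(-((t - 120.0) / 15.0) ** 2)   # deg C

dTdt = np.gradient(T, t)   # first-derivative curve

# Traditional method: inspect dT/dt versus time. The paper's
# alternative: plot dT/dt versus temperature, i.e. the pairs
# (T, dTdt), and read off the same characteristic points -- here
# the onset of the arrest as the maximum of the derivative curve.
i_arrest = int(np.argmax(dTdt))
t_arrest, T_arrest = t[i_arrest], T[i_arrest]
```

    On real data the same characteristic points appear in both views; plotting against temperature simply relabels the abscissa of the derivative curve.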

  11. Plasma flow in a curved magnetic field

    International Nuclear Information System (INIS)

    Lindberg, L.

    1977-09-01

    A beam of collisionless plasma is injected along a longitudinal magnetic field into a region of curved magnetic field. Two unpredicted phenomena are observed: the beam becomes deflected in the direction opposite to that in which the field is curved, and it contracts to a flat slab in the plane of curvature of the magnetic field. The phenomenon is of a general character and can be expected to occur in a very wide range of densities. The lower density limit is set by the condition for self-polarization, n m_i/(ε_0 B^2) >> 1 or, equivalently, c^2/v_A^2 >> 1, where c is the velocity of light and v_A the Alfvén velocity. The upper limit is presumably set by the requirement ω_e τ_e >> 1. The phenomenon is likely to be of importance, e.g., for injection of plasma into magnetic bottles and in space and solar physics. The paper illustrates the complexity of plasma flow phenomena and the importance of close contact between experimental and theoretical work. (author)

  12. MAGNETIC CIRCUIT EQUIVALENT OF THE SYNCHRONOUS MOTOR WITH INCORPORATED MAGNETS

    Directory of Open Access Journals (Sweden)

    Fyong Le Ngo

    2015-01-01

    Full Text Available Magnetic circuit computation is one of the central stages in designing a synchronous motor with incorporated magnets, and it can be performed by a simplified equivalent-magnetic-circuit method. The article studies the magnetic circuit of a motor with rotor-incorporated magnets, which includes four sections: the permanent magnets with pole pieces made of magnetically soft steel; the flux-leakage sections containing air barriers and steel bridges; the air gap; and the stator teeth, slots and frame yoke. The authors introduce an equivalent model of the magnetic circuit. High-energy magnets with a linear demagnetization curve are employed as the permanent magnets; two magnets create the magnetic flux of one pole. The drop of magnetic potential in the pole steel is negligible, on the assumption that the pole's magnetic permeability is µ = ∞. The rotor design provides air barriers and steel bridges that close the leakage flux. Magnetic saturation of the bridges is accounted for by linearizing the induction-permeability curve as a polygonal line consisting of two linear sections. The section of the magnetic circuit comprising the teeth and the frame yoke is estimated with account of steel saturation, its magnetic conductivities being dependent on the saturation level. Relying on the equivalent model of the magnetic circuit, the authors derive a system of two equations written from Kirchhoff's first and second laws for magnetic circuits. These equations allow solving two problems: sizing the magnets for a preset value of the air-gap magnetic flux, and determining the air-gap magnetic flux for a preset rotor-and-stator design.
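    The two Kirchhoff-law equations can be illustrated with a toy equivalent circuit. All parameter values below are hypothetical, and the two-mesh topology (magnet branch, leakage branch, air-gap branch) is only an assumed simplification of the structure described above.

```python
import numpy as np

mu0 = 4e-7 * np.pi
Br, l_m, A_m = 1.2, 4e-3, 8e-4    # remanence (T), magnet length (m), area (m^2)
F_m = Br / mu0 * l_m              # magnet MMF (A), from a linear demag. curve
R_m = l_m / (mu0 * 1.05 * A_m)    # magnet internal reluctance (1/H), mu_r ~ 1.05
R_leak = 2.0e6                    # air barriers + steel bridges (1/H)
R_gap = 1.5e6                     # air gap + teeth + frame yoke (1/H)

# Mesh equations from Kirchhoff's first and second laws for the two loops:
#   (R_m + R_leak) * phi1 - R_leak * phi2 = F_m
#  -R_leak * phi1 + (R_leak + R_gap) * phi2 = 0
A = np.array([[R_m + R_leak, -R_leak],
              [-R_leak, R_leak + R_gap]])
phi1, phi_gap = np.linalg.solve(A, np.array([F_m, 0.0]))
```

    Solving for phi_gap given the magnet dimensions is the second problem mentioned in the abstract; fixing phi_gap and solving for l_m and A_m instead gives the first.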

  13. Curve fitting for RHB Islamic Bank annual net profit

    Science.gov (United States)

    Nadarajan, Dineswary; Noor, Noor Fadiya Mohd

    2015-05-01

    The RHB Islamic Bank net profit data are obtained for 2004 to 2012. Curve fitting is done by treating the data either as exact or as experimental data subject to a smoothing process. A higher-order Lagrange polynomial and a cubic spline are constructed with a curve-fitting procedure using Maple software. A normality test is performed to check data adequacy. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level of ANOVA. Residual error and absolute relative true error are calculated and compared. The optimal model, based on the minimum average error, is proposed.
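    The two interpolants can be sketched in Python rather than Maple; the profit figures below are made up for illustration, not RHB's actual data.

```python
import numpy as np
from scipy.interpolate import CubicSpline, lagrange

years = np.arange(2004, 2013, dtype=float)     # 9 annual observations
profit = np.array([35.0, 42.0, 55.0, 61.0, 58.0, 70.0, 88.0, 95.0, 104.0])

x = years - 2004.0            # shift the abscissa to reduce ill-conditioning
poly = lagrange(x, profit)    # degree-8 interpolating polynomial
spl = CubicSpline(x, profit)  # piecewise-cubic interpolant

# Both reproduce the data exactly at the nodes; between nodes the
# high-order polynomial may oscillate (Runge phenomenon) while the
# spline stays smooth -- which is why residual errors are compared.
```

    Comparing residuals of candidate models on held-out or smoothed data, as the abstract describes, then selects between the polynomial and spline representations.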

  14. Investigation of radiological properties and water equivalency of PRESAGE dosimeters

    International Nuclear Information System (INIS)

    Gorjiara, Tina; Hill, Robin; Kuncic, Zdenka; Adamovics, John; Bosi, Stephen; Kim, Jung-Ha; Baldock, Clive

    2011-01-01

    Purpose: PRESAGE is a dosimeter made of polyurethane, which is suitable for 3D dosimetry in modern radiation treatment techniques. Since an ideal dosimeter is radiologically water equivalent, the authors investigated water equivalency and the radiological properties of three different PRESAGE formulations that differ primarily in their elemental compositions. Two of the formulations are new and have lower halogen content than the original formulation. Methods: The radiological water equivalence was assessed by comparing the densities, interaction probabilities, and radiation dosimetry properties of the three different PRESAGE formulations to the corresponding values for water. The relative depth doses were calculated using Monte Carlo methods for 50, 100, 200, and 350 kVp and 6 MV x-ray beams. Results: The mass densities of the three PRESAGE formulations varied from 5.3% higher than that of water to as much as 10% higher than that of water for the original formulation. The probability of photoelectric absorption in the three different PRESAGE formulations varied from 2.2 times greater than that of water for the new formulations to 3.5 times greater than that of water for the original formulation. The mass attenuation coefficient for the three formulations is 12%-50% higher than the value for water. These differences occur over an energy range (10-100 keV) in which the photoelectric effect is the dominant interaction. The collision mass stopping powers of the relatively lower halogen-containing PRESAGE formulations also exhibit marginally better water equivalency than the original higher halogen-containing PRESAGE formulation. Furthermore, the depth dose curves for the lower halogen-containing PRESAGE formulations are slightly closer to that of water for a 6 MV beam. In the kilovoltage energy range, the depth dose curves for the lower halogen-containing PRESAGE formulations are in better agreement with water than the original PRESAGE formulation. 
Conclusions: Based

  15. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from the joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach to sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods, and illustrate them with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
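    The product rule for uncorrelated endpoints can be sketched with a normal-approximation power function for TOST. This is a textbook approximation, not the exact power function the authors derive, and all numbers are illustrative.

```python
import math
from statistics import NormalDist

_nd = NormalDist()

def tost_power(n, sigma, margin, alpha=0.05, true_diff=0.0):
    """Approximate power of TOST for mean equivalence in a parallel
    two-arm design, n subjects per arm, equivalence bounds +/- margin
    (normal approximation, known sigma)."""
    se = sigma * math.sqrt(2.0 / n)
    z_a = _nd.inv_cdf(1.0 - alpha)
    z_lo = (margin + true_diff) / se - z_a
    z_hi = (margin - true_diff) / se - z_a
    return max(0.0, _nd.cdf(z_lo) + _nd.cdf(z_hi) - 1.0)

def joint_power_uncorrelated(n, endpoints, alpha=0.05):
    """Power to pass ALL endpoints when they are uncorrelated: the
    product of per-endpoint powers (the paper adjusts this product
    for correlation between endpoints such as AUC and Cmax)."""
    p = 1.0
    for sigma, margin in endpoints:
        p *= tost_power(n, sigma, margin, alpha)
    return p
```

    The joint power is always below the smallest individual power, which is why the naive "largest single-endpoint n" can undersize a trial that must pass every endpoint.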

  16. Families of bitangent planes of space curves and minimal non-fibration families

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A cone curve is a reduced sextic space curve which lies on a quadric cone and does not pass through the vertex. We classify families of bitangent planes of cone curves. The methods we apply can be used for any space curve with ADE singularities, though in this paper we concentrate on cone curves. An embedded complex projective surface which is adjoint to a degree one weak Del Pezzo surface contains families of minimal degree rational curves, which cannot be defined by the fibers of a map. Such families are called minimal non-fibration families. Families of bitangent planes of cone curves correspond to minimal non-fibration families. The main motivation of this paper is to classify minimal non-fibration families. We present algorithms which compute all bitangent families of a given cone curve and their geometric genus. We consider cone curves to be equivalent if they have the same singularity configuration. For each equivalence class of cone curves we determine the possible number of bitangent families and the number of rational bitangent families. Finally we compute an example of a minimal non-fibration family on an embedded weak degree one Del Pezzo surface.

  17. Multi-MW wind turbine power curve measurements using remote sensing instruments – the first Høvsøre campaign

    DEFF Research Database (Denmark)

    Wagner, Rozenn; Courtney, Michael

    curve significantly. Two LiDARs and a SoDAR are used to measure the wind profile in front of a wind turbine. These profiles are used to calculate the equivalent wind speed. LiDARs are found to be more accurate than SoDARs and therefore more suitable for power performance measurement. The equivalent wind...... that use of the equivalent wind speed at least results in a power curve with no more scatter than using the conventional method....

  18. Computational aspects of algebraic curves

    CERN Document Server

    Shaska, Tanush

    2005-01-01

    The development of new computational techniques and better computing power has made it possible to attack some classical problems of algebraic geometry. The main goal of this book is to highlight such computational techniques related to algebraic curves. The area of research in algebraic curves is receiving more interest not only from the mathematics community, but also from engineers and computer scientists, because of the importance of algebraic curves in applications including cryptography, coding theory, error-correcting codes, digital imaging, computer vision, and many more.This book cove

  19. Equivalent drawbead performance in deep drawing simulations

    NARCIS (Netherlands)

    Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han

    1999-01-01

    Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the

  20. On uncertainties in definition of dose equivalent

    International Nuclear Information System (INIS)

    Oda, Keiji

    1995-01-01

    The author has always entertained the doubt that, in a neutron field, if the measured value of the absorbed dose with a tissue-equivalent ionization chamber is 1.02±0.01 mGy, may the dose equivalent be taken as 10.2±0.1 mSv? Should it be 10.2 or 11? The author considers it is 10 or 20. Even if effort is exerted for precision measurement of the absorbed dose, if the coefficient by which it is multiplied is not precise, the result is meaningless. [Absorbed dose] x [Radiation quality factor] = [Dose equivalent] seems peculiar. How accurately can dose equivalent be evaluated? The descriptions related to uncertainties in the publications of ICRU and ICRP are introduced, concerning the radiation quality factor, the accuracy of measuring dose equivalent and so on. Dose equivalent shows the criterion for the degree of risk, or it is considered only as a controlling quantity. The description in the 1973 ICRU report related to dose equivalent and its unit is cited. It was concluded that dose equivalent can be considered only as the absorbed dose multiplied by a dimensionless factor. The author presented the questions. (K.I.)

  1. Orientifold Planar Equivalence: The Chiral Condensate

    DEFF Research Database (Denmark)

    Armoni, Adi; Lucini, Biagio; Patella, Agostino

    2008-01-01

    The recently introduced orientifold planar equivalence is a promising tool for solving non-perturbative problems in QCD. One of the predictions of orientifold planar equivalence is that the chiral condensates of a theory with $N_f$ flavours of Dirac fermions in the symmetric (or antisymmetric...

  2. 7 CFR 1005.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1005.54 Section 1005.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1005.54 Equivalent price. See § 1000.54. Uniform Prices ...

  3. 7 CFR 1126.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1126.54 Section 1126.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1126.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  4. 7 CFR 1001.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1001.54 Section 1001.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1001.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  5. 7 CFR 1032.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1032.54 Section 1032.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1032.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  6. 7 CFR 1124.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1124.54 Section 1124.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Class Prices § 1124.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  7. 7 CFR 1030.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1030.54 Section 1030.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1030.54 Equivalent price. See § 1000.54. ...

  8. 7 CFR 1033.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1033.54 Section 1033.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1033.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  9. 7 CFR 1131.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1131.54 Section 1131.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1131.54 Equivalent price. See § 1000.54. Uniform Prices ...

  10. 7 CFR 1006.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1006.54 Section 1006.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1006.54 Equivalent price. See § 1000.54. Uniform Prices ...

  11. 7 CFR 1007.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1007.54 Section 1007.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1007.54 Equivalent price. See § 1000.54. Uniform Prices ...

  12. 7 CFR 1000.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1000.54 Section 1000.54 Agriculture... Prices § 1000.54 Equivalent price. If for any reason a price or pricing constituent required for computing the prices described in § 1000.50 is not available, the market administrator shall use a price or...

  13. Finding small equivalent decision trees is hard

    NARCIS (Netherlands)

    Zantema, H.; Bodlaender, H.L.

    2000-01-01

    Two decision trees are called decision equivalent if they represent the same function, i.e., they yield the same result for every possible input. We prove that given a decision tree and a number, deciding whether there is a decision-equivalent decision tree of size at most that number is NP-complete. As

  14. What is Metaphysical Equivalence? | Miller | Philosophical Papers

    African Journals Online (AJOL)

    Theories are metaphysically equivalent just if there is no fact of the matter that could render one theory true and the other false. In this paper I argue that if we are judiciously to resolve disputes about whether theories are equivalent or not, we need to develop testable criteria that will give us epistemic access to the obtaining ...

  15. EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  16. Modeling Patterns of Activities using Activity Curves.

    Science.gov (United States)

    Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen

    2016-06-01

    Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual's normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics.
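    A minimal sketch of comparing two activity curves: the representation (per-time-slot activity distributions) follows the description above, but the symmetric KL distance and the aggregation are assumptions for illustration, not the authors' exact change metric.

```python
import numpy as np

def sym_kl(p, q, eps=1e-9):
    """Symmetrised Kullback-Leibler divergence between two activity
    distributions (smoothed to avoid log(0))."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def change_score(curve_a, curve_b):
    """Mean slot-wise divergence between two activity curves, each a
    (n_slots, n_activities) array of per-slot activity distributions."""
    return float(np.mean([sym_kl(a, b) for a, b in zip(curve_a, curve_b)]))

# Toy curves: 4 time slots, 3 recognized activities per slot.
baseline = np.tile([0.7, 0.2, 0.1], (4, 1))
same = np.tile([0.7, 0.2, 0.1], (4, 1))
shifted = np.tile([0.1, 0.2, 0.7], (4, 1))
```

    A sustained rise of the score between a resident's historical and recent curves would then be flagged for clinical follow-up, in the spirit of the change detection evaluated in the paper.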

  17. Statistical and biophysical aspects of survival curve

    International Nuclear Information System (INIS)

    Kellerer, A.M.

    1980-01-01

    Statistical fluctuations in a series of consecutively obtained survival curves of asynchronous hamster V79 cells under X-ray irradiation are considered. In each of the experiments the fluctuations are close to those expected on the basis of the Poisson distribution. The fluctuation of cell sensitivity in different experiments of one series can reach 10%. Normalizing each experiment to the mean values permits obtaining an ''idealized'' survival curve. The logarithm of survival in this curve is proportional to the absorbed dose and its square only at low radiation doses; in V79 cells in the late S-phase such proportionality is observed at all doses. Using the microdosimetric approach, the distance over which interaction of radiolysis products or sub-lesions takes place, making the dependence of injury on dose non-linear, is determined. In the case of interaction distances of 10-100 nm, the quadratic component is shown to become comparable in value with the linear injury component at doses of the order of several hundred rad only when the interaction distance is close to a micrometre. [ru]

  18. Estimation of Curve Tracing Time in Supercapacitor based PV Characterization

    Science.gov (United States)

    Basu Pal, Sudipta; Das Bhattacharya, Konika; Mukherjee, Dipankar; Paul, Debkalyan

    2017-08-01

    Smooth and noise-free characterisation of photovoltaic (PV) generators has been revisited with renewed interest in view of large-size PV arrays making inroads into the urban sector of major developing countries. Such practice has recently been confronted by the need for a suitable data acquisition system and by the lack of a supporting theoretical analysis to justify the accuracy of curve tracing. However, the use of a selected bank of supercapacitors can mitigate the said problems to a large extent. Assuming a piecewise-linear analysis of the V-I characteristics of a PV generator, an accurate analysis of curve plotting time has been possible. The analysis has been extended to consider the effect of the equivalent series resistance of the supercapacitor, leading to increased accuracy (90-95%) of curve plotting times.
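    The curve-plotting-time analysis can be approximated numerically: for a capacitor charged by the PV generator, t = C·∫dV/I(V) along the I-V curve, with one fixed-point step standing in for the ESR correction. The piecewise-linear I-V curve and all component values below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical piecewise-linear I-V curve of a PV generator.
v_pts = np.array([0.0, 15.0, 18.0, 20.0])   # voltage breakpoints, V
i_pts = np.array([5.0, 4.8, 3.0, 0.0])      # current at breakpoints, A

C = 50.0    # supercapacitor bank capacitance, F
ESR = 0.02  # its equivalent series resistance, ohm

def trace_time(v_end, n=4000):
    """Time for the capacitor voltage to sweep 0 -> v_end while the PV
    current charges it: t = C * integral dV / I(V). One correction step
    accounts for the ESR raising the terminal voltage by I*ESR."""
    v = np.linspace(0.0, v_end, n)
    i = np.interp(v, v_pts, i_pts)
    i = np.interp(v + i * ESR, v_pts, i_pts)   # first-order ESR correction
    i = np.clip(i, 1e-3, None)                 # guard near open circuit
    y = 1.0 / i
    return C * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(v)))
```

    The integrand 1/I(V) blows up near open circuit, which is why trace time grows sharply as the sweep approaches Voc and why the endpoint of the sweep must be chosen with care.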

  19. Irregular conformal block, spectral curve and flow equations

    International Nuclear Information System (INIS)

    Choi, Sang Kwan; Rim, Chaiho; Zhang, Hong

    2016-01-01

    The irregular conformal block is motivated by the Argyres-Douglas type of N=2 superconformal gauge theory. We investigate the classical/NS limit of the irregular conformal block using the spectral curve on a Riemann surface with irregular punctures, which is equivalent to the loop equation of the irregular matrix model. The spectral curve is reduced to second-order (Virasoro symmetry, SU(2) for the gauge theory) and third-order (W_3 symmetry, SU(3)) differential equations of a polynomial with finite degree. The conformal and W symmetries generate the flow equations in the spectral curve and determine the irregular conformal block, hence the partition function of the Argyres-Douglas theory à la the AGT conjecture.

  20. Equivalence in Ventilation and Indoor Air Quality

    Energy Technology Data Exchange (ETDEWEB)

    Sherman, Max; Walker, Iain; Logue, Jennifer

    2011-08-01

    We ventilate buildings to provide acceptable indoor air quality (IAQ). Ventilation standards (such as American Society of Heating, Refrigerating, and Air-Conditioning Engineers [ASHRAE] Standard 62) specify minimum ventilation rates without taking into account the impact of those rates on IAQ. Innovative ventilation management is often a desirable element of reducing energy consumption or improving IAQ or comfort. Variable ventilation is one innovative strategy. To use variable ventilation in a way that meets standards, it is necessary to have a method for determining equivalence in terms of either ventilation or indoor air quality. This study develops methods to calculate either equivalent ventilation or equivalent IAQ. We demonstrate that equivalent ventilation can be used as the basis for dynamic ventilation control, reducing peak load and infiltration of outdoor contaminants. We also show that equivalent IAQ could allow some contaminants to exceed current standards if other contaminants are more stringently controlled.
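    The idea of equivalent ventilation can be sketched with a single-zone contaminant mass balance. The schedule, zone volume, and source strength below are hypothetical, and the equivalence criterion used (equal mean concentration over the cycle) is one simple reading of the concept, not the study's full method.

```python
import numpy as np

def simulate_concentration(q, volume, source, dt=60.0):
    """Explicit-Euler integration of the single-zone mass balance
    V * dC/dt = S - Q(t) * C, returning the concentration trace."""
    c = np.zeros(len(q))
    for k in range(1, len(q)):
        c[k] = c[k - 1] + dt * (source - q[k - 1] * c[k - 1]) / volume
    return c

# Variable schedule: 12 h low flow (night), 12 h high flow (day),
# repeated for two days at one-minute resolution.
sched = np.tile(np.repeat([0.02, 0.08], 12 * 60), 2)   # flows, m^3/s
c = simulate_concentration(sched, volume=250.0, source=1e-4)

day2 = c[len(c) // 2:]        # discard the spin-up day
q_equiv = 1e-4 / day2.mean()  # constant flow giving the same mean C
```

    For this contaminant, q_equiv lands between the low and high flows: the variable schedule is "equivalent" to a single constant ventilation rate, which is the quantity a dynamic controller can be designed to match.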

  1. Beyond Language Equivalence on Visibly Pushdown Automata

    DEFF Research Database (Denmark)

    Srba, Jiri

    2009-01-01

    We study (bi)simulation-like preorder/equivalence checking on the class of visibly pushdown automata and its natural subclasses visibly BPA (Basic Process Algebra) and visibly one-counter automata. We describe generic methods for proving complexity upper and lower bounds for a number of studied...... preorders and equivalences like simulation, completed simulation, ready simulation, 2-nested simulation preorders/equivalences and bisimulation equivalence. Our main results are that all the mentioned equivalences and preorders are EXPTIME-complete on visibly pushdown automata, PSPACE-complete on visibly...... one-counter automata and P-complete on visibly BPA. Our PSPACE lower bound for visibly one-counter automata improves also the previously known DP-hardness results for ordinary one-counter automata and one-counter nets. Finally, we study regularity checking problems for visibly pushdown automata...

  2. On the electrical equivalent circuits of gravitational-wave antennas

    International Nuclear Information System (INIS)

    Pallottino, G.V.; Pizzella, G.; Rome Univ.

    1978-01-01

    The electrical equivalent circuit of a Weber gravitational-wave antenna with piezoelectric transducers is derived for the various longitudinal normal modes by using the Lagrangian formalism. The analysis is applied to the antenna without piezoelectric ceramics, as well as with one or more ceramics operated in both passive and active mode. Particular attention is given to the dissipation problem in order to obtain an expression of the overall merit factor directly related to the physics of the actual dissipation processes. As an example the results are applied to a cylindrical bar with two ceramics: one for calibrating the antenna, the other as sensor of the motion. The values of the physical parameters and of the pertinent parameters of the equivalent circuit for the small antenna (20 kg) and those (predicted) for the intermediate antenna (390 kg) of the Rome group are given in the appendix. (author)

  3. Dirac equation on a curved surface

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, F.T., E-mail: fbrandt@usp.br; Sánchez-Monroy, J.A., E-mail: antosan@usp.br

    2016-09-07

    The dynamics of Dirac particles confined to a curved surface is examined employing the thin-layer method. We perform a perturbative expansion to first-order and split the Dirac field into normal and tangential components to the surface. In contrast to the known behavior of second order equations like Schrödinger, Maxwell and Klein–Gordon, we find that there is no geometric potential for the Dirac equation on a surface. This implies that the non-relativistic limit does not commute with the thin-layer method. Although this problem can be overcome when second-order terms are retained in the perturbative expansion, this would preclude the decoupling of the normal and tangential degrees of freedom. Therefore, we propose to introduce a first-order term which rescues the non-relativistic limit and also clarifies the effect of the intrinsic and extrinsic curvatures on the dynamics of the Dirac particles. - Highlights: • The thin-layer method is employed to derive the Dirac equation on a curved surface. • A geometric potential is absent at least to first-order in the perturbative expansion. • The effects of the extrinsic curvature are included to rescue the non-relativistic limit. • The resulting Dirac equation is consistent with the Heisenberg uncertainty principle.

  4. Melting curves of gamma-irradiated DNA

    International Nuclear Information System (INIS)

    Hofer, H.; Altmann, H.; Kehrer, M.

    1978-08-01

    Melting curves of gamma-irradiated DNA, and data derived from them, are reported. The diminished stability is explained by base destruction. DNA denatures completely at room temperature if at least every fifth base pair is broken or weakened by irradiation. (author)

  5. Management of the learning curve

    DEFF Research Database (Denmark)

    Pedersen, Peter-Christian; Slepniov, Dmitrij

    2016-01-01

    Purpose – This paper focuses on the management of the learning curve in overseas capacity expansions. The purpose of this paper is to unravel the direct as well as indirect influences on the learning curve and to advance the understanding of how these affect its management. Design...... the dimensions of the learning process involved in a capacity expansion project and identified the direct and indirect labour influences on the production learning curve. On this basis, the study proposes solutions to managing learning curves in overseas capacity expansions. Furthermore, the paper concludes...... with measures that have the potential to significantly reduce the non-value-added time when establishing new capacities overseas. Originality/value – The paper uses a longitudinal in-depth case study of a Danish wind turbine manufacturer and goes beyond a simplistic treatment of the lead time and learning...

  6. Flow over riblet curved surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Loureiro, J B R; Freire, A P Silva, E-mail: atila@mecanica.ufrj.br [Mechanical Engineering Program, Federal University of Rio de Janeiro (COPPE/UFRJ), C.P. 68503, 21.941-972, Rio de Janeiro, RJ (Brazil)

    2011-12-22

    The present work studies the mechanics of turbulent drag reduction over curved surfaces by riblets. The effects of surface modification on flow separation over steep and smooth curved surfaces are investigated. Four types of two-dimensional surfaces are studied based on the morphometric parameters that describe the body of a blue whale. Local measurements of mean velocity and turbulence profiles are obtained through laser Doppler anemometry (LDA) and particle image velocimetry (PIV).

  7. Intersection numbers of spectral curves

    CERN Document Server

    Eynard, B.

    2011-01-01

    We compute the symplectic invariants of an arbitrary spectral curve with only 1 branchpoint in terms of integrals of characteristic classes in the moduli space of curves. Our formula associates to any spectral curve, a characteristic class, which is determined by the Laplace transform of the spectral curve. This is a hint to the key role of the Laplace transform in mirror symmetry. When the spectral curve is y=\sqrt{x}, the formula gives Kontsevich--Witten intersection numbers, when the spectral curve is chosen to be the Lambert function \exp{x}=y\exp{-y}, the formula gives the ELSV formula for Hurwitz numbers, and when one chooses the mirror of C^3 with framing f, i.e. \exp{-x}=\exp{-yf}(1-\exp{-y}), the formula gives the Marino-Vafa formula, i.e. the generating function of Gromov-Witten invariants of C^3. In some sense this formula generalizes the ELSV formula, the Marino-Vafa formula, and the Mumford formula.

  8. Dissolution glow curve in LLD

    International Nuclear Information System (INIS)

    Haverkamp, U.; Wiezorek, C.; Poetter, R.

    1990-01-01

    Lyoluminescence dosimetry is based upon light emission during dissolution of previously irradiated dosimetric materials. The lyoluminescence signal is expressed in the dissolution glow curve. These curves begin, depending on the dissolution system, with a high peak followed by an exponentially decreasing intensity. System parameters that influence the graph of the dissolution glow curve, are, for example, injection speed, temperature and pH value of the solution and the design of the dissolution cell. The initial peak does not significantly correlate with the absorbed dose, it is mainly an effect of the injection. The decay of the curve consists of two exponential components: one fast and one slow. The components depend on the absorbed dose and the dosimetric materials used. In particular, the slow component correlates with the absorbed dose. In contrast to the fast component the argument of the exponential function of the slow component is independent of the dosimetric materials investigated: trehalose, glucose and mannitol. The maximum value, following the peak of the curve, and the integral light output are a measure of the absorbed dose. The reason for the different light outputs of various dosimetric materials after irradiation with the same dose is the differing solubility. The character of the dissolution glow curves is the same following irradiation with photons, electrons or neutrons. (author)
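
The abstract's two-component decay can be illustrated with a small fitting sketch: once the fast component has died away, a log-linear least-squares fit to the tail recovers the slow component, whose amplitude and integral track the absorbed dose. The rate constants and amplitudes below are synthetic, for illustration only:

```python
import math

def tail_log_fit(ts, ys):
    """Least-squares line through (t, ln y); returns (amplitude, rate).
    Fitting only the late tail isolates the slow exponential component
    of a dissolution glow curve, as described in the abstract."""
    n = len(ts)
    lys = [math.log(y) for y in ys]
    mt = sum(ts) / n
    ml = sum(lys) / n
    slope = sum((t - mt) * (l - ml) for t, l in zip(ts, lys)) / \
            sum((t - mt) ** 2 for t in ts)
    return math.exp(ml - slope * mt), -slope  # A_slow, k_slow

# Synthetic curve: fast (k=2.0) plus slow (k=0.1) exponential components
ts = [0.1 * i for i in range(200)]
ys = [5.0 * math.exp(-2.0 * t) + 1.0 * math.exp(-0.1 * t) for t in ts]
# Fit only the tail (t >= 10), where the fast component is negligible
tail = [(t, y) for t, y in zip(ts, ys) if t >= 10.0]
A_slow, k_slow = tail_log_fit([t for t, _ in tail], [y for _, y in tail])
```

The recovered rate and amplitude match the slow component used to build the synthetic curve.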

  9. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Science.gov (United States)

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  10. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Science.gov (United States)

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  11. Curve Evolution in Subspaces and Exploring the Metameric Class of Histogram of Gradient Orientation based Features using Nonlinear Projection Methods

    DEFF Research Database (Denmark)

    Tatu, Aditya Jayant

    This thesis deals with two unrelated issues, restricting curve evolution to subspaces and computing image patches in the equivalence class of Histogram of Gradient orientation based features using nonlinear projection methods. Curve evolution is a well known method used in various applications like...... tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application...... specific requirements like shape priors or a given data model, and due to limitations of the computer, the computed curve evolution forms a path in some finite dimensional subspace of the space of curves. We give methods to restrict the curve evolution to a finite dimensional linear or implicitly defined...
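
Restricting an evolution to a finite-dimensional linear subspace amounts, in the simplest case, to projecting the velocity field onto a chosen basis at each step. A minimal sketch (the basis and vectors are arbitrary illustrations, not taken from the thesis):

```python
def project(v, basis):
    """Orthogonal projection of a sampled velocity field v onto the span
    of an orthonormal basis -- the simplest way to keep an evolving curve
    inside a finite-dimensional linear subspace of the space of curves."""
    coeffs = [sum(vi * bi for vi, bi in zip(v, b)) for b in basis]
    return [sum(c * b[i] for c, b in zip(coeffs, basis))
            for i in range(len(v))]

# Orthonormal basis of a 2D subspace of R^4; components outside the
# subspace are discarded by the projection
basis = [[1, 0, 0, 0], [0, 1, 0, 0]]
v = [3.0, 4.0, 5.0, 6.0]
vp = project(v, basis)
```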

  12. Preparation of data relevant to ''Equivalent Uniform Burnup'' and Equivalent Initial Enrichment'' for burnup credit evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi; Okuno, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murazaki, Minoru [Tokyo Nuclear Service Inc., Tokyo (Japan)

    2001-11-01

    Based on the PWR spent fuel composition data measured at JAERI, two kinds of simplified methods such as ''Equivalent Uniform Burnup'' and ''Equivalent Initial Enrichment'' have been introduced. And relevant evaluation curves have been prepared for criticality safety evaluation of spent fuel storage pool and transport casks, taking burnup of spent fuel into consideration. These simplified methods can be used to obtain an effective neutron multiplication factor for a spent fuel storage/transportation system by using the ORIGEN2.1 burnup code and the KENO-Va criticality code without considering axial burnup profile in spent fuel and other various factors introducing calculated errors. ''Equivalent Uniform Burnup'' is set up for its criticality analysis to be reactivity equivalent with the detailed analysis, in which the experimentally obtained isotopic composition together with a typical axial burnup profile and various factors such as irradiation history are considered on the conservative side. On the other hand, Equivalent Initial Enrichment'' is set up for its criticality analysis to be reactivity equivalent with the detailed analysis such as above when it is used in the so called fresh fuel assumption. (author)

  13. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    Science.gov (United States)

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
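
The data-depth idea behind the curve boxplot can be sketched with the simplest band depth (J=2): a curve's depth is the fraction of curve pairs whose pointwise envelope contains it everywhere, and the deepest curve plays the role of the median. This is a generic band-depth sketch, not the paper's exact formulation:

```python
def band_depth(curves):
    """Simplified band depth (J=2): for each curve, the fraction of curve
    pairs whose pointwise envelope contains it at every sample point.
    The deepest curve is the ensemble 'median', as in curve boxplots."""
    n = len(curves)
    depths = []
    for c in curves:
        inside = 0
        pairs = 0
        for i in range(n):
            for j in range(i + 1, n):
                pairs += 1
                if all(min(a, b) <= x <= max(a, b)
                       for x, a, b in zip(c, curves[i], curves[j])):
                    inside += 1
        depths.append(inside / pairs)
    return depths

# Four toy curves sampled at three points each
curves = [[0, 0, 0], [1, 1, 1], [2, 2, 2], [0.5, 0.6, 0.4]]
d = band_depth(curves)
median = curves[max(range(len(curves)), key=d.__getitem__)]
```

Ranking curves by depth then yields the 50% band and whiskers of the curve boxplot.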

  14. Analytical and numerical construction of equivalent cables.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R; Tucker, G

    2003-08-01

    The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite called the equivalent cable. The equivalent cable is an unbranched model of a dendrite and a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form for an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix for the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.

  15. Geometry of the local equivalence of states

    Energy Technology Data Exchange (ETDEWEB)

    Sawicki, A; Kus, M, E-mail: assawi@cft.edu.pl, E-mail: marek.kus@cft.edu.pl [Center for Theoretical Physics, Polish Academy of Sciences, Al Lotnikow 32/46, 02-668 Warszawa (Poland)

    2011-12-09

    We present a description of locally equivalent states in terms of symplectic geometry. Using the moment map between local orbits in the space of states and coadjoint orbits of the local unitary group, we reduce the problem of local unitary equivalence to an easy part consisting of identifying the proper coadjoint orbit and a harder problem of the geometry of fibers of the moment map. We give a detailed analysis of the properties of orbits of 'equally entangled states'. In particular, we show connections between certain symplectic properties of orbits such as their isotropy and coisotropy with effective criteria of local unitary equivalence. (paper)

  16. Considerations for reference pump curves

    International Nuclear Information System (INIS)

    Stockton, N.B.

    1992-01-01

    This paper examines problems associated with inservice testing (IST) of pumps to assess their hydraulic performance using reference pump curves to establish acceptance criteria. Safety-related pumps at nuclear power plants are tested under the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (the Code), Section XI. The Code requires testing pumps at specific reference points of differential pressure or flow rate that can be readily duplicated during subsequent tests. There are many cases where test conditions cannot be duplicated. For some pumps, such as service water or component cooling pumps, the flow rate at any time depends on plant conditions and the arrangement of multiple independent and constantly changing loads. System conditions cannot be controlled to duplicate a specific reference value. In these cases, utilities frequently request to use pump curves for comparison of test data for acceptance. There is no prescribed method for developing a pump reference curve. The methods vary and may yield substantially different results. Some results are conservative when compared to the Code requirements; some are not. The errors associated with different curve testing techniques should be understood and controlled within reasonable bounds. Manufacturer's pump curves, in general, are not sufficiently accurate to use as reference pump curves for IST. Testing using reference curves generated with polynomial least squares fits over limited ranges of pump operation, cubic spline interpolation, or cubic spline least squares fits can provide a measure of pump hydraulic performance that is at least as accurate as the Code required method. Regardless of the test method, error can be reduced by using more accurate instruments, by correcting for systematic errors, by increasing the number of data points, and by taking repetitive measurements at each data point.
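
One of the techniques the paper mentions, a polynomial least-squares fit over a limited operating range, can be sketched directly. The quadratic head-versus-flow form and the synthetic data below are illustrative assumptions, not values from the paper:

```python
def fit_quadratic(flows, heads):
    """Least-squares quadratic head = c0 + c1*q + c2*q^2, solved via the
    3x3 normal equations with Gaussian elimination -- a common functional
    form for a pump reference curve over a limited flow range."""
    s = [sum(q ** k for q in flows) for k in range(5)]
    A = [[s[i + j] for j in range(3)] for i in range(3)]
    b = [sum(h * q ** i for q, h in zip(flows, heads)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, 3))) / A[r][r]
    return coeffs  # [c0, c1, c2]

# Synthetic pump test points lying on head = 100 - 0.005*q^2
flows = [0.0, 20.0, 40.0, 60.0, 80.0]
heads = [100.0 - 0.005 * q * q for q in flows]
c0, c1, c2 = fit_quadratic(flows, heads)
```

Acceptance bands for IST would then be drawn as fixed percentages around the fitted curve.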

  17. Normal Pressure Hydrocephalus (NPH)

    Science.gov (United States)

    Normal pressure hydrocephalus is a brain disorder that occurs when excess cerebrospinal fluid accumulates. The resource covers symptoms, diagnosis, causes and risks, and treatments.

  18. Curve Digitizer – A software for multiple curves digitizing

    Directory of Open Access Journals (Sweden)

    Florentin ŞPERLEA

    2010-06-01

    Full Text Available The Curve Digitizer is software that extracts data from an image file representing a graphic and returns them as pairs of numbers which can then be used for further analysis and applications. Numbers can be read on a computer screen, stored in files, or copied on paper. The final result is a data set that can be used with other tools such as MS EXCEL. Curve Digitizer provides a useful tool for any researcher or engineer interested in quantifying the data displayed graphically. The image file can be obtained by scanning a document
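
The core transform in any such digitizer is the mapping from pixel coordinates to data coordinates, calibrated from reference points on the axes. A minimal sketch, assuming linear axes and two made-up calibration ticks (log axes would require transforming the coordinates first):

```python
def make_pixel_to_data(p1, d1, p2, d2):
    """Build an axis-aligned affine map from pixel coordinates to data
    coordinates, calibrated from two reference points (e.g. two labeled
    axis ticks). Assumes linear axes; all values here are illustrative."""
    sx = (d2[0] - d1[0]) / (p2[0] - p1[0])
    sy = (d2[1] - d1[1]) / (p2[1] - p1[1])
    def to_data(px, py):
        return (d1[0] + sx * (px - p1[0]), d1[1] + sy * (py - p1[1]))
    return to_data

# Calibrate with the plot origin and a known corner tick
# (pixel y grows downward, so the y scale comes out negative)
to_data = make_pixel_to_data((50, 400), (0.0, 0.0), (450, 40), (10.0, 1.0))
x, y = to_data(250, 220)  # a point clicked on the traced curve
```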

  19. Equivalence Between Squirrel Cage and Sheet Rotor Induction Motor

    Science.gov (United States)

    Dwivedi, Ankita; Singh, S. K.; Srivastava, R. K.

    2016-06-01

    Due to topological changes in dual stator induction motor and high cost of its fabrication, it is convenient to replace the squirrel cage rotor with a composite sheet rotor. For an experimental machine, the inner and outer stator stampings are normally available whereas the procurement of rotor stampings is quite cumbersome and is not always cost effective. In this paper, the equivalence between sheet/solid rotor induction motor and squirrel cage induction motor has been investigated using layer theory of electrical machines, so as to enable one to utilize sheet/solid rotor in dual port experimental machines.

  20. Graphical interpretation of confidence curves in rankit plots

    DEFF Research Database (Denmark)

    Hyltoft Petersen, Per; Blaabjerg, Ole; Andersen, Marianne

    2004-01-01

    A well-known transformation from the bell-shaped Gaussian (normal) curve to a straight line in the rankit plot is investigated, and a tool for evaluation of the distribution of reference groups is presented. It is based on the confidence intervals for percentiles of the calculated Gaussian distri...
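
The x-axis of a rankit plot is the set of expected normal order statistics, which can be approximated with Blom's formula; Gaussian data then plot as a straight line with slope near sigma and intercept near mu, while curvature flags a non-Gaussian reference group. A minimal sketch using only the standard library (Blom's constants are a common approximation, not necessarily the paper's choice):

```python
from statistics import NormalDist

def rankits(n):
    """Approximate expected normal order statistics (Blom's formula,
    (i - 0.375)/(n + 0.25)) -- the x-axis of a rankit plot."""
    nd = NormalDist()
    return [nd.inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]

# Plot sorted sample values against these; a straight line supports
# a Gaussian reference distribution
xs = rankits(5)
```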

  1. On some Closed Magnetic Curves on a 3-torus

    Energy Technology Data Exchange (ETDEWEB)

    Munteanu, Marian Ioan, E-mail: marian.ioan.munteanu@gmail.com [Alexandru Ioan Cuza University of Iaşi, Faculty of Mathematics (Romania); Nistor, Ana Irina, E-mail: ana.irina.nistor@gmail.com [Gh. Asachi Technical University of Iaşi, Department of Mathematics and Informatics (Romania)

    2017-06-15

    We consider two magnetic fields on the 3-torus obtained from two different contact forms on the Euclidean 3-space and we study when their corresponding normal magnetic curves are closed. We obtain periodicity conditions analogous to those for the closed geodesics on the torus.

  2. Estimation of blocking temperatures from ZFC/FC curves

    DEFF Research Database (Denmark)

    Hansen, Mikkel Fougt; Mørup, Steen

    1999-01-01

    We present a new method to extract the parameters of a log-normal distribution of energy barriers in an assembly of ultrafine magnetic particles from simple features of the zero-field cooled and field cooled magnetisation curves. The method is established using numerical simulations and is tested...
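
The link between an energy barrier and the temperature at which a particle blocks in a ZFC/FC measurement is the standard Néel-Arrhenius relation; a log-normal barrier distribution therefore maps onto a log-normal distribution of blocking temperatures. A sketch of that relation (the attempt time is an assumed typical value, not from the paper):

```python
import math

def blocking_temperature(energy_barrier, tau_m=100.0, tau_0=1e-10):
    """Neel-Arrhenius blocking temperature (K) for one energy barrier (J)
    at measurement time tau_m (s): T_B = E / (k_B * ln(tau_m / tau_0)).
    tau_0 is an assumed attempt time, typical for fine magnetic particles."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return energy_barrier / (k_B * math.log(tau_m / tau_0))
```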

  3. Quantum equivalence principle without mass superselection

    International Nuclear Information System (INIS)

    Hernandez-Coronado, H.; Okon, E.

    2013-01-01

    The standard argument for the validity of Einstein's equivalence principle in a non-relativistic quantum context involves the application of a mass superselection rule. The objective of this work is to show that, contrary to widespread opinion, the compatibility between the equivalence principle and quantum mechanics does not depend on the introduction of such a restriction. For this purpose, we develop a formalism based on the extended Galileo group, which allows for a consistent handling of superpositions of different masses, and show that, within such scheme, mass superpositions behave as they should in order to obey the equivalence principle. - Highlights: • We propose a formalism for consistently handling, within a non-relativistic quantum context, superpositions of states with different masses. • The formalism utilizes the extended Galileo group, in which mass is a generator. • The proposed formalism allows for the equivalence principle to be satisfied without the need of imposing a mass superselection rule

  4. On the equivalence of chaos control systems

    International Nuclear Information System (INIS)

    Wang Xiaofan

    2003-01-01

    For a given chaotic system, different control systems can be constructed depending on which parameter is tuned or where the external input is added. We prove that two different feedback control systems are qualitatively equivalent if they are feedback linearizable

  5. Equivalence relations and the reinforcement contingency.

    Science.gov (United States)

    Sidman, M

    2000-07-01

    Where do equivalence relations come from? One possible answer is that they arise directly from the reinforcement contingency. That is to say, a reinforcement contingency produces two types of outcome: (a) 2-, 3-, 4-, 5-, or n-term units of analysis that are known, respectively, as operant reinforcement, simple discrimination, conditional discrimination, second-order conditional discrimination, and so on; and (b) equivalence relations that consist of ordered pairs of all positive elements that participate in the contingency. This conception of the origin of equivalence relations leads to a number of new and verifiable ways of conceptualizing equivalence relations and, more generally, the stimulus control of operant behavior. The theory is also capable of experimental disproof.
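
The derived relations Sidman describes can be computed mechanically: taking the reflexive, symmetric, transitive closure of the trained stimulus pairs partitions the stimuli into equivalence classes, so relations like C->A emerge from training only A->B and B->C. A union-find sketch (the stimulus labels are illustrative):

```python
def equivalence_classes(pairs):
    """Reflexive-symmetric-transitive closure of trained stimulus pairs
    via union-find: conditional discriminations (A->B, B->C) yield the
    derived, untrained relations (e.g. C->A) as equivalence classes."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)
    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return sorted(sorted(c) for c in classes.values())

# Train A1-B1 and B1-C1; the A1-C1 relation emerges without training
cls = equivalence_classes([("A1", "B1"), ("B1", "C1"), ("A2", "B2")])
```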

  6. REFractions: The Representing Equivalent Fractions Game

    Science.gov (United States)

    Tucker, Stephen I.

    2014-01-01

    Stephen Tucker presents a fractions game that addresses a range of fraction concepts including equivalence and computation. The REFractions game also improves students' fluency with representing, comparing and adding fractions.

  7. ON THE EQUIVALENCE OF THE ABEL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This article uses the reflecting function of Mironenko to study some complicated differential equations which are equivalent to the Abel equation. The results are applied to discuss the behavior of solutions of these complicated differential equations.

  8. Interpretation of equivalences and cultural untranslatability

    African Journals Online (AJOL)

    jgmweri

    translatability in cultural diversity in terms of equivalences such as vocabulary or lexical ..... A KSL interpreter who does not understand this English idiom may literally interpret it .... Nida, E.A. (1958) Analysis of meaning and dictionary making.

  9. Equivalence Principle, Higgs Boson and Cosmology

    Directory of Open Access Journals (Sweden)

    Mauro Francaviglia

    2013-05-01

    Full Text Available We discuss here possible tests for Palatini f(R)-theories together with their implications for different formulations of the Equivalence Principle. We shall show that Palatini f(R)-theories obey the Weak Equivalence Principle and violate the Strong Equivalence Principle. The violations of the Strong Equivalence Principle vanish in vacuum (and purely electromagnetic) solutions, as well as on short time scales with respect to the age of the universe. However, we suggest that a framework based on Palatini f(R)-theories is more general than standard General Relativity (GR) and it sheds light on the interpretation of data and results in a way which is more model independent than standard GR itself.

  10. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  11. Dark matter and the equivalence principle

    Science.gov (United States)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  12. S-equivalent Lagrangians in generalized mechanics

    International Nuclear Information System (INIS)

    Negri, L.J.; Silva, Edna G. da.

    1985-01-01

    The problem of s-equivalent Lagrangians is considered in the realm of generalized mechanics. Some results corresponding to the ordinary (non-generalized) mechanics are extended to the generalized case. A theorem for the reduction of the higher-order Lagrangian description to the usual order is found to be useful for the analysis of generalized mechanical systems and leads to a new class of equivalence between Lagrangian functions. Some new perspectives are pointed out. (Author) [pt

  13. Calibration curves for biological dosimetry

    International Nuclear Information System (INIS)

    Guerrero C, C.; Brena V, M., E-mail: cgc@nuclear.inin.mx

    2004-01-01

    The information generated by investigations in different laboratories around the world, including ININ, which establishes that certain classes of chromosomal aberrations increase as a function of dose and radiation type, has resulted in calibration curves that are applied in the technique known as biological dosimetry. This work presents a summary of the work done in the laboratory, including the calibration curves for 60Co gamma radiation and 250 kVp X-rays, examples of presumed exposure to ionizing radiation resolved by means of aberration analysis and the corresponding dose estimates obtained through the equations of the respective curves, and finally a comparison between the dose calculations for the people affected by the Ciudad Juarez accident carried out by the Oak Ridge group (USA) and those obtained in this laboratory. (Author)
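
Dose estimation from such a calibration curve typically means inverting a linear-quadratic dose-response, Y = c + alpha*D + beta*D^2, for the dose D given an observed aberration yield Y. A sketch with hypothetical coefficients (real ones come from a laboratory's fitted curve, not from this abstract):

```python
import math

def dose_from_yield(y, c, alpha, beta):
    """Invert the linear-quadratic calibration curve
    Y = c + alpha*D + beta*D^2 for the dose D (Gy), taking the positive
    root. All coefficient values used below are illustrative placeholders."""
    disc = alpha * alpha - 4.0 * beta * (c - y)
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

# Example: an observed yield of 0.35 dicentrics/cell with made-up
# gamma-ray curve coefficients
D = dose_from_yield(0.35, c=0.001, alpha=0.03, beta=0.06)
```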

  14. Vertex algebras and algebraic curves

    CERN Document Server

    Frenkel, Edward

    2004-01-01

    Vertex algebras are algebraic objects that encapsulate the concept of operator product expansion from two-dimensional conformal field theory. Vertex algebras are fast becoming ubiquitous in many areas of modern mathematics, with applications to representation theory, algebraic geometry, the theory of finite groups, modular functions, topology, integrable systems, and combinatorics. This book is an introduction to the theory of vertex algebras with a particular emphasis on the relationship with the geometry of algebraic curves. The notion of a vertex algebra is introduced in a coordinate-independent way, so that vertex operators become well defined on arbitrary smooth algebraic curves, possibly equipped with additional data, such as a vector bundle. Vertex algebras then appear as the algebraic objects encoding the geometric structure of various moduli spaces associated with algebraic curves. Therefore they may be used to give a geometric interpretation of various questions of representation theory. The book co...

  15. Curve collection, extension of databases

    International Nuclear Information System (INIS)

    Gillemot, F.

    1992-01-01

    Full text: Databases generally contain calculated data only, while the original measurements are diagrams; information is lost between them. Research (e.g. irradiation, aging, creep) is expensive, so the original curves should be stored for reanalysis. Format of the stored curves: (a) data as numbers in ASCII files; (b) other information as strings in a second file with the same name but a different extension. The extension shows the type of the test and the type of the file. Examples: TEN is tensile information, TED is tensile data, CHN is Charpy information, CHD is Charpy data. Storing techniques: digitised measurements, and digitising old curves stored on paper. Use: making catalogues, reanalysis, comparison with new data. Tools: mathematical software packages such as Quattro, Genplot, Excel, Mathcad, QBasic, Pascal, Fortran, Matlab, Grapher, etc. (author)
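
The two-file scheme the abstract describes (numeric data in one extension, descriptive strings in a sibling file with the same base name) is easy to sketch as a reader. The function and the extension map below are hypothetical illustrations of that convention, not an implementation from the paper:

```python
import os

# Hypothetical mapping: data extension -> companion info extension
TYPES = {".TED": ".TEN", ".CHD": ".CHN"}

def load_curve(data_path):
    """Return (info_lines, rows_of_floats) for one stored curve, pairing
    the ASCII data file with its same-named info file by extension."""
    base, ext = os.path.splitext(data_path)
    info_path = base + TYPES[ext.upper()]
    with open(info_path) as f:
        info = [line.rstrip("\n") for line in f]
    with open(data_path) as f:
        data = [[float(v) for v in line.split()] for line in f if line.strip()]
    return info, data
```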

  16. Rational points on elliptic curves

    CERN Document Server

    Silverman, Joseph H

    2015-01-01

    The theory of elliptic curves involves a pleasing blend of algebra, geometry, analysis, and number theory. This book stresses this interplay as it develops the basic theory, thereby providing an opportunity for advanced undergraduates to appreciate the unity of modern mathematics. At the same time, every effort has been made to use only methods and results commonly included in the undergraduate curriculum. This accessibility, the informal writing style, and a wealth of exercises make Rational Points on Elliptic Curves an ideal introduction for students at all levels who are interested in learning about Diophantine equations and arithmetic geometry. Most concretely, an elliptic curve is the set of zeroes of a cubic polynomial in two variables. If the polynomial has rational coefficients, then one can ask for a description of those zeroes whose coordinates are either integers or rational numbers. It is this number theoretic question that is the main subject of this book. Topics covered include the geometry and ...

  17. Theoretical melting curve of caesium

    International Nuclear Information System (INIS)

    Simozar, S.; Girifalco, L.A.; Pennsylvania Univ., Philadelphia

    1983-01-01

    A statistical-mechanical model is developed to account for the complex melting curve of caesium. The model assumes the existence of three different species of caesium defined by three different electronic states. On the basis of this model, the free energy of melting and the melting curve are computed up to 60 kbar, using the solid-state data and the initial slope of the fusion curve as input parameters. The calculated phase diagram agrees with experiment to within the experimental error. Other thermodynamic properties including the entropy and volume of melting were also computed, and they agree with experiment. Since the theory requires only one adjustable constant, this is taken as strong evidence that the three-species model is satisfactory for caesium. (author)

  18. Migration and the Wage Curve:

    DEFF Research Database (Denmark)

    Brücker, Herbert; Jahn, Elke J.

    Based on a wage curve approach we examine the labor market effects of migration in Germany. The wage curve relies on the assumption that wages respond to a change in the unemployment rate, albeit imperfectly. This allows one to derive the wage and employment effects of migration simultaneously in a general equilibrium framework. For the empirical analysis we employ the IABS, a two percent sample of the German labor force. We find that the elasticity of the wage curve is particularly high for young workers and workers with a university degree, while it is low for older workers and workers with a vocational degree. The wage and employment effects of migration are moderate: a 1 percent increase in the German labor force through immigration increases the aggregate unemployment rate by less than 0.1 percentage points and reduces average wages by less than 0.1 percent. While native workers benefit from...

  19. Laffer Curves and Home Production

    Directory of Open Access Journals (Sweden)

    Kotamäki Mauri

    2017-06-01

    Full Text Available In the earlier related literature, the consumption tax rate Laffer curve is found to be strictly increasing (see Trabandt and Uhlig (2011)). In this paper, a general equilibrium macro model is augmented by introducing a substitute for private consumption in the form of home production. The introduction of home production brings about an additional margin of adjustment: an increase in the consumption tax rate not only decreases labor supply and reduces the consumption tax base but also allows a substitution of market goods with home-produced goods. The main objective of this paper is to show that, after the introduction of home production, the consumption tax Laffer curve exhibits an inverse U-shape. The income tax Laffer curves are also significantly altered. The result shown in this paper casts doubt on some of the earlier results in the literature.
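    The inverse U-shape can be reproduced with a toy sketch (my own illustration, not the paper's general-equilibrium model): let the consumption-tax base shrink as households substitute home-produced goods for market goods, with an assumed substitution elasticity `eps`:

```python
def revenue(t, eps=1.5):
    """Toy consumption-tax revenue: the taxed base (1 - t)**eps shrinks as
    households substitute home production for market goods."""
    return t * (1 - t) ** eps

# Scan tax rates from 1% to 99% and locate the revenue-maximizing rate.
rates = [i / 100 for i in range(1, 100)]
peak = max(rates, key=revenue)
print(peak)  # 0.4 = 1/(1 + eps): beyond this rate, revenue falls
```

    With no substitution margin (`eps = 0`) revenue would be strictly increasing in `t`, mirroring the earlier literature's finding; the home-production margin is what bends the curve over.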

  20. Complexity of Curved Glass Structures

    Science.gov (United States)

    Kosić, T.; Svetel, I.; Cekić, Z.

    2017-11-01

    Despite the increasing number of studies on architectural structures of curvilinear form and the technological and practical improvements in glass production observed over recent years, there is still a lack of comprehensive codes and standards, recommendations and experience data for real-life curved glass structure applications regarding design, manufacture, use, performance and economy. However, more and more complex buildings and structures with large areas of geometrically complex glass envelopes are built every year. The aim of the presented research is to collect data on the existing design philosophy from curved glass structure cases. The investigation includes a survey of how architects and engineers deal with different design aspects of curved glass structures, with a special focus on the design and construction process, glass types, and structural and fixing systems. The current paper gives a brief overview of the survey findings.

  1. Phonon transport across nano-scale curved thin films

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, Saad B.; Yilbas, Bekir S., E-mail: bsyilbas@kfupm.edu.sa

    2016-12-15

    Phonon transport across a curved thin silicon film due to a temperature disturbance at the film edges is examined. The equation for radiative transport is considered by incorporating the Boltzmann transport equation for the energy transfer. The effect of the thin film curvature on the phonon transport characteristics is assessed. In the analysis, the film arc length along the film centerline is kept constant and the film arc angle is varied to obtain various film curvatures. An equivalent equilibrium temperature is introduced to assess the phonon intensity distribution inside the curved thin film. It is found that the equivalent equilibrium temperature decay along the arc length is sharper than that in the radial direction, which is more pronounced in the region close to the film inner radius. Reducing the film arc angle increases the film curvature; in this case, the phonon intensity decay becomes sharp close to the high temperature edge. The equivalent equilibrium temperature demonstrates a non-symmetric distribution along the radial direction, which is more pronounced near the high temperature edge.

  2. Phonon transport across nano-scale curved thin films

    International Nuclear Information System (INIS)

    Mansoor, Saad B.; Yilbas, Bekir S.

    2016-01-01

    Phonon transport across a curved thin silicon film due to a temperature disturbance at the film edges is examined. The equation for radiative transport is considered by incorporating the Boltzmann transport equation for the energy transfer. The effect of the thin film curvature on the phonon transport characteristics is assessed. In the analysis, the film arc length along the film centerline is kept constant and the film arc angle is varied to obtain various film curvatures. An equivalent equilibrium temperature is introduced to assess the phonon intensity distribution inside the curved thin film. It is found that the equivalent equilibrium temperature decay along the arc length is sharper than that in the radial direction, which is more pronounced in the region close to the film inner radius. Reducing the film arc angle increases the film curvature; in this case, the phonon intensity decay becomes sharp close to the high temperature edge. The equivalent equilibrium temperature demonstrates a non-symmetric distribution along the radial direction, which is more pronounced near the high temperature edge.

  3. Optimization on Spaces of Curves

    DEFF Research Database (Denmark)

    Møller-Andersen, Jakob

    in Rd, and methods to solve the initial and boundary value problem for geodesics allowing us to compute the Karcher mean and principal components analysis of data of curves. We apply the methods to study shape variation in synthetic data in the Kimia shape database, in HeLa cell nuclei and cycles...... of cardiac deformations. Finally we investigate a new application of Riemannian shape analysis in shape optimization. We setup a simple elliptic model problem, and describe how to apply shape calculus to obtain directional derivatives in the manifold of planar curves. We present an implementation based...

  4. Tracing a planar algebraic curve

    International Nuclear Information System (INIS)

    Chen Falai; Kozak, J.

    1994-09-01

    In this paper, an algorithm that determines a real algebraic curve is outlined. Its basic step is to divide the plane into subdomains that include only simple branches of the algebraic curve without singular points. Each of the branches is then stably and efficiently traced in the particular subdomain. Except for the tracing, the algorithm requires only a couple of simple operations on polynomials that can be carried out exactly if the coefficients are rational, and the determination of zeros of several polynomials of one variable. (author). 5 refs, 4 figs
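    The abstract does not give the algorithm's details; a minimal predictor-corrector tracer of the kind described (stepping along a simple branch free of singular points) might look like the following sketch, here traced on the unit circle:

```python
import math

def trace(f, fx, fy, x, y, h=0.01, steps=700):
    """Trace f(x, y) = 0 from a starting point on the curve: step along the
    tangent (predictor), then pull back onto the curve with one Newton
    correction along the gradient (corrector)."""
    pts = [(x, y)]
    for _ in range(steps):
        gx, gy = fx(x, y), fy(x, y)
        norm = math.hypot(gx, gy)
        # predictor: unit tangent is (-fy, fx) / |grad f|
        x, y = x + h * (-gy / norm), y + h * (gx / norm)
        # corrector: one Newton step back toward the curve
        gx, gy = fx(x, y), fy(x, y)
        g2 = gx * gx + gy * gy
        r = f(x, y)
        x, y = x - r * gx / g2, y - r * gy / g2
        pts.append((x, y))
    return pts

# Unit circle x^2 + y^2 - 1 = 0, starting from (1, 0)
pts = trace(lambda x, y: x * x + y * y - 1,
            lambda x, y: 2 * x, lambda x, y: 2 * y, 1.0, 0.0)
print(max(abs(x * x + y * y - 1) for x, y in pts) < 1e-4)  # True: stays on the curve
```

    The paper's full algorithm additionally partitions the plane into subdomains so that each traced branch is simple; the sketch above covers only the tracing step.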

  5. The New Keynesian Phillips Curve

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper provides a survey of the recent literature on the new Keynesian Phillips curve: the controversies surrounding its microfoundation and estimation, the approaches that have been tried to improve its empirical fit, and the challenges it faces in adapting to the open-economy framework. The new......, learning or state-dependent pricing. The introduction of open-economy factors into the new Keynesian Phillips curve complicates matters further, as it must capture the nexus between price setting, inflation and the exchange rate. This is nevertheless a crucial feature for any model to be used for inflation...... forecasting in a small open economy like Iceland....

  6. Influence of thermoluminescence trapping parameter from abundant quartz powder on equivalent dose

    International Nuclear Information System (INIS)

    Zhao Qiuyue; Wei Mingjian; Song Bo; Pan Baolin; Zhou Rui

    2014-01-01

    Glow curves of abundant quartz powder were obtained with the RGD-3B thermoluminescence (TL) reader. TL peaks at 448, 551, 654 and 756 K were identified at a heating rate of 5 K/s. The activation energy, frequency factor and lifetime of the trapped charge at ambient temperature were evaluated for the four peaks by the method of various heating rates. Within a certain range of activation energy, the equivalent dose increases exponentially with the activation energy. The equivalent dose increases from 54 Gy to 485 Gy as the temperature rises from 548 K to 608 K, and it fluctuates around 531 Gy from 608 K to 748 K. (authors)
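    The abstract does not spell out the "method of various heating rates"; a common variant is a Kissinger-style fit, sketched below on synthetic peak temperatures (the frequency factor 1e12 s^-1 and E = 1.0 eV are illustrative assumptions, not values from the paper):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(betas, peak_temps):
    """Kissinger-style fit: ln(Tm^2 / beta) is linear in 1/Tm with slope E/k."""
    xs = [1 / t for t in peak_temps]
    ys = [math.log(t * t / b) for b, t in zip(betas, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope * K_B  # activation energy in eV

# Synthetic peak temperatures generated from the first-order peak condition
# beta*E/(k*T^2) = s*exp(-E/(k*T)) with assumed E = 1.0 eV, s = 1e12 1/s.
E, S_FACTOR = 1.0, 1e12
betas = [1, 2, 5, 10]   # heating rates, K/s
temps = []
for b in betas:
    t = 500.0
    for _ in range(200):  # fixed-point iteration for the peak temperature
        t = E / (K_B * math.log(S_FACTOR * K_B * t * t / (b * E)))
    temps.append(t)
print(round(activation_energy(betas, temps), 3))  # 1.0: recovers the E used above
```

    Faster heating rates shift the peak to higher temperature, which is exactly the effect the fit exploits.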

  7. TORREFACTION OF CELLULOSE: VALIDITY AND LIMITATION OF THE TEMPERATURE/DURATION EQUIVALENCE

    OpenAIRE

    Lv , Pin; Almeida , Giana; Perré , Patrick

    2012-01-01

    During torrefaction of biomass, an equivalence between temperature and residence time is often reported, in terms of either the loss of mass or the alteration of properties. The present work proposes a rigorous investigation of this equivalence. Cellulose, the main lignocellulosic biomass component, was treated under mild pyrolysis for 48 hours. Several T-D (temperature-duration) pairs were selected from TGA curves to obtain mass losses of 11.6%, 25%, 50%, 74.4%, and 86.7%. The c...

  8. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

    Normalization is a pre-processing stage for almost any type of problem statement. It plays an especially important role in fields such as soft computing and cloud computing, where data must be scaled down or scaled up before being used in a further stage. There are many normalization techniques, namely Min-Max normalization, Z-score normalization and decimal scaling normalization. By referring to these normalization techniques we are ...
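    The three techniques named in the abstract can be sketched directly (a minimal illustration, not the paper's code):

```python
def min_max(xs, lo=0.0, hi=1.0):
    """Min-Max normalization: linearly rescale values to [lo, hi]."""
    a, b = min(xs), max(xs)
    return [lo + (x - a) * (hi - lo) / (b - a) for x in xs]

def z_score(xs):
    """Z-score normalization: center to mean 0, scale to unit std deviation."""
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return [(x - m) / s for x in xs]

def decimal_scaling(xs):
    """Decimal scaling: divide by 10**j so all values fall in (-1, 1)."""
    j = len(str(int(max(abs(x) for x in xs))))
    return [x / 10 ** j for x in xs]

data = [10, 20, 30, 40, 50]
print(min_max(data))          # [0.0, 0.25, 0.5, 0.75, 1.0]
print(decimal_scaling(data))  # [0.1, 0.2, 0.3, 0.4, 0.5]
```

    Min-Max is sensitive to outliers (a single extreme value compresses the rest), which is why Z-score is often preferred when the data are roughly normally distributed.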

  9. Signature Curves Statistics of DNA Supercoils

    OpenAIRE

    Shakiban, Cheri; Lloyd, Peter

    2004-01-01

    In this paper we describe the Euclidean signature curves for two-dimensional closed curves in the plane and their generalization to closed space curves. The focus is on discrete numerical methods for approximating such curves. Further, we apply these numerical methods to plot the signature curves of three-dimensional simulated DNA supercoils. Our primary focus is statistical analysis of the data generated for the signature curves of the supercoils. We will try to esta...

  10. Dual Smarandache Curves of a Timelike Curve lying on Unit dual Lorentzian Sphere

    OpenAIRE

    Kahraman, Tanju; Hüseyin Ugurlu, Hasan

    2016-01-01

    In this paper, we give the Darboux approximation for dual Smarandache curves of a timelike curve on the unit dual Lorentzian sphere. Firstly, we define the four types of dual Smarandache curves of a timelike curve lying on the dual Lorentzian sphere.

  11. Electro-Mechanical Resonance Curves

    Science.gov (United States)

    Greenslade, Thomas B., Jr.

    2018-01-01

    Recently I have been investigating the frequency response of galvanometers. These are direct-current devices used to measure small currents. By using a low-frequency function generator to supply the alternating-current signal and a stopwatch smartphone app to measure the period, I was able to take data to allow a resonance curve to be drawn. This…

  12. Texas curve margin of safety.

    Science.gov (United States)

    2013-01-01

    This software can be used to assist with the assessment of margin of safety for a horizontal curve. It is intended for use by engineers and technicians responsible for safety analysis or management of rural highway pavement or traffic control devices...

  13. Principal Curves on Riemannian Manifolds.

    Science.gov (United States)

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both be geodesic and pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  14. Elliptic curves and primality proving

    Science.gov (United States)

    Atkin, A. O. L.; Morain, F.

    1993-07-01

    The aim of this paper is to describe the theory and implementation of the Elliptic Curve Primality Proving algorithm. Problema, numeros primos a compositis dignoscendi, hosque in factores suos primos resolvendi, ad gravissima ac utilissima totius arithmeticae pertinere, et geometrarum tum veterum tum recentiorum industriam ac sagacitatem occupavisse, tam notum est, ut de hac re copiose loqui superfluum foret.

  15. A Curve for all Reasons

    Indian Academy of Sciences (India)

    from biology, feel that every pattern in the living world, ranging from the folding of ... curves b and c have the same rate of increase but reach different asymptotes. If these .... not at x = 0, but at x0, which is the minimum size at birth that will permit ...

  16. Survival curves for irradiated cells

    International Nuclear Information System (INIS)

    Gibson, D.K.

    1975-01-01

    The subject of the lecture is the probability of survival of biological cells which have been subjected to ionising radiation. The basic mathematical theories of cell survival as a function of radiation dose are developed. A brief comparison with observed survival curves is made. (author)
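    The abstract does not specify which models the lecture develops; the linear-quadratic model is one standard mathematical theory of cell survival as a function of dose (the alpha and beta values below are illustrative, not taken from the lecture):

```python
import math

def surviving_fraction(dose, alpha=0.15, beta=0.05):
    """Linear-quadratic model: S(D) = exp(-(alpha*D + beta*D^2)).
    alpha (Gy^-1) and beta (Gy^-2) are illustrative parameter values."""
    return math.exp(-(alpha * dose + beta * dose * dose))

# Survival drops faster than exponentially: the beta*D^2 term reflects
# lethal damage produced by pairs of radiation tracks.
for d in (0, 2, 4, 8):
    print(d, surviving_fraction(d))
```

    On a semi-log plot the curve has an initial slope set by alpha and a continuously bending "shoulder" set by beta, which is the typical shape of observed survival curves.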

  17. Mentorship, learning curves, and balance.

    Science.gov (United States)

    Cohen, Meryl S; Jacobs, Jeffrey P; Quintessenza, James A; Chai, Paul J; Lindberg, Harald L; Dickey, Jamie; Ungerleider, Ross M

    2007-09-01

    Professionals working in the arena of health care face a variety of challenges as their careers evolve and develop. In this review, we analyze the role of mentorship, learning curves, and balance in overcoming challenges that all such professionals are likely to encounter. These challenges can exist both in professional and personal life. As any professional involved in health care matures, complex professional skills must be mastered, and new professional skills must be acquired. These skills are both technical and judgmental. In most circumstances, these skills must be learned. In 2007, despite the continued need for obtaining new knowledge and learning new skills, the professional and public tolerance for a "learning curve" is much less than in previous decades. Mentorship is the key to success in these endeavours. The success of mentorship is two-sided, with responsibilities for both the mentor and the mentee. The benefits of this relationship must be bidirectional. It is the responsibility of both the student and the mentor to assure this bidirectional exchange of benefit. This relationship requires time, patience, dedication, and to some degree selflessness. This mentorship will ultimately be the best tool for mastering complex professional skills and maturing through various learning curves. Professional mentorship also requires that mentors identify and explicitly teach their mentees the relational skills and abilities inherent in learning the management of the triad of self, relationships with others, and professional responsibilities. Up to two decades ago, a learning curve was tolerated, and even expected, while professionals involved in healthcare developed the techniques that allowed for the treatment of previously untreatable diseases. Outcomes have now improved to the point that this type of learning curve is no longer acceptable to the public. Still, professionals must learn to perform and develop independence and confidence. The responsibility to

  18. The principle of equivalence reconsidered: assessing the relevance of the principle of equivalence in prison medicine.

    Science.gov (United States)

    Jotterand, Fabrice; Wangmo, Tenzin

    2014-01-01

    In this article we critically examine the principle of equivalence of care in prison medicine. First, we provide an overview of how the principle of equivalence is utilized in various national and international guidelines on health care provision to prisoners. Second, we outline some of the problems associated with its applications, and argue that the principle of equivalence should go beyond equivalence to access and include equivalence of outcomes. However, because of the particular context of the prison environment, third, we contend that the concept of "health" in equivalence of health outcomes needs conceptual clarity; otherwise, it fails to provide a threshold for healthy states among inmates. We accomplish this by examining common understandings of the concepts of health and disease. We conclude our article by showing why the conceptualization of diseases as clinical problems provides a helpful approach in the delivery of health care in prison.

  19. Investigation of 1-cm dose equivalent for photons behind shielding materials

    International Nuclear Information System (INIS)

    Hirayama, Hideo; Tanaka, Shun-ichi

    1991-03-01

    The ambient dose equivalent at 1-cm depth, assumed equivalent to the 1-cm dose equivalent in practical dose estimations behind shielding slabs of water, concrete, iron or lead for normally incident photons having various energies was calculated by using conversion factors for a slab phantom. It was compared with the 1-cm depth dose calculated with the Monte Carlo code EGS4. It was concluded from this comparison that the ambient dose equivalent calculated by using the conversion factors for the ICRU sphere could be used for the evaluation of the 1-cm dose equivalent for the sphere phantom within 20% errors. Average and practical conversion factors are defined as the conversion factors from exposure to ambient dose equivalent in a finite slab or an infinite one, respectively. They were calculated with EGS4 and the discrete ordinates code PALLAS. The exposure calculated with simple estimation procedures such as point kernel methods can be easily converted to ambient dose equivalent by using these conversion factors. The maximum value between 1 and 30 mfp can be adopted as the conversion factor which depends only on material and incident photon energy. This gives the ambient dose equivalent on the safe side. 13 refs., 7 figs., 2 tabs

  20. Modelling critical NDVI curves in perennial ryegrass

    DEFF Research Database (Denmark)

    Gislum, R; Boelt, B

    2010-01-01

    The use of optical sensors to measure canopy reflectance and calculate a crop index such as the normalized difference vegetation index (NDVI) is widespread in agricultural crops, but has so far not been implemented in herbage seed production. The purpose of the present study is to develop a critical...... NDVI curve, where the critical NDVI, defined as the minimum NDVI required to achieve a high seed yield, is modelled during the growing season. NDVI measurements were made at different growing degree days (GDD) in a three-year field experiment in which different N application rates were applied....... There was a clear maximum in the correlation coefficient between seed yield and NDVI in the period from approximately 700 to 900 GDD. At this time there was an exponential relationship between NDVI and seed yield, with the highest seed yields at NDVI ~0.9. Theoretically, farmers should aim for an NDVI of 0...
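    For reference, NDVI itself is computed from near-infrared and red canopy reflectance; a minimal sketch (the reflectance values below are illustrative, not the study's data):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red
    reflectance (both in [0, 1]); healthy vegetation reflects strongly in
    NIR and absorbs red, pushing NDVI toward 1."""
    return (nir - red) / (nir + red)

print(round(ndvi(0.50, 0.08), 2))  # 0.72: dense green canopy
print(round(ndvi(0.30, 0.25), 2))  # 0.09: sparse cover or bare soil
```

    Because it is a normalized difference, NDVI is bounded in [-1, 1] and is fairly robust to overall illumination changes, which is why it transfers well between sensors and seasons.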

  1. Quantum electrodynamics in curved space-time

    International Nuclear Information System (INIS)

    Buchbinder, I.L.; Gitman, D.M.; Fradkin, E.S.

    1981-01-01

    The Lagrangian of quantum electrodynamics in curved space-time is constructed and the interaction picture, taking the external gravitational field into account exactly, is introduced. The transformation from the Heisenberg picture to the interaction picture is carried out in a manifestly covariant way. The properties of free spinor and electromagnetic quantum fields are discussed, and conditions under which the initial and final creation and annihilation operators are connected by a unitary transformation are indicated. Feynman's rules for quantum processes are derived on the basis of the generalized normal product of operators. The derivation of reduction formulas is indicated and the suitable Green's functions are introduced. A generating functional for these Green's functions is defined and the system of functional equations for them is obtained. The representation of different generating functionals by means of functional integrals is introduced. Some consequences of the S-matrix unitarity condition are considered, which lead to a generalization of the optical theorem.

  2. Presheaves of Superselection Structures in Curved Spacetimes

    Science.gov (United States)

    Vasselli, Ezio

    2015-04-01

    We show that superselection structures on curved spacetimes that are expected to describe quantum charges affected by the underlying geometry are categories of sections of presheaves of symmetric tensor categories. When an embedding functor is given, the superselection structure is a Tannaka-type dual of a locally constant group bundle, which hence becomes a natural candidate for the role of the gauge group. Indeed, we show that any locally constant group bundle (with suitable structure group) acts on a net of C* algebras fulfilling normal commutation relations on an arbitrary spacetime. We also give examples of gerbes of C* algebras, defined by Wightman fields and constructed using projective representations of the fundamental group of the spacetime, which we propose as solutions for the problem that existence and uniqueness of the embedding functor are not guaranteed.

  3. Pharmaceutical equivalence of metformin tablets with various binders

    Directory of Open Access Journals (Sweden)

    L. C. Block

    2009-01-01

    Full Text Available Metformin hydrochloride is a high-dose drug widely used as an oral anti-hyperglycemic agent. As it is highly crystalline and has poor compaction properties, it is difficult to form tablets by direct compression. The aim of this study was to develop adequate metformin tablets, pharmaceutically equivalent to the reference product, Glucophage® (marketed as Glifage® in Brazil). Metformin 500 mg tablets were produced by wet granulation with various binders (A = starch, B = starch 1500®, C = PVP K30®, D = PVP K90®). The tablets were analyzed for their hardness, friability, disintegration, dissolution, content uniformity and dissolution profile (basket apparatus at 50 rpm, pH 6.8 phosphate buffer). The four formulations, F1 (5% A and 5% C), F2 (5% B and 5% C), F3 (10% C) and F4 (5% D), demonstrated adequate uniformity of content, hardness, friability, disintegration and total drug dissolution after 30 minutes (F1, F2 and F4) or after 60 minutes (F3). The drug release time profiles fitted a Higuchi model (F1, F2 and F3), similarly to the pharmaceutical reference, or a zero-order model (F4). The dissolution efficiency for all the formulations was 75%, except for F3 (45%). F1 and F2 were thus equivalent to Glifage®. Keywords: dissolution; metformin; tablet; binder; pharmaceutical equivalence

  4. Some fundamental questions about R-curves

    International Nuclear Information System (INIS)

    Kolednik, O.

    1992-01-01

    With the help of two simple thought experiments it is demonstrated that there exist two physically different types of fracture toughness. The crack-growth toughness, which is identical to the Griffith crack-growth resistance, R, is a measure of the non-reversible energy which is needed to produce an increment of new crack area. The size of R is reflected by the slopes of the R-curves commonly used, so an increasing J-Δa curve does not mean that the crack-growth resistance increases. The fracture initiation toughness, J_i, is a normalized total energy (related to the ligament area) which must be put into the specimen up to fracture initiation. Only for ideally brittle materials are R and J_i of equal size. For small-scale yielding a relationship exists between R and J_i, so a one-parameter description of fracture processes is applicable. For large-scale yielding R and J_i are not strictly related, and both parameters are necessary to describe the fracture process. (orig.)

  5. Light Curve Variations of AR Lacertae

    Directory of Open Access Journals (Sweden)

    Il-Seong Nha

    1991-12-01

    Full Text Available Sixteen unitary light curves of AR Lac in B and V were made at Yonsei University Observatory in the period 1980-1988. Some overall findings on the light variations are: (1) The light variations outside eclipse follow none of the wave migration patterns reported by previous investigators. (2) The complicated shapes outside eclipse are much reduced in the light curves of 1983-1984; this suggests that, in the future, AR Lac has a chance to attain a normal state with no complicated interactions. (3) The depths of the primary and secondary mid-eclipses change from year to year. (4) The K0 star, the larger component, has brightened by 0.14 mag in V, while the G2 star has shown a fluctuation of about 0.05 mag in V. (5) The B-V values at primary mid-eclipse have no correlation with the depth variations. (6) Independently of the increase in maximum brightness, the B-V colors in the non-eclipsed phases changed slightly over the years.

  6. Equivalence of Szegedy's and coined quantum walks

    Science.gov (United States)

    Wong, Thomas G.

    2017-09-01

    Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.

  7. Leptogenesis from loop effects in curved spacetime

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, Jamie I.; Shore, Graham M. [Department of Physics, Swansea University,Singleton Park, Swansea, SA2 8PP (United Kingdom)

    2016-04-05

    We describe a new mechanism — radiatively-induced gravitational leptogenesis — for generating the matter-antimatter asymmetry of the Universe. We show how quantum loop effects in C and CP violating theories cause matter and antimatter to propagate differently in the presence of gravity, and prove this is forbidden in flat space by CPT and translation symmetry. This generates a curvature-dependent chemical potential for leptons, allowing a matter-antimatter asymmetry to be generated in thermal equilibrium in the early Universe. The time-dependent dynamics necessary for leptogenesis is provided by the interaction of the virtual self-energy cloud of the leptons with the expanding curved spacetime background, which violates the strong equivalence principle and allows a distinction between matter and antimatter. We show here how this mechanism is realised in a particular BSM theory, the see-saw model, where the quantum loops involve the heavy sterile neutrinos responsible for light neutrino masses. We demonstrate by explicit computation of the relevant two-loop Feynman diagrams how the size of the radiative corrections relevant for leptogenesis becomes enhanced by increasing the mass hierarchy of the sterile neutrinos, and show how the induced lepton asymmetry may be sufficiently large to play an important rôle in determining the baryon-to-photon ratio of the Universe.

  8. Quantum mechanics and the equivalence principle

    International Nuclear Information System (INIS)

    Davies, P C W

    2004-01-01

    A quantum particle moving in a gravitational field may penetrate the classically forbidden region of the gravitational potential. This raises the question of whether the time of flight of a quantum particle in a gravitational field might deviate systematically from that of a classical particle due to tunnelling delay, representing a violation of the weak equivalence principle. I investigate this using a model quantum clock to measure the time of flight of a quantum particle in a uniform gravitational field, and show that a violation of the equivalence principle does not occur when the measurement is made far from the turning point of the classical trajectory. The results are then confirmed using the so-called dwell time definition of quantum tunnelling. I conclude with some remarks about the strong equivalence principle in quantum mechanics

  9. Water equivalence of polymer gel dosimeters

    International Nuclear Information System (INIS)

    Sellakumar, P.; James Jebaseelan Samuel, E.; Supe, Sanjay S.

    2007-01-01

    To evaluate the water equivalence and radiation transport properties of polymer gel dosimeters over a wide range of photon and electron energies, 14 different types of polymer gels were considered. Their water equivalence was evaluated in terms of effective atomic number (Z_eff), electron density (ρ_e), photon mass attenuation coefficient (μ/ρ), photon mass energy absorption coefficient (μ_en/ρ) and total stopping power (S/ρ)_tot of electrons, using the XCOM and ESTAR databases. The study showed that the effective atomic numbers of the polymer gels were very close to that of water, that μ/ρ and μ_en/ρ for all polymer gels were in close agreement with those of water, and that the (S/ρ)_tot of electrons in polymer gel dosimeters was within 1% agreement with that of water. From the study we conclude that at lower energies (<80 keV) the polymer gel dosimeters cannot be considered water equivalent, and a study has to be carried out before using a polymer gel for clinical application.
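    The abstract does not state how Z_eff was computed; a common choice is the power-law (Mayneord-type) formula, sketched here and checked against the well-known value for water (the formula choice and exponent are assumptions, not confirmed by the paper):

```python
def z_eff(electron_fractions, m=2.94):
    """Power-law effective atomic number: Z_eff = (sum a_i * Z_i**m)**(1/m),
    where a_i is the fraction of all electrons belonging to the element with
    atomic number Z_i. The exponent m = 2.94 is the conventional choice for
    photoelectric-dominated interactions."""
    return sum(a * z ** m for z, a in electron_fractions.items()) ** (1 / m)

# Water, H2O: 2 of its 10 electrons come from H (Z=1), 8 from O (Z=8).
print(round(z_eff({1: 0.2, 8: 0.8}), 2))  # 7.42, the accepted value for water
```

    A gel whose Z_eff lands near 7.4 mimics water's photoelectric response, which is why Z_eff is a first screening quantity for water equivalence at the low photon energies the abstract flags.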

  10. The equivalence problem for LL- and LR-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus; Gecsec, F.

    It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular

  11. Fiscal adjustments in Europe and Ricardian equivalence

    Directory of Open Access Journals (Sweden)

    V. DE BONIS

    1998-09-01

    Full Text Available According to the ‘Ricardian’ equivalence hypothesis, consumption is dependent on permanent disposable income and current deficits are equivalent to future tax payments. This hypothesis is tested on 14 European countries in the 1990s. The relationships between private sector savings and general government deficit, and the GDP growth rate and the unemployment rate are determined. The results show the change in consumers' behaviour with respect to government deficit, and that expectations of an increase in future wealth are no longer associated with a decrease in deficit.

  12. Equivalent circuit analysis of terahertz metamaterial filters

    KAUST Repository

    Zhang, Xueqian

    2011-01-01

    An equivalent circuit model for the analysis and design of terahertz (THz) metamaterial filters is presented. The proposed model, derived based on LMC equivalent circuits, takes into account the detailed geometrical parameters and the presence of a dielectric substrate with the existing analytic expressions for self-inductance, mutual inductance, and capacitance. The model is in good agreement with the experimental measurements and full-wave simulations. Exploiting the circuit model has made it possible to predict accurately the resonance frequency of the proposed structures and thus, quick and accurate process of designing THz device from artificial metamaterials is offered. ©2011 Chinese Optics Letters.
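
    The LMC circuit model described above associates each metamaterial resonance with a series branch whose resonance follows the familiar f0 = 1/(2π√(LC)). A small sketch with hypothetical element values (not taken from the record):

```python
import math

def resonance_frequency(L, C):
    """Resonance frequency f0 = 1/(2*pi*sqrt(L*C)) of a series LC branch."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical values for a THz metamaterial unit cell:
L_h = 1e-13   # 0.1 pH self-inductance (assumed)
C_h = 1e-18   # 1 aF gap capacitance (assumed)
f0 = resonance_frequency(L_h, C_h)   # ~0.5 THz
```

    In such a model the geometrical parameters and the substrate enter through the analytic expressions for L and C, so retuning the design reduces to re-evaluating this formula.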

  13. Topological equivalence of nonlinear autonomous dynamical systems

    International Nuclear Information System (INIS)

    Nguyen Huynh Phan; Tran Van Nhung

    1995-12-01

    We show in this paper that the autonomous nonlinear dynamical system Σ(A,B,F): x' = Ax + Bu + F(x) is topologically equivalent to the linear dynamical system Σ(A,B,0): x' = Ax + Bu if the projection of A on the complement in R^n of the controllable vectorial subspace is hyperbolic, if the Lipschitz constant of F is sufficiently small (*), and if F(x) = 0 when ||x|| is sufficiently large (**). In particular, if Σ(A,B,0) is controllable, it is topologically equivalent to Σ(A,B,F) provided only that F satisfies (**). (author). 18 refs

  14. Measurements of the personal dose equivalent

    International Nuclear Information System (INIS)

    Scarlat, F.; Scarisoreanu, A.; Badita, E.; Oane, M.; Mitru, E.

    2008-01-01

    Full text: The paper presents the results of measurements of the personal dose equivalent in the rooms adjacent to the NILPRP 7 MeV linear accelerator, by means of the secondary standard chamber T34035 Hp(10). The chamber was calibrated by PTB with a 137Cs source (E_av = 661.6 keV, T_1/2 = 11050 days) and has a calibration factor N_H = 3.17x10^6 Sv/C for the personal dose equivalent, Hp(10), at a depth of 10 mm, in climatic reference conditions. The measurements were made for the two operation modes of the 7 MeV linac: electrons and bremsstrahlung.
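
    With a calibrated chamber, the dose evaluation is a single multiplication: the reading (collected charge) times the calibration factor, optionally corrected for air density. A minimal sketch using the N_H quoted in the record; the charge value and the temperature/pressure correction k_TP are assumed.

```python
def personal_dose_equivalent(charge_C, N_H=3.17e6, k_TP=1.0):
    """Hp(10) in Sv from the chamber charge reading (C), the calibration
    factor N_H (Sv/C, the value quoted in the record) and an air-density
    correction factor k_TP for non-reference temperature/pressure."""
    return N_H * charge_C * k_TP

# e.g. a hypothetical 2 nC reading at reference conditions:
print(personal_dose_equivalent(2e-9))  # 6.34e-3 Sv
```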

  15. Fragility curves for bridges under differential support motions

    DEFF Research Database (Denmark)

    Konakli, Katerina

    2012-01-01

    This paper employs the notion of fragility to investigate the seismic vulnerability of bridges subjected to spatially varying support motions. Fragility curves are developed for four highway bridges in California with vastly different structural characteristics. The input in this analysis consists of simulated ground motion arrays with temporal and spectral nonstationarities, consistent with prescribed spatial variation patterns. Structural damage is quantified through displacement ductility demands obtained from nonlinear time-history analysis. The potential use of the ‘equal displacement’ rule to approximately evaluate displacement demands from analysis of the equivalent linear systems is examined.

  16. Growth curves in Down syndrome with congenital heart disease

    Directory of Open Access Journals (Sweden)

    Caroline D’Azevedo Sica

    Full Text Available SUMMARY Introduction: To assess dietary habits, nutritional status and food frequency in children and adolescents with Down syndrome (DS) and congenital heart disease (CHD). Additionally, we attempted to compare body mass index (BMI) classifications according to the World Health Organization (WHO) curves and curves developed for individuals with DS. Method: Cross-sectional study including individuals with DS and CHD treated at a referral center for cardiology, aged 2 to 18 years. Weight, height, BMI, total energy and food frequency were measured. Nutritional status was assessed using BMI for age and gender, using both the curves developed for patients with DS and those set by the WHO. Results: 68 subjects with DS and CHD were evaluated. Atrioventricular septal defect (AVSD) was the most common heart disease (52.9%). There were differences in BMI classification between the curves proposed for patients with DS and those proposed by the WHO. There was an association between consumption of vitamin E and polyunsaturated fatty acids. Conclusion: The results showed that individuals with DS are mostly considered normal weight for age when evaluated using DS-specific curves. Use of DS-specific curves would be the recommended practice for health professionals, so as to avoid premature diagnosis of overweight and/or obesity in this population.

  17. Energy response of detectors to alpha/beta particles and compatibility of the equivalent factors

    International Nuclear Information System (INIS)

    Lin Bingxing; Li Guangxian; Lin Lixiong

    2011-01-01

    By measuring the detection efficiency and equivalent factors of alpha/beta radiation of different energies on three types of detectors, this paper compares the compatibility of their equivalent factors and discusses the applicability of the detectors to measuring total alpha/beta radioactivity. The results show that the relationships between alpha/beta detection efficiency and particle energy are broadly the same for the three detector types (scintillation, proportional and semiconductor counters). Alpha counting efficiency displays an exponential relation with alpha-particle energy, while beta counting efficiency displays a logarithmic relation with beta-particle energy, although the curves deflect at low energy. Comparison tests of the energy response also show that the alpha and beta equivalent factors of the scintillation and proportional counters are in good agreement, and that the alpha equivalent factors of the semiconductor counters agree well with those of the other two counter types; their beta equivalent factors, however, differ markedly, with the equivalent factors for low-energy beta particles lower than those of the other detectors. The semiconductor counter therefore cannot be used for measuring total radioactivity, or for measurements for the purpose of food safety. (authors)
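
    The exponential efficiency-energy relation reported for alpha counting can be recovered from measured pairs by a log-linear least-squares fit. A sketch on synthetic data: only the functional form comes from the record, the numbers below are invented.

```python
import numpy as np

# Hypothetical (energy in MeV, counting efficiency) pairs following an
# exponential response eff = a * exp(b * E), as the record describes:
E = np.array([4.0, 5.0, 6.0, 7.0])
eff = 0.05 * np.exp(0.4 * E)

# Fitting a line to log(eff) recovers the exponential parameters:
b, log_a = np.polyfit(E, np.log(eff), 1)
a = np.exp(log_a)
```

    A logarithmic relation, as reported for beta counting, would instead be fitted by regressing efficiency against log(E), with a separate treatment of the low-energy deflection the record mentions.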

  18. An Equivalent Circuit of Longitudinal Vibration for a Piezoelectric Structure with Losses.

    Science.gov (United States)

    Yuan, Tao; Li, Chaodong; Fan, Pingqing

    2018-03-22

    Equivalent circuits of piezoelectric structures such as bimorphs and unimorphs conventionally focus on the bending vibration modes. However, the longitudinal vibration modes are rarely considered, even though they also play a remarkable role in piezoelectric devices. Losses, especially elastic loss in the metal substrate, are also generally neglected, which leads to discrepancies compared with experiments. In this paper, a novel equivalent circuit with four kinds of losses is proposed for a beamlike piezoelectric structure under the longitudinal vibration mode. This structure consists of a slender beam as the metal substrate, and a piezoelectric patch which covers a partial length of the beam. In this approach, complex numbers are first used to deal with the four kinds of losses: elastic loss in the metal substrate, and piezoelectric, dielectric, and elastic losses in the piezoelectric patch. Next, based on Mason's model, a new equivalent circuit is developed. Using MATLAB, impedance curves of this structure are simulated by the equivalent circuit method. Experiments were conducted, and good agreement is found between the experimental and equivalent circuit results. This indicates that the introduction of the four losses in an equivalent circuit can increase the accuracy of the results considerably.
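
    Impedance curves of the kind simulated in this record can be sketched with the standard Butterworth-Van Dyke form of a piezoelectric equivalent circuit (a simpler lumped model than the Mason-based one above, and with all element values hypothetical):

```python
import numpy as np

# Butterworth-Van Dyke sketch: a motional R-L-C branch (losses lumped
# into R) in parallel with the clamped capacitance C0. Values assumed.
R, L, C, C0 = 50.0, 2e-3, 4e-9, 2e-9

f = np.linspace(10e3, 200e3, 20000)      # frequency sweep, Hz
w = 2.0 * np.pi * f
Z_motional = R + 1j * w * L + 1.0 / (1j * w * C)
Z = 1.0 / (1j * w * C0 + 1.0 / Z_motional)   # parallel combination

f_res = f[np.argmin(np.abs(Z))]  # |Z| minimum near 1/(2*pi*sqrt(L*C))
```

    The loss resistance R sets the depth and width of the impedance dip; replacing the real constants with complex ones, as the record does, distributes the losses over all four mechanisms instead of lumping them into a single R.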

  19. Relativistic electron-beam transport in curved channels

    International Nuclear Information System (INIS)

    Vittitoe, C.N.; Morel, J.E.; Wright, T.P.

    1982-01-01

    Collisionless single-particle trajectories are modeled for a single plasma channel having one section curved in a circular arc. The magnetic field is developed by superposition of straight and curved channel segments. The plasma density gives charge and beam-current neutralization. High transport efficiencies are found for turning a relativistic electron beam 90° under reasonable conditions of plasma current, beam energy, arc radius, channel radius, and injection distributions in velocity and in position at the channel entrance. Channel exit distributions in velocity and position are found to be consistent with those for a straight plasma channel of equivalent length. Such transport problems are important in any charged-particle-beam application constrained by large diode-to-target distance or by requirements of maximum power deposition in a confined area.

  20. A catalog of special plane curves

    CERN Document Server

    Lawrence, J Dennis

    2014-01-01

    Among the largest, finest collections available-illustrated not only once for each curve, but also for various values of any parameters present. Covers general properties of curves and types of derived curves. Curves illustrated by a CalComp digital incremental plotter. 12 illustrations.

  1. Computation of undulator tuning curves

    International Nuclear Information System (INIS)

    Dejus, Roger J.

    1997-01-01

    Computer codes for fast computation of on-axis brilliance tuning curves and flux tuning curves have been developed. They are valid for an ideal device (regular planar device or a helical device) using the Bessel function formalism. The effects of the particle beam emittance and the beam energy spread on the spectrum are taken into account. The applicability of the codes and the importance of magnetic field errors of real insertion devices are addressed. The validity of the codes has been experimentally verified at the APS, and observed discrepancies are in agreement with the predicted reduction of intensities due to magnetic field errors. The codes are distributed as part of the graphical user interface XOP (X-ray OPtics utilities), which simplifies execution and viewing of the results.
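
    A tuning curve for an ideal planar device traces the on-axis resonance condition as the deflection parameter K is varied. The following sketch uses the standard practical formula for the n-th harmonic photon energy; the machine parameters are assumed, APS-like illustration values, not parameters of the codes described above.

```python
def harmonic_energy_keV(n, E_GeV, lambda_u_cm, K):
    """On-axis photon energy (keV) of the n-th undulator harmonic from the
    standard resonance condition:
        E_n [keV] = 0.9496 * n * E^2 [GeV^2] / (lambda_u [cm] * (1 + K^2/2))
    """
    return 0.9496 * n * E_GeV ** 2 / (lambda_u_cm * (1.0 + K ** 2 / 2.0))

# Assumed APS-like parameters: 7 GeV beam, 3.3 cm undulator period, K = 1
print(harmonic_energy_keV(1, 7.0, 3.3, 1.0))
```

    Sweeping K from its maximum down toward zero sweeps each harmonic upward in energy; the brilliance evaluated along that locus (with the Bessel-function intensity factors, emittance and energy spread applied) is precisely what a tuning-curve code tabulates.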

  2. Curved canals: Ancestral files revisited

    Directory of Open Access Journals (Sweden)

    Jain Nidhi

    2008-01-01

    Full Text Available The aim of this article is to provide an insight into different techniques of cleaning and shaping curved root canals with hand instruments. Although a plethora of root canal instruments like ProFile, ProTaper, LightSpeed® etc. dominate the current scenario, inexpensive conventional root canal hand files such as K-files and flexible files can be used to obtain optimum results when handled meticulously. Special emphasis has been put on the modifications of biomechanical canal preparation in a variety of curved canal cases. This article compiles a series of clinical cases of root canals with curvatures in the middle and apical third, and with S-shaped curvatures, that were successfully completed by employing only conventional root canal hand instruments.

  3. Invariance for Single Curved Manifold

    KAUST Repository

    Castro, Pedro Machado Manhaes de

    2012-01-01

    Recently, it has been shown that, for the Lambert illumination model, only scenes composed of developable objects with a very particular albedo distribution produce a (2D) image with isolines that are (almost) invariant to changes in light direction. In this work, we provide and investigate a more general framework, and we show that, in general, the requirement for such invariances is quite strong and is related to the differential geometry of the objects. More precisely, it is proved that single curved manifolds, i.e., manifolds such that at each point there is at most one principal curvature direction, produce invariant isosurfaces for a certain relevant family of energy functions. In the three-dimensional case, the associated energy function corresponds to the classical Lambert illumination model with albedo. This result is also extended to finite-dimensional scenes composed of single curved objects. © 2012 IEEE.

  4. Invariance for Single Curved Manifold

    KAUST Repository

    Castro, Pedro Machado Manhaes de

    2012-08-01

    Recently, it has been shown that, for the Lambert illumination model, only scenes composed of developable objects with a very particular albedo distribution produce a (2D) image with isolines that are (almost) invariant to changes in light direction. In this work, we provide and investigate a more general framework, and we show that, in general, the requirement for such invariances is quite strong and is related to the differential geometry of the objects. More precisely, it is proved that single curved manifolds, i.e., manifolds such that at each point there is at most one principal curvature direction, produce invariant isosurfaces for a certain relevant family of energy functions. In the three-dimensional case, the associated energy function corresponds to the classical Lambert illumination model with albedo. This result is also extended to finite-dimensional scenes composed of single curved objects. © 2012 IEEE.

  5. A Comparison of the Effects of Non-Normal Distributions on Tests of Equivalence

    Science.gov (United States)

    Ellington, Linda F.

    2011-01-01

    Statistical theory and its application provide the foundation to modern systematic inquiry in the behavioral, physical and social sciences disciplines (Fisher, 1958; Wilcox, 1996). It provides the tools for scholars and researchers to operationalize constructs, describe populations, and measure and interpret the relations between populations and…

  6. Curved Folded Plate Timber Structures

    OpenAIRE

    Buri, Hans Ulrich; Stotz, Ivo; Weinand, Yves

    2011-01-01

    This work investigates the development of a Curved Origami Prototype made with timber panels. In the last fifteen years the timber industry has developed new, large size, timber panels. Composition and dimensions of these panels and the possibility of milling them with Computer Numerical Controlled machines shows great potential for folded plate structures. To generate the form of these structures we were inspired by Origami, the Japanese art of paper folding. Common paper tessellations are c...

  7. Projection-based curve clustering

    International Nuclear Information System (INIS)

    Auder, Benjamin; Fischer, Aurelie

    2012-01-01

    This paper focuses on unsupervised curve classification in the context of nuclear industry. At the Commissariat a l'Energie Atomique (CEA), Cadarache (France), the thermal-hydraulic computer code CATHARE is used to study the reliability of reactor vessels. The code inputs are physical parameters and the outputs are time evolution curves of a few other physical quantities. As the CATHARE code is quite complex and CPU time-consuming, it has to be approximated by a regression model. This regression process involves a clustering step. In the present paper, the CATHARE output curves are clustered using a k-means scheme, with a projection onto a lower dimensional space. We study the properties of the empirically optimal cluster centres found by the clustering method based on projections, compared with the 'true' ones. The choice of the projection basis is discussed, and an algorithm is implemented to select the best projection basis among a library of orthonormal bases. The approach is illustrated on a simulated example and then applied to the industrial problem. (authors)
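
    The two-step scheme described above (project each output curve onto a low-dimensional orthonormal basis, then run k-means on the coefficients) can be sketched as follows. The data are synthetic stand-ins for the CATHARE outputs, the basis is simply the leading right singular vectors rather than a basis selected from a library, and the k-means initialization is simplified to be deterministic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy curves: two groups of noisy sinusoids on a common time grid.
t = np.linspace(0.0, 1.0, 200)
group_a = [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
group_b = [np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
curves = np.vstack(group_a + group_b)

# Step 1: project onto a low-dimensional orthonormal basis
# (here: leading right singular vectors of the centred data).
_, _, Vt = np.linalg.svd(curves - curves.mean(axis=0), full_matrices=False)
coords = curves @ Vt[:5].T          # 5-dimensional coefficients per curve

# Step 2: k-means on the projected coefficients.
def kmeans(X, k, iters=50):
    centers = X[[0, X.shape[0] // 2]] if k == 2 else X[:k].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        centers = np.vstack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(coords, 2)
```

    The projection step is what makes clustering hundreds of long output curves tractable, and the choice of projection basis is exactly the degree of freedom the paper optimizes over a library of orthonormal bases.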

  8. Elementary particles in curved spaces

    International Nuclear Information System (INIS)

    Lazanu, I.

    2004-01-01

    The theories of particle physics are currently developed in Minkowski space-time, starting from the Poincare group. A physical theory in flat space can be seen as the limit of a more general physical theory in a curved space. At the present time a theory of particles in curved space does not exist, and thus the only possibility is to extend the existing theories to these spaces. A formidable obstacle to the extension of physical models is the absence of groups of motion in more general Riemann spaces. A space of constant curvature has a group of motion that, although it differs from that of a flat space, has the same number of parameters and could permit some generalisations. In this contribution we try to investigate some physical implications of the presumable existence of elementary particles in curved space. In de Sitter space (dS) the invariant rest mass is a combination of the Poincare rest mass and the generalised angular momentum of a particle, and it permits establishing a correlation with the vacuum energy and with the cosmological constant. The consequences are significant, because in an experiment the local structure of space-time departs from Minkowski space and becomes a dS or AdS space-time. Discrete symmetry characteristics of the dS/AdS group suggest some arguments for the possible existence of 'mirror matter'. (author)

  9. 77 FR 32632 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Science.gov (United States)

    2012-06-01

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION... accordance with 40 CFR Part 53, three new equivalent methods: One for measuring concentrations of nitrogen... INFORMATION: In accordance with regulations at 40 CFR Part 53, the EPA evaluates various methods for...

  10. Wind-Induced Fatigue Analysis of High-Rise Steel Structures Using Equivalent Structural Stress Method

    Directory of Open Access Journals (Sweden)

    Zhao Fang

    2017-01-01

    Full Text Available Welded beam-to-column connections of high-rise steel structures are susceptible to fatigue damage under wind loading. However, most fatigue assessments in the field of civil engineering are based on nominal stress or hot spot stress theories, which have the disadvantage of depending on the meshing style and on the large number of stress-life (S-N) curves to select from. To address this problem, in this paper the equivalent structural stress method, which has the advantages of mesh insensitivity and of unifying different S-N curves into one, is introduced for the wind-induced fatigue assessment of a large-scale, complicated high-rise steel structure. A multi-scale finite element model is established and the corresponding wind loading is simulated. Fatigue life assessments using the equivalent structural stress method, the hot spot stress method and the nominal stress method are performed, the results are verified, and comparisons are made. The mesh insensitivity is also verified. The results show that the lateral weld toe of the butt weld connecting the beam flange plate and the column is the location where fatigue damage is most likely to happen. The nominal stress method treats fatigue assessment of welds in a more global way, by averaging all the stress on the weld section, whereas in the equivalent structural stress method and the hot spot stress method local stress concentration can be taken into account more precisely.
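
    Whichever stress definition is used, the final life estimate follows from an S-N curve and a Palmgren-Miner damage sum over the stress-range histogram. A generic sketch: the S-N constants and the histogram below are invented illustration values, and this is not the equivalent-structural-stress master curve itself.

```python
def sn_cycles_to_failure(stress_range, C=1e12, m=3.0):
    """Cycles to failure from a generic S-N curve N = C / S^m
    (C and m are assumed illustration values, S in MPa)."""
    return C / stress_range ** m

def miner_damage(histogram):
    """Palmgren-Miner damage sum D = sum(n_i / N_i) over
    (stress_range_MPa, n_cycles) pairs."""
    return sum(n / sn_cycles_to_failure(S) for S, n in histogram)

# Hypothetical wind-induced stress-range histogram (MPa, cycles per year):
damage_per_year = miner_damage([(40, 2e5), (60, 5e4), (80, 1e4)])
fatigue_life_years = 1.0 / damage_per_year
```

    The practical advantage claimed for the equivalent structural stress method is that one mesh-insensitive master curve replaces the detail-dependent choice of C and m made here.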

  11. Confluence Modulo Equivalence in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Kirkeby, Maja Hanne

    2015-01-01

    Previous results on confluence for Constraint Handling Rules, CHR, are generalized to take into account user-defined state equivalence relations. This allows a much larger class of programs to enjoy the advantages of confluence, which include various optimization techniques and simplified...

  12. Free Fall and the Equivalence Principle Revisited

    Science.gov (United States)

    Pendrill, Ann-Marie

    2017-01-01

    Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton's law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field.…

  13. Estimation of Toxicity Equivalent Concentration (TEQ) of ...

    African Journals Online (AJOL)

    Estimation of Toxicity Equivalent Concentration (TEQ) of carcinogenic polycyclic aromatic hydrocarbons in soils from Idu Ekpeye playground and University of Port ... Effective soil remediation and detoxification method like Dispersion by chemical reaction technology should be deployed to clean-up sites to avoid soil toxicity ...

  14. Chemical equivalence assessment of three brands of ...

    African Journals Online (AJOL)

    Assay for content of active ingredients is a critical test of drug quality; failure to meet the standard for content of active ingredients will result in subtherapeutic doses. Three brands (A, B and C) of carbamazepine were assayed to determine their chemical equivalence as well as their anticonvulsant activities. This was ...

  15. On Behavioral Equivalence of Rational Representations

    NARCIS (Netherlands)

    Trentelman, Harry L.; Willems, JC; Hara, S; Ohta, Y; Fujioka, H

    2010-01-01

    This article deals with the equivalence of representations of behaviors of linear differential systems. In general, the behavior of a given linear differential system has many different representations. In this paper we restrict ourselves to kernel representations and image representations. Two kernel

  16. Visual Equivalence and Amodal Completion in Cuttlefish.

    Science.gov (United States)

    Lin, I-Rong; Chiao, Chuan-Chin

    2017-01-01

    Modern cephalopods are notably the most intelligent invertebrates, and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After the cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images were used, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat the size-reduced and sketched versions of the training images as visually equivalent to the originals. Cuttlefish were also capable of recognizing partially occluded versions of the training image. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects when visual information is partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods.

  17. Possibility and necessity measures and integral equivalence

    Czech Academy of Sciences Publication Activity Database

    Chen, T.; Mesiar, Radko; Li, J.; Stupňanová, A.

    2017-01-01

    Roč. 86, č. 1 (2017), s. 62-72 ISSN 0888-613X Institutional support: RVO:67985556 Keywords : Integral equivalence * Necessity measure * Possibility measure * Survival function * Universal integral Subject RIV: BA - General Mathematics OBOR OECD: Statistics and probability Impact factor: 2.845, year: 2016 http://library.utia.cas.cz/separaty/2017/E/mesiar-0477092.pdf

  18. Fuel Cell Equivalent Electric Circuit Parameter Mapping

    DEFF Research Database (Denmark)

    Jeppesen, Christian; Zhou, Fan; Andreasen, Søren Juhl

    In this work a simple model of a fuel cell is investigated for diagnostic purposes. The fuel cell is characterized with respect to its electrical impedance at non-faulty conditions and under variations in load current. Based on this, the equivalent electrical circuit parameters can

  19. Weak equivalence classes of complex vector bundles

    Czech Academy of Sciences Publication Activity Database

    Le, Hong-Van

    LXXVII, č. 1 (2008), s. 23-30 ISSN 0862-9544 R&D Projects: GA AV ČR IAA100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : Chern classes * complex Grassmannians * weak equivalence Subject RIV: BA - General Mathematics

  20. Violation of Equivalence Principle and Solar Neutrinos

    International Nuclear Information System (INIS)

    Gago, A.M.; Nunokawa, H.; Zukanovich Funchal, R.

    2001-01-01

    We have updated the analysis for the solution to the solar neutrino problem by long-wavelength neutrino oscillations induced by a tiny breakdown of the weak equivalence principle of general relativity, and obtained a very good fit to all the solar neutrino data.

  1. Bilingual Dictionaries and Communicative Equivalence for a ...

    African Journals Online (AJOL)

    This implies that a bilingual dictionary becomes a polyfunctional instrument, presenting more information than just translation equivalents. ... With the emphasis on the user perspective, metalexicographical criteria are used to investigate problems regarding the access structure and the addressing procedures in Afrikaans ...

  2. Equivalent operator preconditioning for elliptic problems

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2009-01-01

    Roč. 50, č. 3 (2009), s. 297-380 ISSN 1017-1398 Institutional research plan: CEZ:AV0Z30860518 Keywords : Elliptic problem * Conjugate gradient method * preconditioning * equivalent operators * compact operators Subject RIV: BA - General Mathematics Impact factor: 0.716, year: 2009 http://en.scientificcommons.org/42514649

  3. Superstring field theory equivalence: Ramond sector

    International Nuclear Information System (INIS)

    Kroyter, Michael

    2009-01-01

    We prove that the finite gauge transformation of the Ramond sector of the modified cubic superstring field theory is ill-defined due to collisions of picture changing operators. Despite this problem, we study to what extent a bijective classical correspondence between this theory and the (presumably consistent) non-polynomial theory could exist. We find that the classical equivalence between these two theories can almost be extended to the Ramond sector: we construct mappings between the string fields (NS and Ramond, including Chan-Paton factors and the various GSO sectors) of the two theories that send solutions to solutions in a way that respects the linearized gauge symmetries on both sides and keeps the action of the solutions invariant. The perturbative spectrum around equivalent solutions is also isomorphic. The problem with the cubic theory implies that the correspondence of the linearized gauge symmetries cannot be extended to a correspondence of the finite gauge symmetries. Hence, our equivalence is only formal, since it relates a consistent theory to an inconsistent one. Nonetheless, we believe that the fact that the equivalence formally works suggests that a consistent modification of the cubic theory exists. We construct a theory that can be considered as a first step towards a consistent RNS cubic theory.

  4. Equivalence Scales for the Former West Germany

    NARCIS (Netherlands)

    Charlier, E.

    1997-01-01

    Equivalence scales provide answers to questions like how much a household with four children needs to spend compared to a household with two children or how much a childless couple needs to spend compared to a single person household to attain the same welfare level. These are important questions
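
    A concrete example of such a scale is the widely used OECD-modified equivalence scale; the record does not say which scale the authors estimate for West Germany, so this one is shown only to make the notion concrete.

```python
def oecd_modified_scale(adults, children):
    """OECD-modified equivalence scale: 1.0 for the first adult,
    0.5 per additional adult, 0.3 per child under 14."""
    return 1.0 + 0.5 * (adults - 1) + 0.3 * children

def equivalised_income(income, adults, children):
    """Household income divided by the equivalence scale, making welfare
    levels comparable across household compositions."""
    return income / oecd_modified_scale(adults, children)

# A couple with two children has scale 2.1, so to match a single person's
# welfare at income 20000 it needs 2.1 times that household income:
print(oecd_modified_scale(2, 2))        # 2.1
print(equivalised_income(42000, 2, 2))  # 20000.0
```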

  5. Confluence Modulo Equivalence in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Kirkeby, Maja Hanne

    2014-01-01

    Previous results on confluence for Constraint Handling Rules, CHR, are generalized to take into account user-defined state equivalence relations. This allows a much larger class of programs to enjoy the advantages of confluence, which include various optimization techniques and simplified...

  6. Equivalence of rational representations of behaviors

    NARCIS (Netherlands)

    Gottimukkala, Sasanka; Fiaz, Shaik; Trentelman, H.L.

    This article deals with the equivalence of representations of behaviors of linear differential systems. In general, the behavior of a given linear differential system has many different representations. In this paper we restrict ourselves to kernel and image representations. Two kernel

  7. Dual Smarandache Curves and Smarandache Ruled Surfaces

    OpenAIRE

    Tanju KAHRAMAN; Mehmet ÖNDER; H. Hüseyin UGURLU

    2013-01-01

    In this paper, by considering dual geodesic trihedron (dual Darboux frame) we define dual Smarandache curves lying fully on dual unit sphere S^2 and corresponding to ruled surfaces. We obtain the relationships between the elements of curvature of dual spherical curve (ruled surface) x(s) and its dual Smarandache curve (Smarandache ruled surface) x1(s) and we give an example for dual Smarandache curves of a dual spherical curve.

  8. SU-F-T-181: Proton Therapy Tissue-Equivalence of 3D Printed Materials

    International Nuclear Information System (INIS)

    Taylor, P; Craft, D; Followill, D; Howell, R

    2016-01-01

    Purpose: This work investigated the proton tissue-equivalence of various 3D printed materials. Methods: Three 3D printers were used to create 5 cm cubic phantoms made of different plastics with varying percentages of infill. White resin, polylactic acid (PLA), and NinjaFlex plastics were used. The infills ranged from 15% to 100%. Each phantom was scanned with a CT scanner to obtain the HU value. The relative linear stopping power (RLSP) was then determined using a multi-layer ion chamber in a 200 MeV proton beam. The RLSP was measured both parallel and perpendicular to the print direction for each material. Results: The HU values of the materials ranged from lung-equivalent (−820 HU, σ = 160) when using a low infill, to soft-tissue-equivalent (159 HU, σ = 12). The RLSP of the materials depended on the orientation of the beam relative to the print direction. When the proton beam was parallel to the print direction, the RLSP was generally higher than the RLSP in the perpendicular orientation, by up to 45%. This difference was smaller (less than 6%) for the materials with 100% infill. For low-infill cubes irradiated parallel to the print direction, the SOBP curve showed extreme degradation of the beam in the distal region. The materials with 15–25% infill had wide-ranging agreement with a clinical HU-RLSP conversion curve, with some measurements falling within 1% of the curve and others deviating up to 45%. The materials with 100% infill all fell within 7% of the curve. Conclusion: While some materials tested fall within 1% of a clinical HU-RLSP curve, caution should be taken when using 3D printed materials with proton therapy, as the orientation of the beam relative to the print direction can result in a large change in RLSP. Further investigation is needed to measure how the infill pattern affects the material RLSP. This work was supported by PHS grant CA180803.
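
    A clinical HU-RLSP conversion of the kind the measurements were compared against is typically a piecewise-linear lookup. The sketch below uses invented calibration points, anchored loosely on the HU values quoted in the record; a real curve is scanner- and site-specific.

```python
import numpy as np

# Hypothetical HU -> relative linear stopping power calibration points
# (increasing in HU; the -820 and 159 HU anchors echo the record's values):
hu_points   = np.array([-1000.0, -820.0, 0.0, 159.0, 1000.0])
rlsp_points = np.array([0.001, 0.20, 1.00, 1.10, 1.55])

def hu_to_rlsp(hu):
    """Piecewise-linear interpolation of RLSP from a CT number."""
    return np.interp(hu, hu_points, rlsp_points)

print(hu_to_rlsp(0.0))  # 1.0: water maps to unit stopping power
```

    The record's finding is that a measured phantom may sit far off any such curve depending on print orientation, which is exactly why a lookup like this cannot be trusted blindly for 3D printed materials.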

  9. SU-F-T-181: Proton Therapy Tissue-Equivalence of 3D Printed Materials

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P; Craft, D; Followill, D; Howell, R [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: This work investigated the proton tissue-equivalence of various 3D printed materials. Methods: Three 3D printers were used to create 5 cm cubic phantoms made of different plastics with varying percentages of infill. White resin, polylactic acid (PLA), and NinjaFlex plastics were used. The infills ranged from 15% to 100%. Each phantom was scanned with a CT scanner to obtain the HU value. The relative linear stopping power (RLSP) was then determined using a multi-layer ion chamber in a 200 MeV proton beam. The RLSP was measured both parallel and perpendicular to the print direction for each material. Results: The HU values of the materials ranged from lung-equivalent (−820 HU, σ160) when using a low infill, to soft-tissue-equivalent (159 HU, σ12). The RLSP of the materials depended on the orientation of the beam relative to the print direction. When the proton beam was parallel to the print direction, the RLSP was generally higher than the RLSP in the perpendicular orientation, by up to 45%. This difference was smaller (less than 6%) for the materials with 100% infill. For low infill cubes irradiated parallel to the print direction, the SOBP curve showed extreme degradation of the beam in the distal region. The materials with 15–25% infill had wide-ranging agreement with a clinical HU-RLSP conversion curve, with some measurements falling within 1% of the curve and others deviating up to 45%. The materials with 100% infill all fell within 7% of the curve. Conclusion: While some materials tested fall within 1% of a clinical HU-RLSP curve, caution should be taken when using 3D printed materials with proton therapy, as the orientation of the beam relative to the print direction can result in a large change in RLSP. Further investigation is needed to measure how the infill pattern affects the material RLSP. This work was supported by PHS grant CA180803.
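Such a clinical HU-RLSP conversion is typically a piecewise-linear calibration curve; the comparison described above can be sketched with a simple interpolation. All calibration points below are hypothetical illustrative values, not a validated clinical curve:

```python
import numpy as np

# Hypothetical piecewise-linear HU -> RLSP calibration points
# (illustrative values only, not a validated clinical curve).
hu_points   = np.array([-1000.0, -820.0, -500.0, 0.0, 159.0, 1000.0])
rlsp_points = np.array([  0.001,   0.20,   0.50, 1.00, 1.10,  1.60])

def hu_to_rlsp(hu):
    """Interpolate relative linear stopping power from a measured HU value."""
    return np.interp(hu, hu_points, rlsp_points)

# Lung-like low-infill print vs soft-tissue-like 100%-infill print
rlsp_lung   = hu_to_rlsp(-820.0)
rlsp_tissue = hu_to_rlsp(159.0)
```

A measured RLSP can then be compared against `hu_to_rlsp(measured_hu)` to express the deviation from the curve in percent.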

  10. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions.
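The construction described above, a chi-square statistic with two degrees of freedom built from the differences of the binormal ROC parameters, can be sketched as follows, assuming the (a, b) estimates and their covariance matrices have already been obtained by maximum likelihood:

```python
import numpy as np
from scipy.stats import chi2

def roc_difference_test(p1, cov1, p2, cov2):
    """
    Approximate chi-square test (2 df) for the difference between two
    binormal ROC curves, given ML estimates (a, b) of each curve's two
    parameters and their 2x2 covariance matrices from independent data.
    """
    d = np.asarray(p1, float) - np.asarray(p2, float)
    # Covariance of the difference of independent estimates
    cov = np.asarray(cov1, float) + np.asarray(cov2, float)
    stat = float(d @ np.linalg.solve(cov, d))
    p_value = float(chi2.sf(stat, df=2))
    return stat, p_value

# Identical parameter estimates give a statistic of zero (no difference)
stat, p = roc_difference_test([1.2, 0.9], np.eye(2) * 0.01,
                              [1.2, 0.9], np.eye(2) * 0.01)
```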

  11. W-curve alignments for HIV-1 genomic comparisons.

    Directory of Open Access Journals (Sweden)

    Douglas J Cork

    2010-06-01

    Full Text Available The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A treebuilding method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison

  12. W-curve alignments for HIV-1 genomic comparisons.

    Science.gov (United States)

    Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A treebuilding method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison technique of

  13. Characterizations of Space Curves According to Bishop Darboux Vector in Euclidean 3-Space E3

    OpenAIRE

    Huseyin KOCAYIGIT; Ali OZDEMIR

    2014-01-01

    In this paper, we obtained some characterizations of space curves according to the Bishop frame in Euclidean 3-space E3 by using the Laplacian operator and Levi-Civita connection. Furthermore, we gave the general differential equations which characterize the space curves according to the Bishop Darboux vector and the normal Bishop Darboux vector.

  14. Integrable motion of curves in self-consistent potentials: Relation to spin systems and soliton equations

    Energy Technology Data Exchange (ETDEWEB)

    Myrzakulov, R.; Mamyrbekova, G.K.; Nugmanova, G.N.; Yesmakhanova, K.R. [Eurasian International Center for Theoretical Physics and Department of General and Theoretical Physics, Eurasian National University, Astana 010008 (Kazakhstan); Lakshmanan, M., E-mail: lakshman@cnld.bdu.ac.in [Centre for Nonlinear Dynamics, School of Physics, Bharathidasan University, Tiruchirapalli 620 024 (India)

    2014-06-13

    Motion of curves and surfaces in R³ leads to nonlinear evolution equations which are often integrable. They are also intimately connected to the dynamics of spin chains in the continuum limit and integrable soliton systems through geometric and gauge symmetric connections/equivalence. Here we point out the fact that a more general situation in which the curves evolve in the presence of additional self-consistent vector potentials can lead to interesting generalized spin systems with self-consistent potentials or soliton equations with self-consistent potentials. We obtain the general form of the evolution equations of underlying curves and report specific examples of generalized spin chains and soliton equations. These include the principal chiral model and various Myrzakulov spin equations in (1+1) dimensions and their geometrically equivalent generalized nonlinear Schrödinger (NLS) family of equations, including Hirota–Maxwell–Bloch equations, all in the presence of self-consistent potential fields. The associated gauge equivalent Lax pairs are also presented to confirm their integrability. - Highlights: • Geometry of continuum spin chain with self-consistent potentials explored. • Mapping on moving space curves in R³ in the presence of potential fields carried out. • Equivalent generalized nonlinear Schrödinger (NLS) family of equations identified. • Integrability of identified nonlinear systems proved by deducing appropriate Lax pairs.

  15. Multi-MW wind turbine power curve measurements using remote sensing instruments - the first Hoevsoere campaign

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, R.; Courtney, M.

    2009-02-15

    Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the scatter in the power curve significantly. Two LiDARs and a SoDAR are used to measure the wind profile in front of a wind turbine. These profiles are used to calculate the equivalent wind speed. LiDARs are found to be more accurate than SoDARs and therefore more suitable for power performance measurement. The equivalent wind speed calculated from LiDAR profile measurements gave a small reduction of the power curve uncertainty. Several factors can explain why this difference is smaller than expected, including the experimental design and errors pertaining to the LiDAR at that time. This first measurement campaign shows that use of the equivalent wind speed at least results in a power curve with no more scatter than using the conventional method. (au)
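One common way to define such a shear-aware equivalent wind speed is to weight the cubed speeds measured at several heights by the rotor area each measurement represents; this is a generic kinetic-energy-flux sketch, not necessarily the exact definition used in the report:

```python
import numpy as np

def equivalent_wind_speed(speeds, segment_areas):
    """
    Shear-corrected equivalent wind speed: the cube root of the
    area-weighted mean of the cubed wind speeds measured at several
    heights across the rotor (kinetic-energy-flux weighting).
    """
    u = np.asarray(speeds, float)
    a = np.asarray(segment_areas, float)
    return (np.sum(u**3 * a) / np.sum(a)) ** (1.0 / 3.0)

# With a uniform profile the equivalent speed equals the hub-height speed
u_eq = equivalent_wind_speed([8.0, 8.0, 8.0], [10.0, 20.0, 10.0])
```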

  16. Problems associated with use of the logarithmic equivalent strain in high pressure torsion

    International Nuclear Information System (INIS)

    Jonas, J J; Aranas, C Jr

    2014-01-01

    The logarithmic 'equivalent' strain is frequently recommended for description of the experimental flow curves determined in high pressure torsion (HPT) tests. Some experimental results determined at -196 and 190 °C on a 2024 aluminum alloy are plotted using both the von Mises and logarithmic equivalent strains. Three types of problems associated with use of the latter are described. The first involves the lack of work conjugacy between the logarithmic and shear stress/shear strain curves, a topic that has been discussed earlier. The second concerns the problems associated with testing at constant logarithmic strain rate, a feature of particular importance when the material is rate sensitive. The third type of problem involves the 'history dependence' of this measure in that the incremental logarithmic strain depends on whether the prior strain accumulated in the sample is known or not. This is a difficulty that does not affect use of the von Mises equivalent strain. For these reasons, it is concluded that the qualifier 'equivalent' should not be used when the logarithmic strain is employed to describe HPT results
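For simple shear of magnitude γ the two strain measures discussed above can be compared directly; the logarithmic (Hencky) form below is a standard textbook expression used here for illustration, not necessarily the exact measure debated in the paper:

```python
import math

def von_mises_strain(gamma):
    """von Mises equivalent strain for simple shear of magnitude gamma."""
    return gamma / math.sqrt(3.0)

def logarithmic_strain(gamma):
    """
    Hencky (logarithmic) equivalent strain for simple shear:
    (2/sqrt(3)) * ln(gamma/2 + sqrt(1 + gamma**2/4)), written via asinh.
    """
    return 2.0 / math.sqrt(3.0) * math.asinh(gamma / 2.0)

# The two measures agree at small strain but diverge strongly at the
# large shear strains typical of high pressure torsion.
small = (von_mises_strain(0.01), logarithmic_strain(0.01))
large = (von_mises_strain(20.0), logarithmic_strain(20.0))
```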

  17. On the backreaction of scalar and spinor quantum fields in curved spacetimes

    International Nuclear Information System (INIS)

    Hack, Thomas-Paul

    2010-10-01

    In the first instance, the present work is concerned with generalising constructions and results in quantum field theory on curved spacetimes from the well-known case of the Klein-Gordon field to Dirac fields. To this end, the enlarged algebra of observables of the Dirac field is constructed in the algebraic framework. This algebra contains normal-ordered Wick polynomials in particular, and an extended analysis of one of its elements, the stress-energy tensor, is performed. Based on detailed calculations of the Hadamard coefficients of the Dirac field, it is found that a local, covariant, and covariantly conserved construction of the stress-energy tensor is possible. Additionally, the mathematically sound Hadamard regularisation prescription of the stress-energy tensor is compared to the mathematically less rigorous DeWitt-Schwinger regularisation. It is found that both prescriptions are essentially equivalent; particularly, it turns out to be possible to formulate the DeWitt-Schwinger prescription in a well-defined way. While the aforementioned results hold in generic curved spacetimes, particular attention is also devoted to a specific class of Robertson-Walker spacetimes with a lightlike Big Bang hypersurface. Employing holographic methods, Hadamard states for the Klein-Gordon and the Dirac field are constructed. These states are preferred in the sense that they constitute asymptotic equilibrium states in the limit to the Big Bang hypersurface. Finally, solutions of the semiclassical Einstein equation for quantum fields of arbitrary spin are analysed in the flat Robertson-Walker case. One finds that these solutions explain the measured supernova Ia data as well as the ΛCDM model. Hence, one arrives at a natural explanation of dark energy and a simple quantum model of cosmological dark matter. (orig.)

  18. On the backreaction of scalar and spinor quantum fields in curved spacetimes

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Thomas-Paul

    2010-10-15

    In the first instance, the present work is concerned with generalising constructions and results in quantum field theory on curved spacetimes from the well-known case of the Klein-Gordon field to Dirac fields. To this end, the enlarged algebra of observables of the Dirac field is constructed in the algebraic framework. This algebra contains normal-ordered Wick polynomials in particular, and an extended analysis of one of its elements, the stress-energy tensor, is performed. Based on detailed calculations of the Hadamard coefficients of the Dirac field, it is found that a local, covariant, and covariantly conserved construction of the stress-energy tensor is possible. Additionally, the mathematically sound Hadamard regularisation prescription of the stress-energy tensor is compared to the mathematically less rigorous DeWitt-Schwinger regularisation. It is found that both prescriptions are essentially equivalent; particularly, it turns out to be possible to formulate the DeWitt-Schwinger prescription in a well-defined way. While the aforementioned results hold in generic curved spacetimes, particular attention is also devoted to a specific class of Robertson-Walker spacetimes with a lightlike Big Bang hypersurface. Employing holographic methods, Hadamard states for the Klein-Gordon and the Dirac field are constructed. These states are preferred in the sense that they constitute asymptotic equilibrium states in the limit to the Big Bang hypersurface. Finally, solutions of the semiclassical Einstein equation for quantum fields of arbitrary spin are analysed in the flat Robertson-Walker case. One finds that these solutions explain the measured supernova Ia data as well as the ΛCDM model. Hence, one arrives at a natural explanation of dark energy and a simple quantum model of cosmological dark matter. (orig.)

  19. The exp-normal distribution is infinitely divisible

    OpenAIRE

    Pinelis, Iosif

    2018-01-01

    Let $Z$ be a standard normal random variable (r.v.). It is shown that the distribution of the r.v. $\ln|Z|$ is infinitely divisible; equivalently, the standard normal distribution considered as the distribution on the multiplicative group over $\mathbb{R}\setminus\{0\}$ is infinitely divisible.

  20. An equivalent body surface charge model representing three-dimensional bioelectrical activity

    Science.gov (United States)

    He, B.; Chernyak, Y. B.; Cohen, R. J.

    1995-01-01

    A new surface-source model has been developed to account for the bioelectrical potential on the body surface. A single-layer surface-charge model on the body surface has been developed to equivalently represent bioelectrical sources inside the body. The boundary conditions on the body surface are discussed in relation to the surface-charge in a half-space conductive medium. The equivalent body surface-charge is shown to be proportional to the normal component of the electric field on the body surface just outside the body. The spatial resolution of the equivalent surface-charge distribution appears intermediate between those of the body surface potential distribution and the body surface Laplacian distribution. An analytic relationship between the equivalent surface-charge and the surface Laplacian of the potential was found for a half-space conductive medium. The effects of finite spatial sampling and noise on the reconstruction of the equivalent surface-charge were evaluated by computer simulations. It was found through computer simulations that the reconstruction of the equivalent body surface-charge from the body surface Laplacian distribution is very stable against noise and finite spatial sampling. The present results suggest that the equivalent body surface-charge model may provide an additional insight to our understanding of bioelectric phenomena.

  1. Variation of indoor radon concentration and ambient dose equivalent rate in different outdoor and indoor environments

    Energy Technology Data Exchange (ETDEWEB)

    Stojanovska, Zdenka; Janevik, Emilija; Taleski, Vaso [Goce Delcev University, Faculty of Medical Sciences, Stip (Macedonia, The Former Yugoslav Republic of); Boev, Blazo [Goce Delcev University, Faculty of Natural and Technical Sciences, Stip (Macedonia, The Former Yugoslav Republic of); Zunic, Zora S. [University of Belgrade, Institute of Nuclear Sciences ''Vinca'', Belgrade (Serbia); Ivanova, Kremena; Tsenova, Martina [National Center of Radiobiology and Radiation Protection, Sofia (Bulgaria); Ristova, Mimoza [University in Ss. Cyril and Methodius, Faculty of Natural Sciences and Mathematic, Institute of Physics, Skopje (Macedonia, The Former Yugoslav Republic of); Ajka, Sorsa [Croatian Geological Survey, Zagreb (Croatia); Bossew, Peter [German Federal Office for Radiation Protection, Berlin (Germany)

    2016-05-15

    Subject of this study is an investigation of the variations of indoor radon concentration and ambient dose equivalent rate in outdoor and indoor environments of 40 dwellings, 31 elementary schools and five kindergartens. The buildings are located in three municipalities of two, geologically different, areas of the Republic of Macedonia. Indoor radon concentrations were measured by nuclear track detectors, deployed in the most occupied room of the building, between June 2013 and May 2014. During the deploying campaign, indoor and outdoor ambient dose equivalent rates were measured simultaneously at the same location. It appeared that the measured values varied from 22 to 990 Bq/m³ for indoor radon concentrations, from 50 to 195 nSv/h for outdoor ambient dose equivalent rates, and from 38 to 184 nSv/h for indoor ambient dose equivalent rates. The geometric mean value of indoor to outdoor ambient dose equivalent rates was found to be 0.88, i.e. the outdoor ambient dose equivalent rates were on average higher than the indoor ambient dose equivalent rates. All measured quantities can reasonably well be described by log-normal distributions. A detailed statistical analysis of factors which influence the measured quantities is reported. (orig.)
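The geometric mean reported for the indoor-to-outdoor ratios is the natural location measure for log-normally distributed quantities; a minimal sketch, with hypothetical ratio values rather than the study's data:

```python
import math

def geometric_mean(values):
    """Geometric mean: exp of the arithmetic mean of the logs,
    the appropriate central value for log-normal data."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical indoor/outdoor ambient dose equivalent rate ratios
ratios = [0.7, 0.9, 1.1, 0.85]
gm = geometric_mean(ratios)  # a value below 1 means outdoor rates dominate
```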

  2. Topological Equivalence of Objects. Teacher's Guide for Use with Stretching and Bending. Working Paper No. 18a.

    Science.gov (United States)

    Shah, Sair Ali

    The notions of topological equivalence for one-, two-, and three-dimensional figures, as well as for graphs and networks, are developed for classroom use with children between the ages of three and ten. Properties of open and closed curves are also examined. This manual, addressed to the teacher, describes several activities related to each…

  3. Distribution of Snow and Maximum Snow Water Equivalent Obtained by LANDSAT Data and Degree Day Method

    Science.gov (United States)

    Takeda, K.; Ochiai, H.; Takeuchi, S.

    1985-01-01

    Maximum snow water equivalent and snowcover distribution are estimated using several LANDSAT scenes taken during the snowmelt season over a four-year period. The test site is Okutadami-gawa Basin located in the central position of Tohoku-Kanto-Chubu District. The year-to-year normalization for snowmelt volume computation on the snow line is conducted by year-to-year correction of degree days using the snowcover percentage within the test basin obtained from LANDSAT data. The maximum snow water equivalent map in the test basin is generated based on the normalized snowmelt volume on the snow line extracted from four LANDSAT scenes taken in different years. The snowcover distribution on an arbitrary day during the 1982 snowmelt season is estimated from the maximum snow water equivalent map. The estimated snowcover is compared with the snowcover area extracted from NOAA-AVHRR data taken on the same day. The applicability of the snow estimation using LANDSAT data is discussed.
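The degree-day method underlying this normalization computes melt as a factor times the accumulated positive degree days; a minimal sketch with a hypothetical degree-day factor:

```python
def degree_day_melt(daily_temps_c, ddf_mm=4.0, base_c=0.0):
    """
    Classic degree-day snowmelt: melt (mm water equivalent) equals a
    degree-day factor times the sum of positive degree days.
    ddf_mm is a hypothetical factor in mm / (degC * day).
    """
    pdd = sum(max(t - base_c, 0.0) for t in daily_temps_c)
    return ddf_mm * pdd

# Only days above the base temperature contribute to melt
melt = degree_day_melt([-2.0, 1.0, 3.0, 5.0])
```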

  4. Birth weight curves tailored to maternal world region.

    Science.gov (United States)

    Ray, Joel G; Sgro, Michael; Mamdani, Muhammad M; Glazier, Richard H; Bocking, Alan; Hilliard, Robert; Urquia, Marcelo L

    2012-02-01

    Newborns of certain immigrant mothers are smaller at birth than those of domestically born mothers. Contemporary, population-derived percentile curves for these newborns are lacking, as are estimates of their risk of being misclassified as too small or too large using conventional rather than tailored birth weight curves. We completed a population-based study of 766 688 singleton live births in Ontario from 2002 to 2007. Smoothed birth weight percentile curves were generated for males and females, categorized by maternal world region of birth: Canada (63.5%), Europe/Western nations (7.6%), Africa/Caribbean (4.9%), Middle East/North Africa (3.4%), Latin America (3.4%), East Asia/Pacific (8.1%), and South Asia (9.2%). We determined the likelihood of misclassifying an infant as small for gestational age (≤ 10th percentile for weight) or as large for gestational age (≥ 90th percentile for weight) on a Canadian-born maternal curve versus one specific to maternal world region of origin. Significantly lower birth weights were seen at gestation-specific 10th, 50th, and 90th percentiles among term infants born to mothers from each world region, with the exception of Europe/Western nations, compared with those for infants of Canadian-born mothers. For example, for South Asian babies born at 40 weeks' gestation, the absolute difference at the 10th percentile was 198 g (95% CI 183 to 212) for males and 170 g (95% CI 161 to 179) for females. Controlling for maternal age and parity, South Asian males had an odds ratio of 2.60 (95% CI 2.53 to 2.68) of being misclassified as small for gestational age, equivalent to approximately 116 in 1000 newborns; for South Asian females the OR was 2.41 (95% CI 2.34 to 2.48), equivalent to approximately 106 per 1000 newborns. Large for gestational age would be missed in approximately 61 per 1000 male and 57 per 1000 female South Asian newborns if conventional rather than ethnicity-specific birth weight curves were used. Birth weight curves
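The misclassification mechanism can be illustrated with a toy percentile check; both reference distributions below are hypothetical stand-ins, not the curves derived in the study:

```python
from statistics import NormalDist

# Hypothetical term birth-weight distributions in grams (illustrative only)
general_ref  = NormalDist(mu=3500, sigma=450)   # stand-in for a general curve
tailored_ref = NormalDist(mu=3300, sigma=430)   # stand-in for a region-specific curve

def is_sga(weight_g, ref):
    """Small for gestational age: at or below the 10th weight percentile."""
    return ref.cdf(weight_g) <= 0.10

# The same newborn can be flagged SGA on the general curve
# but fall above the 10th percentile of the tailored curve.
w = 2900.0
on_general  = is_sga(w, general_ref)
on_tailored = is_sga(w, tailored_ref)
```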

  5. Unsaturated aldehydes as alkene equivalents in the Diels-Alder reaction

    DEFF Research Database (Denmark)

    Taarning, Esben; Madsen, Robert

    2008-01-01

    A one-pot procedure is described for using alpha,beta-unsaturated aldehydes as olefin equivalents in the Diels-Alder reaction. The method combines the normal electron demand cycloaddition with aldehyde dienophiles and the rhodium-catalyzed decarbonylation of aldehydes to afford cyclohexenes...

  6. Observable Zitterbewegung in curved spacetimes

    Science.gov (United States)

    Kobakhidze, Archil; Manning, Adrian; Tureanu, Anca

    2016-06-01

    Zitterbewegung, as it was originally described by Schrödinger, is an unphysical, non-observable effect. We verify whether the effect can be observed in non-inertial reference frames/curved spacetimes, where the ambiguity in defining particle states results in a mixing of positive and negative frequency modes. We explicitly demonstrate that such a mixing is in fact necessary to obtain the correct classical value for a particle's velocity in a uniformly accelerated reference frame, whereas in cosmological spacetime a particle does indeed exhibit Zitterbewegung.

  7. Observable Zitterbewegung in curved spacetimes

    Energy Technology Data Exchange (ETDEWEB)

    Kobakhidze, Archil, E-mail: archilk@physics.usyd.edu.au [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Sydney, NSW 2006 (Australia); Manning, Adrian, E-mail: a.manning@physics.usyd.edu.au [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Sydney, NSW 2006 (Australia); Tureanu, Anca, E-mail: anca.tureanu@helsinki.fi [Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki (Finland)

    2016-06-10

    Zitterbewegung, as it was originally described by Schrödinger, is an unphysical, non-observable effect. We verify whether the effect can be observed in non-inertial reference frames/curved spacetimes, where the ambiguity in defining particle states results in a mixing of positive and negative frequency modes. We explicitly demonstrate that such a mixing is in fact necessary to obtain the correct classical value for a particle's velocity in a uniformly accelerated reference frame, whereas in cosmological spacetime a particle does indeed exhibit Zitterbewegung.

  8. Differential geometry curves, surfaces, manifolds

    CERN Document Server

    Kühnel, Wolfgang

    2002-01-01

    This carefully written book is an introduction to the beautiful ideas and results of differential geometry. The first half covers the geometry of curves and surfaces, which provide much of the motivation and intuition for the general theory. Special topics that are explored include Frenet frames, ruled surfaces, minimal surfaces and the Gauss-Bonnet theorem. The second part is an introduction to the geometry of general manifolds, with particular emphasis on connections and curvature. The final two chapters are insightful examinations of the special cases of spaces of constant curvature and Einstein manifolds. The text is illustrated with many figures and examples. The prerequisites are undergraduate analysis and linear algebra.

  9. LINS Curve in Romanian Economy

    Directory of Open Access Journals (Sweden)

    Emilian Dobrescu

    2016-02-01

    Full Text Available The paper presents theoretical considerations and empirical evidence to test the validity of the Laffer in Narrower Sense (LINS) curve as a parabola with a maximum. Attention is focused on the so-called legal-effective tax gap (letg). The econometric application is based on statistical data (1990-2013) for Romania as an emerging European economy. Three cointegrating regressions (fully modified least squares, canonical cointegrating regression and dynamic least squares) and three algorithms, which are based on instrumental variables (two-stage least squares, generalized method of moments, and limited information maximum likelihood), are involved.
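The LINS curve as a parabola with a maximum can be illustrated with a toy revenue function; the coefficients are hypothetical, not estimates for the Romanian economy:

```python
def lins_revenue(t, a=1.0, b=2.0):
    """Toy LINS-style parabola: revenue a*t - b*t**2 rises with the tax
    rate t, peaks, then falls. a and b are hypothetical coefficients."""
    return a * t - b * t * t

# For a parabola a*t - b*t**2 the revenue-maximizing rate is a / (2*b)
t_star = 1.0 / (2.0 * 2.0)
peak = lins_revenue(t_star)
```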

  10. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem | K - λM | = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well-known is the fact that those eigenvectors can be normalized so that their modal mass μ = ϕᵀMϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but that they are actually intrinsic properties of the pair of matrices K, M, that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, and without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, but it has in turn interesting theoretical implications.
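The conventional route that the paper contrasts with, solving the full generalized eigenproblem and scaling by the modal mass, can be sketched with SciPy, whose symmetric solver returns mass-normalized modes directly:

```python
import numpy as np
from scipy.linalg import eigh

# A small symmetric positive definite pair (K, M)
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
M = np.array([[2.0, 0.0], [0.0, 1.0]])

# eigh solves K @ phi = lam * M @ phi and returns M-orthonormal modes,
# i.e. the modal mass matrix phi.T @ M @ phi is the identity.
lam, phi = eigh(K, M)
modal_mass = phi.T @ M @ phi
```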

  11. Investigation of the bases for use of the KIc curve

    International Nuclear Information System (INIS)

    McCabe, D.E.; Nanstad, R.K.; Rosenfield, A.R.; Marschall, C.W.; Irwin, G.R.

    1991-01-01

    Title 10 of the Code of Federal Regulations, Part 50 (10CFR50), Appendix G, establishes the bases for setting allowable pressure and temperature limits on reactors during heatup and cooldown operation. Both the KIc and KIa curves are utilized in prescribed ways to maintain reactor vessel structural integrity in the presence of an assumed or actual flaw and operating stresses. Currently, the code uses the KIa curve, normalized to the RTNDT, to represent the fracture toughness trend for unirradiated and irradiated pressure vessel steels. Although this is clearly a conservative policy, it has been suggested that the KIc curve is the more appropriate for application to a non-accident operating condition. A number of uncertainties have been identified, however, that might convert normal operating transients into a dynamic loading situation. These include the introduction of running cracks from local brittle zones, crack pop-ins, reduced toughness from arrested cleavage cracks, description of the KIc curve for irradiated materials, and other related unresolved issues relative to elastic-plastic fracture mechanics. Some observations and conclusions can be made regarding various aspects of those uncertainties, and they are discussed in this paper. A discussion of further work required and under way to address the remaining uncertainties is also presented.
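For reference, the two toughness curves are commonly quoted as exponential lower-bound fits in T − RTNDT; the coefficients below are the usually cited ASME-type forms (ksi·√in, temperatures in °F) and should be verified against the current code edition before any use:

```python
import math

def k_ic(t_f, rt_ndt_f):
    """ASME-type KIc (static initiation) reference curve, as commonly
    quoted: 33.2 + 20.734 * exp(0.02 * (T - RTNDT)), in ksi*sqrt(in)."""
    return 33.2 + 20.734 * math.exp(0.02 * (t_f - rt_ndt_f))

def k_ia(t_f, rt_ndt_f):
    """ASME-type KIa (crack arrest) reference curve, as commonly quoted;
    it lies below KIc, which is why basing limits on it is conservative."""
    return 26.8 + 12.445 * math.exp(0.0145 * (t_f - rt_ndt_f))

# At the same T - RTNDT the initiation curve exceeds the arrest curve
margin = k_ic(100.0, 40.0) - k_ia(100.0, 40.0)
```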

  12. Differential geometry and topology of curves

    CERN Document Server

    Animov, Yu

    2001-01-01

    Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem on conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.

  13. On the equivalence of GPD representations

    International Nuclear Information System (INIS)

    Müller, Dieter; Semenov-Tian-Shansky, Kirill

    2016-01-01

    Phenomenological representations of generalized parton distributions (GPDs) implementing the non-trivial field-theoretical requirements are employed in present-day strategies for extracting the hadron structure information encoded in GPDs from the observables of hard exclusive reactions. Demonstrating the equivalence of various GPD representations can help provide more insight into GPD properties and allow flexible GPD models to be built that are capable of a satisfactory description of the whole set of available experimental data. Below we review the mathematical aspects of establishing equivalence between the double partial wave expansion of GPDs in conformal partial waves and in t-channel SO(3) partial waves, and the double distribution representation of GPDs.

  14. Developing equivalent circuits for radial distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Prada, Ricardo; Coelho, Agnelo; Rodrigues, Anselmo [Catholic University of Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. of Electrical Engineering], Emails: prada@ele.puc-rio.br, agnelo@ele.puc-rio.br, nebulok_99@yahoo.com; Silva, Maria da Guia da [Federal University of Maranhao, Sao Luiz, MA (Brazil). Dept. of Electrical Engineering

    2009-07-01

    This paper presents a method for evaluating external equivalents in electric distribution networks (EDN). The proposed method has as its main objectives the reduction of the computational cost of distribution network reconfiguration, investigation of the optimal allocation of capacitor banks, investigation of the allocation of distributed generation, etc. In these sorts of problems a large number of alternative projects must be assessed in order to identify the optimal solution, which must keep the voltage level at the load points within specified limits. Consequently, the equivalent must retain the external network load points, but without a major increase in the dimension of the equivalent circuit. The proposed method has been tested and validated on a substation of the Electricity Utility of Maranhao (CEMAR), in Brazil. (author)
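A network equivalent of this kind can be illustrated with Kron reduction, a standard textbook technique for eliminating external nodes from the nodal admittance matrix while retaining the boundary (load) nodes; the 4-node topology and branch admittances below are hypothetical, and the paper's own procedure is not reproduced here.

```python
import numpy as np

def laplacian(n, branches):
    """Build a nodal admittance (Laplacian) matrix from (i, j, y) branches."""
    Y = np.zeros((n, n))
    for i, j, y in branches:
        Y[i, i] += y; Y[j, j] += y
        Y[i, j] -= y; Y[j, i] -= y
    return Y

# Hypothetical network: nodes {0, 1} are retained load points,
# nodes {2, 3} belong to the external network and are eliminated.
branches = [(0, 2, 1.0), (1, 2, 2.0), (2, 3, 0.5), (0, 3, 1.0), (1, 3, 0.5)]
Y = laplacian(4, branches)

r, e = [0, 1], [2, 3]                       # retained / eliminated node sets
Yrr, Yre = Y[np.ix_(r, r)], Y[np.ix_(r, e)]
Yer, Yee = Y[np.ix_(e, r)], Y[np.ix_(e, e)]

# Kron reduction (Schur complement): the equivalent admittance seen from the
# retained nodes, exact whenever no current is injected at eliminated nodes.
Y_eq = Yrr - Yre @ np.linalg.solve(Yee, Yer)
```

The reduced matrix reproduces the retained-node voltages of the full network exactly, which is the defining property such an equivalent must satisfy.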

  15. Thermoluminescence dosemeter for personal dose equivalent assessment

    International Nuclear Information System (INIS)

    Silva, T.A. da; Rosa, L.A.R. da; Campos, L.L.

    1995-01-01

    The possibility of utilising a Brazilian thermoluminescence individual dosemeter, usually calibrated in terms of photon dose equivalent, for the assessment of the personal dose equivalent, Hp(d), at depths of 0.07 and 10 mm was investigated. The dosemeter uses four CaSO4:Dy thermoluminescent detectors, behind different filters, as the sensitive materials. It was calibrated in gamma and X radiation fields in the energy range from 17 to 1250 keV. Linear combinations of the responses of three detectors, in this energy range, allow the evaluation of Hp(0.07) and Hp(10), for radiation incidence angles varying from 0 to 60 degrees, with an accuracy better than 35%. The method is not applicable to mixed photon-beta fields. (author)
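The linear-combination evaluation mentioned above can be sketched as an ordinary least-squares fit: given calibration responses of the three detectors at several energies, one solves for coefficients that map responses to the reference Hp(10). All numbers below are hypothetical illustrations, not the dosemeter's actual calibration data.

```python
import numpy as np

# Hypothetical calibration matrix: rows = calibration energies,
# columns = responses of three detectors behind different filters (arb. units).
R = np.array([[1.00, 0.20, 0.05],
              [0.80, 0.50, 0.20],
              [0.60, 0.70, 0.45],
              [0.40, 0.80, 0.70],
              [0.30, 0.85, 0.90]])

# Reference personal dose equivalent Hp(10) delivered at each energy (mSv).
hp10 = np.array([0.90, 1.00, 1.05, 1.00, 0.95])

# Least-squares coefficients c such that R @ c approximates Hp(10).
c, *_ = np.linalg.lstsq(R, hp10, rcond=None)

hp10_est = R @ c
max_rel_err = np.max(np.abs(hp10_est - hp10) / hp10)
```

In routine use the fitted coefficients would be applied to the field readings of the three detectors; the abstract's 35% accuracy figure bounds how well such a combination tracks Hp(10) across energy and angle.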

  16. Thevenin Equivalent Method for Dynamic Contingency Assessment

    DEFF Research Database (Denmark)

    Møller, Jakob Glarbo; Jóhannsson, Hjörtur; Østergaard, Jacob

    2015-01-01

    A method that exploits Thevenin equivalent representation for obtaining post-contingency steady-state nodal voltages is integrated with a method of detecting post-contingency aperiodic small-signal instability. The task of integrating stability assessment with contingency assessment is challenged by the cases of unstable post-contingency conditions, for which there exists no credible steady state that can be used as the basis of a stability assessment. This paper demonstrates how Thevenin equivalent methods can be applied in the algebraic representation of such bifurcation points, which may be used in the assessment of post-contingency aperiodic small-signal stability. The assessment method is introduced with a numeric example.

  17. The Logic of Equivalence in Academic Discourse?

    DEFF Research Database (Denmark)

    Madsen, Dorte

    2017-01-01

    …of discourse to distinguish between the scientific field, where interrelationships among academic disciplines are taken as an object of research, and the widespread uses of 'interdisciplinary' and 'interdisciplinarity' in academic discourse more generally, typically for legitimation purposes. The assumption… discourses meet. It is suggested that the logics of signification, and the tension between difference and equivalence, may be important tools for theorizing this borderland. It is argued that whereas the logic of equivalence and the production of empty signifiers appears to be of marginal interest to the scientific field, the logic of difference, as a more complex articulation of elements, seems to be more in line with the ideals of academic discourse.

  18. On equivalent resistance of electrical circuits

    Science.gov (United States)

    Kagan, Mikhail

    2015-01-01

    While the standard (introductory physics) way of computing the equivalent resistance of nontrivial electrical circuits is based on Kirchhoff's rules, there is a mathematically and conceptually simpler approach, called the method of nodal potentials, whose basic variables are the values of the electric potential at the circuit's nodes. In this paper, we review the method of nodal potentials and illustrate it using the Wheatstone bridge as an example. We then derive a closed-form expression for the equivalent resistance of a generic circuit, which we apply to a few sample circuits. The result unveils a curious interplay between electrical circuits, matrix algebra, and graph theory and its applications to computer science. The paper is written at a level accessible by undergraduate students who are familiar with matrix arithmetic. Additional proofs and technical details are provided in appendices.
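The method of nodal potentials described above can be sketched for the paper's own example, the Wheatstone bridge. The node labelling and resistor values below are ours (all five resistors 1 Ω, a balanced bridge): build the conductance (Laplacian) matrix, ground one terminal, inject a unit current at the other, and read the equivalent resistance off the resulting potential.

```python
import numpy as np

# Wheatstone bridge. Nodes: 0 = terminal A, 1 = C, 2 = D, 3 = terminal B.
# Branches (i, j, R): arms A-C, A-D, C-B, D-B and the bridge resistor C-D.
branches = [(0, 1, 1.0), (0, 2, 1.0), (1, 3, 1.0), (2, 3, 1.0), (1, 2, 1.0)]

n = 4
G = np.zeros((n, n))                 # conductance (Laplacian) matrix
for i, j, r in branches:
    g = 1.0 / r
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

# Ground terminal B (potential 0), inject 1 A at terminal A, extract at B:
# delete B's row and column and solve the reduced nodal equations G' v = I.
keep = [0, 1, 2]
I = np.array([1.0, 0.0, 0.0])
v = np.linalg.solve(G[np.ix_(keep, keep)], I)

R_eq = v[0]                          # equivalent resistance = V_A / (1 A)
```

For this balanced bridge the two series pairs (1 Ω + 1 Ω) sit in parallel and the bridge resistor carries no current, so the nodal solution returns R_eq = 1 Ω with equal potentials at C and D.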

  19. Symmetry adaptation, operator equivalents and magnetic resonance

    International Nuclear Information System (INIS)

    Kibler, M.; Chatterjee, R.

    1977-12-01

    Basic quantities for symmetry adaptation are discussed in connection with molecular and solid state physics. This gives rise to a formalism whose central elements are operator equivalents adapted to a point group. Such symmetry-adapted operator equivalents are defined in terms of Schwinger operators so that they cover both the off-diagonal and diagonal cases. Special emphasis is put on the applications of the formalism to magnetic resonance. More specifically, it is shown how to apply the formalism to the construction, the study of the transformation properties, and the determination of the eigenstates of a generalized spin hamiltonian. Numerous examples are given, as well as key tables relative to the chain SO(3), to ease the application of the formalism to electron paramagnetic resonance. (fr)

  20. Ignition Delay of Combustible Materials in Normoxic Equivalent Environments

    Science.gov (United States)

    McAllister, Sara; Fernandez-Pello, Carlos; Ruff, Gary; Urban, David

    2009-01-01

    Material flammability is an important factor in determining the pressure and composition (fraction of oxygen and nitrogen) of the atmosphere in the habitable volume of exploration vehicles and habitats. The method chosen in this work to quantify the flammability of a material is by its ease of ignition. The ignition delay time was defined as the time it takes a combustible material to ignite after it has been exposed to an external heat flux. Previous work in the Forced Ignition and Spread Test (FIST) apparatus has shown that the ignition delay in the currently proposed space exploration atmosphere (approximately 58.6 kPa and 32% oxygen concentration) is reduced by 27% compared to the standard atmosphere used in the Space Shuttle and Space Station. In order to determine whether there is a safer environment in terms of material flammability, a series of piloted ignition delay tests using polymethylmethacrylate (PMMA) was conducted in the FIST apparatus to extend the work over a range of possible exploration atmospheres. The exploration atmospheres considered were the normoxic equivalents, i.e. reduced-pressure conditions with a constant partial pressure of oxygen. The ignition delay time was seen to decrease as the pressure was reduced along the normoxic curve. The minimum ignition delay observed in the normoxic equivalent environments was nearly 30% lower than in standard atmospheric conditions. The ignition delay in the proposed exploration atmosphere is only slightly larger than this minimum. In terms of material flammability, normoxic environments with a higher pressure relative to the proposed pressure would be desired.

  1. SUPERNOVA LIGHT CURVES POWERED BY FALLBACK ACCRETION

    Energy Technology Data Exchange (ETDEWEB)

    Dexter, Jason; Kasen, Daniel, E-mail: jdexter@berkeley.edu [Departments of Physics and Astronomy, University of California, Berkeley, CA 94720 (United States)

    2013-07-20

    Some fraction of the material ejected in a core collapse supernova explosion may remain bound to the compact remnant, and eventually turn around and fall back. We show that the late time (≳ days) power potentially associated with the accretion of this 'fallback' material could significantly affect the optical light curve, in some cases producing super-luminous or otherwise peculiar supernovae. We use spherically symmetric hydrodynamical models to estimate the accretion rate at late times for a range of progenitor masses and radii and explosion energies. The accretion rate onto the proto-neutron star or black hole decreases as Ṁ ∝ t^(-5/3) at late times, but its normalization can be significantly enhanced at low explosion energies, in very massive stars, or if a strong reverse shock wave forms at the helium/hydrogen interface in the progenitor. If the resulting super-Eddington accretion drives an outflow which thermalizes in the outgoing ejecta, the supernova debris will be re-energized at a time when photons can diffuse out efficiently. The resulting light curves are different and more diverse than previous fallback supernova models, which ignored the input of accretion power and produced short-lived, dim transients. The possible outcomes when fallback accretion power is significant include super-luminous (≳ 10^44 erg s^-1) Type II events of both short and long durations, as well as luminous Type I events from compact stars that may have experienced significant mass loss. Accretion power may unbind the remaining infalling material, causing a sudden decrease in the brightness of some long duration Type II events. This scenario may be relevant for explaining some of the recently discovered classes of peculiar and rare supernovae.
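The quoted late-time scaling Ṁ ∝ t^(-5/3) is simple enough to sketch numerically. The normalization and reference time below are hypothetical placeholders; the abstract's point is that the normalization depends on explosion energy, progenitor structure, and reverse-shock formation, while the -5/3 slope is generic.

```python
import numpy as np

def mdot(t, mdot0=1.0e-4, t0=1.0):
    """Fallback accretion rate (arbitrary units) at time t (same units as t0):
    Mdot(t) = Mdot0 * (t / t0)**(-5/3)."""
    return mdot0 * (t / t0) ** (-5.0 / 3.0)

t = np.array([1.0, 10.0, 100.0])   # e.g. days after the explosion
rates = mdot(t)
# Each decade in time lowers the accretion rate by a factor 10**(5/3) ~ 46.4,
# so raising Mdot0 (low explosion energy, massive star, reverse shock)
# shifts the whole power-law curve upward without changing its slope.
```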

  2. A Logical Characterisation of Static Equivalence

    DEFF Research Database (Denmark)

    Hüttel, Hans; Pedersen, Michael D.

    2007-01-01

    We present a first-order logic for frames with quantification over environment knowledge which, under certain general conditions, characterizes static equivalence and is amenable to the construction of characteristic formulae. The logic can be used to reason about environment knowledge and can be adapted to a particular application by defining a suitable signature and associated equational theory. The logic can furthermore be extended with modalities to yield a modal logic for e.g. the Applied Pi calculus.

  3. Visual Equivalence and Amodal Completion in Cuttlefish

    OpenAIRE

    Lin, I-Rong; Chiao, Chuan-Chin

    2017-01-01

    Modern cephalopods are notably the most intelligent invertebrates, and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using an operant conditioning paradigm…

  4. Canonizing certain Borel equivalences for Silver forcing

    Czech Academy of Sciences Publication Activity Database

    Doucha, Michal

    2012-01-01

    Roč. 159, č. 13 (2012), s. 2973-2979 ISSN 0166-8641. [Prague Symposium on General Topology and its Relations to Modern Analysis and Algebra /11./. Prague, 07.08.2011-12.08.2011] Institutional research plan: CEZ:AV0Z10190503 Keywords : Borel equivalence relations * silver ideal * canonical Ramsey theorem Subject RIV: BA - General Mathematics Impact factor: 0.562, year: 2012 http://www.sciencedirect.com/science/article/pii/S0166864112002180#

  5. Quantum mechanics from an equivalence principle

    International Nuclear Information System (INIS)

    Faraggi, A.E.

    1997-01-01

    The authors show that requiring diffeomorphic equivalence for one-dimensional stationary states implies that the reduced action S0 satisfies the quantum Hamilton-Jacobi equation with the Planck constant playing the role of a covariantizing parameter. The construction shows the existence of a fundamental initial condition which is strictly related to the Moebius symmetry of the Legendre transform and to its involutive character. The universal nature of the initial condition implies the Schroedinger equation in any dimension.

  6. Equivalency of two-dimensional algebras

    International Nuclear Information System (INIS)

    Santos, Gildemar Carneiro dos; Pomponet Filho, Balbino Jose S.

    2011-01-01

    Full text: Let us consider a vector z = xi + yj over the field of real numbers, whose basis (i,j) satisfies a given algebra. Any property of this algebra will be reflected in any function of z, so we can state that knowledge of the properties of an algebra leads to more general conclusions than knowledge of the properties of a function. However, the structural properties of an algebra do not change when the algebra undergoes a linear transformation, though the structural constants defining it do change. We say that two algebras are equivalent to each other whenever they are related by a linear transformation. In this case, we have found that some relations between the structural constants are sufficient to recognize whether or not an algebra is equivalent to another. Although the basis transforms linearly, the structural constants change like a third-order tensor; but some combinations of these tensors result in a linear transformation, allowing the entries of the transformation matrix to be written as functions of the structural constants. Eventually, a systematic way to find the transformation matrix between equivalent algebras is obtained. In this sense, we have performed a thorough classification of associative commutative two-dimensional algebras, and find that even non-division algebras may be helpful in solving non-linear dynamic systems. The Mandelbrot set was used to obtain a pictorial view of each algebra, since equivalent algebras result in the same pattern. Presently we have succeeded in classifying some non-associative two-dimensional algebras, a task more difficult than for associative ones. (author)

  7. Cryogenic test of the equivalence principle

    International Nuclear Information System (INIS)

    Worden, P.W. Jr.

    1976-01-01

    The weak equivalence principle is the hypothesis that the ratio of inertial and passive gravitational mass is the same for all bodies. A greatly improved test of this principle is possible in an orbiting satellite. The most promising experiments for an orbital test are adaptations of the Galilean free-fall experiment and the Eotvos balance. Sensitivity to gravity gradient noise, both from the earth and from the spacecraft, defines a limit to the sensitivity in each case. This limit is generally much worse for an Eotvos balance than for a properly designed free-fall experiment. The difference is related to the difficulty of making a balance sufficiently isoinertial. Cryogenic technology is desirable to take full advantage of the potential sensitivity, but tides in the liquid helium refrigerant may produce a gravity gradient that seriously degrades the ultimate sensitivity. The Eotvos balance appears to have a limiting sensitivity to the relative difference in rate of fall of about 2 x 10^-14 in orbit. The free-fall experiment is limited by the helium tide to about 10^-15; if the tide can be controlled or eliminated, the limit may approach 10^-18. Other limitations to equivalence principle experiments are discussed. An experimental test of some of the concepts involved in the orbital free-fall experiment is continuing. The experiment consists of comparing the motions of test masses levitated in a superconducting magnetic bearing, and is itself a sensitive test of the equivalence principle. At present the levitation magnets, position monitors and control coils have been tested and major noise sources identified. A measurement of the equivalence principle is postponed pending development of a system for digitizing data. The experiment and preliminary results are described.

  8. Extended equivalent dipole model for radiated emissions

    OpenAIRE

    Obiekezie, Chijioke S.

    2016-01-01

    This work is on the characterisation of radiated fields from electronic devices. An equivalent dipole approach is used. Previous work showed that this was an effective approach for single-layer printed circuit boards where an infinite ground plane can be assumed. In this work, the approach is extended to the characterisation of more complex circuit boards and electronic systems. For complex electronic radiators with finite ground planes, the main challenge is characterising field diffraction…

  9. Flow characteristics of curved ducts

    Directory of Open Access Journals (Sweden)

    Rudolf P.

    2007-10-01

    Full Text Available Curved channels are very often present in real hydraulic systems, e.g. curved diffusers of hydraulic turbines, S-shaped bulb turbines, fittings, etc. Curvature brings a change of velocity profile, generation of vortices and production of hydraulic losses. Flow simulations using CFD techniques were performed to understand these phenomena. Cases ranging from a single elbow to coupled elbows in U-shape, S-shape and spatial right-angle configurations with circular cross-section were modeled for Re = 60000. The spatial development of the flow was studied, and consequently it was deduced that minor losses are connected with the transformation of pressure energy into kinetic energy and vice versa. This transformation is a dissipative process and is reflected in the amount of energy irreversibly lost. The smallest loss coefficient is connected with flow in U-shape elbows, the biggest one with flow in S-shape elbows. Finally, the extent of the flow domain influenced by the presence of curvature was examined. This is important for the proper placement of manometers and flowmeters during experimental tests. Simulations were verified with experimental results presented in the literature.

  10. Improved capacitive melting curve measurements

    International Nuclear Information System (INIS)

    Sebedash, Alexander; Tuoriniemi, Juha; Pentti, Elias; Salmela, Anssi

    2009-01-01

    Sensitivity of the capacitive method for determining the melting pressure of helium can be enhanced by loading the empty side of the capacitor with helium at a pressure nearly equal to that to be measured and by using a relatively thin and flexible membrane in between. This way one can achieve nanobar resolution at the level of 30 bar, which is two orders of magnitude better than that of the best gauges with vacuum reference. This extends the applicability of melting curve thermometry to lower temperatures and would allow detecting tiny anomalies in the melting pressure, which must be associated with any phenomena contributing to the entropy of the liquid or solid phases. We demonstrated this principle in measurements of the crystallization pressure of isotopic helium mixtures at millikelvin temperatures by using partly solid pure 4He as the reference substance, providing the best possible universal reference pressure. The achieved sensitivity was good enough for melting curve thermometry on mixtures down to 100 μK. A similar system can be used on pure isotopes by virtue of a blocked capillary giving a stable reference condition, with liquid slightly below the melting pressure in the reference volume. This was tested with pure 4He at temperatures 0.08-0.3 K. To avoid spurious heating effects, one must carefully choose and arrange any dielectric materials close to the active capacitor. We observed some 100 pW loading at moderate excitation voltages.

  11. Classical optics and curved spaces

    International Nuclear Information System (INIS)

    Bailyn, M.; Ragusa, S.

    1976-01-01

    In the eikonal approximation of classical optics, the unit polarization 3-vector of light satisfies an equation that depends only on the index of refraction, n. It is known that if the original 3-space line element is dσ², then this polarization direction propagates parallelly in the fictitious space n²dσ². Since the equation depends only on n, it is possible to invent a fictitious curved 4-space in which the light follows a null geodesic and the polarization 3-vector behaves as the 'shadow' of a parallelly propagated 4-vector. The inverse, namely the reduction of Maxwell's equations on a curved (dielectric-free) space to a classical space with dielectric constant n = (-g00)^(-1/2), is well known, but in the latter the dielectric constant ε and permeability μ must also equal (-g00)^(-1/2). The rotation of polarization as light bends around the sun is calculated by utilizing the reduction to the classical space. This (non-)rotation may then be interpreted as parallel transport in the 3-space n²dσ².

  12. The energy-momentum operator in curved space-time

    International Nuclear Information System (INIS)

    Brown, M.R.; Ottewill, A.C.

    1983-01-01

    It is argued that the only meaningful geometrical measure of the energy-momentum of states of matter described by a free quantum field theory in a general curved space-time is that provided by a normal ordered energy-momentum operator. The finite expectation values of this operator are contrasted with the conventional renormalized expectation values and it is further argued that the use of renormalization theory is inappropriate in this context. (author)

  13. Equivalence of Lagrangian and Hamiltonian BRST quantizations

    International Nuclear Information System (INIS)

    Grigoryan, G.V.; Grigoryan, R.P.; Tyutin, I.V.

    1992-01-01

    Two approaches to the quantization of gauge theories using BRST symmetry are widely used nowadays: the Lagrangian quantization (BV-quantization) and the Hamiltonian quantization (BFV-quantization). For all known examples of field theory (Yang-Mills theory, gravitation, etc.) both schemes give equivalent results. However, the equivalence of these approaches has not been proved in general. The main obstacle in comparing the two formulations is the fact that in the Hamiltonian approach the number of ghost fields is equal to the number of all first-class constraints, while in the Lagrangian approach the number of ghosts is equal to the number of independent gauge symmetries, which is equal to the number of primary first-class constraints only. This paper is devoted to the proof of the equivalence of the Lagrangian and Hamiltonian quantizations for systems with first-class constraints only. This is achieved by a choice of special gauge in the Hamiltonian approach. It is shown that, after integration over redundant variables in the functional integral, we arrive at an effective action which is constructed according to the rules for construction of the effective action in the Lagrangian quantization scheme.

  14. Energy conservation and the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1979-01-01

    If the equivalence principle is violated, then observers performing local experiments can detect effects due to their position in an external gravitational environment (preferred-location effects) or can detect effects due to their velocity through some preferred frame (preferred-frame effects). We show that the principle of energy conservation implies a quantitative connection between such effects and structure-dependence of the gravitational acceleration of test bodies (violation of the Weak Equivalence Principle). We analyze this connection within a general theoretical framework that encompasses both non-gravitational local experiments and test bodies as well as gravitational experiments and test bodies, and we use it to discuss specific experimental tests of the equivalence principle, including non-gravitational tests such as gravitational redshift experiments, Eotvos experiments, the Hughes-Drever experiment, and the Turner-Hill experiment, and gravitational tests such as the lunar-laser-ranging "Eotvos" experiment, and measurements of anisotropies and variations in the gravitational constant. This framework is illustrated by analyses within two theoretical formalisms for studying gravitational theories: the PPN formalism, which deals with the motion of gravitating bodies within metric theories of gravity, and the THεμ formalism, which deals with the motion of charged particles within all metric theories and a broad class of non-metric theories of gravity.

  15. Calculations of a wideband metamaterial absorber using equivalent medium theory

    Science.gov (United States)

    Huang, Xiaojun; Yang, Helin; Wang, Danqi; Yu, Shengqing; Lou, Yanchao; Guo, Ling

    2016-08-01

    Metamaterial absorbers (MMAs) have drawn increasing attention in many areas because they can absorb electromagnetic (EM) waves with near-unity absorptivity. We demonstrate the design, simulation, experiment and calculation of a wideband MMA based on a double-square-loop (DSL) array loaded with chip resistors. For a normally incident EM wave, the simulated results show that the full width at half maximum of the absorption band is about 9.1 GHz, and the relative bandwidth is 87.1%. Experimental results are in agreement with the simulations. More importantly, equivalent medium theory (EMT) is utilized to calculate the absorption of the DSL MMA, and the calculated absorption based on EMT agrees with the simulated and measured results. The method based on EMT provides a new way to analyse the mechanism of MMAs.

  16. Gauge equivalence of the Gross Pitaevskii equation and the equivalent Heisenberg spin chain

    Science.gov (United States)

    Radha, R.; Kumar, V. Ramesh

    2007-11-01

    In this paper, we construct an equivalent spin chain for the Gross-Pitaevskii equation with quadratic potential and exponentially varying scattering lengths using gauge equivalence. We have then generated the soliton solutions for the spin components S3 and S-. We find that the spin solitons for S3 and S- can be compressed for exponentially growing eigenvalues while they broaden out for decaying eigenvalues.

  17. Dosimetric characteristics of water equivalent for two solid water phantoms

    International Nuclear Information System (INIS)

    Wang Jianhua; Wang Xun; Ren Jiangping

    2011-01-01

    Objective: To investigate the water equivalence of two solid water phantoms. Methods: The X-ray and electron beam depth-ionization curves were measured in water and in two solid water phantoms, RW3 and Virtual Water. The water-equivalency correction factors for the two solid water phantoms were compared. We measured and calculated the range scaling factors and the fluence correction factors for the two solid water phantoms in the case of electron beams. Results: The average difference between the measured ionization in the solid water phantoms and water was 0.42% and 0.16% on 6 MV X-ray (t=-6.15, P=0.001 and t=-1.65, P=0.419) and 0.21% and 0.31% on 10 MV X-ray (t=1.728, P=0.135 and t=-2.296, P=0.061), with 17.4% and 14.5% on 6 MeV electron beams (t=-1.37, P=0.208 and t=-1.47, P=0.179) and 7.0% and 6.0% on 15 MeV electron beams (t=-0.58, P=0.581 and t=-0.90, P=0.395). The water-equivalency correction factors varied between the two solid water phantoms, F=58.54, P=0.000 on 6 MV X-ray, F=0.211, P=0.662 on 10 MV X-ray, F=0.97, P=0.353 on 6 MeV electron beams, F=0.14, P=0.717 on 15 MeV electron beams. However, they were almost equal to 1 near the reference depths. The two solid water phantoms showed a similar trend of Cpl increasing (F=26.40, P=0.014) and hpl decreasing (F=7.45, P=0.072) with increasing energy. Conclusion: A solid water phantom should undergo a quality control test before clinical use. (authors)

  18. Transition curves for highway geometric design

    CERN Document Server

    Kobryń, Andrzej

    2017-01-01

    This book provides concise descriptions of the various solutions of transition curves, which can be used in the geometric design of roads and highways. It presents mathematical methods and curvature functions for defining transition curves.

  19. Comparison and evaluation of mathematical lactation curve ...

    African Journals Online (AJOL)


    A mathematical model of the lactation curve provides summary information about culling and milking strategies… Statistics of the edited data for first-lactation Holstein cows… Application of different models to the lactation curves…

  20. Normal foot and ankle

    International Nuclear Information System (INIS)

    Weissman, S.D.

    1989-01-01

    The foot may be thought of as a bag of bones tied tightly together and functioning as a unit. The bones are expected to maintain their alignment without causing symptomatology to the patient. The author discusses the normal radiograph. The bones must have normal shape and normal alignment, the density of the soft tissues should be normal, and there should be no fractures, tumors, or foreign bodies.

  1. Gelfond–Bézier curves

    KAUST Repository

    Ait-Haddou, Rachid; Sakane, Yusuke; Nomura, Taishin

    2013-01-01

    We show that the generalized Bernstein bases in Müntz spaces defined by Hirschman and Widder (1949) and extended by Gelfond (1950) can be obtained as pointwise limits of the Chebyshev–Bernstein bases in Müntz spaces with respect to an interval [a,1] as the positive real number a converges to zero. Such a realization allows concepts of curve design such as the de Casteljau algorithm, blossom, and dimension elevation to be transferred from the general theory of Chebyshev blossoms in Müntz spaces to these generalized Bernstein bases, which we term here Gelfond–Bernstein bases. The advantage of working with Gelfond–Bernstein bases lies in the simplicity of the obtained concepts and algorithms as compared to their Chebyshev–Bernstein counterparts.

  2. Bubble Collision in Curved Spacetime

    International Nuclear Information System (INIS)

    Hwang, Dong-il; Lee, Bum-Hoon; Lee, Wonwoo; Yeom, Dong-han

    2014-01-01

    We study vacuum bubble collisions in curved spacetime, in which vacuum bubbles were nucleated in the initial metastable vacuum state by quantum tunneling. The bubbles materialize randomly at different times and then start to grow. It is known that percolation by true vacuum bubbles is not possible due to the exponential expansion of the space among the bubbles. In this paper, we consider two bubbles of the same size with a preferred axis and assume that the two bubbles form very near each other so that they collide. The two bubbles have the same field value. When the bubbles collide, the collided region oscillates back and forth and eventually decays and disappears. We discuss the radiation and gravitational waves resulting from the collision of the two bubbles.

  3. Bacterial streamers in curved microchannels

    Science.gov (United States)

    Rusconi, Roberto; Lecuyer, Sigolene; Guglielmini, Laura; Stone, Howard

    2009-11-01

    Biofilms, generally identified as microbial communities embedded in a self-produced matrix of extracellular polymeric substances, are involved in a wide variety of health-related problems ranging from implant-associated infections to disease transmissions and dental plaque. The usual picture of these bacterial films is that they grow and develop on surfaces. However, suspended biofilm structures, or streamers, have been found in natural environments (e.g., rivers, acid mines, hydrothermal hot springs) and have generally been suggested to stem from a turbulent flow. We report the formation of bacterial streamers in curved microfluidic channels. By using confocal laser microscopy we are able to directly image and characterize the spatial and temporal evolution of these filamentous structures. Such streamers, which always connect the inner corners of opposite sides of the channel, are always located in the middle plane. Numerical simulations of the flow provide evidence for an underlying hydrodynamic mechanism behind the formation of the streamers.

  4. Gelfond–Bézier curves

    KAUST Repository

    Ait-Haddou, Rachid

    2013-02-01

    We show that the generalized Bernstein bases in Müntz spaces defined by Hirschman and Widder (1949) and extended by Gelfond (1950) can be obtained as pointwise limits of the Chebyshev–Bernstein bases in Müntz spaces with respect to an interval [a,1] as the positive real number a converges to zero. Such a realization allows for concepts of curve design such as de Casteljau algorithm, blossom, dimension elevation to be transferred from the general theory of Chebyshev blossoms in Müntz spaces to these generalized Bernstein bases that we termed here as Gelfond–Bernstein bases. The advantage of working with Gelfond–Bernstein bases lies in the simplicity of the obtained concepts and algorithms as compared to their Chebyshev–Bernstein bases counterparts.

  5. Sibling curves of quadratic polynomials | Wiggins | Quaestiones ...

    African Journals Online (AJOL)

    Sibling curves were demonstrated in [1, 2] as a novel way to visualize the zeroes of real valued functions. In [3] it was shown that a polynomial of degree n has n sibling curves. This paper focuses on the algebraic and geometric properties of the sibling curves of real and complex quadratic polynomials. Key words: Quadratic ...

  6. GLOBAL AND STRICT CURVE FITTING METHOD

    NARCIS (Netherlands)

    Nakajima, Y.; Mori, S.

    2004-01-01

    To find a global and smooth curve fitting, the cubic B-spline method and the gathering-line method are investigated. When segmenting and recognizing a contour curve of a character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,

  7. Trigonometric Characterization of Some Plane Curves

    Indian Academy of Sciences (India)

    IAS Admin

    (Figure 1). A relation between tan θ and tan ψ gives the trigonometric equation of the family of curves. In this article, trigonometric equations of some known plane curves are deduced and it is shown that these equations reveal some geometric characteristics of the families of the curves under consideration. In Section 2,.

  8. M-curves and symmetric products

    Indian Academy of Sciences (India)

    Indranil Biswas

    2017-08-03

    Aug 3, 2017 ... is bounded above by g + 1, where g is the genus of X [11]. Curves which have exactly the maximum number (i.e., genus +1) of components of the real part are called M-curves. Classifying real algebraic curves up to homeomorphism is straightforward, however, classifying even planar non-singular real ...

  9. Holomorphic curves in exploded manifolds: Kuranishi structure

    OpenAIRE

    Parker, Brett

    2013-01-01

    This paper constructs a Kuranishi structure for the moduli stack of holomorphic curves in exploded manifolds. To avoid some technicalities of abstract Kuranishi structures, we embed our Kuranishi structure inside a moduli stack of curves. The construction also works for the moduli stack of holomorphic curves in any compact symplectic manifold.

  10. Automated Blazar Light Curves Using Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Spencer James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-27

    This presentation describes a problem and methodology pertaining to automated blazar light curves. Namely, studying optical variability patterns for blazars requires the construction of light curves, and in order to generate the light curves, the data must be filtered before processing to ensure quality.

  11. Particles and Dirac-type operators on curved spaces

    International Nuclear Information System (INIS)

    Visinescu, Mihai

    2003-01-01

    We review the geodesic motion of pseudo-classical particles in curved spaces. Investigating the generalized Killing equations for spinning spaces, we express the constants of motion in terms of Killing-Yano tensors. Passing from the spinning spaces to the Dirac equation in curved backgrounds we point out the role of the Killing-Yano tensors in the construction of the Dirac-type operators. The general results are applied to the case of the four-dimensional Euclidean Taub-Newman-Unti-Tamburino space. From the covariantly constant Killing-Yano tensors of this space we construct three new Dirac-type operators which are equivalent to the standard Dirac operator. Finally the Runge-Lenz operator for the Dirac equation in this background is expressed in terms of the fourth Killing-Yano tensor which is not covariantly constant. As a rule the covariantly constant Killing-Yano tensors realize certain square roots of the metric tensor. Such a Killing-Yano tensor produces simultaneously a Dirac-type operator and the generator of a one-parameter Lie group connecting this operator with the standard Dirac one. On the other hand, the not covariantly constant Killing-Yano tensors are important in generating hidden symmetries. The presence of not covariantly constant Killing-Yano tensors implies the existence of non-standard supersymmetries in point particle theories on curved background. (author)

  12. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  13. Generating carbyne equivalents with photoredox catalysis

    Science.gov (United States)

    Wang, Zhaofeng; Herraiz, Ana G.; Del Hoyo, Ana M.; Suero, Marcos G.

    2018-02-01

    Carbon has the unique ability to bind four atoms and form stable tetravalent structures that are prevalent in nature. The lack of one or two valences leads to a set of species—carbocations, carbanions, radicals and carbenes—that is fundamental to our understanding of chemical reactivity. In contrast, the carbyne—a monovalent carbon with three non-bonded electrons—is a relatively unexplored reactive intermediate; the design of reactions involving a carbyne is limited by challenges associated with controlling its extreme reactivity and the lack of efficient sources. Given the innate ability of carbynes to form three new covalent bonds sequentially, we anticipated that a catalytic method of generating carbynes or related stabilized species would allow what we term an ‘assembly point’ disconnection approach for the construction of chiral centres. Here we describe a catalytic strategy that generates diazomethyl radicals as direct equivalents of carbyne species using visible-light photoredox catalysis. The ability of these carbyne equivalents to induce site-selective carbon-hydrogen bond cleavage in aromatic rings enables a useful diazomethylation reaction, which underpins sequencing control for the late-stage assembly-point functionalization of medically relevant agents. Our strategy provides an efficient route to libraries of potentially bioactive molecules through the installation of tailored chiral centres at carbon-hydrogen bonds, while complementing current translational late-stage functionalization processes. Furthermore, we exploit the dual radical and carbene character of the generated carbyne equivalent in the direct transformation of abundant chemical feedstocks into valuable chiral molecules.

  14. Method of construction spatial transition curve

    Directory of Open Access Journals (Sweden)

    S.V. Didanov

    2013-04-01

    Purpose. The movement of rail transport (rolling stock speed, traffic safety, etc.) largely depends on the quality of the track. A special role is played here by the transition curve, which ensures a smooth transition from a linear to a circular section of road. The article deals with modeling a spatial transition curve based on a parabolic distribution of the curvature and torsion, continuing the authors' research on the spatial modeling of curved contours. Methodology. The spatial transition curve is constructed by numerical methods for solving nonlinear integral equations, where the initial data are the coordinates of the starting and ending points of the future curve, together with the inclination of the tangent and the deviation of the curve from the tangent plane at these points. The unknowns of the numerical method are determined from the partial derivatives of the equations with respect to the unknown parameters of the law of change of torsion and the length of the transition curve. Findings. The parametric equations of the spatial transition curve are calculated by finding the unknown coefficients of the parabolic distribution of the curvature and torsion, as well as the spatial length of the transition curve. Originality. A method for constructing the spatial transition curve is devised and, based on it, software for the geometric modeling of spatial transition curves of railway track with specified deviations of the curve from the tangent plane. Practical value. The resulting curve can be applied in any sector of the economy where a smooth transition from a linear to a circular section of a curved spatial bypass is required. Examples include the transition curve in the construction of a railway line, road, pipe, profile, flat section of the working blades of a turbine or compressor, a ship, plane, car, etc.
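    The forward problem behind such constructions can be illustrated numerically: given a curvature law κ(s) along the arc length, the planar Frenet equations x' = cos θ, y' = sin θ, θ' = κ(s) are integrated from the starting point. The sketch below uses a simple midpoint scheme and a linearly growing (clothoid-like) curvature as a stand-in for the paper's parabolic distribution; the integrator and parameters are illustrative, not the authors' method.

```python
import math

def integrate_planar_curve(kappa, length, n=10000):
    """Integrate x'(s) = cos(theta), y'(s) = sin(theta), theta'(s) = kappa(s)
    from the origin with a midpoint scheme; returns (x, y, theta) at s = length."""
    ds = length / n
    x = y = theta = 0.0
    for i in range(n):
        s_mid = (i + 0.5) * ds
        k = kappa(s_mid)
        theta_mid = theta + 0.5 * ds * k  # heading at the step midpoint
        x += ds * math.cos(theta_mid)
        y += ds * math.sin(theta_mid)
        theta += ds * k
    return x, y, theta

# transition curve with linearly growing curvature (clothoid-like),
# connecting a straight section smoothly to a circular one
end = integrate_planar_curve(lambda s: 0.5 * s, length=2.0)
```

    With constant curvature the scheme reproduces a circular arc, which is a convenient correctness check for the integrator.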

  15. Equivalent conserved currents and generalized Noether's theorem

    International Nuclear Information System (INIS)

    Gordon, T.J.

    1984-01-01

    A generalized Noether theorem is presented, relating symmetries and equivalence classes of (local) conservation laws in classical field theories; this is contrasted with the standard theorem. The concept of a ''Noether'' field theory is introduced, being a theory for which the generalized theorem applies; not only does this include the cases of Lagrangian and Hamiltonian field theories, these structures are ''derived'' from the Noether property in a natural way. The generalized theorem applies to currents and symmetries that contain derivatives of the fields up to an arbitrarily high order.

  16. European Equivalencies in Legal Interpreting and Translation

    DEFF Research Database (Denmark)

    Corsellis, Ann; Hertog, Erik; Martinsen, Bodil

    2002-01-01

    Within Europe there is increasing freedom of movement between countries and increasing inward migration. As a result, equivalent standards of legal interpreting and translation are required to allow reliable communication for judicial cooperation between member states, for criminal and civil matters which cross national borders and for the needs of multilingual populations. The European Convention on Human Rights (article 6, paragraph 3) is one of the main planks of relevant legislation. This international, two year project has been funded by the EU Grotius programme to set out what is required

  17. Testing efficiency transfer codes for equivalence

    International Nuclear Information System (INIS)

    Vidmar, T.; Celik, N.; Cornejo Diaz, N.; Dlabac, A.; Ewa, I.O.B.; Carrazana Gonzalez, J.A.; Hult, M.; Jovanovic, S.; Lepy, M.-C.; Mihaljevic, N.; Sima, O.; Tzika, F.; Jurado Vargas, M.; Vasilopoulou, T.; Vidmar, G.

    2010-01-01

    Four general Monte Carlo codes (GEANT3, PENELOPE, MCNP and EGS4) and five dedicated packages for efficiency determination in gamma-ray spectrometry (ANGLE, DETEFF, GESPECOR, ETNA and EFFTRAN) were checked for equivalence by applying them to the calculation of efficiency transfer (ET) factors for a set of well-defined sample parameters, detector parameters and energies typically encountered in environmental radioactivity measurements. The differences between the results of the different codes never exceeded a few percent and were lower than 2% in the majority of cases.
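    The efficiency transfer principle checked across these codes can be stated in one line: a full-energy-peak efficiency measured for a reference geometry is rescaled by the ratio of computed (e.g. Monte Carlo) efficiencies for the actual and reference geometries. A minimal sketch, with purely illustrative numbers:

```python
def efficiency_transfer(eff_ref_measured, eff_ref_computed, eff_sample_computed):
    """Transfer a measured reference efficiency to the sample geometry by
    scaling with the ratio of the computed efficiencies (the ET factor)."""
    et_factor = eff_sample_computed / eff_ref_computed
    return eff_ref_measured * et_factor

# point-source calibration of 5% at some energy, transferred to a bulk
# sample geometry whose computed efficiency is 20% lower (illustrative)
eff_sample = efficiency_transfer(0.05, 0.010, 0.008)
```

    Because only the ratio of computed efficiencies enters, systematic errors common to both geometries largely cancel, which is why the nine codes agree to within a few percent.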

  18. The equivalence principle in a quantum world

    DEFF Research Database (Denmark)

    Bjerrum-Bohr, N. Emil J.; Donoghue, John F.; El-Menoufi, Basem Kamal

    2015-01-01

    We show how modern methods can be applied to quantum gravity at low energy. We test how quantum corrections challenge the classical framework behind the equivalence principle (EP), for instance through introduction of nonlocality from quantum physics, embodied in the uncertainty principle. When the energy is small, we now have the tools to address this conflict explicitly. Despite the violation of some classical concepts, the EP continues to provide the core of the quantum gravity framework through the symmetry - general coordinate invariance - that is used to organize the effective field theory.

  19. Equivalent Circuit Modeling of Hysteresis Motors

    Energy Technology Data Exchange (ETDEWEB)

    Nitao, J J; Scharlemann, E T; Kirkendall, B A

    2009-08-31

    We performed a literature review and found that many equivalent circuit models of hysteresis motors in use today are incorrect. The model by Miyairi and Kataoka (1965) is the correct one. We extended the model by transforming it to quadrature coordinates, amenable to circuit or digital simulation. 'Hunting' is an oscillatory phenomenon often observed in hysteresis motors. While several works have attempted to model the phenomenon with some partial success, we present a new complete model that predicts hunting from first principles.

  20. Money and bonds: an equivalence theorem

    OpenAIRE

    Narayana R. Kocherlakota

    2007-01-01

    This paper considers four models in which immortal agents face idiosyncratic shocks and trade only a single risk-free asset over time. The four models specify this single asset to be private bonds, public bonds, public money, or private money respectively. I prove that, given an equilibrium in one of these economies, it is possible to pick the exogenous elements in the other three economies so that there is an outcome-equivalent equilibrium in each of them. (The term 'exogenous variables' ref...

  1. Power forward curves: a managerial perspective

    International Nuclear Information System (INIS)

    Nagarajan, Shankar

    1999-01-01

    This chapter concentrates on managerial application of power forward curves, and examines the determinants of electricity prices such as transmission constraints, its inability to be stored in a conventional way, its seasonality and weather dependence, the generation stack, and the swing risk. The electricity forward curve, classical arbitrage, constructing a forward curve, volatilities, and electricity forward curve models such as the jump-diffusion model, the mean-reverting heteroscedastic volatility model, and an econometric model of forward prices are examined. A managerial perspective of the applications of the forward curve is presented covering plant valuation, capital budgeting, performance measurement, product pricing and structuring, asset optimisation, valuation of transmission options, and risk management
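    As one concrete instance of the mean-reverting models referred to above, a one-factor exponential Ornstein–Uhlenbeck spot model (in the spirit of Schwartz-type commodity models) can be simulated in a few lines. The discretization and parameters below are illustrative, not taken from the chapter:

```python
import math
import random

def simulate_log_spot(alpha, mu, sigma, s0, n_steps, dt, seed=1):
    """Euler scheme for dX = alpha * (mu - X) dt + sigma dW, with X = ln(spot):
    the log-price reverts to the long-run level mu at speed alpha."""
    rng = random.Random(seed)
    x = math.log(s0)
    path = [x]
    for _ in range(n_steps):
        x += alpha * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# a year of daily steps reverting toward an (assumed) 30/MWh long-run level
path = simulate_log_spot(alpha=2.0, mu=math.log(30.0), sigma=0.5,
                         s0=50.0, n_steps=365, dt=1.0 / 365.0)
```

    Seasonality, jumps and heteroscedastic volatility, as discussed in the chapter, would be layered on top of this mean-reverting backbone.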

  2. Retrograde curves of solidus and solubility

    International Nuclear Information System (INIS)

    Vasil'ev, M.V.

    1979-01-01

    The investigation was concerned with the constitutional diagrams of the eutectic type with ''retrograde solidus'' and ''retrograde solubility curve'' which must be considered as diagrams with degenerate monotectic transformation. The solidus and the solubility curves form a retrograde curve with a common retrograde point representing the solubility maximum. The two branches of the retrograde curve can be described with the aid of two similar equations. Corresponding equations are presented for the Cd-Zn system, and the possibility of predicting the run of the solubility curve is shown.

  3. Right thoracic curvature in the normal spine

    Directory of Open Access Journals (Sweden)

    Masuda Keigo

    2011-01-01

    Background. Trunk asymmetry and vertebral rotation, at times observed in the normal spine, resemble the characteristics of adolescent idiopathic scoliosis (AIS). Right thoracic curvature has also been reported in the normal spine. If it is determined that the features of right thoracic curvature in the normal spine are the same as those observed in AIS, these findings might provide a basis for elucidating the etiology of this condition. For this reason, we investigated right thoracic curvature in the normal spine. Methods. For normal spinal measurements, 1,200 patients who underwent posteroanterior chest radiographs were evaluated. These consisted of 400 children (ages 4-9), 400 adolescents (ages 10-19) and 400 adults (ages 20-29), with each group comprised of both genders. The exclusion criteria were obvious chest and spinal diseases. As side curvature is minimal in normal spines and the range at which curvature is measured is difficult to ascertain, first the typical curvature range in scoliosis patients was determined and then the Cobb angle in normal spines was measured using the same range as the scoliosis curve, from T5 to T12. Right thoracic curvature was given a positive value. The curve pattern in each group was organized into three categories: neutral (from -1 degree to +1 degree), right (> +1 degree), and left (< -1 degree). Results. In the child group, the pattern was left in 120 subjects, neutral in 125 and right in 155. In the adolescent group, left in 70, neutral in 114 and right in 216. In the adult group, left in 46, neutral in 102 and right in 252. The curvature pattern shifts to the right side in the adolescent group (p …). Conclusions. Based on standing chest radiographic measurements, a right thoracic curvature was observed in normal spines after adolescence.

  4. Reduced Calibration Curve for Proton Computed Tomography

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Hormaza, Joel; Lopes, Ricardo

    2010-01-01

    The pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, the analytical calculations and the Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. An attempt to validate the codes against experimental data for thick absorbers brings some difficulties: only a few data sets are available, and the existing data sets have been acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e. the range-energy dependence normalized on the range scale by the full projected CSDA range for the given initial proton energy in a given material, taken from the NIST PSTAR database, and on the final proton energy scale by the given initial energy of protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now be easily scaled to typical pCT energies.
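    The reduced calibration curve described above amounts to two normalizations: depth is divided by the full projected CSDA range for the initial energy, and the residual proton energy is divided by the initial energy. A minimal sketch; the function name and the sample numbers (roughly 200 MeV protons in water) are illustrative:

```python
def reduced_calibration_curve(depths_mm, residual_energies_mev, e0_mev, csda_range_mm):
    """Map a measured range-energy calibration onto 'reduced' (nearly
    material- and energy-independent) coordinates: depth / CSDA range
    and residual energy / initial energy."""
    reduced_depth = [d / csda_range_mm for d in depths_mm]
    reduced_energy = [e / e0_mev for e in residual_energies_mev]
    return reduced_depth, reduced_energy

# illustrative three-point calibration: 200 MeV beam, CSDA range ~260 mm
rd, re = reduced_calibration_curve([0.0, 130.0, 260.0], [200.0, 150.0, 0.0],
                                   200.0, 260.0)
```

    In these coordinates both axes run from 0 to 1, so data sets taken at different initial energies and in different absorber materials can be compared on a single plot.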

  5. Folding of non-Euclidean curved shells

    Science.gov (United States)

    Bende, Nakul; Evans, Arthur; Innes-Gold, Sarah; Marin, Luis; Cohen, Itai; Santangelo, Christian; Hayward, Ryan

    2015-03-01

    Origami-based folding of 2D sheets has been of recent interest for a variety of applications ranging from deployable structures to self-folding robots. Though folding of planar sheets follows well-established principles, folding of curved shells involves an added level of complexity due to the inherent influence of curvature on mechanics. In this study, we use principles from differential geometry and thin shell mechanics to establish fundamental rules that govern folding of prototypical creased shells. In particular, we show how the normal curvature of a crease line controls whether the deformation is smooth or discontinuous, and investigate the influence of shell thickness and boundary conditions. We show that snap-folding of shells provides a route to rapid actuation on time-scales dictated by the speed of sound. The simple geometric design principles developed can be applied at any length-scale, offering potential for bio-inspired soft actuators for tunable optics, microfluidics, and robotics. This work was funded by the National Science Foundation through EFRI ODISSEI-1240441 with additional support to S.I.-G. through the UMass MRSEC DMR-0820506 REU program.

  6. The writhe of open and closed curves

    International Nuclear Information System (INIS)

    Berger, Mitchell A; Prior, Chris

    2006-01-01

    Twist and writhe measure basic geometric properties of a ribbon or tube. While these measures have applications in molecular biology, materials science, fluid mechanics and astrophysics, they are under-utilized because they are often considered difficult to compute. In addition, many applications involve curves with endpoints (open curves); but for these curves the definition of writhe can be ambiguous. This paper provides simple expressions for the writhe of closed curves, and provides a new definition of writhe for open curves. The open curve definition is especially appropriate when the curve is anchored at endpoints on a plane or stretches between two parallel planes. This definition can be especially useful for magnetic flux tubes in the solar atmosphere, and for isotropic rods with ends fixed to a plane
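    For a closed polygonal curve, the writhe reduces to the discrete Gauss double sum over pairs of segments; the sketch below is a straightforward numerical version of that sum (the paper's open-curve definition is not reproduced here):

```python
import math

def writhe(points):
    """Discrete Gauss double sum for the writhe of a closed polygonal curve.
    points: list of (x, y, z) vertices; the curve closes back to points[0]."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    n = len(points)
    mids, tans = [], []
    for i in range(n):
        p, q = points[i], points[(i + 1) % n]
        mids.append(tuple((pi + qi) / 2.0 for pi, qi in zip(p, q)))
        tans.append(sub(q, p))  # un-normalized tangent; length absorbs ds
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = sub(mids[i], mids[j])
            d = math.sqrt(dot(r, r))
            total += dot(cross(tans[i], tans[j]), r) / d ** 3
    return total / (4.0 * math.pi)

# a planar circle has writhe zero: all integrand terms vanish identically
pts = [(math.cos(2 * math.pi * i / 40), math.sin(2 * math.pi * i / 40), 0.0)
       for i in range(40)]
```

    For non-planar curves (e.g. a supercoiled tube axis) the same sum gives a nonzero value that, together with twist, satisfies the usual Călugăreanu–White relation.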

  7. Integrable topological billiards and equivalent dynamical systems

    Science.gov (United States)

    Vedyushkina, V. V.; Fomenko, A. T.

    2017-08-01

    We consider several topological integrable billiards and prove that they are Liouville equivalent to many systems of rigid body dynamics. The proof uses the Fomenko-Zieschang theory of invariants of integrable systems. We study billiards bounded by arcs of confocal quadrics and their generalizations, generalized billiards, where the motion occurs on a locally planar surface obtained by gluing several planar domains isometrically along their boundaries, which are arcs of confocal quadrics. We describe two new classes of integrable billiards bounded by arcs of confocal quadrics, namely, non-compact billiards and generalized billiards obtained by gluing planar billiards along non-convex parts of their boundaries. We completely classify non-compact billiards bounded by arcs of confocal quadrics and study their topology using the Fomenko invariants that describe the bifurcations of singular leaves of the additional integral. We study the topology of isoenergy surfaces for some non-convex generalized billiards. It turns out that they possess exotic Liouville foliations: the integral trajectories of the billiard that lie on some singular leaves admit no continuous extension. Such billiards appear to be leafwise equivalent to billiards bounded by arcs of confocal quadrics in the Minkowski metric.

  8. Twisted conformal field theories and Morita equivalence

    Energy Technology Data Exchange (ETDEWEB)

    Marotta, Vincenzo [Dipartimento di Scienze Fisiche, Universita di Napoli ' Federico II' and INFN, Sezione di Napoli, Compl. universitario M. Sant' Angelo, Via Cinthia, 80126 Napoli (Italy); Naddeo, Adele [CNISM, Unita di Ricerca di Salerno and Dipartimento di Fisica ' E.R. Caianiello' , Universita degli Studi di Salerno, Via Salvador Allende, 84081 Baronissi (Italy); Dipartimento di Scienze Fisiche, Universita di Napoli ' Federico II' , Compl. universitario M. Sant' Angelo, Via Cinthia, 80126 Napoli (Italy)], E-mail: adelenaddeo@yahoo.it

    2009-04-01

    The Morita equivalence for field theories on noncommutative two-tori is analysed in detail for rational values of the noncommutativity parameter θ (in appropriate units): an isomorphism is established between an Abelian noncommutative field theory (NCFT) and a non-Abelian theory of twisted fields on ordinary space. We focus on a particular conformal field theory (CFT), the one obtained by means of the m-reduction procedure [V. Marotta, J. Phys. A 26 (1993) 3481; V. Marotta, Mod. Phys. Lett. A 13 (1998) 853; V. Marotta, Nucl. Phys. B 527 (1998) 717; V. Marotta, A. Sciarrino, Mod. Phys. Lett. A 13 (1998) 2863], and show that it is the Morita equivalent of a NCFT. Finally, the whole m-reduction procedure is shown to be the image in the ordinary space of the Morita duality. An application to the physics of a quantum Hall fluid at Jain fillings ν = m/(2pm+1) is explicitly discussed in order to further elucidate such a correspondence and to clarify its role in the physics of strongly correlated systems. A new picture emerges, which is very different from the existing relationships between noncommutativity and many body systems [A.P. Polychronakos, arXiv: 0706.1095].

  9. Planck Constant Determination from Power Equivalence

    Science.gov (United States)

    Newell, David B.

    2000-04-01

    Equating mechanical to electrical power links the kilogram, the meter, and the second to the practical realizations of the ohm and the volt derived from the quantum Hall and the Josephson effects, yielding an SI determination of the Planck constant. The NIST watt balance uses this power equivalence principle, and in 1998 measured the Planck constant with a combined relative standard uncertainty of 8.7 × 10⁻⁸, the most accurate determination to date. The next generation of the NIST watt balance is now being assembled. Modifications to the experimental facilities have been made to reduce the uncertainty components from vibrations and electromagnetic interference. A vacuum chamber has been installed to reduce the uncertainty components associated with performing the experiment in air. Most of the apparatus is in place and diagnostic testing of the balance should begin this year. Once a combined relative standard uncertainty of one part in 10⁸ has been reached, the power equivalence principle can be used to monitor the possible drift in the artifact mass standard, the kilogram, and provide an accurate alternative definition of mass in terms of fundamental constants. *Electricity Division, Electronics and Electrical Engineering Laboratory, Technology Administration, U.S. Department of Commerce. Contribution of the National Institute of Standards and Technology, not subject to copyright in the U.S.
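    The power equivalence route to h can be made explicit: equating mechanical power m g v to electrical power U I, with the voltage realized via the Josephson constant K_J = 2e/h and the resistance via the von Klitzing constant R_K = h/e², leaves h = 4 / (K_J² R_K). A sketch using the conventional 1990 reference values (illustrative, not the NIST 1998 measurement):

```python
def planck_from_power_equivalence(K_J, R_K):
    """Watt-balance logic: with volts from the Josephson effect (K_J = 2e/h)
    and ohms from the quantum Hall effect (R_K = h/e**2), equating mechanical
    and electrical power yields h = 4 / (K_J**2 * R_K)."""
    return 4.0 / (K_J ** 2 * R_K)

# conventional 1990 values: K_J-90 = 483 597.9 GHz/V, R_K-90 = 25 812.807 ohm
h = planck_from_power_equivalence(483597.9e9, 25812.807)  # ~6.626e-34 J s
```

    The experimental difficulty lies entirely in realizing the two power measurements with comparable accuracy; the algebra connecting them to h is this one line.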

  10. Path integrals on curved manifolds

    International Nuclear Information System (INIS)

    Grosche, C.; Steiner, F.

    1987-01-01

    A general framework for treating path integrals on curved manifolds is presented. We also show how to perform general coordinate and space-time transformations in path integrals. The main result is that one has to subtract a quantum correction ΔV ∝ ℏ^2 from the classical Lagrangian L, i.e. the correct effective Lagrangian to be used in the path integral is L_eff = L - ΔV. A general prescription for calculating the quantum correction ΔV is given. It is based on a canonical approach using Weyl-ordering and the Hamiltonian path integral defined by the midpoint prescription. The general framework is illustrated by several examples: the d-dimensional rotator, i.e. the motion on the sphere S^(d-1), the path integral in d-dimensional polar coordinates, the exact treatment of the hydrogen atom in R^2 and R^3 by performing a Kustaanheimo-Stiefel transformation, the Langer transformation and the path integral for the Morse potential. (orig.)

  11. Page curves for tripartite systems

    International Nuclear Information System (INIS)

    Hwang, Junha; Lee, Deok Sang; Nho, Dongju; Oh, Jeonghun; Park, Hyosub; Zoe, Heeseung; Yeom, Dong-han

    2017-01-01

    We investigate information flow and Page curves for tripartite systems. We prepare a tripartite system (say, A , B , and C ) of a given number of states and calculate information and entropy contents by assuming random states. Initially, every particle was in A (this means a black hole), and as time goes on, particles move to either B (this means Hawking radiation) or C (this means a broadly defined remnant, including a non-local transport of information, the last burst, an interior large volume, or a bubble universe, etc). If the final number of states of the remnant is smaller than that of Hawking radiation, then information will be stored by both the radiation and the mutual information between the radiation and the remnant, while the remnant itself does not contain information. On the other hand, if the final number of states of the remnant is greater than that of Hawking radiation, then the radiation contains negligible information, while the remnant and the mutual information between the radiation and the remnant contain information. Unless the number of states of the remnant is large enough compared to the entropy of the black hole, Hawking radiation must contain information; and we meet the menace of black hole complementarity again. Therefore, this contrasts the tension between various assumptions and candidates of the resolution of the information loss problem. (paper)
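    The random-state calculation underlying such Page curves can be reproduced in a few lines: draw a Haar-like random pure state on a bipartition A⊗B and compute the entanglement entropy of A from the singular values of the state matrix. This is an illustrative two-party sketch, not the paper's tripartite setup:

```python
import math
import numpy as np

def random_subsystem_entropy(d_a, d_b, seed=0):
    """Von Neumann entropy (in nats) of subsystem A for a random pure state
    on a d_a x d_b bipartite Hilbert space; the eigenvalues of rho_A are the
    squared singular values of the (d_a, d_b) state matrix."""
    rng = np.random.default_rng(seed)
    psi = rng.normal(size=(d_a, d_b)) + 1j * rng.normal(size=(d_a, d_b))
    psi /= np.linalg.norm(psi)  # normalize the pure state
    p = np.linalg.svd(psi, compute_uv=False) ** 2  # eigenvalues of rho_A
    p = p[p > 1e-15]  # drop numerical zeros before taking the log
    return float(-np.sum(p * np.log(p)))

# small 'black hole' (4 states) entangled with large 'radiation' (64 states):
# the entropy is bounded by ln(4) and sits close to it, as Page's argument predicts
s = random_subsystem_entropy(4, 64)
```

    Scanning the split between the subsystems while keeping the total dimension fixed traces out the familiar rise-and-fall Page curve; the paper's tripartite version adds a third party C and tracks mutual informations as well.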

  12. Vacuum polarization in curved spacetime

    International Nuclear Information System (INIS)

    Guy, R.W.

    1979-01-01

    A necessary step in the process of understanding the quantum theory of gravity is the calculation of the stress-energy tensor of quantized fields in curved space-times. The determination of the stress tensor, a formally divergent object, is made possible in this dissertation by utilizing the zeta-function method of regularization and renormalization. By employing this scheme's representation of the renormalized effective action functional, an expression of the stress tensor for a massless, conformally invariant scalar field, first given by DeWitt, is derived. The form of the renormalized stress tensor is first tested in various examples of flat space-times. It is shown to vanish in Minkowski space and to yield the accepted value of the energy density in the Casimir effect. Next, the stress tensor is calculated in two space-times of constant curvature, the Einstein universe and the deSitter universe, and the results are shown to agree with those given by an expression of the stress tensor that is valid in conformally flat space-times. This work culminates in the determination of the stress tensor on the horizon of a Schwarzschild black hole. This is accomplished by approximating the radial part of the eigenfunctions and the metric in the vicinity of the horizon. The stress tensor at this level of approximation is found to be pure trace. The approximated form of the Schwarzschild metric describes a conformally flat space-time that possesses horizons.

  13. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets). As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.
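The mechanical role of a curve-conversion factor can be illustrated with ordinary multi-curve discounting. This sketch is not the paper's pricing-kernel construction: the flat rates, the cash flows, and the `conv` helper are all invented for illustration, and the factor here is simply a ratio of discount factors.

```python
import math

def discount(rate, t):
    """Continuously compounded discount factor for a flat zero rate."""
    return math.exp(-rate * t)

# Two markets, each identified by its own (here flat, hypothetical) yield curve
r_funding, r_market = 0.02, 0.03
cashflows = [(1.0, 5.0), (2.0, 5.0), (3.0, 105.0)]   # (time in years, amount)

pv_market = sum(amt * discount(r_market, t) for t, amt in cashflows)

# A conversion factor in the spirit of the across-curve idea: rescale each
# market-curve discount factor into the funding-curve measure
def conv(t):
    return discount(r_funding, t) / discount(r_market, t)

pv_funding = sum(amt * discount(r_market, t) * conv(t) for t, amt in cashflows)
```

By construction, valuing market-curve cash flows through `conv` reproduces the funding-curve present value, which is the kind of consistency-across-markets the paper formalizes with a conversion factor process.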

  14. Reliability Based Geometric Design of Horizontal Circular Curves

    Science.gov (United States)

    Rajbongshi, Pabitra; Kalita, Kuldeep

    2018-06-01

    Geometric design of a horizontal circular curve primarily involves the radius of the curve and the stopping sight distance at the curve section. The minimum radius is decided based on the lateral thrust exerted on vehicles, and the minimum stopping sight distance is provided to maintain safety in the longitudinal direction of travel. The available sight distance at a site can be regulated by changing the radius and the middle ordinate at the curve section. Both radius and sight distance depend on design speed. The speed of vehicles at any road section is a variable parameter, and therefore the 98th percentile speed is normally taken as the design speed. This work presents a probabilistic approach for evaluating stopping sight distance, considering the variability of all input parameters of sight distance. It is observed that the 98th percentile sight distance value is much lower than the sight distance corresponding to the 98th percentile speed. The distribution of the sight distance parameter is also studied and found to follow a lognormal distribution. Finally, reliability based design charts are presented for both plain and hill regions, considering the effect of lateral thrust.
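The probabilistic evaluation described above can be sketched as a Monte Carlo simulation of the standard stopping-sight-distance formula (reaction distance plus braking distance on level grade). The input distributions and parameter values below are illustrative assumptions, not the paper's calibrated inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
g = 9.81  # m/s^2

# Illustrative input distributions (hypothetical values, not the paper's calibration)
v = rng.normal(80.0, 8.0, n) / 3.6                        # speed, km/h -> m/s
t = rng.lognormal(mean=np.log(1.8), sigma=0.25, size=n)   # perception-reaction time, s
f = rng.normal(0.35, 0.03, n)                             # longitudinal friction coefficient

# Stopping sight distance: reaction distance + braking distance (level grade)
ssd = v * t + v ** 2 / (2.0 * g * f)

p98_ssd = float(np.percentile(ssd, 98))   # 98th percentile of the SSD distribution
v98 = float(np.percentile(v, 98))
det_ssd = v98 * 1.8 + v98 ** 2 / (2.0 * g * 0.35)   # SSD at the 98th percentile speed
```

Comparing `p98_ssd` against `det_ssd` is the kind of contrast the paper draws between the percentile of the SSD distribution and the SSD computed at a percentile speed.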

  15. Reliability Based Geometric Design of Horizontal Circular Curves

    Science.gov (United States)

    Rajbongshi, Pabitra; Kalita, Kuldeep

    2018-03-01

    Geometric design of a horizontal circular curve primarily involves the radius of the curve and the stopping sight distance at the curve section. The minimum radius is decided based on the lateral thrust exerted on vehicles, and the minimum stopping sight distance is provided to maintain safety in the longitudinal direction of travel. The available sight distance at a site can be regulated by changing the radius and the middle ordinate at the curve section. Both radius and sight distance depend on design speed. The speed of vehicles at any road section is a variable parameter, and therefore the 98th percentile speed is normally taken as the design speed. This work presents a probabilistic approach for evaluating stopping sight distance, considering the variability of all input parameters of sight distance. It is observed that the 98th percentile sight distance value is much lower than the sight distance corresponding to the 98th percentile speed. The distribution of the sight distance parameter is also studied and found to follow a lognormal distribution. Finally, reliability based design charts are presented for both plain and hill regions, considering the effect of lateral thrust.

  16. Borel equivalence relations structure and classification

    CERN Document Server

    Kanovei, Vladimir

    2008-01-01

    Over the last 20 years, the theory of Borel equivalence relations and related topics have been very active areas of research in set theory and have important interactions with other fields of mathematics, like ergodic theory and topological dynamics, group theory, combinatorics, functional analysis, and model theory. The book presents, for the first time in mathematical literature, all major aspects of this theory and its applications. This book should be of interest to a wide spectrum of mathematicians working in set theory as well as the other areas mentioned. It provides a systematic exposition of results that so far have been only available in journals or are even unpublished. The book presents unified and in some cases significantly streamlined proofs of several difficult results, especially dichotomy theorems. It has rather minimal overlap with other books published in this subject.

  17. Characterization of Dextrins with Different Dextrose Equivalents

    Directory of Open Access Journals (Sweden)

    Guanglei Li

    2010-07-01

    Dextrins are widely used for their functional properties and prepared by partial hydrolysis of starch using acid, enzymes, or combinations of both. The physiochemical properties of dextrins are dependent on their molecular distribution and oligosaccharide profiles. In this study, scanning electron microscopy (SEM), X-ray diffractometry (XRD), rapid visco-analysis (RVA), high-performance liquid chromatography (HPLC) and gel permeation chromatography (GPC) were used to characterize dextrins prepared by common neutral and thermostable α-amylase hydrolysis. The dextrin granules displayed irregular surfaces and were badly damaged by the enzyme treatment. They displayed A-type X-ray diffraction patterns with a decrease in the intensity of the characteristic diffraction peaks. The RVA profiles showed that the viscosity of dextrin decreased with the increase of its dextrose equivalent (DE) value. According to HPLC analysis, the molecular weight, degree of polymerization and the composition of oligosaccharides in the dextrins were different.

  18. Multiplicities of states of equivalent fermion shells

    International Nuclear Information System (INIS)

    Savukinas, A.Yu.; Glembotskij, I.I.

    1980-01-01

    Classification of states of three or four equivalent fermions has been studied, i.e. possible terms and their multiplicities have been determined. For this purpose either the group theory or explicit expressions for the fractional-parentage coefficients have been used. In the first approach, formulas obtained by other authors for the multiplicities of terms through the characters of the transformation matrices of bond moments have been used. This approach happens to be more general than the second one, as expressions for the fractional-parentage coefficients are in many cases not known. The multiplicities of separate terms have been determined. It has been shown that the number of terms of any multiplicity becomes constant when l or j is increased.

  19. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs can be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing approval of a drug simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many operational and scientific challenges in conducting drug development globally. One of the many important questions to answer in the design of a multiregional study is how to partition the sample size into each individual region. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.
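A rough sketch of one plausible allocation scheme (not necessarily either of the paper's two proposals): size the trial with a textbook two-one-sided-tests (TOST) approximation, then split the total across regions proportionally. The formula assumes a true difference of zero, a known common standard deviation, and normal-approximation critical values; the function names and region weights are invented for illustration.

```python
from math import ceil
from statistics import NormalDist

def tost_n_per_arm(sigma, margin, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a TOST equivalence comparison,
    assuming a true difference of zero between treatments."""
    z = NormalDist().inv_cdf
    z_a = z(1.0 - alpha)
    z_b = z(1.0 - (1.0 - power) / 2.0)  # beta is split across the two one-sided tests
    return ceil(2.0 * (sigma * (z_a + z_b) / margin) ** 2)

def allocate_by_region(n_total, weights):
    """Split a total sample size across regions proportionally (rounded up)."""
    total_w = sum(weights)
    return [ceil(n_total * w / total_w) for w in weights]

n = tost_n_per_arm(sigma=1.0, margin=0.5)
regions = allocate_by_region(n, [0.5, 0.3, 0.2])
```

Proportional splitting ignores regional consistency requirements, which is precisely the gap that systematic allocation approaches like the paper's are meant to address.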

  20. Equivalence principle and the baryon acoustic peak

    Science.gov (United States)

    Baldauf, Tobias; Mirbabayi, Mehrdad; Simonović, Marko; Zaldarriaga, Matias

    2015-08-01

    We study the dominant effect of a long wavelength density perturbation δ(λL) on short distance physics. In the nonrelativistic limit, the result is a uniform acceleration, fixed by the equivalence principle, and typically has no effect on statistical averages due to translational invariance. This same reasoning has been formalized to obtain a "consistency condition" on the cosmological correlation functions. In the presence of a feature, such as the acoustic peak at ℓBAO, this naive expectation breaks down for λL < ℓBAO. This is explicitly applied to the one-loop calculation of the power spectrum. Finally, the success of baryon acoustic oscillation reconstruction schemes is argued to be further empirical evidence for the validity of the results.

  1. Development of air equivalent gamma dose monitor

    International Nuclear Information System (INIS)

    Alex, Mary; Bhattacharya, Sadhana; Karpagam, R.; Prasad, D.N.; Jakati, R.K.; Mukhopadhyay, P.K.; Patil, R.K.

    2010-01-01

    The paper describes the design and development of an air equivalent gamma absorbed dose monitor. The monitor has a gamma sensitivity of 84 pA/R/h for a 60Co source. The characterization of the monitor has been done to obtain the energy dependence of the gamma sensitivity and the response to gamma radiation fields from 1 R/h to 5000 R/h. The gamma sensitivity in the energy range of 0.06 to 1.25 MeV relative to the 137Cs nuclide was within 2.5%. The linearity of the monitor response as a function of gamma field from 10 R/h to 3.8 kR/h was within 6%. The monitor has been designed for application in harsh environments. It has been successfully qualified to meet environmental requirements of shock. (author)

  2. A new concept of equivalent homogenization method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Jin; Pogoskekyan, Leonid; Kim, Young Il; Ju, Hyung Kook; Chang, Moon Hee [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-07-01

    A new concept of equivalent homogenization is proposed. The concept employs a new set of homogenized parameters: homogenized cross sections (XS) and an interface matrix (IM), which relates partial currents at the cell interfaces. The idea of the interface matrix generalizes the idea of discontinuity factors (DFs), proposed and developed by K. Koebke and K. Smith. The offered concept covers both those of K. Koebke and K. Smith; both of them can be simulated within the framework of the new concept. Also, the offered concept covers the Siemens KWU approach for baffle/reflector simulation, where the equivalent homogenized reflector XS are derived from the conservation of the response matrix at the interface in 1D semi-infinite slab geometry. The IM and XS of the new concept satisfy the same assumption about response matrix conservation in 1D semi-infinite slab geometry. It is expected that the new concept provides a more accurate approximation of the heterogeneous cell, especially in the case of steep flux gradients at the cell interfaces. The attractive features of the new concept are: improved accuracy, simplicity of incorporation in existing codes, and numerical expense equal to that of K. Smith's approach. The new concept is useful for: (a) explicit reflector/baffle simulation; (b) control blade simulation; (c) mixed UO2/MOX core simulation. The offered model has been incorporated in the finite difference code and in the nodal code PANDOX. The numerical results show good accuracy of core calculations and insensitivity of the homogenized parameters with respect to in-core conditions. 9 figs., 7 refs. (Author).

  3. Equivalence principle implications of modified gravity models

    International Nuclear Information System (INIS)

    Hui, Lam; Nicolis, Alberto; Stubbs, Christopher W.

    2009-01-01

    Theories that attempt to explain the observed cosmic acceleration by modifying general relativity all introduce a new scalar degree of freedom that is active on large scales, but is screened on small scales to match experiments. We demonstrate that if such screening occurs via the chameleon mechanism, such as in f(R) theory, it is possible to have order unity violation of the equivalence principle, despite the absence of explicit violation in the microscopic action. Namely, extended objects such as galaxies or constituents thereof do not all fall at the same rate. The chameleon mechanism can screen the scalar charge for large objects but not for small ones (large/small is defined by the depth of the gravitational potential and is controlled by the scalar coupling). This leads to order one fluctuations in the ratio of the inertial mass to gravitational mass. We provide derivations in both Einstein and Jordan frames. In Jordan frame, it is no longer true that all objects move on geodesics; only unscreened ones, such as test particles, do. In contrast, if the scalar screening occurs via strong coupling, such as in the Dvali-Gabadadze-Porrati braneworld model, equivalence principle violation occurs at a much reduced level. We propose several observational tests of the chameleon mechanism: 1. small galaxies should accelerate faster than large galaxies, even in environments where dynamical friction is negligible; 2. voids defined by small galaxies would appear larger compared to standard expectations; 3. stars and diffuse gas in small galaxies should have different velocities, even if they are on the same orbits; 4. lensing and dynamical mass estimates should agree for large galaxies but disagree for small ones. We discuss possible pitfalls in some of these tests. The cleanest is the third one where the mass estimate from HI rotational velocity could exceed that from stars by 30% or more. To avoid blanket screening of all objects, the most promising place to look is in

  4. Baby Poop: What's Normal?

    Science.gov (United States)

    ... I'm breast-feeding my newborn and her bowel movements are yellow and mushy. Is this normal for baby poop? Answers from Jay L. Hoecker, M.D. Yellow, mushy bowel movements are perfectly normal for breast-fed babies. Still, ...

  5. Construction of calibration curve for accountancy tank

    International Nuclear Information System (INIS)

    Kato, Takayuki; Goto, Yoshiki; Nidaira, Kazuo

    2009-01-01

    Tanks are equipped in a reprocessing plant for accounting of solutions of nuclear material. Careful measurement of the volume in tanks is very important to implement rigorous accounting of nuclear material. A calibration curve relating the volume and the level of solution needs to be constructed, where the level is determined by the differential pressure of dip tubes. Several calibration curves are usually employed, but it is not explicitly decided how many segments are used, where to place the segments, or what the degree of the polynomial curve should be. These parameters, i.e. the segments and the degree of the polynomial curve, are mutually interrelated and together determine the performance of the calibration curve. Here we present a construction technique giving optimum calibration curves, and their characteristics. (author)
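The segment-and-degree trade-off can be made concrete with a piecewise polynomial fit. The break points, polynomial degree, synthetic tank data, and helper names below are illustrative choices, not the optimization procedure the paper develops:

```python
import numpy as np

def fit_calibration(level, volume, breaks, degree=2):
    """Fit one polynomial per level segment; segment edges and degree are the
    tuning parameters discussed above (values here are illustrative)."""
    edges = [float(level.min())] + list(breaks) + [float(level.max())]
    pieces = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (level >= lo) & (level <= hi)
        pieces.append((lo, hi, np.polyfit(level[mask], volume[mask], degree)))
    return pieces

def volume_at(pieces, x):
    """Evaluate the segmented calibration curve at a given level."""
    for lo, hi, coeffs in pieces:
        if lo <= x <= hi:
            return float(np.polyval(coeffs, x))
    raise ValueError("level outside calibrated range")

# Synthetic calibration data: a tank whose cross-section widens with level
level = np.linspace(0.0, 100.0, 51)
volume = 2.0 * level + 0.01 * level ** 2
pieces = fit_calibration(level, volume, breaks=[50.0])
```

In practice the segment boundaries and degrees would be chosen jointly, e.g. by minimizing residuals subject to a smoothness constraint, which is the interdependence the abstract points out.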

  6. MICA: Multiple interval-based curve alignment

    Science.gov (United States)

    Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf

    2018-01-01

    MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA has already been successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analysis pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA
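A toy version of landmark-based pairwise registration (a drastic simplification of MICA's interval-based heuristic, with invented helper names) uses local extrema as landmarks and a piecewise-linear warp that maps the query's landmarks onto the reference's:

```python
import numpy as np

def landmarks(y):
    """Indices of characteristic points: the endpoints plus local extrema."""
    idx = [0]
    for i in range(1, len(y) - 1):
        if (y[i] - y[i - 1]) * (y[i + 1] - y[i]) < 0:
            idx.append(i)
    idx.append(len(y) - 1)
    return np.array(idx, dtype=float)

def align(ref, qry):
    """Warp qry onto ref by mapping corresponding landmarks onto each other."""
    lr, lq = landmarks(ref), landmarks(qry)
    k = min(len(lr), len(lq))
    x = np.arange(len(qry), dtype=float)
    warped_x = np.interp(x, lq[:k], lr[:k])   # query positions on the reference axis
    xr = np.arange(len(ref), dtype=float)
    return np.interp(xr, warped_x, qry)       # resample on the reference grid

# Demo: the same shape sampled under a nonlinear time distortion
t = np.linspace(0.0, 1.0, 101)
ref = np.sin(2.0 * np.pi * t)
qry = np.sin(2.0 * np.pi * t ** 1.5)
aligned = align(ref, qry)
```

After warping, the query's peak and trough coincide with the reference's, so the residual against the reference shrinks; MICA's progressive scheme repeats this pairwise step to build multiple alignments.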

  7. Inverse Diffusion Curves Using Shape Optimization.

    Science.gov (United States)

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on the automatic creation of diffusion curve images that resemble user-provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on the resulting color fields via a partial differential equation (PDE). We introduce a new approach, complementary to previous methods, that optimizes curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  8. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory, nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
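The canonical computation probed here is divisive normalization: each unit's driven response is divided by the pooled activity of all units plus a semi-saturation constant. A minimal sketch (the parameter values and function name are illustrative, not taken from the study):

```python
import numpy as np

def divisive_normalization(drives, sigma=1.0, n=2.0):
    """Divisive normalization: each response is divided by the summed
    (exponentiated) drive of the whole pool plus a constant sigma**n."""
    d = np.asarray(drives, dtype=float) ** n
    return d / (sigma ** n + d.sum())

alone = float(divisive_normalization([4.0])[0])
paired = float(divisive_normalization([4.0, 4.0])[0])  # a competitor suppresses it
```

The suppression of `paired` relative to `alone` is the signature the study found in perception but, notably, not between working-memory representations.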

  9. Temporary threshold shifts from exposures to equal equivalent continuous A-weighted sound pressure level

    DEFF Research Database (Denmark)

    Ordoñez, Rodrigo Pizarro; Hammershøi, Dorte

    2014-01-01

    According to existing methods for the assessment of hearing damage, signals with the same A-weighted equivalent level should pose the same hazard to the auditory system. As a measure of hazard, it is assumed that Temporary Threshold Shifts (TTS) reflect the onset of alterations to the hearing ... the assumptions made using the A-weighting curve for the assessment of hearing damage. By modifying exposure ratings to compensate for the build-up of energy at mid and high frequencies (above 1 kHz) due to the presence of the listener in the sound field, and for the levels below an effect threshold that does not induce changes in hearing (equivalent quiet levels), ratings of the sound exposure that reflect the observed temporary changes in auditory function can be obtained.

  10. Method for linearizing the potentiometric curves of precipitation titration in nonaqueous and aqueous-organic solutions

    International Nuclear Information System (INIS)

    Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.

    1995-01-01

    The method for linearizing the potentiometric curves of precipitation titration is studied for its application in the determination of halide ions (Cl⁻, Br⁻, I⁻) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the method of linearization permits the determination of the titrant volume at the end point of titration to high accuracy in the case of titration curves without a potential jump in the proximity of the equivalence point (5 × 10⁻⁵ M). 3 refs., 2 figs., 3 tabs
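A standard way to linearize a precipitation-titration branch is a Gran-type transform, which turns the logarithmic electrode response into a straight line whose x-intercept is the equivalence volume, so no potential jump is needed. The sketch below uses synthetic, noiseless post-equivalence data with invented numbers, not the paper's solvent systems or its specific linearization:

```python
import numpy as np

# Synthetic post-equivalence branch of a halide-vs-Ag+ titration (hypothetical values)
V0, Ve, s = 50.0, 10.0, 59.16     # sample volume (mL), true equivalence volume (mL),
                                  # Nernst slope (mV per decade)
V = np.linspace(11.0, 15.0, 9)    # titrant volumes beyond the equivalence point
E = 250.0 + s * np.log10((V - Ve) / (V0 + V))   # ideal electrode potentials (mV)

# Gran transform: G is linear in V and crosses zero at the equivalence volume
G = (V0 + V) * 10.0 ** (E / s)
slope, intercept = np.polyfit(V, G, 1)
Ve_est = -intercept / slope
```

Because G = k·(V − Ve) exactly for ideal Nernstian data, the linear fit recovers the equivalence volume even though E(V) itself shows no jump.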

  11. Implementation of the Master Curve method in ProSACC

    International Nuclear Information System (INIS)

    Feilitzen, Carl von; Sattari-Far, Iradj

    2012-03-01

    Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology for evaluation of the fracture toughness in the transition region relieves the overconservatism that has been observed in using the ASME KIC curve. One main advantage of the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessments of components containing crack-like defects and for defect tolerance analysis. The program makes it possible to conduct assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al [1988]. The basic assumption in this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the
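The core Master Curve relations referenced here (the median-toughness curve and the fixed-shape Weibull failure distribution, as standardized in ASTM E1921) can be sketched directly; the temperatures used in the example are arbitrary:

```python
import math

K_MIN = 20.0  # MPa*sqrt(m); fixed lower-bound toughness in the Master Curve model

def kjc_median(temp_c, t0_c):
    """Median fracture toughness of a 1T specimen at temperature temp_c (deg C),
    given the reference temperature T0: 30 + 70*exp(0.019*(T - T0))."""
    return 30.0 + 70.0 * math.exp(0.019 * (temp_c - t0_c))

def failure_probability(kjc, temp_c, t0_c):
    """Cumulative cleavage failure probability from the three-parameter
    Weibull distribution with fixed shape 4 used by the Master Curve."""
    k0 = K_MIN + (kjc_median(temp_c, t0_c) - K_MIN) / math.log(2.0) ** 0.25
    return 1.0 - math.exp(-(((kjc - K_MIN) / (k0 - K_MIN)) ** 4))
```

At T = T0 the median toughness is 100 MPa·√m by construction, and the failure probability evaluated at the median is exactly one half, which is the statistical treatment of scatter the abstract describes.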

  12. Implementation of the Master Curve method in ProSACC

    Energy Technology Data Exchange (ETDEWEB)

    Feilitzen, Carl von; Sattari-Far, Iradj [Inspecta Technology AB, Stockholm (Sweden)

    2012-03-15

    Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology for evaluation of the fracture toughness in the transition region relieves the overconservatism that has been observed in using the ASME KIC curve. One main advantage of the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessments of components containing crack-like defects and for defect tolerance analysis. The program makes it possible to conduct assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al [1988]. The basic assumption in this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the

  13. Rotating model for the equivalence principle paradox

    International Nuclear Information System (INIS)

    Wilkins, D.C.

    1975-01-01

    An idealized system is described in which two inertial frames rotate relative to one another. When a (scalar) dipole is locally at rest in one frame, a paradox arises as to whether or not it will radiate. Fluxes of energy and angular momentum and the time development of the system are discussed. Resolution of the paradox involves several unusual features, including (i) radiation by an unmoving charge, an effect discussed by Chitre, Price, and Sandberg, (ii) different power seen by relatively accelerated inertial observers, and (iii) radiation reaction due to gravitational backscattering of radiation, in agreement with the work of C. and B. DeWitt. These results are obtained, for the most part, without the complications of curved space-time.

  14. String Sigma Models on Curved Supermanifolds

    Directory of Open Access Journals (Sweden)

    Roberto Catenacci

    2018-04-01

    We use the techniques of integral forms to analyze the easiest example of two-dimensional sigma models on a supermanifold. We write the action as an integral of a top integral form over a D = 2 supermanifold, and we show how to interpolate between different superspace actions. Then, we consider curved supermanifolds, and we show that the definitions used for flat supermanifolds can also be used for curved supermanifolds. We prove it by first considering the case of a curved rigid supermanifold and then the case of a generic curved supermanifold described by a single superfield E.

  15. Regional Marginal Abatement Cost Curves for NOx

    Data.gov (United States)

    U.S. Environmental Protection Agency — Data underlying the figures included in the manuscript "Marginal abatement cost curve for NOx incorporating controls, renewable electricity, energy efficiency and...

  16. Making nuclear 'normal'

    International Nuclear Information System (INIS)

    Haehlen, Peter; Elmiger, Bruno

    2000-01-01

    The mechanics of the Swiss NPPs' 'come and see' programme 1995-1999 were illustrated in our contributions to all PIME workshops since 1996. Now, after four annual 'waves', all the country has been covered by the NPPs' invitation to dialogue. This makes PIME 2000 the right time to shed some light on one particular objective of this initiative: making nuclear 'normal'. The principal aim of the 'come and see' programme, namely to give the Swiss NPPs 'a voice of their own' by the end of the nuclear moratorium 1990-2000, has clearly been attained and was commented on during earlier PIMEs. It is, however, equally important that Swiss nuclear energy not only made progress in terms of public 'presence', but also in terms of being perceived as a normal part of industry, as a normal branch of the economy. The message that Swiss nuclear energy is nothing but a normal business involving normal people, was stressed by several components of the multi-prong campaign: - The speakers in the TV ads were real - 'normal' - visitors' guides and not actors; - The testimonials in the print ads were all real NPP visitors - 'normal' people - and not models; - The mailings inviting a very large number of associations to 'come and see' activated a typical channel of 'normal' Swiss social life; - Spending money on ads (a new activity for Swiss NPPs) appears to have resulted in being perceived by the media as a normal branch of the economy. Today we feel that the 'normality' message has well been received by the media. In the controversy dealing with antinuclear arguments brought forward by environmental organisations journalists nowadays as a rule give nuclear energy a voice - a normal right to be heard. As in a 'normal' controversy, the media again actively ask themselves questions about specific antinuclear claims, much more than before 1990 when the moratorium started. The result is that in many cases such arguments are discarded by journalists, because they are, e.g., found to be

  17. 49 CFR 391.33 - Equivalent of road test.

    Science.gov (United States)

    2010-10-01

    49 CFR 391.33, Equivalent of road test: (a) In place of, and as equivalent to, the road test required by § 391.31, a person who seeks to drive a...

  18. A bicategorical approach to Morita equivalence for Von Neumann algebras

    NARCIS (Netherlands)

    R.M. Brouwer (Rachel)

    2003-01-01

    We relate Morita equivalence for von Neumann algebras to the "Connes fusion" tensor product between correspondences. In the purely algebraic setting, it is well known that rings are Morita equivalent if and only if they are equivalent objects in a bicategory whose 1-cells are

  19. Problems of Equivalence in Shona- English Bilingual Dictionaries

    African Journals Online (AJOL)

    rbr

    ... translation equivalents in Shona-English dictionaries where lexicographers will be dealing with divergent languages and cultures, traditional practices of lexicography and the absence of reliable ... ideal in translation is to achieve structural and semantic equivalence. Absolute equivalence between any two ...

  20. The equivalence principle in classical mechanics and quantum mechanics

    OpenAIRE

    Mannheim, Philip D.

    1998-01-01

    We discuss our understanding of the equivalence principle in both classical mechanics and quantum mechanics. We show that not only does the equivalence principle hold for the trajectories of quantum particles in a background gravitational field, but also that it is only because of this that the equivalence principle is even to be expected to hold for classical particles at all.